The development of an ionospheric storm-time index for the South African region
- Authors: Tshisaphungo, Mpho
- Date: 2021-04
- Subjects: Ionospheric storms -- South Africa , Global Positioning System , Neural networks (Computer science) , Regression analysis , Ionosondes , Auroral electrojet , Geomagnetic indexes , Magnetic storms -- South Africa
- Language: English
- Type: thesis , text , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/178409 , vital:42937 , 10.21504/10962/178409
- Description: This thesis presents the development of a regional ionospheric storm-time model which forms the foundation of an index to provide a quick view of ionospheric storm effects over the South African mid-latitude region. The model is based on foF2 measurements from four South African ionosonde stations. The data coverage for the model development over Grahamstown (33.3°S, 26.5°E), Hermanus (34.42°S, 19.22°E), Louisvale (28.50°S, 21.20°E), and Madimbo (22.39°S, 30.88°E) is 1996-2016, 2009-2016, 2000-2016, and 2000-2016 respectively. Data from the Global Positioning System (GPS) and the radio occultation (RO) technique were used during validation. As the measure of either a positive or negative storm effect, the deviation of the critical frequency of the F2 layer (foF2) from the monthly median values (denoted ΔfoF2) is modelled. The modelling of ΔfoF2 is based only on storm-time data, selected with the criteria Dst ≤ -50 nT and Kp > 4. The modelling methods used in the study were artificial neural networks (ANN), linear regression (LR) and polynomial functions. The approach taken was to first test the modelling techniques on a single station before expanding the study to cover the regional aspect. The single-station model was developed based on ionosonde data over Grahamstown. Model inputs related to seasonal variation, diurnal variation, geomagnetic activity and solar activity were considered. For the geomagnetic activity, three indices, namely the symmetric disturbance in the horizontal component of the Earth's magnetic field (SYM-H), the Auroral Electrojet (AE) index and the local geomagnetic index A, were included as inputs. The performance of the single-station model revealed that, of the three geomagnetic indices, the SYM-H index has the largest contribution: 41% and 54% based on the ANN and LR techniques respectively.
The average correlation coefficient (R) for both the ANN and LR models was 0.8 when validated on selected storms falling within the period of model development. When validated using storms that fall outside the period of model development, the model gave R values of 0.6 and 0.5 for ANN and LR respectively. In addition, GPS total electron content (TEC) derived measurements were used to estimate foF2 data. This is because there are more GPS receivers than ionosonde locations, and utilising this data increases the spatial coverage of the regional model. The estimation of foF2 from GPS TEC was done at GPS-ionosonde co-locations using polynomial functions. Average R values of 0.69 and 0.65 were obtained between actual and derived ΔfoF2 over the co-locations and other GPS stations respectively. Validation of GPS TEC derived foF2 against RO data, over regions outside the ionospheric pierce-point coverage of the ionosonde locations, gave R greater than 0.9 for the selected storm period of 4-8 August 2011. The regional storm-time model was then developed based on the ANN technique using the four South African ionosonde stations. Maximum and minimum R values of 0.6 and 0.5 were obtained over ionosonde and GPS locations respectively. This model forms the basis of the regional ionospheric storm-time index. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
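The linear-regression strand of the single-station approach can be sketched as follows: fit ΔfoF2 from diurnal, seasonal, geomagnetic and solar-activity inputs, with cyclic variables encoded as sine/cosine pairs. This is a minimal illustration on synthetic data; the feature encoding, ranges and coefficients below are assumptions, not the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical number of storm-time samples

# Illustrative inputs (names/units are assumptions, not the thesis's exact features)
hour = rng.uniform(0, 24, n)        # local time (diurnal variation)
doy = rng.uniform(1, 366, n)        # day of year (seasonal variation)
sym_h = rng.uniform(-300, -50, n)   # SYM-H (nT), storm-time values
ae = rng.uniform(200, 1500, n)      # AE index (nT)
f107 = rng.uniform(70, 250, n)      # F10.7 solar-activity proxy

# Cyclic variables as sine/cosine pairs; constant column for the intercept
X = np.column_stack([
    np.sin(2*np.pi*hour/24), np.cos(2*np.pi*hour/24),
    np.sin(2*np.pi*doy/365.25), np.cos(2*np.pi*doy/365.25),
    sym_h, ae, f107, np.ones(n),
])

# Synthetic target: deviation of foF2 from the monthly median (MHz)
dfof2 = (0.01*sym_h - 0.0005*ae
         + 0.2*np.sin(2*np.pi*hour/24) + rng.normal(0, 0.1, n))

coef, *_ = np.linalg.lstsq(X, dfof2, rcond=None)  # least-squares fit
pred = X @ coef
r = np.corrcoef(pred, dfof2)[0, 1]                # correlation coefficient R
```

The same design matrix could feed an ANN instead of the least-squares step; only the fitting method changes.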
- Full Text:
- Date Issued: 2021-04
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (Computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science are already being generated, and these instruments will produce far more data than has ever been collected before. However, with every new instrument the data rates grow to unprecedented levels, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by scientists who lack formal training in software development, so it rarely progresses beyond prototype quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio astronomy pipelines since the inception of the field in the 1940s, investigating the steps required to create a radio image. We used the lessons learned to identify patterns in the challenges experienced, and in the solutions created to address them over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs correctly, with all packages coexisting on the same system. Where required and possible, bugs and portability issues were fixed and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases.
The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bioinformatics scientists. CommonWL is already supported by a wide range of tools, among them schedulers, visualisers and editors. Consequently, when a pipeline is made with CommonWL, it can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional: applying a generative adversarial network based on deep learning techniques to the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. We show that this method is able to perform the restoration and matches the performance of a single clean cycle. In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest that a smaller scope should be the starting point for further studies, and that optimising the training of the neural network could produce the desired results.
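The core step of such a simulation pipeline (pairing a sky model with its dirty image as a GAN training sample) can be sketched as a convolution of a point-source sky with a point spread function. The PSF shape and source counts below are illustrative assumptions, not those of the ASTRODECONV2019 pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 128

# Hypothetical sky model: a few point sources on an empty field
sky = np.zeros((npix, npix))
for _ in range(5):
    y, x = rng.integers(16, npix - 16, size=2)
    sky[y, x] = rng.uniform(0.5, 1.0)

# Stand-in PSF: a central Gaussian lobe plus a weak ring-like sidelobe
yy, xx = np.mgrid[-npix//2:npix//2, -npix//2:npix//2]
r2 = xx**2 + yy**2
psf = np.exp(-r2/8.0) + 0.05*np.exp(-(np.sqrt(r2) - 20)**2/4.0)

# Dirty image = sky convolved with the PSF, computed via FFTs;
# ifftshift moves the PSF peak to the (0, 0) corner so the convolution is centred
dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))
```

A training set is then just many such (sky, dirty) pairs, optionally with the PSF passed to the network as an extra input, as the final chapter describes.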
- Full Text:
- Date Issued: 2021
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this, owing to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student's t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios in which it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of RFI, and the solution interval employed during radio interferometric calibration.
Hence, we investigate the effects of solution intervals and the factors to consider when selecting adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
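The central idea of a robust solver (down-weighting outlier visibilities via a Student's t likelihood) can be sketched as an iteratively re-weighted per-antenna gain solve. This is a simplified scalar-gain, unit point-source illustration, not CubiCal's actual implementation; the antenna count, noise level, degrees of freedom and damping are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
nant = 8   # assumed small array for illustration

# True per-antenna complex gains, close to unity
g_true = 1 + 0.1*(rng.normal(size=nant) + 1j*rng.normal(size=nant))

# Unit point-source model: V_pq = g_p * conj(g_q), for baselines p < q
p_idx, q_idx = np.triu_indices(nant, k=1)
vis = g_true[p_idx]*np.conj(g_true[q_idx])
vis = vis + 0.01*(rng.normal(size=vis.size) + 1j*rng.normal(size=vis.size))
vis[:3] += 2 + 2j   # RFI-like outliers corrupting three baselines

nu = 2.0            # Student's t degrees of freedom (heavy-tailed)
g = np.ones(nant, dtype=complex)
for _ in range(100):
    resid = vis - g[p_idx]*np.conj(g[q_idx])
    sigma2 = np.median(np.abs(resid)**2) + 1e-12
    w = (nu + 2)/(nu + np.abs(resid)**2/sigma2)   # robust down-weighting
    g_new = np.empty_like(g)
    for p in range(nant):
        a, b = p_idx == p, q_idx == p
        data = np.concatenate([vis[a], np.conj(vis[b])])   # each ~ g_p*conj(other)
        other = np.concatenate([g[q_idx[a]], g[p_idx[b]]])
        wts = np.concatenate([w[a], w[b]])
        # Weighted least-squares update for g_p
        g_new[p] = np.sum(wts*data*other)/np.sum(wts*np.abs(other)**2)
    g = 0.5*(g + g_new)   # damped update for stability

# Remove the global phase ambiguity before comparing with the truth
phase = np.angle(np.sum(g*np.conj(g_true)))
err = np.max(np.abs(g*np.exp(-1j*phase) - g_true))
```

With ordinary least squares (all weights equal to one) the corrupted baselines would bias every gain; the t-distribution weights shrink them towards zero instead.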
- Full Text:
- Date Issued: 2020
Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view, which states that our experience of time is only an illusion. The isolated Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks the potential influence of interactions. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. To aid our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several such theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that their accounts of time are unified in a more fundamental theory. An analytical treatment of the CPI, incorporating two different clock choices, and a qualitative analysis both confirm that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the results of the toy models and the qualitative argument respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal interventions between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which remains a pressing problem in explaining our experience of time.
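A minimal Page-Wootters-style construction illustrates the CPI mechanism the abstract describes: a stationary global state entangles a clock register with a system, and conditioning on a clock reading recovers ordinary unitary evolution. The clock size, system Hamiltonian and time step below are illustrative assumptions, and the clock-system interactions the thesis analyses are deliberately absent, which is exactly the idealisation it questions.

```python
import numpy as np

T, dt = 16, 0.3   # number of clock readings and the time step (illustrative)
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # system Hamiltonian H_s = sigma_x

def U(t):
    # exp(-i*H_s*t) for H_s = sigma_x equals cos(t)*I - i*sin(t)*sigma_x
    return np.cos(t)*np.eye(2) - 1j*np.sin(t)*sx

psi0 = np.array([1, 0], dtype=complex)

# "Timeless" global state: |Psi> = (1/sqrt(T)) * sum_t |t> (x) U(t*dt)|psi0>
Psi = np.concatenate([U(k*dt) @ psi0 for k in range(T)]) / np.sqrt(T)

# Condition on the clock reading t = k: project onto |k> and renormalise
k = 5
cond = Psi[2*k:2*k + 2]
cond = cond / np.linalg.norm(cond)

# The conditioned state matches ordinary Schroedinger evolution to time k*dt
fid = abs(np.vdot(U(k*dt) @ psi0, cond))**2
```

Internal observers who only ever correlate system readings with clock readings would see dynamics, even though the global state is static.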
- Full Text:
- Date Issued: 2020
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as quartz annealed at 800 and 1000 °C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72 °C when measured at 1 °C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and kinetic analysis was carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for the sample annealed at 800 °C for 1 hour, we observe an inverse thermal quenching behaviour for peak I. We demonstrate that a peak affected by an inverse thermal quenching-like behaviour can still show the effect of thermal quenching when the irradiation dose is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II, and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a certain temperature, are exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, corresponding to various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on preheating temperature.
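The first-order kinetics attributed to peak I can be sketched by integrating the Randall-Wilkins equations, where the TL intensity is the trapped-charge population times a thermally activated escape rate. The activation energy and frequency factor below are illustrative assumptions, not the fitted values from the thesis.

```python
import numpy as np

# Illustrative trap parameters (assumptions, not the thesis's fitted values)
E = 0.9          # activation energy (eV)
s = 1e11         # frequency factor (1/s)
beta = 1.0       # linear heating rate (K/s), comparable to the ~1 C/s readout
kB = 8.617e-5    # Boltzmann constant (eV/K)

dT = 0.1
T = np.arange(300.0, 500.0, dT)   # absolute temperature (K)
n = np.empty_like(T)              # trapped-charge population (normalised)
I = np.empty_like(T)              # TL intensity
n[0] = 1.0
for i in range(T.size):
    p = s*np.exp(-E/(kB*T[i]))    # escape rate per trapped charge
    I[i] = n[i]*p                 # first-order kinetics: I = n * p
    if i + 1 < T.size:
        # Euler step of dn/dT = -n*p/beta, clipped at zero
        n[i + 1] = max(n[i]*(1 - p*dT/beta), 0.0)

T_max_C = T[np.argmax(I)] - 273.15   # glow-peak position in Celsius
```

The coupled acceptor-donor PTTL model mentioned above extends this idea to several such populations exchanging charge under illumination.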
- Full Text:
- Date Issued: 2020
Modelling and investigating primary beam effects of reflector antenna arrays
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (such as electromagnetic and optical simulations) have been used to model it. The cos³ function, the Jacobi-Bessel pattern, characteristic basis function patterns (CBFP) and the Cassbeam software (which uses optical ray-tracing with antenna parameters) have also been used to model it. These models capture the basic PB effects. Real-life PB patterns differ from these models due to various subtle effects, such as mechanical deformation and effects introduced into the PB by standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference, and other measurement errors. In our approach, we use principal component analysis and Zernike polynomials to model the PBs of the Very Large Array (VLA) and MeerKAT telescopes from their holography-measured data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays. We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) based on holography-measured data. Analysis of the VLA measured PBs revealed that the beam sizes (and centre offset positions) show a fast oscillating trend, superimposed on a slow trend, with frequency. We termed this spectral behaviour the ripple or characteristic effect. Most existing PB models used in calibrating VLA data do not incorporate these direction-dependent effects (DDEs).
We investigate the impact of using PB models that ignore these DDEs in continuum calibration and imaging via simulations. Our experiments show that, although these effects translate into less than 10% errors in source flux recovery, they do lead to a 30% reduction in the dynamic range. Preparing data for Hi and radio halo (faint emission) science analysis requires carrying out foreground subtraction of bright (continuum) sources. We investigate the impact of using beam models that ignore these ripple effects during continuum subtraction. These experiments show that using PB models which completely ignore the ripple effects in continuum subtraction can translate into errors of more than 30% in the recovered Hi spectral properties. This implies that science inferences drawn from the results of Hi studies could have errors of the same magnitude.
- Full Text:
- Date Issued: 2020
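The Zernike-polynomial beam models mentioned in the abstract above are built from the standard radial Zernike polynomials. As a minimal, self-contained sketch (not the thesis's actual PCA-plus-holography fitting pipeline), the radial part R_n^|m|(ρ) can be evaluated directly from its factorial series:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^|m|(rho) from the standard series.

    Returns 0 when n - |m| is odd, where the polynomial vanishes.
    """
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k
        * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# Familiar low-order cases: R_1^1(rho) = rho, R_2^0(rho) = 2*rho**2 - 1
```

A full beam model would combine these radial terms with azimuthal factors cos(mθ) / sin(mθ) and fit the expansion coefficients to the holography-measured beam.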
Thermoluminescence and phototransferred thermoluminescence of synthetic quartz
- Authors: Dawam, Robert Rangmou
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/145849 , vital:38472
- Description: The main focus of this investigation is the thermoluminescence and phototransferred thermoluminescence of synthetic quartz. Thermoluminescence was one of the tools used to characterise the electron trap parameters. Samples of quartz annealed at various temperatures up to 900 °C, as well as unannealed samples, were used. The thermoluminescence glow curves measured at 1 °C s⁻¹ following beta irradiation to 40 Gy for the samples annealed at 500 °C and the unannealed samples consist of a main peak at 70 °C and secondary peaks at 110, 180 and 310 °C. In comparison, the glow curve for the sample annealed at 900 °C has a main peak at 86 °C and secondary peaks at 170 and 310 °C. Kinetic analysis was carried out only on the main peak in each case. The activation energy was found to decrease with increasing annealing temperature. The samples annealed at 500 °C and the unannealed samples were found to be affected by thermal quenching, while the sample annealed at 900 °C showed an inverse quenching for an irradiation dose of 40 Gy. However, when the dose was reduced to 3 Gy, the effects of thermal quenching were manifested. The activation energy of thermal quenching was also found to decrease with increasing annealing temperature. Thermally assisted optically stimulated luminescence measurements were carried out using continuous-wave optically stimulated luminescence (CW-OSL). The samples studied were annealed at 500 °C for 10 minutes, 900 °C for 10, 30 or 60 minutes, and 1000 °C for 10 minutes prior to use. The CW-OSL was stimulated using 470 nm blue LEDs at sample temperatures between 30 and 200 °C, and measured after preheating to either 300 or 500 °C. When the integrated OSL intensity is plotted as a function of measurement temperature, the intensity goes through a peak. The increase in OSL intensity with temperature is attributed to thermal assistance and the decrease to thermal quenching. The kinetic parameters were evaluated by fitting the experimental data. 
The values of the activation energy of thermal quenching are the same within experimental uncertainty for all the experimental conditions. This shows that annealing temperature, duration of annealing and irradiation dose have a negligible influence on the recombination site of luminescence probed by OSL. Phototransferred thermoluminescence (PTTL) induced from annealed samples using 470 nm blue light was also investigated. The quartz samples were annealed at 500 °C for 10 minutes, 900 °C for 10, 30 or 60 minutes, and 1000 °C for 10 minutes prior to use. The glow curves of conventional TL measured at 1 °C s⁻¹ following irradiation to 200 Gy show six peaks in each case, labelled I-VI for ease of reference, whereas peaks observed under PTTL are referred to as A1 onwards. Only the first three peaks were reproduced under phototransfer for the samples annealed at 900 °C for 60 minutes and 1000 °C for 10 minutes. Interestingly, for the intermediate annealing duration of 30 minutes, the only peak that appears under phototransfer is A1. For quartz annealed at 900 °C for 10 minutes, the PTTL appears as long as the preheating temperature does not exceed 560 °C. For all other annealing temperatures, PTTL only appears for preheating to 450 °C and below. This shows that the occupancy of deep electron traps at temperatures beyond 450 °C or 560 °C is low. The activation energies for peaks A1, A2 and A3 were calculated. The PTTL peaks were examined for thermal quenching, and peaks A1 and A3 were found to be affected. The activation energies for thermal quenching were determined as 0.62 ± 0.04 eV and 0.65 ± 0.02 eV for peaks A1 and A3 respectively. The experimental dependence of PTTL intensity on illumination time is modelled using sets of coupled linear differential equations based on systems of donors and acceptors whose number is determined by the preheating temperature.
- Full Text:
- Date Issued: 2020
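The final point of the abstract above, modelling PTTL intensity versus illumination time with coupled linear differential equations, can be illustrated with a minimal two-level donor-acceptor system. All rate constants below are invented for illustration; the thesis determines the actual number of levels and their parameters from the preheating experiments:

```python
# Illustrative donor-acceptor phototransfer model (assumed rates):
#   dD/dt = -a*D          (optical emptying of the deep donor trap)
#   dA/dt =  f*a*D - b*A  (acceptor fills by phototransfer, loses charge at rate b)
a, b, f = 1.0, 0.3, 0.8      # illustrative rate constants, not fitted values
dt, steps = 1e-3, 10_000     # forward-Euler integration up to t = 10 (arb. units)
D, A = 1.0, 0.0              # donor starts full, acceptor starts empty
pttl = []                    # PTTL signal taken as proportional to A(t)
for _ in range(steps):
    D, A = D + (-a * D) * dt, A + (f * a * D - b * A) * dt
    pttl.append(A)
```

The signal rises while the donor feeds the acceptor and then decays once the donor is exhausted, reproducing the peaked dependence of PTTL intensity on illumination time that such models are fitted to.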
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the measurement set, the most common format for interferometer data, and is well integrated with Montblanc - a third party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding source 3C147. These tests include direction-independent and direction dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor.
- Full Text:
- Date Issued: 2019
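The complex-optimisation framework underlying CubiCal admits very compact solvers in the direction-independent case. The following is a minimal alternating least-squares (StEFCal-style) gain solver for a single point-source model, offered as a sketch of the idea rather than CubiCal's actual implementation; the antenna count, true gains and unit model visibility are invented for illustration:

```python
import cmath

N = 4                                     # toy array size (assumption)
g_true = [1.2 + 0.0j, cmath.exp(0.4j),
          0.8 * cmath.exp(-0.2j), 1.1 * cmath.exp(0.1j)]
M = 1.0 + 0.0j                            # unit point-source model visibility

# Noise-free observed visibilities V_pq = g_p * M * conj(g_q)
V = [[g_true[p] * M * g_true[q].conjugate() for q in range(N)] for p in range(N)]

g = [1.0 + 0.0j] * N                      # initial gain guess
for _ in range(300):
    g_next = []
    for p in range(N):
        # Least-squares update for g_p with the other gains held fixed:
        # minimise sum_q |V_pq - g_p * (M * conj(g_q))|^2
        num = sum(V[p][q] * M.conjugate() * g[q] for q in range(N))
        den = sum(abs(M * g[q]) ** 2 for q in range(N))
        g_next.append(num / den)
    # averaging successive iterates (the StEFCal trick) aids convergence
    g = [(go + gn) / 2 for go, gn in zip(g, g_next)]
```

The gains are only determined up to a global phase, so a sensible check compares the reconstructed visibilities g_p M conj(g_q) against the data rather than the gains themselves.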
Modelling storm-time TEC changes using linear and non-linear techniques
- Authors: Uwamahoro, Jean Claude
- Date: 2019
- Subjects: Magnetic storms , Astronomy -- Computer programs , Imaging systems in astronomy , Ionospheric storms , Electrons -- Measurement , Magnetosphere -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92908 , vital:30762
- Description: Statistical models based on empirical orthogonal function (EOF) analysis and non-linear regression analysis (NLRA) were developed to estimate the ionospheric total electron content (TEC) during geomagnetic storms. The well-known least squares method (LSM) and the Metropolis-Hastings algorithm (MHA) were used as optimization techniques to determine the unknown coefficients of the developed analytical expressions. Artificial neural networks (ANNs), the International Reference Ionosphere (IRI) model, and the Multi-Instrument Data Analysis System (MIDAS) tomographic inversion algorithm were also applied to storm-time TEC modelling/reconstruction for various latitudes of the African sector and surrounding areas. This work presents some of the first statistical modelling of the mid-latitude and low-latitude ionosphere during geomagnetic storms that includes solar, geomagnetic and neutral wind drivers. Development and validation of the empirical models were based on storm-time TEC data derived from global positioning system (GPS) measurements over ground receivers within Africa and surrounding areas. The storm criterion applied was Dst ≤ −50 nT and/or Kp > 4. The performance evaluation of MIDAS against ANNs in reconstructing storm-time TEC over the African low- and mid-latitude regions showed that MIDAS and ANNs provide comparable results, with respective mean absolute error (MAE) values of 4.81 and 4.18 TECU. The ANN model was, however, found to perform 24.37 % better than MIDAS at estimating storm-time TEC for the low latitudes, while MIDAS is 13.44 % more accurate than the ANN for the mid-latitudes. When their performances are compared with the IRI model, both MIDAS and the ANN model were found to provide more accurate storm-time TEC reconstructions for the African low- and mid-latitude regions. 
A comparative study of the performance of the EOF, NLRA, ANN, and IRI models in estimating TEC during geomagnetic storm conditions over various latitudes showed that the ANN model is about 10 %, 26 %, and 58 % more accurate than the EOF, NLRA, and IRI models, respectively, while EOF was found to perform 15 % and 44 % better than NLRA and IRI, respectively. It was further found that the NLRA model is 25 % more accurate than the IRI model. We have also investigated, for the first time, the contribution of meridional neutral winds (from the Horizontal Wind Model) to storm-time TEC modelling in the low-latitude and the northern and southern hemisphere mid-latitude regions of the African sector, based on ANN models. Statistics have shown that the inclusion of the meridional wind velocity in TEC modelling during geomagnetic storms leads to percentage improvements of about 5 % for the low-latitude region, and 10 % and 5 % for the northern and southern hemisphere mid-latitude regions, respectively. High-latitude storm-induced winds and the inter-hemispheric flow of the meridional winds from the summer to the winter hemisphere are suggested to be associated with these improvements.
- Full Text:
- Date Issued: 2019
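The storm selection criterion quoted in the abstract above (Dst ≤ −50 nT and/or Kp > 4) is simple enough to state as code; the function name and its use as a data filter are illustrative:

```python
def is_storm_time(dst_nT: float, kp: float) -> bool:
    """Flag an epoch as geomagnetically disturbed, using the storm
    criterion from the thesis: Dst <= -50 nT and/or Kp > 4."""
    return dst_nT <= -50.0 or kp > 4.0

# e.g. filter a list of (timestamp, dst, kp) records down to storm epochs:
# storms = [rec for rec in records if is_storm_time(rec[1], rec[2])]
```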
Observing cosmic reionization with PAPER: polarized foreground simulations and all sky images
- Authors: Nunhokee, Chuneeta Devi
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Epoch of reionization -- Research , Hydrogen -- Spectra , Radio interferometers
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/68203 , vital:29218
- Description: The Donald C. Backer Precision Array to Probe the Epoch of Reionization (PAPER, Parsons et al., 2010) was built with the aim of detecting the redshifted 21 cm Hydrogen line, which is likely the best probe of the thermal evolution of the intergalactic medium and the reionization of neutral Hydrogen in our Universe. Observations of the 21 cm signal are challenged by bright astrophysical foregrounds and systematics that require precise modeling in order to extract the cosmological signal. In particular, the instrumental leakage of polarized foregrounds may contaminate the 21 cm power spectrum. In this work, we developed a formalism to describe the leakage due to instrumental widefield effects in visibility-based power spectra and used it to predict contamination in observations. We find the leakage due to a population of point sources to be higher than that of the diffuse Galactic emission, for which we predict minimal contamination at k > 0.3 h Mpc⁻¹. We also analyzed data from the last observing season of PAPER via all-sky imaging with a view to characterizing the foregrounds. We generated an all-sky catalogue of 88 sources down to a flux density of 5 Jy. Moreover, we measured both the polarized point-source emission and the Galactic diffuse emission, and used these measurements to constrain our model of polarization leakage. We find the leakage due to a population of point sources to be 12% lower than the prediction from our polarized model.
- Full Text:
- Date Issued: 2019
Statistical Analysis of the Radio-Interferometric Measurement Equation, a derived adaptive weighting scheme, and applications to LOFAR-VLBI observation of the Extended Groth Strip
- Authors: Bonnassieux, Etienne
- Date: 2019
- Subjects: Radio astronomy , Astrophysics , Astrophysics -- Instruments -- Calibration , Imaging systems in astronomy , Radio interferometers , Radio telescopes , Astronomy -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/93789 , vital:30942
- Description: J.R.R. Tolkien wrote, in his Mythopoeia, that “He sees no stars who does not see them first, of living silver made that sudden burst, to flame like flowers beneath the ancient song”. In his defense of myth-making, he formulates the argument that the attribution of meaning is an act of creation - that “trees are not ‘trees’ until so named and seen” - and that this capacity for creation defines the human creature. The scientific endeavour, in this context, can be understood as a social expression of a fundamental feature of humanity, and from this endeavour flows much understanding. This thesis, one thread among many, focuses on the study of astronomical objects as seen by the radio waves they emit. What are radio waves? Electromagnetic waves were theorised by James Clerk Maxwell (Maxwell 1864) in his great theoretical contribution to modern physics, their speed matching the speed of light as measured by Ole Christensen Rømer and, later, James Bradley. It was not until Heinrich Rudolf Hertz’s 1887 experiment that these waves were measured in a laboratory, leading to the dawn of radio communications - and, later, radio astronomy. The link between radio waves and light was one of association: light is known to behave as a wave (Young double-slit experiment), with the same propagation speed as electromagnetic radiation. Light “proper” is also known to exist beyond the optical regime: Herschel’s experiment shows that when diffracted through a prism, sunlight warms even those parts of a desk which are not observed to be lit (first evidence of infrared light). The link between optical light and unseen electromagnetic radiation is then an easy step to make, and one confirmed through countless technological applications (e.g. optical fiber, to name but one). And as soon as this link is established, a question immediately comes to the mind of the astronomer: what does the sky, our Universe, look like to the radio “eye”? 
Radio astronomy has a short but storied history: from Karl Jansky’s serendipitous observation of the centre of the Milky Way, which outshines our Sun in the radio regime, in 1933, to Grote Reber’s hand-built back-yard radio antenna in 1937, which successfully detected radio emission from the Milky Way itself, to such monumental projects as the Square Kilometer Array and its multiple pathfinders, it has led to countless discoveries and the opening of a truly new window on the Universe. The work presented in this thesis is a contribution to this discipline - the culmination of three years of study, which is a rather short time to get a firm grasp of radio interferometry both in theory and in practice. The need for robust, automated methods - which are improving daily, thanks to the tireless labour of the scientists in the field - is becoming ever stronger as the SKA approaches, looming large on the horizon; but even today, in the precursor era of LOFAR, MeerKAT and other pathfinders, it is keenly felt. When I started my doctorate, the sheer scale of the task at hand felt overwhelming - to actually be able to contribute to its resolution seemed daunting indeed! Thankfully, as the saying goes, no society sets for itself material goals which it cannot achieve. This thesis took place at an exciting time for radio interferometry: at the start of my doctorate, the LOFAR international stations were - to my knowledge - only beginning to be used, and even then, only tentatively; MeerKAT had not yet shown its first light; the techniques used throughout my work were still being developed. At the time of writing, great strides have been made. One of the greatest technical challenges of LOFAR - imaging using the international stations - is starting to become reality. This technical challenge is the key problem that this thesis set out to address. While we only achieved partial success so far, it is a testament to the difficulty of the task that it is not yet truly resolved. 
One of the major results of this thesis is a model of a bright resolved source near a famous extragalactic field: properly modeling this source not only allows the use of international LOFAR stations, but also grants deeper access to the extragalactic field itself, which is otherwise polluted by the 3C source’s sidelobes. This result was only achieved thanks to the other major result of this thesis: the development of a theoretical framework with which to better understand the effect of calibration errors on images made from interferometric data, and an algorithm to strongly mitigate them. The structure of this manuscript is as follows: we begin with an introduction to radio interferometry, LOFAR, and the emission mechanisms which dominate for our field of interest. These introductions are primarily intended to give a brief overview of the technical aspects of the data reduced in this thesis. We follow with an overview of the Measurement Equation formalism, which underpins our theoretical work. This is the keystone of this thesis. We then show the theoretical work that was developed as part of the research work done during the doctorate - which was published in Astronomy & Astrophysics. Its practical application - a quality-based weighting scheme - is used throughout our data reduction. This data reduction is the next topic of this thesis: we contextualise the scientific interest of the data we reduce, and explain both the methods and the results we achieve.
- Full Text:
- Date Issued: 2019
Advanced radio interferometric simulation and data reduction techniques
- Authors: Makhathini, Sphesihle
- Date: 2018
- Subjects: Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57348 , vital:26875
- Description: This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
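The quoted dynamic range follows directly from the two figures given in the abstract. A minimal sketch of the arithmetic (values taken from the abstract; this is an illustration, not part of the original pipeline):

```python
# Dynamic range of an interferometric image = peak flux density / image rms noise.
# Values quoted above: 22.58 Jy/beam peak (3C147), 2.87 uJy/beam 1-sigma noise.
peak_jy = 22.58        # Jy/beam
noise_jy = 2.87e-6     # Jy/beam

dynamic_range = peak_jy / noise_jy
print(f"{dynamic_range:.2e}")  # ~7.87e+06, i.e. roughly 8x10^6 : 1
```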
- Full Text:
- Date Issued: 2018
Behaviour of quiet time ionospheric disturbances at African equatorial and midlatitude regions
- Authors: Orford, Nicola Diane
- Date: 2018
- Subjects: Ionospheric storms , Ionospheric storms -- Africa , Ionosphere , Plasmasphere , Q-disturbances , Total electron content (TEC)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62672 , vital:28228
- Description: Extreme ionospheric and geomagnetic disturbances affect technology adversely. Pre-storm enhancements, considered a potential predictor of geomagnetic storms, occur during quiet conditions prior to geomagnetic disturbances. The ionosphere experiences general disturbances during quiet geomagnetic conditions and these Q-disturbances remain unexplored over Africa. This study used TEC data to characterize the morphology of Q-disturbances over Africa, exploring variations with solar cycle, season, time of occurrence and latitude. Observations from 10 African GPS stations in the equatorial and midlatitude regions show that Q-disturbances in the equatorial region are predominantly driven by E × B variations, while multiple mechanisms affect the midlatitude region. Q-disturbances occur more frequently during nighttime than during daytime and no seasonal trend is observed. Midlatitude Q-disturbance mechanisms are explored in depth, considering substorm activity, the plasmaspheric contribution to GPS TEC and plasma transfer between conjugate points. Substorm activity is not a dominant mechanism, although Q-disturbances occurring under elevated substorm conditions tend to have longer duration and larger amplitude than general Q-disturbances. Many observed Q-disturbances become non-significant once the plasmaspheric contribution to the TEC measurements is removed, indicating that these disturbances occur within the plasmasphere, and not the ionosphere. Transfer of plasma between conjugate points does not seem to be a mechanism driving Q-disturbances, as the corresponding nighttime behaviour expected between depletions in the summer hemisphere and enhancements in the winter hemisphere is not observed. Pre-storm enhancements occur infrequently, rendering them a poor predictor of geomagnetic disturbances. 
Pre-storm enhancement morphology does not differ significantly from general quiet time enhancement morphology, suggesting pre-storms are not a special case of Q-disturbances.
- Full Text:
- Date Issued: 2018
Combined spectral and stimulated luminescence study of charge trapping and recombination processes in α-Al2O3:C
- Authors: Nyirenda, Angel Newton
- Date: 2018
- Subjects: Luminescence , Thermoluminescence , Luminescence spectroscopy , Carbon-doped aluminium oxide , Radioluminescence , Time-resolved X-ray excited optical luminescence
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62683 , vital:28235
- Description: The main objective of this project was to gain a deeper and better understanding of the luminescence processes in α-Al₂O₃:C, a highly-sensitive dosimetric material, using a combined spectral and stimulated luminescence study. The spectral studies concentrated on the emission spectra obtained using X-ray induced radioluminescence (XERL), thermoluminescence (XETL) and time-resolved X-ray excited optical luminescence (TR-XEOL) techniques. The stimulated luminescence studies were based on thermoluminescence (TL), optically stimulated luminescence (OSL) and phototransferred TL (PTTL) methods that were used in the study of the radiation-induced defects at high beta-doses and the deep traps, that is, traps with thermal depths beyond 500°C. The spectral and stimulated luminescence measurements were carried out using a high sensitivity luminescence spectrometer and a Risø TL/OSL Model DA-20 Reader, respectively. The XERL emission spectrum measured at room temperature shows seven Gaussian peaks associated with F-centres (420 nm), F+-centres (334 nm), F2+-centres (559 nm), the Stokes vibronic band of Cr3+ (671 nm), Cr3+ R-line emission (694 nm), the anti-Stokes vibronic band of Cr3+ (710 nm) and an unidentified emission band (260-300 nm) which we associate with hole recombinations at a luminescence centre. The 694-nm R-line emission from Cr3+ impurity ions is most likely due to recombination of holes at Cr2+ during stimulated luminescence and as a result of an intracentre excitation of Cr3+ in photoluminescence (PL) due to photon absorption. The Cr3+ emission decreases in intensity, whereas the intensity of the F-centre emission band is almost constant with repeated XERL measurements. Depending on the amount of X-ray irradiation dose, both holes and/or electrons may take part in the emission processes of peaks I (30-80°C), II (90-250°C) and III (250-320°C) during a TL readout, although electron recombination is dominant regardless of dose. 
At higher doses, the XETL emission spectra indicate that the dominant band associated with TL peak III (250-320°C) in the material shifts from the F-centre to Cr3+. Using the deep-traps OSL, it has been confirmed that the main TL trap is also the main OSL trap, whereas the TL traps lying in the temperature range of 400-550°C constitute the secondary OSL traps. There is evidence of strong retrapping at the main trap during optical stimulation of charges from the secondary OSL traps and the deep traps; this retrapping occurs via the delocalized bands. At high-irradiation beta-doses, aggregate defect centres which significantly alter the TL and OSL properties are induced in the material. The induced aggregate centres are completely obliterated by heating a sample to 700°C. The radiation-induced defects cause the main TL peak to shift towards higher temperatures, increase its FWHM, reduce its maximum intensity and cause an underestimation of both the activation energy and order of kinetics of the peak. On the other hand, the OSL response of the material is enhanced following a high-irradiation dose. During sample storage in the dark at ambient temperature, charges migrate from the deep traps (donors) to the main and intermediate traps (acceptors); the major donor traps in this charge transfer phenomenon lie between 500-600°C.
- Full Text:
- Date Issued: 2018
Long-term analysis of ionospheric response during geomagnetic storms in mid, low and equatorial latitudes
- Authors: Matamba, Tshimangadzo Merline
- Date: 2018
- Subjects: Ionospheric storms , Coronal mass ejections , Corotating interaction regions , Solar flares , Global Positioning System , Ionospheric critical frequencies , Equatorial Ionization Anomaly (EIA)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63991 , vital:28517
- Description: Understanding changes in the ionosphere is important for High Frequency (HF) communications and navigation systems. Ionospheric storms are the disturbances in the Earth’s upper atmosphere due to solar activities such as Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and solar flares. This thesis reports for the first time on an investigation of ionospheric response to great geomagnetic storms (Disturbance storm time, Dst ≤ −350 nT) that occurred during solar cycle 23. The storm periods analysed were 29 March - 02 April 2001, 27 - 31 October 2003, 18 - 23 November 2003 and 06 - 11 November 2004. Global Navigation Satellite System (GNSS) Total Electron Content (TEC) and ionosonde critical frequency of the F2 layer (foF2) data over northern hemisphere (European sector) and southern hemisphere (African sector) mid-latitudes were used to study the ionospheric responses within 15°E - 40°E longitude and ±31° - ±46° geomagnetic latitude. Mid-latitude regions within the same longitude sector in both hemispheres were selected in order to assess the contribution of the low latitude changes, especially the expansion of the Equatorial Ionization Anomaly (EIA), also known as the dayside ionospheric super-fountain effect, during these storms. In all storm periods, both negative and positive ionospheric responses were observed in both hemispheres. Negative ionospheric responses were mainly due to changes in neutral composition, while the expansion of the EIA led to pronounced positive ionospheric storm effects at mid-latitudes for some storm periods. In other cases (e.g. 29 October 2003), Prompt Penetration Electric Fields (PPEF), EIA expansion and large scale Traveling Ionospheric Disturbances (TIDs) were found to be present during the positive storm effect at mid-latitudes in both hemispheres. An increase in TEC on 28 October 2003 was due to the large solar flare with previously determined intensity of X45 ± 5. 
A further statistical analysis of ionospheric storm effects due to Corotating Interaction Region (CIR)- and Coronal Mass Ejection (CME)-driven storms was performed. The storm periods analyzed occurred during the period 2001 - 2015, which covers part of solar cycles 23 and 24. The Dst ≤ −30 nT and Kp ≥ 3 indices were used to identify the storm periods considered. Ionospheric TEC derived from IGS stations that lie within 30°E - 40°E geographic longitude at mid, low and equatorial latitudes over the African sector was used. The statistical analysis of ionospheric storm effects was compared over mid, low and equatorial latitudes in the African sector for the first time. Positive ionospheric storm effects were more prevalent during both CME-driven and CIR-driven storms over all stations considered in this study. Negative ionospheric storm effects occurred only during CME-driven storms over mid-latitude stations and were more prevalent in summer. The other interesting finding is that, for the stations considered over mid, low and equatorial latitudes, negative-positive ionospheric responses were only observed over low and equatorial latitudes. A significant number of cases where the electron density changes remained within the background variability during storm conditions were observed over the low latitude stations compared to other latitude regions.
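The storm-identification criterion described above (Dst ≤ −30 nT and Kp ≥ 3) is a simple two-index threshold test. A minimal sketch of how such a filter could be applied, using illustrative index values rather than real data:

```python
# Sketch of the storm-period selection criterion from the abstract above:
# a period is flagged when Dst <= -30 nT AND Kp >= 3.
# The hourly values below are illustrative, not real geomagnetic data.
hourly = [
    {"hour": 0, "dst": -12, "kp": 2},   # quiet
    {"hour": 1, "dst": -35, "kp": 3},   # meets both criteria
    {"hour": 2, "dst": -60, "kp": 5},   # meets both criteria
    {"hour": 3, "dst": -25, "kp": 4},   # Dst above threshold -> not selected
]

def is_storm(sample, dst_threshold=-30, kp_threshold=3):
    """Return True when both geomagnetic criteria are satisfied."""
    return sample["dst"] <= dst_threshold and sample["kp"] >= kp_threshold

storm_hours = [s["hour"] for s in hourly if is_storm(s)]
print(storm_hours)  # [1, 2]
```

Note that both conditions must hold simultaneously, so a disturbed Kp alone (hour 3) does not qualify a period as a storm.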
- Full Text:
- Date Issued: 2018
Modelling Ionospheric vertical drifts over the African low latitude region
- Authors: Dubazane, Makhosonke Berthwell
- Date: 2018
- Subjects: Ionospheric drift , Magnetometers , Functions, Orthogonal , Neural networks (Computer science) , Ionospheric electron density -- Africa , Communication and Navigation Outage Forecasting Systems (C/NOFS)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63356 , vital:28396
- Description: Low/equatorial latitudes vertical plasma drifts and electric fields govern the formation and changes of ionospheric density structures which affect space-based systems such as communications, navigation and positioning. Dynamical and electrodynamical processes play important roles in plasma distribution at different altitudes. Because of the high variability of E × B drift in low latitude regions, coupled with various processes that sometimes originate from high latitudes especially during geomagnetic storm conditions, it is challenging to develop accurate vertical drift models. This is despite the fact that there are very few instruments dedicated to provide electric field and hence E × B drift data in low/equatorial latitude regions. To this effect, there exists no ground-based instrument for direct measurements of E×B drift data in the African sector. This study presents the first time investigation aimed at modelling the long-term variability of low latitude vertical E × B drift over the African sector using a combination of Communication and Navigation Outage Forecasting Systems (C/NOFS) and ground-based magnetometer observations/measurements during 2008-2013. Because the approach is based on the estimation of equatorial electrojet from ground-based magnetometer observations, the developed models are only valid for local daytime. Three modelling techniques have been considered. The application of Empirical Orthogonal Functions and partial least squares has been performed on vertical E × B drift modelling for the first time. The artificial neural networks that have the advantage of learning underlying changes between a set of inputs and known output were also used in vertical E × B drift modelling. Due to lack of E×B drift data over the African sector, the developed models were validated using satellite data and the climatological Scherliess-Fejer model incorporated within the International Reference Ionosphere model. 
A maximum correlation coefficient of ∼0.8 was achieved when validating the developed models against C/NOFS E × B drift observations that were not used in any model development. Most of the time, the climatological model overestimates the local daytime vertical E × B drift velocities. The methods and approach presented in this study provide a basis for constructing vertical E × B drift databases in longitude sectors that lack radar instrumentation. This will in turn make it possible to study the day-to-day variability of vertical E × B drift and may ultimately lead to regional and global models that incorporate local-time information in different longitude sectors.
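One of the three techniques named above, Empirical Orthogonal Function (EOF) analysis, can be sketched with a singular value decomposition. This is a generic illustration under assumed shapes and synthetic data, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "drift" data: rows = days, columns = local-time bins (assumed shape).
data = rng.standard_normal((365, 24))

# Remove the time mean, then decompose the anomaly with SVD: anomaly ≈ U S Vt.
mean = data.mean(axis=0)
anomaly = data - mean
U, S, Vt = np.linalg.svd(anomaly, full_matrices=False)

# Keep the leading k EOFs (rows of Vt) and their principal components.
k = 3
reconstruction = mean + (U[:, :k] * S[:k]) @ Vt[:k, :]

# Fraction of variance captured by the leading k modes.
explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
print(round(explained, 3))
```

A model of this form predicts the drift at a given local time by regressing the principal components on external drivers (e.g. season and magnetometer-derived electrojet strength), then reconstructing from the retained EOFs.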
- Full Text:
- Date Issued: 2018
Tomographic imaging of East African equatorial ionosphere and study of equatorial plasma bubbles
- Authors: Giday, Nigussie Mezgebe
- Date: 2018
- Subjects: Ionosphere -- Africa, Central , Tomography -- Africa, Central , Global Positioning System , Neural networks (Computer science) , Space environment , Multi-Instrument Data Analysis System (MIDAS) , Equatorial plasma bubbles
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63980 , vital:28516
- Description: Although the African equatorial ionospheric region has the largest ground footprint along the geomagnetic equator, it has not been well studied, owing to the absence of adequate ground-based instruments. This thesis presents research on both tomographic imaging of the African equatorial ionosphere and the study of ionospheric irregularities/equatorial plasma bubbles (EPBs) under varying geomagnetic conditions. The Multi-Instrument Data Analysis System (MIDAS), an inversion algorithm, was investigated for its validity and ability as a tool to reconstruct multi-scale ionospheric structures under different geomagnetic conditions. This was done for the narrow East African longitude sector, using data from the available ground-based Global Positioning System (GPS) receivers. The MIDAS results were compared with those of two models, the IRI and GIM. MIDAS compared more favourably with the observed vertical total electron content (VTEC), with a maximum correlation coefficient (r) of 0.99 and a minimum root-mean-square error (RMSE) of 2.91 TECU, than did the IRI-2012 and GIM models, with maximum r of 0.93 and 0.99 and minimum RMSE of 13.03 TECU and 6.52 TECU, respectively, over all the test stations and validation days. The ability of MIDAS to reconstruct storm-time TEC was also compared with results produced by an Artificial Neural Network (ANN) for the African low- and mid-latitude regions. In terms of latitude, on average, MIDAS performed 13.44% better than the ANN in the African mid-latitudes, while it underperformed at low latitudes. This thesis also reports on the effects of moderate geomagnetic conditions on the evolution of EPBs and/or ionospheric irregularities during their season of occurrence, using measurements from space- and ground-based instruments for the East African equatorial sector. 
The study showed that the strength of the daytime equatorial electrojet (EEJ), the steepness of the TEC peak-to-trough gradient and/or the meridional/transequatorial thermospheric winds sometimes have collective, interwoven effects, while at other times one mechanism dominates. In summary, this research offers tomographic results that outperform those of the commonly used (“standard”) global models (IRI and GIM) for a longitude sector of importance to space weather that has not been adequately studied owing to a lack of sufficient instrumentation.
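The two validation statistics quoted above, the correlation coefficient r and the RMSE in TECU, can be computed as follows; the sample VTEC values are illustrative, not taken from the thesis:

```python
import numpy as np

def validation_stats(observed, modelled):
    """Pearson r and root-mean-square error between two series."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    r = float(np.corrcoef(observed, modelled)[0, 1])
    rmse = float(np.sqrt(np.mean((observed - modelled) ** 2)))
    return r, rmse

obs = [10.0, 15.0, 22.0, 30.0, 28.0]   # hypothetical VTEC observations (TECU)
mod = [11.0, 14.0, 23.5, 29.0, 27.0]   # hypothetical reconstructed VTEC (TECU)
r, rmse = validation_stats(obs, mod)
print(round(r, 3), round(rmse, 3))
```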
- Full Text:
- Date Issued: 2018
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. 
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
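The central idea above, averaging as convolution by a boxcar window, can be checked numerically: for a source whose fringe phase changes by dphi across an averaging bin, the boxcar window attenuates the amplitude by the familiar sinc factor. The phase-swing value below is an illustrative assumption, not a thesis parameter:

```python
import numpy as np

def boxcar_attenuation(dphi):
    """Amplitude reduction |<exp(i*phi)>| for a linear phase swing dphi (rad).

    np.sinc(x) = sin(pi*x)/(pi*x), so this evaluates sin(dphi/2)/(dphi/2).
    """
    return float(np.abs(np.sinc(dphi / (2.0 * np.pi))))

# Direct check: average the complex fringe numerically across the bin.
dphi = 1.0
phases = np.linspace(-dphi / 2, dphi / 2, 10001)
numeric = float(np.abs(np.mean(np.exp(1j * phases))))
print(round(boxcar_attenuation(dphi), 4), round(numeric, 4))
```

Replacing the boxcar with a baseline-dependent window reshapes this attenuation curve, which is the mechanism the thesis exploits to flatten the response over a chosen field of interest.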
- Full Text:
- Date Issued: 2017
Nonlinear optical responses of phthalocyanines in the presence of nanomaterials or when embedded in polymeric materials
- Authors: Bankole, Owolabi Mutolib
- Date: 2017
- Subjects: Phthalocyanines , Phthalocyanines -- Optical properties , Alkynes , Triazoles , Nonlinear optics , Photochemistry , Complex compounds , Amines , Mercaptopyridine
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/45794 , vital:25548
- Description: This work describes the synthesis, photophysical and nonlinear optical characterization of alkynyl Pcs (1, 2, 3, 8 and 9), a 1,2,3-triazole ZnPc (4), mercaptopyridine Pcs (5, 6 and 7) and amino Pcs (10 and 11). Complexes 1, 2, 4, 7, 8, 9 and 11 were newly synthesized and characterized using techniques including ¹H-NMR, MALDI-TOF, UV-visible spectrophotometry, FTIR and elemental analysis. The characterization results were in good agreement with the molecular structures and confirmed the purity of the new molecules. Complex 10 was covalently linked to pristine (GQDs), nitrogen-doped (NGQDs) and sulfur-nitrogen co-doped (SNGQDs) graphene quantum dots; gold nanoparticles (AuNPs); poly(acrylic acid) (PAA); and Fe3O4@Ag core-shell and Fe3O4-Ag hybrid nanoparticles. Complex 11 was linked to AgxAuy alloy nanoparticles via NH2-Au and/or Au-S bonding, while 2 and 3 were linked to gold nanoparticles (AuNPs) via click reactions. Evidence of successful conjugation of 2, 3, 10 and 11 to the nanomaterials was provided by the UV-vis, EDS, TEM, XRD and XPS spectra. Optical limiting (OL) responses of the samples were evaluated using the open-aperture Z-scan technique at 532 nm with 10 ns pulses, in solution or when embedded in polymer matrices. The Z-scan data for the studied samples fitted a two-photon absorption (2PA) mechanism, but the Pcs and Pc-nanomaterial or polymer composites also exhibit multi-photon absorption mechanisms, aided by the triplet-triplet population, giving rise to reverse saturable absorption (RSA). Phthalocyanines doped in polymer matrices showed larger nonlinear absorption coefficients (βeff), third-order susceptibilities (Im[χ(3)]) and second-order hyperpolarizabilities (γ), with an accompanying lower intensity threshold, than in solution. Aggregation in DMSO negatively affected the NLO behaviour of Pcs (8 as a case study) at low laser power, while the behaviour improved at relatively higher laser power. 
Heavy-atom-substituted Pcs (6) showed enhanced NLO and OL properties compared with the lighter-atom analogues 5 and 7. A direct relationship was established between the enhanced photophysical properties and the nonlinear effects, favoured by excited triplet-state absorption of 2, 3, 10 and 11 in the presence of nanomaterials. The major factor responsible for the enhanced nonlinearities of 10 in the presence of NGQDs and SNGQDs was described fully and attributed to surface defects caused by the presence of heteroatoms such as nitrogen and sulfur. The studies showed that phthalocyanine-nanomaterial composites are useful in applications such as optical switching, pulse compression and laser pulse narrowing.
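The 2PA mechanism named above has a standard first-order open-aperture Z-scan signature: a symmetric transmittance dip at the focus. The sketch below uses that textbook approximation with illustrative values of q0 (= βeff·I0·Leff) and the Rayleigh range z0, not fitted thesis parameters:

```python
import numpy as np

def transmittance(z, z0, q0):
    """Open-aperture 2PA Z-scan: T(z) ≈ 1 - q0 / (2*sqrt(2)*(1 + (z/z0)**2)),
    valid for small q0."""
    x = z / z0
    return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + x ** 2))

z = np.linspace(-20e-3, 20e-3, 201)   # sample position along the beam (m)
z0 = 5e-3                             # Rayleigh range (m), assumed
q0 = 0.4                              # q0 = beta_eff * I0 * Leff, assumed
T = transmittance(z, z0, q0)
print(round(float(T.min()), 4))       # deepest dip at focus (z = 0)
```

Fitting measured normalized transmittance to this curve yields q0, from which βeff follows once the on-axis intensity I0 and effective sample length Leff are known.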
- Full Text:
- Date Issued: 2017
Development of an ionospheric map for Africa
- Authors: Ssessanga, Nicholas
- Date: 2014
- Subjects: Ionosondes , Ionosphere , Ionosphere -- Observations , Ionosphere -- Research -- Africa , Ionospheric electron density -- Africa , Ionospheric critical frequencies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5519 , http://hdl.handle.net/10962/d1011498
- Description: This thesis presents research pertaining to the development of an African Ionospheric Map (AIM). An ionospheric map is a computer program that displays spatial and temporal representations of ionospheric parameters, such as electron density and critical plasma frequencies, for any geographical location on the map. The purpose of this development was to make optimal use of all available data sources, namely ionosondes, satellites and models, and to implement error-minimisation techniques in order to obtain the best result at any given location on the African continent. The focus was placed on the accurate estimation of three upper-atmosphere parameters that are important for radio communications: the critical frequency of the F2 layer (foF2), the Total Electron Content (TEC) and the maximum usable frequency over a distance of 3000 km (M3000F2). The results show that the AIM provided a more accurate estimation of the three parameters than the internationally recognised and recommended ionospheric model (IRI-2012) used on its own. The AIM is therefore a more accurate solution than single independent data sources for applications requiring ionospheric mapping over the African continent.
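One simple way to combine several data sources at a grid point, in the error-minimising spirit described above, is inverse-variance weighting. This is a generic illustration with hypothetical foF2 values, not the actual AIM algorithm:

```python
def combine(estimates, variances):
    """Minimum-variance combination of independent estimates of one quantity.

    Weighting each estimate by 1/variance minimises the variance of the
    combined value; the combined variance is 1/sum(weights).
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    value = sum(w * e for w, e in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical foF2 estimates (MHz) from an ionosonde, a satellite and a model,
# with assumed error variances reflecting each source's accuracy.
value, var = combine([7.2, 7.6, 6.9], [0.04, 0.25, 0.49])
print(round(value, 3), round(var, 4))
```

The combined estimate is pulled towards the most accurate source (here the ionosonde), and its variance is smaller than that of any single input.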
- Full Text:
- Date Issued: 2014