A comparative polarimetric study of the 43 GHz and 86 GHz SiO masers toward the supergiant star VY CMa
- Authors: Richter, Laura
- Date: 2012
- Subjects: Masers , Supergiant stars , Polarization (Light) , Very long baseline interferometry
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5454 , http://hdl.handle.net/10962/d1005239
- Description: The aim of this thesis is to perform observational tests of SiO maser polarisation and excitation models, using component-level comparisons of multiple SiO maser transitions in the 43 GHz and 86 GHz bands at milliarcsecond resolution. These observations require very long baseline interferometric imaging with very accurate polarimetric calibration. The supergiant star VY CMa was chosen as the object of this study due to its high SiO maser luminosity, many detected SiO maser lines, and intrinsic scientific interest. Two epochs of full-polarisation VLBA observations of VY CMa were performed. The Epoch 2 observations were reduced using several new data reduction methods developed as part of this work, designed specifically to improve the accuracy of circular polarisation calibration of spectral-line VLBI observations at millimetre wavelengths. The accuracy is estimated to be better than 1% using these methods. The Epoch 2 images show a concentration of v=1 and v=2 J=1-0 SiO masers to the east and northeast of the assumed stellar position. The v=1 J=2-1 masers were more evenly distributed around the star, with a notable lack of emission in the northeast. There is appreciable spatial overlap between these three lines. The nature of the overlap is generally consistent with the predictions of hydrodynamical circumstellar SiO maser simulations. Where the v=1 J=1-0 and J=2-1 features overlap, the v=1 J=2-1 emission is usually considerably weaker. This is not predicted by current hydrodynamical models, but can be explained in the context of collisional pumping in a low-density environment.
Six observational tests of weak-splitting maser polarisation models were performed, including intercomparisons of linear polarisation in the v=1 J=1-0 and J=2-1 lines, linear polarisation versus saturation level, linear polarisation versus distance from the star, circular polarisation in the v=1 J=1-0 and J=2-1 lines, circular versus linear polarisation, and modelling of ~90° electric-vector position angle rotations. The polarisation model tests generally do not support non-Zeeman circular polarisation mechanisms. For the linear polarisation tests, the results are more consistent with models that predict similar linear polarisation across transitions. The scientific importance of these tests is described in detail and avenues for future work are outlined.
- Full Text:
- Date Issued: 2012
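The linear and circular polarisation fractions and the electric-vector position angle compared in these tests derive from the Stokes parameters in the standard way. A minimal sketch with illustrative values (not measurements from the thesis):

```python
import math

def polarisation(I, Q, U, V):
    """Fractional linear/circular polarisation and EVPA from Stokes parameters."""
    m_l = math.hypot(Q, U) / I                   # fractional linear polarisation
    m_c = V / I                                  # fractional circular polarisation
    evpa = 0.5 * math.degrees(math.atan2(U, Q))  # electric-vector position angle, degrees
    return m_l, m_c, evpa

# Illustrative maser component: 30% linear, 5% circular polarisation
m_l, m_c, evpa = polarisation(I=1.0, Q=0.3, U=0.0, V=0.05)
```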
A contribution to TEC modelling over Southern Africa using GPS data
- Authors: Habarulema, John Bosco
- Date: 2010
- Subjects: Electrons -- Mathematical models , Radio wave propagation , Global positioning system -- Measurement , Ionospheric radio wave propagation , Atmospheric physics -- Africa, Southern
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5456 , http://hdl.handle.net/10962/d1005241
- Description: Modelling ionospheric total electron content (TEC) is an important area of interest for radio wave propagation, geodesy, surveying, the understanding of space weather dynamics, and error correction in Global Navigation Satellite System (GNSS) applications. With the utilisation of improved ionosonde technology coupled with the use of GNSS, the response of technological systems to changes in the ionosphere during both quiet and disturbed conditions can be historically inferred. TEC values are usually derived from GNSS measurements using mathematically intensive algorithms. However, the techniques used to estimate these TEC values depend heavily on the availability of near-real time GNSS data, and therefore are sometimes unable to generate complete datasets. This thesis investigated possibilities for the modelling of TEC values derived from the South African Global Positioning System (GPS) receiver network using linear regression methods and artificial neural networks (NNs). GPS TEC values were derived using the Adjusted Spherical Harmonic Analysis (ASHA) algorithm. Considering TEC and the factors that influence its variability as “dependent and independent variables” respectively, the capabilities of linear regression methods and NNs for TEC modelling were first investigated using a small dataset from two GPS receiver stations. NN and regression models were separately developed and used to reproduce TEC fluctuations at different stations not included in the models’ development. For this purpose, TEC was modelled as a function of diurnal variation, seasonal variation, and solar and magnetic activities. Comparative analysis showed that NN models provide predictions of GPS TEC that were an improvement on those of the regression models developed. A separate study to empirically investigate the effects of solar wind on GPS TEC was carried out.
Quantitative results indicated that solar wind does not have a significant influence on TEC variability. The final TEC simulation model developed makes use of the NN technique to find the relationship between historical TEC data variations and factors that are known to influence TEC variability (such as solar and magnetic activities, diurnal and seasonal variations and the geographical locations of the respective GPS stations) for the purposes of regional TEC modelling and mapping. The NN technique in conjunction with interpolation and extrapolation methods makes it possible to construct ionospheric TEC maps and to analyse the spatial and temporal TEC behaviour over Southern Africa. For independent validation, modelled TEC values were compared to ionosonde TEC and the International Reference Ionosphere (IRI) generated TEC values during both quiet and disturbed conditions. This thesis provides a comprehensive guide on the development of TEC models for predicting ionospheric variability over the South African region, and forms a significant contribution to ionospheric modelling efforts in Africa.
- Full Text:
- Date Issued: 2010
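Diurnal and seasonal variations of the kind used as model inputs here are commonly presented to a neural network as sine/cosine pairs, so that, for example, 23:00 and 00:00 sit close together in input space. A minimal sketch of such a cyclic encoding (illustrative; the exact input scheme used in the thesis may differ):

```python
import math

def cyclic_inputs(day_number, hour):
    """Encode day-of-year and universal time as sin/cos pairs for an NN."""
    ds = math.sin(2 * math.pi * day_number / 365.25)
    dc = math.cos(2 * math.pi * day_number / 365.25)
    hs = math.sin(2 * math.pi * hour / 24.0)
    hc = math.cos(2 * math.pi * hour / 24.0)
    return ds, dc, hs, hc

# Midnight on day 1 and 23:00 on day 365 map to nearby input points
a = cyclic_inputs(1, 0.0)
b = cyclic_inputs(365, 23.0)
```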
A global ionospheric F2 region peak electron density model using neural networks and extended geophysically relevant inputs
- Authors: Oyeyemi, Elijah Oyedola
- Date: 2006
- Subjects: Neural networks (Computer science) , Ionospheric electron density , Ionosphere , Ionosphere -- Mathematical models
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5470 , http://hdl.handle.net/10962/d1005255
- Description: This thesis presents my research on the development of a neural network (NN) based global empirical model of the ionospheric F2 region peak electron density using extended geophysically relevant inputs. The main principle behind this approach has been to utilize parameters other than simple geographic co-ordinates, on which the F2 peak electron density is known to depend, and to exploit the technique of NNs, thereby establishing and modeling the non-linear dynamic processes (both in space and time) associated with the F2 region electron density on a global scale. Four different models have been developed in this work. These are the foF2 NN model, M(3000)F2 NN model, short-term forecasting foF2 NN, and a near-real time foF2 NN model. Data used in the training of the NNs were obtained from the worldwide ionosonde stations spanning the period 1964 to 1986 based on availability, which included all periods of calm and disturbed magnetic activity. Common input parameters used in the training of all four models are day number (day of the year, DN), Universal Time (UT), a 2 month running mean of the sunspot number (R2), a 2 day running mean of the 3-hour planetary magnetic index ap (A16), solar zenith angle (CHI), geographic latitude (q), magnetic dip angle (I), angle of magnetic declination (D), and angle of meridian relative to subsolar point (M). For the short-term and near-real time foF2 models, additional input parameters related to recent past observations of foF2 itself were included in the training of the NNs. The results of the foF2 NN model and M(3000)F2 NN model presented in this work, which compare favourably with the IRI (International Reference Ionosphere) model, successfully demonstrate the potential of NNs for spatial and temporal modeling of the ionospheric parameters foF2 and M(3000)F2 globally.
The results obtained from the short-term foF2 NN model and near-real time foF2 NN model reveal that, in addition to the temporal and spatial input variables, short-term forecasting of foF2 is much improved by including past observations of foF2 itself. Results obtained from the near-real time foF2 NN model also reveal that there exists a correlation between measured foF2 values at different locations across the globe. Again, comparisons of the foF2 NN model and M(3000)F2 NN model predictions with the IRI model predictions and observed values at some selected high-latitude stations suggest that the NN technique can successfully be employed to model the complex irregularities associated with the high-latitude regions. Based on the results obtained in this research and the comparison made with the IRI model (URSI and CCIR coefficients), these results justify consideration of the NN technique for the prediction of global ionospheric parameters. I believe that, after consideration by the IRI community, these models will prove to be valuable to both the high frequency (HF) communication and worldwide ionospheric communities.
- Full Text:
- Date Issued: 2006
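One of the listed inputs, the solar zenith angle (CHI), follows from geographic latitude, solar declination and local hour angle via the standard spherical-astronomy relation cos χ = sin φ sin δ + cos φ cos δ cos H. A brief sketch (illustrative only; not necessarily how the thesis computes it):

```python
import math

def solar_zenith_deg(lat_deg, decl_deg, hour_angle_deg):
    """Solar zenith angle (degrees) from geographic latitude, solar
    declination and local hour angle:
    cos(chi) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(H)."""
    lat, dec, ha = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    cos_chi = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.acos(cos_chi))

# At equinox (decl = 0) and local noon (H = 0), the zenith angle equals the latitude
chi = solar_zenith_deg(33.3, 0.0, 0.0)
```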
A neural network based ionospheric model for the bottomside electron density profile over Grahamstown, South Africa
- Authors: McKinnell, L A
- Date: 2003
- Subjects: Neural networks (Computer science) , Ionospheric electron density -- South Africa -- Grahamstown
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5477 , http://hdl.handle.net/10962/d1005262
- Description: This thesis describes the development and application of a neural network based ionospheric model for the bottomside electron density profile over Grahamstown, South Africa. All available ionospheric data from the archives of the Grahamstown (33.32ºS, 26.50ºE) ionospheric station were used for training neural networks (NNs) to predict the parameters required to produce the final profile. Inputs to the model, called the LAM model, are day number, hour, and measures of solar and magnetic activity. The output is a mathematical description of the bottomside electron density profile for that particular input set. The two main ionospheric layers, the E and F layers, are predicted separately and then combined at the final stage. For each layer, NNs have been trained to predict the individual ionospheric characteristics and coefficients that were required to describe the layer profile. NNs were also applied to the task of determining the hours between which an E layer is measurable by a ground-based ionosonde and the probability of the existence of an F1 layer. The F1 probability NN is innovative in that it provides information on the existence of the F1 layer as well as the probability of that layer being in an L-condition state - the state where an F1 layer is present on an ionogram but it is not possible to record any F1 parameters. In the event of an L-condition state being predicted as probable, an L algorithm has been designed to alter the shape of the profile to reflect this state. A smoothing algorithm has been implemented to remove discontinuities at the F1-F2 boundary and ensure that the profile represents realistic ionospheric behaviour in the F1 region. Tests show that the LAM model is more successful at predicting Grahamstown electron density profiles for a particular set of inputs than the International Reference Ionosphere (IRI).
It is anticipated that the LAM model will be used as a tool in the pin-pointing of hostile HF transmitters, known as single-site location.
- Full Text:
- Date Issued: 2003
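Bottomside profile models of this kind commonly describe each ionospheric layer with a Chapman-type function peaking at the layer's maximum density. As an illustrative sketch only (this is not the LAM model's actual parameterisation), a single Chapman layer looks like:

```python
import math

def chapman(h, Nm, hm, H):
    """Chapman layer: electron density at height h (km) for peak density Nm
    (el/m^3), peak height hm (km) and scale height H (km)."""
    z = (h - hm) / H
    return Nm * math.exp(0.5 * (1.0 - z - math.exp(-z)))

# Density equals Nm at the peak height and falls off below it
peak = chapman(300.0, 1e12, 300.0, 50.0)
below = chapman(250.0, 1e12, 300.0, 50.0)
```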
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this due to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios where it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of the RFI and the employed solution interval during radio interferometric calibration.
Hence, we investigate the effects of solution intervals and the different factors to consider in order to select adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
- Full Text:
- Date Issued: 2020
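The robustness of a Student's t likelihood can be seen in its iteratively-reweighted-least-squares form: each visibility residual receives a weight that shrinks for outliers, so RFI-contaminated or poorly modelled data contribute little to the gain solutions. A minimal sketch of such a weight (illustrative only, not CubiCal's actual implementation; the ν+2 numerator assumes complex data with two real degrees of freedom):

```python
def student_t_weight(residual, sigma, nu=2.0):
    """IRLS weight for a complex residual under a Student's t likelihood.
    Large residuals (outliers, RFI) receive small weights; the weight tends
    to the Gaussian value of 1 as nu -> infinity."""
    r2 = abs(residual) ** 2 / sigma ** 2
    return (nu + 2.0) / (nu + r2)

clean = student_t_weight(0.1 + 0j, sigma=1.0)  # near-zero residual: weight ~ (nu+2)/nu
rfi = student_t_weight(10 + 0j, sigma=1.0)     # strong outlier: heavily down-weighted
```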
Advanced ionospheric chirpsounding
- Authors: Poole, Allon William Victor
- Date: 1984
- Subjects: Ionospheric sounding , Ionosphere -- Research
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5446 , http://hdl.handle.net/10962/d1001999
- Description: This dissertation reports research into the theory and practical application of linear frequency-modulated ionospheric sounding, as an alternative to the more usual technique of pulse modulation. A comparison of this technique with that of conventional pulse sounders is given, based on the concepts of matched filters and ambiguity functions for both modulations. A theory is developed to relate the group range and phase velocity of the ionospheric target to the phase and frequency of the difference signal at the receiver output. A method is then described whereby the group range and phase velocity of the reflection point, as well as the amplitude, arrival angle and polarisation mode of the reflected energy, can be measured. A description of the implementation of the technique is given together with some initial results. Finally, some suggestions for improvements are given.
- Full Text:
- Date Issued: 1984
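In a linear-FM ("chirp") sounder, the group range follows directly from the frequency of the difference signal: a reflection with round-trip delay τ produces a beat frequency f_b = (df/dt)·τ for sweep rate df/dt, so the group range is R' = c·f_b / (2·df/dt). A numerical illustration with typical values (not this instrument's actual parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def group_range_km(beat_freq_hz, sweep_rate_hz_per_s):
    """One-way group range of an ionospheric echo from the FMCW beat frequency."""
    delay = beat_freq_hz / sweep_rate_hz_per_s  # round-trip group delay, s
    return C * delay / 2.0 / 1000.0             # one-way group range, km

# 100 kHz/s sweep, 200 Hz beat -> 2 ms round trip -> ~300 km virtual height
h = group_range_km(200.0, 100e3)
```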
Advanced radio interferometric simulation and data reduction techniques
- Authors: Makhathini, Sphesihle
- Date: 2018
- Subjects: Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57348 , vital:26875
- Description: This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with the MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
- Full Text:
- Date Issued: 2018
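The dynamic-range figure quoted in the description above is simply the ratio of the source's flux density to the 1σ image noise; a one-line check, using the values from the abstract:

```python
# Dynamic range = peak flux density / 1-sigma image noise.
# Both values are taken from the abstract above.
peak_jy = 22.58        # flux density of 3C147, Jy/beam
noise_jy = 2.87e-6     # 1-sigma noise level, Jy/beam
dynamic_range = peak_jy / noise_jy
print(f"{dynamic_range:.2e}")  # 7.87e+06, i.e. the quoted ~8x10^6:1
```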
An analysis of ionospheric response to geomagnetic disturbances over South Africa and Antarctica
- Authors: Ngwira, Chigomezyo Mudala
- Date: 2012
- Subjects: Geomagnetism -- South Africa , Geomagnetism -- Antarctica , Ionospheric storms -- South Africa , Ionospheric storms -- Antarctica
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5534 , http://hdl.handle.net/10962/d1012957
- Description: The ionosphere is of practical importance for satellite-based communication and navigation systems due to its variable refractive nature, which affects the propagation of trans-ionospheric radio signals. This thesis reports on the first attempt to investigate the mechanisms responsible for the generation of positive ionospheric storm effects over mid-latitude South Africa. The storm response on 15 May 2005 was associated with equatorward neutral winds and the passage of travelling ionospheric disturbances (TIDs). The two TIDs reported in this thesis propagated with average velocities of ∼438 m/s and ∼515 m/s respectively. The velocity of the first TID (i.e. 438 m/s) is consistent with the velocities calculated in other studies for the same storm event. In a second case study, the positive storm enhancements on both 25 and 27 July 2004 each lasted for more than 7 hours and were classified as long-duration positive ionospheric storm effects. It has been suggested that the long-duration positive storm effects could have been caused by large-scale thermospheric wind circulation and enhanced equatorward neutral winds. These processes were in turn most likely to have been driven by enhanced and sustained energy input into the high-latitude ionosphere due to Joule heating and particle energy injection. This is evidenced by the prolonged high-level geomagnetic activity on both 25 and 27 July. This thesis also reports on a phase scintillation investigation at the South African Antarctic polar research station during solar minimum conditions. The multi-instrument approach that was used shows that the scintillation events were associated with auroral electron precipitation and that substorms play an essential role in the production of scintillation at high latitudes. Furthermore, the investigation reveals that external energy injection into the ionosphere is necessary for the development of the high-latitude irregularities which produce scintillation.
Finally, this thesis highlights inadequate data resources as one of the major shortcomings to be addressed in order to fully understand and distinguish between the various ionospheric storm drivers over the southern African mid-latitude region. The results presented in this thesis on the ionospheric response during geomagnetic storms provide essential information to direct further investigation aimed at developing this emerging field of study in South Africa.
- Full Text:
- Date Issued: 2012
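A TID's horizontal speed, such as the ∼438 m/s reported above, follows from the separation of two observing stations and the time lag of the TEC perturbation between them. The separation and lag below are assumed illustrative numbers, chosen only to be consistent with that reported speed:

```python
# Illustrative TID speed estimate: v = station separation / signal time lag.
# Both input values are assumptions, not measurements from the thesis.
separation_m = 500e3   # assumed great-circle distance between stations, m
time_lag_s = 1141.0    # assumed lag of the TEC perturbation, s (~19 min)
speed = separation_m / time_lag_s
print(f"{speed:.0f} m/s")  # 438 m/s
```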
An analysis of sources and predictability of geomagnetic storms
- Authors: Uwamahoro, Jean
- Date: 2011
- Subjects: Ionospheric storms Solar flares Interplanetary magnetic fields Magnetospheric substorms Coronal mass ejections Space environment Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5451 , http://hdl.handle.net/10962/d1005236
- Description: Solar transient eruptions are the main cause of the interplanetary-magnetospheric disturbances leading to the phenomena known as geomagnetic storms. Eruptive solar events such as coronal mass ejections (CMEs) are currently considered the main cause of geomagnetic storms (GMS). GMS are strong perturbations of the Earth’s magnetic field that can affect space-borne and ground-based technological systems. The solar-terrestrial impact on modern technological systems is commonly known as space weather. Part of the research described in this thesis was to investigate and establish a relationship between GMS (periods with Dst ≤ −50 nT) and their associated solar and interplanetary (IP) properties during solar cycle (SC) 23. Solar and IP geoeffective properties associated with or without CMEs were investigated and used to qualitatively characterise both intense and moderate storms. The results of this analysis specifically provide an estimate of the main sources of GMS during an average 11-year solar activity period. This study indicates that during SC 23 the majority of intense GMS (83%) were associated with CMEs, while storms not associated with CMEs were dominant among moderate storms. GMS phenomena are the result of a complex and non-linear chaotic system involving the Sun, the IP medium, the magnetosphere and the ionosphere, which makes the prediction of these phenomena challenging. This thesis also explored the predictability of both the occurrence and the strength of GMS. Owing to their non-linear driving mechanisms, the prediction of GMS was attempted using neural network (NN) techniques, known for their non-linear modelling capabilities. To predict the occurrence of storms, a combination of solar and IP parameters was used as inputs to an NN model that proved to predict the occurrence of GMS with a probability of 87%.
Using the solar wind (SW) and IP magnetic field (IMF) parameters, a separate NN-based model was developed to predict the storm-time strength as measured by the global Dst and ap geomagnetic indices, as well as by the locally measured K-index. The performance of the models was tested on data sets which were not part of the NN training process. The results obtained indicate that NN models provide a reliable alternative method for empirically predicting the occurrence and strength of GMS on the basis of solar and IP parameters. The demonstrated ability to predict the geoeffectiveness of solar and IP transient events is a key step in the goal towards improving space weather modelling and prediction.
- Full Text:
- Date Issued: 2011
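The storm criterion quoted above (periods with Dst ≤ −50 nT) amounts to a simple threshold filter on the hourly Dst index. A minimal sketch, with an invented Dst series for illustration:

```python
# Flag geomagnetic storm hours using the criterion from the abstract:
# a storm period is one where Dst <= -50 nT. The Dst values are invented.
dst_nt = [-10, -30, -55, -80, -62, -45, -20]   # hourly Dst values, nT

def storm_hours(dst, threshold=-50):
    """Return the indices of hours satisfying Dst <= threshold."""
    return [i for i, d in enumerate(dst) if d <= threshold]

print(storm_hours(dst_nt))  # [2, 3, 4]: hours 2-4 meet the criterion
```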
Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view, which states that our experience of time is only an illusion. The isolated Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks their potential influence. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. In aid of our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several of these theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that their accounts of time are unified in a more fundamental theory. An analytical study of the CPI, incorporating two different clock choices, and a qualitative analysis both confirm that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the results of the toy models and the qualitative argument respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal intervention between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which poses a pressing problem in explaining our experience of time.
- Full Text:
- Date Issued: 2020
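The conditional probability construction referred to above can be written schematically in the standard Page-Wootters form (the notation here is ours, not necessarily the thesis's):

```latex
% Wheeler-DeWitt constraint on the state of the isolated Universe:
\hat{H}\,|\Psi\rangle = 0 .
% CPI: the probability of outcome a for the system, conditioned on the
% clock reading t, with \hat{P} denoting the corresponding projectors:
P(a \mid t) =
\frac{\langle\Psi|\,\hat{P}^{\mathrm{clock}}_{t} \otimes \hat{P}^{\mathrm{sys}}_{a}\,|\Psi\rangle}
     {\langle\Psi|\,\hat{P}^{\mathrm{clock}}_{t} \otimes \mathbb{1}\,|\Psi\rangle} .
```

Temporal ordering is then read off from how these conditional probabilities vary with the clock outcome t, which is why interactions that couple the clock to the system can alter the recovered notion of time.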
Aspects of the symplectic and metric geometry of classical and quantum physics
- Authors: Russell, Neil Eric
- Date: 1993
- Subjects: Symplectic manifolds Geometry, Differential Geometric quantization Quantum theory Clifford algebras
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5452 , http://hdl.handle.net/10962/d1005237
- Description: I investigate some algebras and calculi naturally associated with the symplectic and metric Clifford algebras. In particular, I reformulate the well-known Lepage decomposition for the symplectic exterior algebra in geometrical form and present some new results relating to the simple subspaces of the decomposition. I then present an analogous decomposition for the symmetric exterior algebra with a metric. Finally, I extend this symmetric exterior algebra into a new calculus for the symmetric differential forms on a pseudo-Riemannian manifold. The importance of this calculus lies in its potential for the description of bosonic systems in Quantum Theory.
- Full Text:
- Date Issued: 1993
Behaviour of quiet time ionospheric disturbances at African equatorial and midlatitude regions
- Authors: Orford, Nicola Diane
- Date: 2018
- Subjects: Ionospheric storms , Ionospheric storms -- Africa , Ionosphere , Plasmasphere , Q-disturbances , Total electron content (TEC)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62672 , vital:28228
- Description: Extreme ionospheric and geomagnetic disturbances affect technology adversely. Pre-storm enhancements, considered a potential predictor of geomagnetic storms, occur during quiet conditions prior to geomagnetic disturbances. The ionosphere also experiences general disturbances during quiet geomagnetic conditions, and these Q-disturbances remain unexplored over Africa. This study used TEC data to characterise the morphology of Q-disturbances over Africa, exploring variations with solar cycle, season, time of occurrence and latitude. Observations from 10 African GPS stations in the equatorial and midlatitude regions show that Q-disturbances in the equatorial region are predominantly driven by E × B variations, while multiple mechanisms affect the midlatitude region. Q-disturbances occur more frequently during nighttime than during daytime, and no seasonal trend is observed. Midlatitude Q-disturbance mechanisms are explored in depth, considering substorm activity, the plasmaspheric contribution to GPS TEC and plasma transfer between conjugate points. Substorm activity is not a dominant mechanism, although Q-disturbances occurring under elevated substorm conditions tend to have longer duration and larger amplitude than general Q-disturbances. Many observed Q-disturbances become non-significant once the plasmaspheric contribution to the TEC measurements is removed, indicating that these disturbances occur within the plasmasphere, and not the ionosphere. Transfer of plasma between conjugate points does not seem to be a mechanism driving Q-disturbances, as the corresponding nighttime behaviour expected between depletions in the summer hemisphere and enhancements in the winter hemisphere is not observed. Pre-storm enhancements occur infrequently, rendering them a poor predictor of geomagnetic disturbances.
Pre-storm enhancement morphology does not differ significantly from general quiet time enhancement morphology, suggesting pre-storms are not a special case of Q-disturbances.
- Full Text:
- Date Issued: 2018
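Quiet-time TEC disturbances of the kind studied above are commonly flagged as a relative deviation of the observed TEC from a quiet-time reference (for example a 27-day median) exceeding some threshold. This is a hedged sketch of that general idea; the 30% threshold and the TEC values are illustrative assumptions, not the thesis's exact criterion:

```python
# Flag a Q-disturbance when the relative TEC deviation from a quiet-time
# reference exceeds a threshold fraction. Threshold and values are assumed.
def relative_deviation(tec, tec_quiet):
    """Fractional deviation of observed TEC from the quiet-time reference."""
    return (tec - tec_quiet) / tec_quiet

def is_q_disturbance(tec, tec_quiet, threshold=0.30):
    """True when |deviation| exceeds the threshold fraction (assumed 30%)."""
    return abs(relative_deviation(tec, tec_quiet)) > threshold

print(is_q_disturbance(28.0, 20.0))  # True: a 40% enhancement
print(is_q_disturbance(21.0, 20.0))  # False: only a 5% deviation
```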
Challenges in topside ionospheric modelling over South Africa
- Authors: Sibanda, Patrick
- Date: 2010
- Subjects: Ionospheric electron density -- South Africa Neural networks (Computer science) Atmosphere, Upper Ionosphere
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5453 , http://hdl.handle.net/10962/d1005238
- Description: This thesis creates a basic framework and provides the information necessary to create a more accurate description of the topside ionosphere, in terms of the altitude variation of the electron density (Ne), over the South African region. The detailed overview of various topside ionospheric modelling techniques, with specific emphasis on their implications for efforts to model the South African topside, provides a starting point towards achieving this goal. The novelty of the thesis lies in the investigation of the applicability of three different techniques to model the South African topside ionosphere: (1) the possibility of using Artificial Neural Network (ANN) techniques for empirical modelling of the topside ionosphere based on the available, though irregularly sampled, topside sounder measurements. The goal of this model was to test the ability of ANN techniques to capture the complex relationships between the various ionospheric variables using irregularly distributed measurements. While this technique is promising, the method did not show significant improvement over the International Reference Ionosphere (IRI) model results when compared with the actual measurements. (2) Application of diffusive equilibrium theory. Although based on sound physical foundations, the method only operates at a generalised level, leading to results that are not necessarily unique. Furthermore, the approach relies on many ionospheric variables as inputs, which are derived from other models whose accuracy is not verified. (3) Attempts to complement the standard functional techniques (Chapman, Epstein, exponential and parabolic) with Global Positioning System (GPS) and ionosonde measurements in an effort to provide deeper insight into the actual conditions within the ionosphere.
The vertical Ne distribution is reconstructed by linking together the different aspects of the constituent ions and their transition heights, considering how they influence the shape of the profile. While this approach has not been tested against actual measurements, results show that the method could be potentially useful for topside ionospheric studies. Given the limitations of each technique reviewed, this thesis observes that an approach incorporating both theoretical considerations and empirical aspects has the potential to lead to a more accurate characterisation of topside ionospheric behaviour, resulting in models with improved reliability and forecasting ability. The point is made that a topside sounder mission for South Africa would provide the required measured topside ionospheric data, answer the many science questions that this region poses, and address a number of the limitations set out in this thesis.
- Full Text:
- Date Issued: 2010
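Of the standard functional techniques listed above, the Chapman layer is the most widely used. A minimal sketch of the alpha-Chapman profile; the peak density, peak height and scale height below are illustrative values, not fitted South African parameters:

```python
import math

def chapman(h, nm=1e12, hm=300.0, scale_h=60.0):
    """Alpha-Chapman electron density (el/m^3) at height h (km).

    nm: peak density NmF2, hm: peak height hmF2 (km), scale_h: scale
    height (km). All three parameter values are assumed for illustration.
    """
    z = (h - hm) / scale_h
    return nm * math.exp(0.5 * (1.0 - z - math.exp(-z)))

print(chapman(300.0))  # at h = hmF2 the profile returns the peak NmF2
print(chapman(500.0) < chapman(300.0))  # density falls off on the topside
```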
Combined spectral and stimulated luminescence study of charge trapping and recombination processes in α-Al2O3:C
- Authors: Nyirenda, Angel Newton
- Date: 2018
- Subjects: Luminescence , Thermoluminescence , Luminescence spectroscopy , Carbon-doped aluminium oxide , Radioluminescence , Time-resolved X-ray excited optical luminescence
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62683 , vital:28235
- Description: The main objective of this project was to gain a deeper and better understanding of the luminescence processes in α-Al₂O₃:C, a highly sensitive dosimetric material, using a combined spectral and stimulated luminescence study. The spectral studies concentrated on the emission spectra obtained using X-ray excited radioluminescence (XERL), thermoluminescence (XETL) and time-resolved X-ray excited optical luminescence (TR-XEOL) techniques. The stimulated luminescence studies were based on thermoluminescence (TL), optically stimulated luminescence (OSL) and phototransferred TL (PTTL) methods that were used in the study of the radiation-induced defects at high beta doses and of the deep traps, that is, traps with thermal depths beyond 500°C. The spectral and stimulated luminescence measurements were carried out using a high-sensitivity luminescence spectrometer and a Risø TL/OSL Model DA-20 reader, respectively. The XERL emission spectrum measured at room temperature shows seven Gaussian peaks associated with F-centres (420 nm), F+-centres (334 nm), F2+-centres (559 nm), the Stokes vibronic band of Cr3+ (671 nm), Cr3+ R-line emission (694 nm), the anti-Stokes vibronic band of Cr3+ (710 nm) and an unidentified emission band (260-300 nm) which we associate with hole recombination at a luminescence centre. The 694-nm R-line emission from Cr3+ impurity ions is most likely due to recombination of holes at Cr2+ during stimulated luminescence, and to an intracentre excitation of Cr3+ in photoluminescence (PL) due to photon absorption. The Cr3+ emission decreases in intensity, whereas the intensity of the F-centre emission band is almost constant with repeated XERL measurements. Depending on the X-ray irradiation dose, both holes and electrons may take part in the emission processes of peaks I (30-80°C), II (90-250°C) and III (250-320°C) during a TL readout, although electron recombination is dominant regardless of dose.
At higher doses, the XETL emission spectra indicate that the dominant band associated with TL peak III (250-320°C) in the material, shifts from F-centre to Cr3+. Using the deep-traps OSL, it has been confirmed that the main TL trap is also the main OSL trap whereas the TL traps lying in the temperature range of 400-550°C constitute the secondary OSL traps. There is evidence of strong retrapping at the main trap during optical stimulation of charges from the secondary OSL traps and the deep traps and that the retrapping occurs via the delocalized bands. At high-irradiation beta-doses, aggregate defect centres which significantly alter the TL and OSL properties, are induced in the material. The induced aggregate centres get completely obliterated by heating a sample to 700°C. The radiation-induced defects cause the main TL peak to shift towards higher temperatures, increase its FWHM, reduce its maximum intensity and cause an underestimation of both the activation energy and order of kinetics of the peak. On the other hand, the OSL response of the material is enhanced following a high-irradiation dose. During sample storage in the dark at ambient temperature, charges do migrate from the deep traps (donors) to the main and intermediate traps (acceptors) and that the major donor traps during this charge transfer phenomenon lie between 500-600°C.
- Full Text:
- Date Issued: 2018
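The spectral analysis described above resolves the XERL emission spectrum into Gaussian components. A minimal sketch of that kind of decomposition, fitting a synthetic two-band spectrum: the 420 nm and 334 nm centres are taken from the abstract, while the amplitudes, widths and noise level are illustrative assumptions, not the thesis data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, centre, width):
    return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

def two_bands(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic spectrum: an F-centre-like band at 420 nm and an F+-centre-like
# band at 334 nm; amplitudes, widths and noise are illustrative.
wavelength = np.linspace(250.0, 800.0, 1101)
truth = (1.0, 420.0, 30.0, 0.6, 334.0, 20.0)
rng = np.random.default_rng(42)
spectrum = two_bands(wavelength, *truth) + rng.normal(0.0, 0.01, wavelength.size)

p0 = (0.8, 400.0, 25.0, 0.5, 340.0, 25.0)   # rough starting guesses
popt, _ = curve_fit(two_bands, wavelength, spectrum, p0=p0)
print(popt[1], popt[4])   # fitted band centres, close to 420 and 334 nm
```

The seven-band fit in the thesis extends this directly by adding three parameters per band to the model function.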
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the measurement set, the most common format for interferometer data, and is well integrated with Montblanc, a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding the source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor.
- Full Text:
- Date Issued: 2019
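The complex-optimisation framework the abstract refers to accelerates non-linear least squares solves for per-antenna gains. A minimal numpy sketch of one such solver, the widely used StEFCal-style diagonal update, is shown below; it is not CubiCal's implementation, and the array size and simulated gains are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant = 7

# Simulated truth: per-antenna complex gains and a Hermitian model matrix.
g_true = rng.uniform(0.8, 1.2, n_ant) * np.exp(1j * rng.uniform(-np.pi, np.pi, n_ant))
m = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
model = m + m.conj().T
vis = np.outer(g_true, g_true.conj()) * model   # "observed" corrupted data

def solve_gains(vis, model, n_iter=200):
    """Alternating least-squares (StEFCal-style) solve for diagonal gains g
    such that vis[p, q] ~ g[p] * model[p, q] * conj(g[q])."""
    g = np.ones(vis.shape[0], dtype=complex)
    for _ in range(n_iter):
        z = model * g.conj()[np.newaxis, :]   # z[p, q] = model[p, q] * conj(g[q])
        g_new = (z.conj() * vis).sum(axis=1) / (np.abs(z) ** 2).sum(axis=1)
        g = 0.5 * (g + g_new)                 # averaging step aids convergence
    return g

g = solve_gains(vis, model)
# Gains are determined only up to a global phase; reference both to antenna 0.
g_ref = g * np.exp(-1j * np.angle(g[0]))
g_true_ref = g_true * np.exp(-1j * np.angle(g_true[0]))
print(np.max(np.abs(g_ref - g_true_ref)))   # small residual
```

Each update treats the conjugated gains as fixed, which turns the bilinear problem into a linear least squares solve per antenna; this is the structure that complex optimisation makes explicit.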
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions an improved interferometer smearing response may be induced. Specifically, we can improve the amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network.
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
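The decorrelation described above can be seen in a few lines: averaging a complex fringe over a boxcar window attenuates its amplitude by a sinc factor, with the loss growing with fringe rate (and hence with baseline length and source offset). The fringe rate and averaging interval below are illustrative values.

```python
import numpy as np

# A visibility from an off-centre source varies as a complex fringe
# exp(2*pi*i*nu*t); boxcar averaging over an interval T scales its
# amplitude by |sinc(nu*T)| -- the smearing loss described above.
fringe_rate = 2.0                        # Hz, illustrative
T = 0.3                                  # averaging interval in s, illustrative
t = np.linspace(-T / 2, T / 2, 100001)
measured = np.abs(np.exp(2j * np.pi * fringe_rate * t).mean())
predicted = abs(np.sinc(fringe_rate * T))   # np.sinc(x) = sin(pi x)/(pi x)
print(measured, predicted)               # both close to 0.50
```

Replacing the boxcar with a baseline-dependent window reshapes this attenuation curve, which is the core idea of the field-of-interest shaping above.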
Data reduction techniques for Very Long Baseline Interferometric spectropolarimetry
- Authors: Kemball, Athol James
- Date: 1993
- Subjects: Very long baseline interferometry Radio interferometers Data reduction -- Research
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5457 , http://hdl.handle.net/10962/d1005242
- Description: This thesis reports the results of an investigation into techniques for the calibration and imaging of spectral line polarization observations in Very Long Baseline Interferometry (VLBI). A review is given of the instrumental and propagation effects which need to be removed in the course of calibrating such observations, with particular reference to their polarization dependence. The removal of amplitude and phase errors and the determination of the instrumental feed response are described. The polarization imaging of such data is discussed with particular reference to the case of poorly sampled cross-polarization data. The software implementation of the algorithms within the Astronomical Image Processing System (AIPS) is discussed, and the specific case of spectral line polarization reduction for data observed using the MK3 VLBI system is considered in detail. VLBI observations at two separate epochs of the 1612 MHz OH masers towards the source IRC+10420 are reduced as part of this work. Spectral line polarization maps of the source structure are presented, including a discussion of source morphology and variability. The source is significantly circularly polarized at VLBI resolution, but does not display appreciable linear polarization. A proper motion study of the circumstellar envelope is presented, which supports an ellipsoidal kinematic model with anisotropic radial outflow. Kinematic modelling of the measured proper motions suggests a distance to the source of ~3 kpc. The circumstellar magnetic field strength in the masing regions is determined to be 1-3 mG, assuming Zeeman splitting as the polarization mechanism.
- Full Text:
- Date Issued: 1993
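The field estimate quoted above follows from dividing the measured velocity splitting between the two circularly polarised components by the Zeeman splitting coefficient of the line. A sketch of the arithmetic: the 0.236 km/s per mG coefficient for the 1612 MHz OH transition is a commonly quoted value (verify before use), and the 0.4 km/s splitting is an illustrative number, not the thesis measurement.

```python
# Zeeman estimate of the circumstellar field: divide the measured velocity
# splitting between RCP and LCP maser components by the line's Zeeman
# splitting coefficient.
splitting_coeff = 0.236   # km/s per milligauss, assumed value for OH 1612 MHz
delta_v = 0.4             # km/s, illustrative RCP-LCP splitting
b_field = delta_v / splitting_coeff
print(round(b_field, 2), "mG")   # about 1.69 mG, within the 1-3 mG range above
```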
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (Computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors, such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science just waiting for discovery is already being generated, and these instruments will produce far more data than has ever been collected before. However, with every new instrument, the data rates grow to unprecedented levels, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by a scientist who lacks formal training in software development, resulting in the software not progressing beyond a prototype stage in quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio-astronomy pipelines since the inception of the field in the 1940s, investigating the steps required to create a radio image. We used the lessons learned to identify patterns in the challenges experienced, and the solutions created to address these over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs properly, all items alongside one another on the same system. Where required and possible, bugs and portability issues were fixed and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases.
The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bioinformatics scientists. CommonWL is already supported by a wide range of tools, including schedulers, visualisers and editors. Consequently, when a pipeline is made with CommonWL, it can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional: applying a generative adversarial network based on deep learning techniques to the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. It is shown that this method is able to perform the restoration and matches the performance of a single clean cycle. In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest that a smaller scope should be the starting point for further studies, and that optimising the training of the neural network could produce the desired results.
- Full Text:
- Date Issued: 2021
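The simulated training set described above pairs sky models with dirty images; the two are related by convolution with the point spread function, which is also the domain knowledge later injected into the network. A minimal numpy sketch of dirty-image formation, with a toy sky and a Gaussian PSF (both illustrative, not the ASTRODECONV2019 simulation):

```python
import numpy as np

n = 64
sky = np.zeros((n, n))
sky[20, 20] = 1.0
sky[40, 45] = 0.5                       # two toy point sources

# Illustrative Gaussian PSF, centred on the image.
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 3.0 ** 2))

# Dirty image = sky convolved with the PSF; ifftshift moves the PSF peak
# to the origin so the FFT-based convolution does not shift the sources.
dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))
print(dirty.max())                      # peak response sits on the 1.0 source
```

A real interferometric PSF has sidelobes rather than a clean Gaussian shape, which is what makes the deconvolution problem above non-trivial.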
Development of an ionospheric map for Africa
- Authors: Ssessanga, Nicholas
- Date: 2014
- Subjects: Ionosondes Ionosphere Ionosphere -- Observations Ionosphere -- Research -- Africa Ionospheric electron density -- Africa Ionospheric critical frequencies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5519 , http://hdl.handle.net/10962/d1011498
- Description: This thesis presents research pertaining to the development of an African Ionospheric Map (AIM). An ionospheric map is a computer program that is able to display spatial and temporal representations of ionospheric parameters, such as electron density and critical plasma frequencies, for every geographical location on the map. The purpose of this development was to make optimal use of all available data sources, namely ionosondes, satellites and models, and to implement error minimisation techniques in order to obtain the best result at any given location on the African continent. The focus was placed on the accurate estimation of three upper-atmosphere parameters which are important for radio communications: the critical frequency of the F2 layer (foF2), Total Electron Content (TEC) and the maximum usable frequency over a distance of 3000 km (M3000F2). The results show that AIM provided a more accurate estimation of the three parameters than the internationally recognised and recommended ionosphere model (IRI-2012) used on its own. Therefore, the AIM is a more accurate solution than single independent data sources for applications requiring ionospheric mapping over the African continent.
- Full Text:
- Date Issued: 2014
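The abstract does not specify the AIM's error-minimisation scheme; as a purely illustrative sketch of combining sparse station measurements with a background model, one can correct the model by inverse-distance-weighted interpolation of the station residuals (measured minus modelled foF2). All coordinates and values below are made up.

```python
import numpy as np

# Hypothetical ionosonde stations (lon, lat in degrees) and their foF2
# residuals relative to a background model, in MHz.
stations = np.array([[0.0, 0.0], [10.0, 5.0], [-5.0, 8.0]])
residuals = np.array([0.8, -0.3, 0.5])

def corrected_fof2(lon, lat, model_value, power=2.0):
    """Background model value plus an inverse-distance-weighted correction."""
    d = np.hypot(stations[:, 0] - lon, stations[:, 1] - lat)
    if np.any(d < 1e-9):                   # exactly at a station: use its residual
        return model_value + residuals[np.argmin(d)]
    w = 1.0 / d ** power
    return model_value + np.sum(w * residuals) / np.sum(w)

print(corrected_fof2(0.0, 0.0, 7.0))       # at station 0: 7.0 + 0.8 = 7.8
```

The weighting exponent and the flat-Earth distance are simplifications; a production map would use geodesic distances and a validated interpolation or assimilation scheme.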
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as quartz annealed at 800 and 1000°C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72°C when measured at 1°C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and kinetic analysis was carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for the sample annealed at 800°C for 1 hour we observe an inverse thermal quenching behaviour for peak I. We demonstrate that a peak affected by an inverse thermal-quenching-like behaviour can still show the effect of thermal quenching when the irradiation dose is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a given temperature, are exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, for various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
- Full Text:
- Date Issued: 2020
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as on quartz annealed at 800 and 1000 °C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72 °C when measured at 1 °C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and a kinetic analysis were carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for peak I of the sample annealed at 800 °C for 1 hour, we observe an inverse thermal quenching behaviour. We demonstrate that a peak affected by an inverse thermal quenching-like behaviour can still show the effect of thermal quenching when the dose to which the sample is irradiated is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II, and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a certain temperature, are exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, corresponding to various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
- Full Text:
- Date Issued: 2020
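The abstract above describes modelling PTTL growth with coupled first-order differential equations over systems of donor and acceptor traps. As a minimal illustrative sketch only (not the thesis model): a single deep donor trap releasing charge under illumination, with a fraction of the freed charge retrapped at one shallow acceptor, integrated by Euler's method. All rate constants, populations, and the two-trap reduction are hypothetical assumptions for illustration.

```python
# Illustrative sketch, NOT the thesis model: one donor trap D emptied
# optically at rate f, with a hypothetical fraction a of the released
# charge retrapped at one acceptor trap A (whose filling is taken as
# proportional to the PTTL intensity of the regenerated peak).
#
#   dD/dt = -f * D
#   dA/dt = +a * f * D
#
# All parameter values below are invented for demonstration.

def pttl_transfer(n_donor=1.0, n_acceptor=0.0, f=0.05, a=0.8,
                  t_end=200.0, dt=0.01):
    """Euler integration of the two coupled first-order rate equations."""
    D, A = n_donor, n_acceptor
    t = 0.0
    while t < t_end:
        dD = -f * D * dt      # charge optically released from the donor
        D += dD
        A += -a * dD          # fraction a of the released charge retrapped
        t += dt
    return D, A

D, A = pttl_transfer()
# With these toy parameters the donor is nearly empty after long
# illumination and A saturates toward a * n_donor, reproducing the
# qualitative growth-to-saturation of PTTL with illumination time.
```

The saturating growth of `A` with illumination time is the qualitative behaviour the abstract attributes to its (larger) donor–acceptor systems; the full model would use one equation per trap, with the active set depending on the preheating temperature.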