PyMORESANE: A Pythonic and CUDA-accelerated implementation of the MORESANE deconvolution algorithm
- Authors: Kenyon, Jonathan
- Date: 2015
- Subjects: Radio astronomy , Imaging systems in astronomy , MOdel REconstruction by Synthesis-ANalysis Estimators (MORESANE)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5563 , http://hdl.handle.net/10962/d1020098
- Description: The inadequacies of the current generation of deconvolution algorithms are rapidly becoming apparent as new, more sensitive radio interferometers are constructed. In light of these inadequacies, there is renewed interest in the field of deconvolution. Many new algorithms are being developed using the mathematical framework of compressed sensing. One such technique, MORESANE, has recently been shown to be a powerful tool for the recovery of faint diffuse emission from synthetic and simulated data. However, the original implementation is not well-suited to large problem sizes due to its computational complexity. Additionally, its use of proprietary software prevents it from being freely distributed and used. This has motivated the development of a freely available Python implementation, PyMORESANE. This thesis describes the implementation of PyMORESANE as well as its subsequent augmentation with MPU and GPGPU code. These additions accelerate the algorithm and thus make it competitive with its legacy counterparts. The acceleration of the algorithm is verified by means of benchmarking tests for varying image size and complexity. Additionally, PyMORESANE is shown to work not only on synthetic data, but on real observational data. This verification means that the MORESANE algorithm, and consequently the PyMORESANE implementation, can be added to the current arsenal of deconvolution tools.
- Full Text:
- Date Issued: 2015
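The GPGPU speed-up described in the abstract comes largely from moving FFT-based convolutions onto the device, since such transforms dominate the cost of MORESANE-style deconvolution. Below is a minimal NumPy sketch of the CPU-side operation only; the function name is hypothetical and this is not code from PyMORESANE itself.

```python
import numpy as np

def fft_convolve(image, psf):
    """Circular convolution of an image with a PSF via the FFT.

    Convolutions like this are the hot loop in MORESANE-style
    deconvolution; a GPU implementation accelerates exactly these
    three transforms. Illustrative sketch only.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
```

With a delta-function PSF the convolution returns the image unchanged, which is a convenient sanity check for the normalization conventions of the FFT pair.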
Statistical analysis of the ionospheric response during storm conditions over South Africa using ionosonde and GPS data
- Authors: Matamba, Tshimangadzo Merline
- Date: 2015
- Subjects: Ionospheric storms -- South Africa -- Grahamstown , Ionospheric storms -- South Africa -- Madimbo , Magnetic storms -- South Africa -- Grahamstown , Magnetic storms -- South Africa -- Madimbo , Ionosondes , Global Positioning System
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5555 , http://hdl.handle.net/10962/d1017899
- Description: Ionospheric storms are an extreme form of space weather phenomena which affect space- and ground-based technological systems. Extreme solar activity may give rise to Coronal Mass Ejections (CMEs) and solar flares that may result in ionospheric storms. This thesis reports on a statistical analysis of the ionospheric response over the ionosonde stations Grahamstown (33.3°S, 26.5°E) and Madimbo (22.4°S, 30.9°E), South Africa, during geomagnetic storm conditions which occurred during the period 1996–2011. Total Electron Content (TEC) derived from Global Positioning System (GPS) data by a dual-frequency receiver and an ionosonde at Grahamstown was analysed for the storms that occurred during the period 2006–2011. A comprehensive analysis of the critical frequency of the F2 layer (foF2) and TEC was done. To identify the geomagnetically disturbed conditions, the Disturbance storm time (Dst) index with a storm criterion of Dst ≤ −50 nT was used. The ionospheric disturbances were categorized into three responses, namely single disturbance, double disturbance and not significant (NS) ionospheric storms. Single disturbance ionospheric storms refer to positive (P) and negative (N) ionospheric storms observed separately, while double disturbance storms refer to negative and positive ionospheric storms observed during the same storm period. The statistics show the impact of geomagnetic storms on the ionosphere and indicate that negative ionospheric effects follow the solar cycle. In general, only a few ionospheric storms (0.11%) were observed during solar minimum. Positive ionospheric storms occurred most frequently (47.54%) during the declining phase of solar cycle 23. Seasonally, negative ionospheric storms occurred mostly during the summer (63.24%), while positive ionospheric storms occurred frequently during the winter (53.62%). An important finding is that only negative ionospheric storms were observed during great geomagnetic storm activity (Dst ≤ −350 nT).
For periods when both ionosonde and GPS data were available, the two data sets indicated similar ionospheric responses. Hence, GPS data can be used to effectively identify the ionospheric response in the absence of ionosonde data.
- Full Text:
- Date Issued: 2015
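The Dst thresholds quoted in the abstract (storms at Dst ≤ −50 nT, great storms at Dst ≤ −350 nT) lend themselves to a simple classifier over an hourly Dst series. The following sketch is illustrative and not code from the thesis.

```python
import numpy as np

DST_STORM = -50   # nT, storm criterion used in the thesis
DST_GREAT = -350  # nT, threshold for "great" geomagnetic storms

def classify_storm(dst_series):
    """Classify an hourly Dst series (nT) with the thesis thresholds.

    Returns 'great', 'storm', or 'quiet' based on the minimum Dst
    reached during the interval. Thresholds are from the abstract;
    the function itself is an illustrative sketch.
    """
    dst_min = np.min(dst_series)
    if dst_min <= DST_GREAT:
        return 'great'
    if dst_min <= DST_STORM:
        return 'storm'
    return 'quiet'
```
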
Structure of the nucleus ¹¹⁴Sn using gamma-ray coincidence data
- Authors: Oates, Sean Benjamin
- Date: 2015
- Subjects: High spin physics , Nuclear structure , Nuclear shell theory , Neutron counters , Decay schemes (Radioactivity) , Coincidence circuits , Collective excitations , Anisotropy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5562 , http://hdl.handle.net/10962/d1019870
- Full Text:
- Date Issued: 2015
Beta decay of ¹⁰⁰Zr produced in neutron-induced fission of natural uranium
- Authors: Kamoto, Thokozani
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3024 , vital:20353
- Description: Fission fragments, produced by neutron bombardment of natural uranium at the Physics Department, Jyväskylä, Finland, are studied in this work. The data had been sorted into 25 γ–γ coincidence matrices which were then analysed. In this work we aimed to identify the fission products using γ–γ coincidence analysis and then study the beta decay of some of the fission products. Sixteen fission products ranging from A = 94 to A = 136 were identified. Out of these fission products, the beta decay of the A = 100 (¹⁰⁰Zr → ¹⁰⁰Nb → ¹⁰⁰Mo) chain was studied in greater detail. We have also studied the variation of the relative intensities as a function of time of the 159-, 528-, 600-, 768-, 928- and 1502-keV γ-ray lines in ¹⁰⁰Mo, and the profiles of the relative intensities have been modelled with the variation of the activity of ¹⁰⁰Nb against time. Configuration assignments of ¹⁰⁰Zr and ¹⁰⁰Mo are discussed.
- Full Text:
- Date Issued: 2016
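The modelling step described in the abstract, tracking how the ¹⁰⁰Nb activity drives the time profiles of the ¹⁰⁰Mo γ-ray intensities, rests on the two-member Bateman solution for a parent–daughter decay chain. Below is a generic sketch; the decay constants are left as free parameters, since the abstract does not quote the chain's half-lives.

```python
import numpy as np

def daughter_activity(t, lam_p, lam_d, n_p0):
    """Activity of the daughter in a two-member decay chain (Bateman).

    t: time(s) in seconds; lam_p, lam_d: decay constants of parent
    and daughter (1/s); n_p0: initial number of parent nuclei.
    Illustrative only; real half-lives must be supplied by the user.
    """
    t = np.asarray(t, dtype=float)
    # Bateman solution for the daughter population, assuming lam_p != lam_d
    n_d = n_p0 * lam_p / (lam_d - lam_p) * (np.exp(-lam_p * t) - np.exp(-lam_d * t))
    return lam_d * n_d  # activity = decay constant x population
```

The daughter activity starts at zero, grows as the parent feeds it, then decays away, which is the qualitative shape fitted to the relative-intensity profiles.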
Calibration and wide field imaging with PAPER: a catalogue of compact sources
- Authors: Philip, Liju
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2397 , vital:20285
- Description: Observations of the redshifted 21 cm HI line promise to be a formidable tool for cosmology, allowing the investigation of the end of the so-called dark ages, when the first galaxies formed, and the subsequent Epoch of Reionization when the intergalactic medium transitioned from neutral to ionized. Such observations are plagued by foreground emission which is a few orders of magnitude brighter than the 21 cm line. In this thesis I analyzed data from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in order to improve the characterization of the extragalactic foreground component. I derived a catalogue of unresolved radio sources down to a 5 Jy flux density limit at 150 MHz and derived their spectral index distribution using literature data at 408 MHz. I implemented advanced techniques to calibrate radio interferometric data that led to a few percent accuracy on the flux density scale of the derived catalogue. This work, therefore, represents a further step towards creating an accurate, global sky model that is crucial to improve calibration of Epoch of Reionization observations.
- Full Text:
- Date Issued: 2016
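The spectral index distribution mentioned in the abstract comes from comparing flux densities at two frequencies under a power-law model S ∝ ν^α, e.g. a 150 MHz PAPER measurement against a 408 MHz literature value. A minimal sketch (function name hypothetical):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, assuming S ∝ nu**alpha.

    s1, s2: flux densities (same units) at frequencies nu1, nu2
    (same units). Illustrative sketch, not code from the thesis.
    """
    return math.log(s2 / s1) / math.log(nu2 / nu1)
```
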
Classical and quantum picture of the interior of two-dimensional black holes
- Authors: Shawa, Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3629 , vital:20531
- Description: A quantum-mechanical description of black holes would represent the final step in our understanding of the nature of space-time. However, any progress towards that end is usually foiled by persistent space-time singularities that exist at the center of black holes. From the four-dimensional point of view, black holes seem to resist quantization. Under highly symmetric conditions, all higher-dimensional black holes are two-dimensional. Unlike their higher-dimensional counterparts, two-dimensional black holes may not resist quantization. A non-trivial description of gravity in two dimensions is not possible using Einstein’s theory of gravity alone. However, we may still arrive at a consistent description of gravity by introducing a scalar field known as the dilaton. In this thesis, we study both the classical and quantum aspects of the interior of two-dimensional black holes using a generalized dilaton-gravity theory. Classically, we will find that the interior of most two-dimensional black holes is not much different from that of four-dimensional black holes. But by introducing quantized matter into the theory, the fluctuations in space-time will give a different picture of the structure of the interior of black holes. Using a low-energy effective field theory, we will show that it is indeed possible to identify quantum modes in the interior of black holes and perform quantum-mechanical calculations near the singularity.
- Full Text:
- Date Issued: 2016
Single station TEC modelling during storm conditions
- Authors: Uwamahoro, Jean Claude
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3812 , vital:20545
- Description: It has been shown in ionospheric research that modelling total electron content (TEC) during storm conditions is a big challenge. In this study, mathematical equations were developed to estimate TEC over Sutherland (32.38°S, 20.81°E) during storm conditions, using Empirical Orthogonal Function (EOF) analysis combined with regression analysis. TEC was derived from GPS observations and a geomagnetic storm was defined for Dst ≤ -50 nT. The inputs for the model were chosen based on the factors that influence TEC variation, such as diurnal, seasonal, solar and geomagnetic activity variation, and these were represented by hour of the day, day number of the year, F10.7 and the A index respectively. The EOF model was developed using GPS TEC data from 1999 to 2013 and tested on different storms. For the model validation (interpolation), three storms were chosen in 2000 (solar maximum period) and three others in 2006 (solar minimum period), while for extrapolation six storms, including three in 2014 and three in 2015, were chosen. Before building the model, TEC values for the selected 2000 and 2006 storms were removed from the dataset used to construct the model in order to make the model validation independent of the data. A comparison of the observed and modelled TEC showed that the EOF model works well for storms with non-significant ionospheric TEC response and storms that occurred during periods of low solar activity. High correlation coefficients between the observed and modelled TEC were obtained, showing that the model covers most of the information contained in the observed TEC. Furthermore, it has been shown that the EOF model developed for a specific station may be used to estimate TEC over other locations within a latitudinal and longitudinal coverage of 8.7° and 10.6° respectively. This is an important result as it reduces the data dimensionality problem for computational purposes.
It may therefore not be necessary for regional storm-time TEC modelling to compute TEC data for all the closest GPS receiver stations, since most of the needed information can be extracted from measurements at one location.
- Full Text:
- Date Issued: 2016
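The EOF analysis described in the abstract can be sketched with a singular value decomposition: the TEC data matrix is split into orthogonal basis functions and expansion coefficients, and the thesis then ties the coefficients to drivers (hour of day, day number, F10.7, A index) by regression. Below is a minimal NumPy sketch; the thesis' exact formulation may differ.

```python
import numpy as np

def eof_decompose(tec, n_modes):
    """Decompose a 2-D TEC data matrix into its leading EOF modes.

    tec: array of shape (n_hours, n_days) or similar. Returns the
    column mean, the EOF basis functions, and the expansion
    coefficients for the first n_modes modes. Illustrative sketch.
    """
    mean = tec.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(tec - mean, full_matrices=False)
    eofs = u[:, :n_modes]                        # orthogonal basis functions
    coeffs = s[:n_modes, None] * vt[:n_modes]    # time-expansion coefficients
    return mean, eofs, coeffs

def eof_reconstruct(mean, eofs, coeffs):
    """Rebuild the data matrix from a truncated EOF expansion."""
    return mean + eofs @ coeffs
```

Keeping all modes reconstructs the data exactly; truncating to a few leading modes is what makes the subsequent regression on geophysical drivers tractable.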
The EPR paradox: back from the future
- Authors: Bryan, Kate Louise Halse
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2881 , vital:20338
- Description: The Einstein-Podolsky-Rosen (EPR) thought experiment produced a problem regarding the interpretation of quantum mechanics provided for entangled systems. Although the thought experiment was reformulated mathematically in Bell's Theorem, the conclusion regarding entanglement correlations is still debated today. In an attempt to provide an explanation of how entangled systems maintain their correlations, this thesis investigates the theory of post-state teleportation as a possible interpretation of how information moves between entangled systems without resorting to nonlocal action. Post-state teleportation describes a method of communicating to the past via a quantum information channel. The resulting picture of the EPR thought experiment relied on information propagating backward from a final boundary condition to ensure all correlations were maintained. Similarities were found between this resolution of the EPR paradox and the final state solution to the black hole information paradox and the closely related firewall problem. The latter refers to an apparent conflict between unitary evaporation of a black hole and the strong subadditivity condition. The use of observer complementarity allows this solution of the black hole problem to be shown to be the same as a seemingly different solution known as “ER=EPR”, where ‘ER’ refers to an Einstein-Rosen bridge or wormhole.
- Full Text:
- Date Issued: 2016
Thermoluminescence of annealed synthetic quartz
- Authors: Atang, Elizabeth Fende Midiki
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/420 , vital:19957
- Description: The kinetic and dosimetric features of the main thermoluminescent peak of synthetic quartz have been investigated in quartz ordinarily annealed at 500 °C as well as quartz annealed at 500 °C for 10 minutes. The main peak is found at 78 °C for the samples annealed at 500 °C for 10 minutes, irradiated to 10 Gy and heated at 1.0 °C/s. For the samples ordinarily annealed at 500 °C, the main peak is found at 106 °C after the sample has been irradiated to 30 Gy and heated at 5.0 °C/s. In these samples, the intensity of the main peak is enhanced with repetitive measurement whereas its maximum temperature is unaffected. The peak position of the main peak in the sample is independent of the irradiation dose and this, together with its fading characteristics, is consistent with first-order kinetics. For doses between 5 and 25 Gy, the dose response of the main peak of the annealed sample is superlinear. The half-life of the main TL peak of the annealed sample is about 1 h. The activation energy E of the main peak is around 0.90 eV. For a heating rate of 0.4 °C/s, its order of kinetics b derived from the whole curve method of analysis is 1.0. Following irradiation, preheating and illumination with 470 nm blue light, the main peak in the annealed sample is regenerated during heating. The resulting phototransferred peak occurs at the same temperature as the original peak and has similar kinetic and dosimetric features, with a half-life of about 1 h. For a preheat temperature of 200 °C, the intensity of the phototransferred peak in the sample increases with illumination time up to a maximum and decreases thereafter. At longer illumination times, no further decrease in the intensity of the phototransferred peak is observed. The traps associated with the 325 °C peak are the main source of the electrons responsible for the regenerated peak.
- Full Text:
- Date Issued: 2016
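For the first-order kinetics reported in the abstract, isothermal fading is exponential with rate p = s·exp(−E/k_BT), so a half-life such as the quoted ~1 h follows as ln 2 / p. A sketch of that relation; the frequency factor s is a free parameter here, since the abstract does not quote one.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def tl_half_life(E, s, T):
    """Half-life of a first-order TL trap at storage temperature T (K).

    E: activation energy in eV (the abstract reports ~0.90 eV);
    s: frequency factor in 1/s (placeholder, not quoted in the
    abstract). First-order decay rate: p = s * exp(-E / (k_B * T)).
    """
    p = s * math.exp(-E / (K_B * T))
    return math.log(2) / p
```

The strong exponential dependence on E is why a modest change in activation energy shifts the half-life by orders of magnitude.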
Automation of source-artefact classification
- Authors: Sebokolodi, Makhuduga Lerato Lydia
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4920 , vital:20743
- Description: The high sensitivities of modern radio telescopes will enable the detection of very faint astrophysical sources in the distant Universe. However, these high sensitivities also imply that calibration artefacts, which were below the noise for less sensitive instruments, will emerge above the noise and may limit the dynamic range capabilities of these instruments. Detecting faint emission will require detection thresholds close to the noise and this may cause some of the artefacts to be incorrectly detected as real emission. The current approach is to manually remove the artefacts, or set high detection thresholds in order to avoid them. The former will not be possible given the large quantities of data that these instruments will produce, and the latter results in very shallow and incomplete catalogues. This work uses the negative detection method developed by Serra et al. (2012) to distinguish artefacts from astrophysical emission in radio images. We also present a technique that automates the identification of sources subject to severe direction-dependent (DD) effects and thus allows them to be flagged for DD calibration. The negative detection approach is shown to provide high reliability and high completeness catalogues for simulated data, as well as a JVLA observation of the 3C147 field (Mitra et al., 2015). We also show that our technique correctly identifies sources that require DD calibration for datasets from the KAT-7, LOFAR, JVLA and GMRT instruments.
- Full Text:
- Date Issued: 2017
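The negative detection approach exploits the (near) symmetry of thermal noise: genuine emission is only positive, while noise produces positive and negative peaks in equal measure. A minimal sketch of one common reliability estimator built on this idea (the thesis may use a refined version; the counts below are hypothetical):

```python
def reliability(n_pos, n_neg):
    """Catalogue reliability from positive vs negative detections.
    If the noise is symmetric about zero, every negative detection implies
    roughly one spurious positive detection at the same threshold, so
    R = (N_pos - N_neg) / N_pos, clipped to the interval [0, 1]."""
    if n_pos == 0:
        return 0.0
    return max(0.0, min(1.0, (n_pos - n_neg) / n_pos))

# Hypothetical counts at some detection threshold:
print(reliability(200, 10))  # 0.95: ~10 of the 200 positives expected spurious
```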
Calibration and imaging with variable radio sources
- Authors: Mbou Sob, Ulrich Armel
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/37977 , vital:24721
- Description: Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration (with inaccurate knowledge of the sky and instruments) leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not considered in sky models during calibration. It investigates both long-term and short-term variability, and uses the root mean square (rms) and the power spectrum as metrics to evaluate the “quality” of the residual visibilities obtained through calibration. We show that overestimation and underestimation of source flux density during calibration produce similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed, which prevents them from being removed by advanced techniques such as stacking. The power spectra measured from the residuals with a variable source were significantly higher than those from residuals without a variable source. This implies that advanced calibration techniques and sky model completeness will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below the thermal noise.
- Full Text:
- Date Issued: 2017
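The two residual-visibility metrics named above can be sketched as follows. The "residuals" here are synthetic (noise plus a hypothetical unmodelled variable-source term), not data from the thesis:

```python
import numpy as np

def residual_metrics(residuals):
    """The two metrics used to judge residual visibilities:
    root mean square and a simple 1-D power spectrum."""
    rms = np.sqrt(np.mean(np.abs(residuals) ** 2))
    power = np.abs(np.fft.fft(residuals)) ** 2 / residuals.size
    return rms, power

rng = np.random.default_rng(0)
n = 1024
noise = rng.normal(0, 1.0, n)                         # thermal-noise-like residual
leftover = 0.5 * np.sin(2 * np.pi * 30 * np.arange(n) / n)  # unmodelled signal

rms_clean, p_clean = residual_metrics(noise)
rms_dirty, p_dirty = residual_metrics(noise + leftover)
print(rms_dirty > rms_clean)      # the unmodelled source raises the rms
print(p_dirty[30] > p_clean[30])  # ...and the power at its frequency
```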
Ionospheric disturbances during magnetic storms at SANAE
- Authors: Hiyadutuje, Alicreance
- Date: 2017
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/54956 , vital:26639
- Description: The coronal mass ejections (CMEs) and solar flares associated with extreme solar activity may strike the Earth's magnetosphere and give rise to geomagnetic storms. During geomagnetic storms, the polar plasma dynamics may influence the middle- and low-latitude ionosphere via travelling ionospheric disturbances (TIDs). These are wave-like electron density disturbances caused by atmospheric gravity waves propagating in the ionosphere. TIDs focus and defocus SuperDARN signals, producing a characteristic pattern of ground backscattered power (Samson et al., 1989). Geomagnetic storms may cause a decrease of total electron content (TEC), i.e. a negative storm effect, and/or an increase of TEC, i.e. a positive storm effect. The aim of this project was to investigate the ionospheric response to strong storms (Dst < -100 nT) between 2011 and 2015, using TEC and scintillation measurements derived from GPS receivers as well as SuperDARN power, Doppler velocity and convection maps. In this study the ionosphere's response is found to depend on the magnitude and time of occurrence of the geomagnetic storm. The ionospheric TEC results show that most of the storm effects observed were a combination of both negative and positive effects per storm per station (77.8%), while only 8.9% and 13.3% of the effects on TEC were purely negative and purely positive respectively. The highest number of storm effects occurred in autumn (36.4%), while 31.6%, 28.4% and 3.6% occurred in winter, spring and summer respectively. During the storms studied, 71.4% had phase scintillation in the range 0.7-1 rad, and only 14.3% of the storms had amplitude scintillations near 0.4. The storms studied at SANAE station generated TIDs with periods of less than an hour and amplitudes in the range 0.2-5 TECU. These TIDs were found to originate from high-velocity plasma flows, some of which are visible in SuperDARN convection maps.
Early studies concluded that likely sources of these disturbances correspond to ionospheric current surges (Bristow et al., 1994) in the dayside auroral zone (Huang et al., 1998).
- Full Text:
- Date Issued: 2017
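GPS-derived TEC of the kind used above is conventionally obtained from the geometry-free combination of the two GPS carrier frequencies. A sketch, assuming simple dual-frequency pseudorange differencing with an illustrative delay (real processing must also handle inter-frequency biases and cycle slips):

```python
# Slant TEC from dual-frequency GPS pseudoranges (geometry-free combination):
# STEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * (P2 - P1)  [electrons / m^2]
F1 = 1575.42e6  # GPS L1 frequency (Hz)
F2 = 1227.60e6  # GPS L2 frequency (Hz)
TECU = 1.0e16   # electrons per m^2 in one TEC unit

def slant_tec_tecu(p1_m, p2_m):
    factor = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return factor * (p2_m - p1_m) / TECU

# A hypothetical 5 m ionospheric range difference between the two bands:
print(round(slant_tec_tecu(0.0, 5.0), 1))  # ~47.6 TECU
```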
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of the nearby supermassive black holes Sgr A* and M87 on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~10 µ-arcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; as well as antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the meqsilhouette software package, which is focused on simulating realistic EHT data. It has the capability to simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere and the interstellar medium, as well as primary beams and associated antenna pointing errors. We have demonstrated through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as to gain insight into possible systematic uncertainties.
- Full Text:
- Date Issued: 2017
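As a toy illustration of why station-based tropospheric phase errors matter in mm-VLBI (this is not MEQSILHOUETTE code; all parameters are invented), one can corrupt a unit visibility with independent antenna phase wander and watch the time average decorrelate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_time = 8, 200
true_vis = 1.0 + 0j  # a point source has unit visibility on all baselines

# Independent tropospheric phase wander per antenna (radians, illustrative).
phase = rng.normal(0.0, 0.8, (n_ant, n_time))
i, j = 0, 1  # one baseline: phases enter as the antenna-pair difference
corrupted = true_vis * np.exp(1j * (phase[i] - phase[j]))

# Time-averaging the corrupted visibility decorrelates it: amplitude < 1.
coherence = np.abs(corrupted.mean())
print(coherence < 1.0)  # True: coherence is lost relative to the true source
```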
Real-time audio spectrum analyser research, design, development and implementation using the 32-bit ARM® Cortex-M4 microcontroller
- Authors: Just, Stefan Antonio
- Date: 2017
- Subjects: Spectrum analyzers , Sound -- Recording and reproducing -- Digital techniques , Real-time data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/50536 , vital:25997
- Description: This thesis describes the design and testing of a low-cost hand-held real-time audio analyser (RTAA). This includes the design of an embedded system, the development of the firmware executed by the embedded system, and the implementation of real-time signal processing algorithms. One of the objectives of this project was to design a low-cost alternative to the current commercially available audio analysers. The device was tested with the standard audio test signal (pink noise) and was compared to the expected flat-spectrum response corresponding to a balanced audio system. The design makes use of a 32-bit Reduced Instruction Set Computer (RISC) processor core (ARM Cortex-M4), namely the STM32F4 family of microcontrollers. Due to the pin compatibility of the microcontroller (designed and manufactured by STMicroelectronics), the new development board can also be upgraded with the newly released Cortex-M7 microcontroller, namely the STM32F7 family of microcontrollers. Moreover, the low-cost hardware design features 256 kB of Random Access Memory (RAM); an on-board Micro-Electro-Mechanical System (MEMS) microphone; on-chip 12-bit Analogue-to-Digital (A/D) and Digital-to-Analogue (D/A) Converters; a 3.2" Thin-Film-Transistor Liquid-Crystal Display (TFT-LCD) with a resistive touch screen sensor; and an SD-Card socket. Furthermore, two additional expansion modules were designed to extend the functionality of the real-time audio analyser. The first is an audio/video module featuring a professional 24-bit, 192 kHz sampling rate audio CODEC; a balanced microphone input; an unbalanced line output; three MEMS microphone inputs; a headphone output; and a Video Graphics Array (VGA) controller allowing the display of the analysed audio spectrum on either a projector or a monitor. The second expansion module features two external memories: 1 MB of Static Random Access Memory (SRAM) and 16 MB of Synchronous Dynamic Random Access Memory (SDRAM).
While the two additional expansion modules were not fully utilised by the firmware presented in this thesis, future revisions of the real-time audio analyser firmware will provide higher-performing and more accurate analysis of the audio spectrum. The full research and design process for the real-time audio analyser is discussed; problems and pitfalls with the final implemented design are highlighted and possible resolutions investigated. The development costs (excluding labour) are given in the form of a bill of materials (BOM), with the total cost averaging around R1000. Moreover, the VGA controller could further decrease the overall cost by allowing the removal of the TFT-LCD screen from the audio analyser, provided the external display is not counted in the BOM.
- Full Text:
- Date Issued: 2017
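The core analysis step of an FFT-based real-time spectrum analyser like the one described can be sketched as follows (in Python rather than Cortex-M4 firmware; the frame length and sampling rate are illustrative):

```python
import numpy as np

def spectrum_db(frame, fs):
    """One analysis step of an FFT-based spectrum analyser:
    window the frame, take the real FFT and convert to dB."""
    win = np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(frame * win))
    mag /= win.sum() / 2  # normalise so a full-scale sine reads ~0 dB
    db = 20 * np.log10(np.maximum(mag, 1e-12))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return freqs, db

fs = 48000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone, amplitude 1
freqs, db = spectrum_db(frame, fs)
peak = freqs[np.argmax(db)]
print(round(peak))  # the peak bin lands near the 1 kHz tone
```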
Thermoluminescence of synthetic quartz annealed beyond its second phase inversion temperature
- Authors: Mthwesi, Zuko
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/46077 , vital:25577
- Description: Thermoluminescence of synthetic quartz annealed at 1000 °C for 10 minutes has been studied. The aim was to study the mechanisms of thermoluminescence in annealed synthetic quartz and to discuss the results in terms of the physics of point defects. The sample was irradiated with a dose of 10 Gy of beta radiation and then heated at a linear heating rate of 1 °C/s up to 500 °C. The thermoluminescence (TL) glow curve consists of three glow peaks: peak I at 74 °C (the main peak) is markedly more intense than the other two, and peak II at 144 °C is more intense than peak III at 180 °C. This study concentrated on the main peak at 74 °C and peak III at 180 °C. Kinetic analysis was carried out to determine the trap depth E, frequency factor s and order of kinetics b of both peaks using the initial rise, peak shape, variable heating rate, glow curve deconvolution and isothermal TL methods. The kinetic parameters obtained were around 0.7 to 1.0 eV for the trap depth and in the interval 10⁸ to 10¹⁵ s⁻¹ for the frequency factor, for both peaks. The effect of heating rate, from 0.5 to 5 °C/s, on the TL peak intensity and peak temperature was observed, as was the effect of thermal quenching at high heating rates. Since the TL glow curve has overlapping peaks, the Tm-Tstop method (from 54 °C up to 64 °C) and E-Tstop methods were applied, where a first-order single peak was observed. Phototransferred thermoluminescence (PTTL) was investigated and is characterized by three peaks: PTTL peak I at 72 °C, peak II at 134 °C and peak III at 176 °C. Analysis was carried out on peaks I and III for the effect of dose dependence from 20-200 Gy. Thermal fading was observed on PTTL peaks I and III after a storage time of 30 minutes.
- Full Text:
- Date Issued: 2017
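Of the kinetic-analysis methods listed, the initial-rise method is the simplest: on the low-temperature tail of a glow peak the intensity is proportional to exp(-E/kT), so the slope of ln I versus 1/(kT) gives -E. A sketch on synthetic data for a 0.9 eV trap (not the thesis measurements):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_E(temps_k, intensities):
    """Least-squares slope of ln(I) vs 1/(k*T); returns the trap depth E (eV)."""
    xs = [1.0 / (K_B * t) for t in temps_k]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den  # slope is -E, so negate

# Synthetic initial-rise data for a trap with E = 0.9 eV:
temps = [300.0, 305.0, 310.0, 315.0, 320.0]
ints = [math.exp(-0.9 / (K_B * t)) for t in temps]
print(round(initial_rise_E(temps, ints), 3))  # recovers 0.9
```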
A pilot wide-field VLBI survey of the GOODS-North field
- Authors: Akoto-Danso, Alexander
- Date: 2019
- Subjects: Radio astronomy , Very long baseline interferometry , Radio interferometers , Imaging systems in astronomy , Hubble Space Telescope (Spacecraft) -- Observations
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/72296 , vital:30027
- Description: Very Long Baseline Interferometry (VLBI) has significant advantages in disentangling active galactic nuclei (AGN) from star formation, particularly at intermediate to high redshift, due to its high angular resolution and insensitivity to dust. Surveys using VLBI arrays are only just becoming practical over wide areas thanks to numerous developments and innovations (such as multi-phase-centre techniques) in observation and data analysis. However, fully automated pipelines for VLBI data analysis are based on old software packages and are unable to incorporate new calibration and imaging algorithms. In this work, the researcher developed a pipeline for VLBI data analysis which integrates a recent wide-field imaging algorithm, RFI excision, and a purpose-built source finding algorithm specifically developed for the 64k × 64k wide-field VLBI images. The researcher used this novel pipeline to process 6% (~9 arcmin² of the total 160 arcmin²) of the data from the CANDELS GOODS-North extragalactic field at 1.6 GHz. The milli-arcsec scale images have an average rms of ~10 µJy/beam. Forty-four (44) candidate sources were detected, most of which are at sub-mJy flux densities, having brightness temperatures and luminosities of >5×10⁵ K and >6×10²¹ W Hz⁻¹ respectively. This work demonstrates that automated post-processing pipelines for wide-field, uniform-sensitivity VLBI surveys are feasible and indeed made more efficient with new software, wide-field imaging algorithms and purpose-built source-finders. This broadens the discovery space for future wide-field surveys with upcoming arrays such as the African VLBI Network (AVN), MeerKAT and the Square Kilometre Array (SKA).
- Full Text:
- Date Issued: 2019
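The brightness temperatures quoted above follow from the Rayleigh-Jeans relation for a Gaussian source. A sketch with an illustrative 1 mJy source in a hypothetical 10 mas beam at the survey frequency of 1.6 GHz:

```python
import math

# Rayleigh-Jeans brightness temperature of a Gaussian source:
# T_B = S * c^2 / (2 * k_B * nu^2 * Omega), Omega = pi*th_maj*th_min/(4 ln 2).
C = 299792458.0          # speed of light (m/s)
K_B = 1.380649e-23       # Boltzmann constant (J/K)
MAS = math.pi / (180 * 3600 * 1000)  # one milli-arcsec in radians

def brightness_temp(s_jy, nu_hz, theta_maj_mas, theta_min_mas):
    omega = math.pi * (theta_maj_mas * MAS) * (theta_min_mas * MAS) / (4 * math.log(2))
    return (s_jy * 1e-26) * C**2 / (2 * K_B * nu_hz**2 * omega)

# A hypothetical 1 mJy source unresolved in a 10 mas beam at 1.6 GHz:
tb = brightness_temp(1e-3, 1.6e9, 10.0, 10.0)
print(tb > 5e5)  # True: exceeds the ~5x10^5 K figure quoted above
```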
Foreground simulations for observations of the global 21-cm signal
- Authors: Klutse, Diana
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Electromagnetic waves , Radiation, Background
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76398 , vital:30557
- Description: The sky-averaged (global) spectrum of the redshifted 21-cm line promises to be a direct probe of the Dark Ages, the period before the first luminous sources formed, and the Epoch of Reionization, during which these sources produced enough ionizing photons to ionize the neutral intergalactic medium. However, observations of this signal are contaminated both by astrophysical foregrounds, which are orders of magnitude brighter than the cosmological signal, and by non-astrophysical and non-ideal instrumental effects. It is therefore crucial to understand all these data components and their impact on the cosmological signal for successful signal extraction. In this view, we investigated the impact that the small-scale spatial structure of the diffuse Galactic foreground has on the foreground spectrum as observed by a global 21-cm experiment. Using a realistic dipole beam model, we simulated two different sets of observations of two synchrotron foreground templates that differ from each other in their small-scale structure: the original 408 MHz all-sky map by Haslam et al. (1982) and a version whose calibration was improved to remove artifacts and point sources (Remazeilles et al., 2015). We generated simulated foreground spectra and modeled them using a polynomial expansion in frequency. We found that the different foreground templates have a modest impact on the simulated spectra, generating differences of up to 2% in the root mean square of the residual spectra after the log-polynomial best fit was subtracted out.
- Full Text:
- Date Issued: 2019
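The log-polynomial foreground modelling described above can be sketched as follows; the power-law "sky" is an invented stand-in for the Haslam-based templates:

```python
import numpy as np

# Fit a polynomial in log(frequency) to log(temperature), subtract the best
# fit and measure the rms of the residual spectrum.
freqs = np.linspace(50e6, 100e6, 128)    # hypothetical observing band (Hz)
t_fg = 3000.0 * (freqs / 75e6) ** -2.55  # power-law, synchrotron-like sky (K)

def fit_residual_rms(freqs, temps, order=4):
    logf = np.log(freqs / freqs.mean())  # centred log-frequency for stability
    logt = np.log(temps)
    coeffs = np.polyfit(logf, logt, order)
    model = np.exp(np.polyval(coeffs, logf))
    return np.sqrt(np.mean((temps - model) ** 2))

# A pure power law is linear in log-log space, so it fits essentially exactly:
print(fit_residual_rms(freqs, t_fg) < 1e-6)  # True
```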
Machine learning methods for calibrating radio interferometric data
- Authors: Zitha, Simphiwe Nhlanhla
- Date: 2019
- Subjects: Calibration , Radio astronomy -- Data processing , Radio astronomy -- South Africa , Karoo Array Telescope (South Africa) , Radio telescopes -- South Africa , Common Astronomy Software Application (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97096 , vital:31398
- Description: The applications of machine learning have created an opportunity to deal with complex problems currently encountered in radio astronomy data processing. Calibration is one of the most important data processing steps required to produce high dynamic range images. This process involves the determination of calibration parameters, both instrumental and astronomical, to correct the collected data. Typically, astronomers use a package such as Common Astronomy Software Applications (CASA) to compute the gain solutions based on regular observations of a known calibrator source. In this work we present applications of machine learning to first generation calibration (1GC), using the KAT-7 telescope environmental and pointing sensor data recorded during observations. Applying machine learning to 1GC, as opposed to calculating the gain solutions in CASA, has shown evidence of reducing computation, as well as accurately predicting the 1GC gain solutions representing the behaviour of the antenna during an observation. These methods are computationally less expensive; however, they have not fully learned to generalise in predicting accurate 1GC solutions from the environmental and pointing sensors alone. We call this multi-output regression model ZCal; it is based on the random forest, decision tree, extremely randomized trees and K-nearest neighbour algorithms. The prediction error obtained when testing our model on held-out data is approximately 0.01 < RMSE < 0.09 for gain amplitude per antenna, and 0.2 rad < RMSE < 0.5 rad for gain phase. This shows that the instrumental parameters used to train our model correlate more strongly with gain amplitude effects than with phase.
- Full Text:
- Date Issued: 2019
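The multi-output regression idea behind ZCal can be sketched with scikit-learn. Everything here is illustrative: the sensor features (temperature, wind, elevation), the toy gain model, and the random forest hyperparameters are assumptions, not the thesis setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Predict per-antenna gain solutions (amplitude and phase) from
# environmental/pointing sensor readings with a multi-output random forest.
rng = np.random.default_rng(1)
n = 2000
temp = rng.uniform(5.0, 35.0, n)       # ambient temperature (degC)
wind = rng.uniform(0.0, 15.0, n)       # wind speed (m/s)
elev = rng.uniform(15.0, 90.0, n)      # pointing elevation (deg)
X = np.column_stack([temp, wind, elev])

# Toy gains: amplitude depends mostly on temperature, phase on elevation.
amp = 1.0 + 0.005 * (temp - 20.0) + 0.01 * rng.standard_normal(n)
phase = 0.3 * np.sin(np.deg2rad(elev)) + 0.05 * rng.standard_normal(n)
y = np.column_stack([amp, phase])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse_amp = mean_squared_error(y_te[:, 0], pred[:, 0]) ** 0.5
rmse_phase = mean_squared_error(y_te[:, 1], pred[:, 1]) ** 0.5
print(f"amplitude RMSE: {rmse_amp:.3f}, phase RMSE: {rmse_phase:.3f} rad")
```

A single multi-output estimator, as used here, lets one model serve both the amplitude and phase targets, matching the RMSE-per-quantity evaluation reported in the abstract.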
Statistical study of traveling ionospheric disturbances over South Africa
- Authors: Mahlangu, Daniel Fiso
- Date: 2019
- Subjects: Ionosphere -- Research , Sudden ionospheric disturbances , Gravity waves , Magnetic storms
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76387 , vital:30556
- Description: This thesis provides a statistical analysis of traveling ionospheric disturbances (TIDs) over South Africa. The velocities of the TIDs were determined from total electron content (TEC) maps using particle image velocimetry (PIV). The periods were determined using the Morlet function in wavelet analysis. The TIDs were grouped into four categories: daytime, twilight and nighttime TIDs, and those that occurred during magnetic storms. It was found that daytime medium-scale TIDs (MSTIDs) propagated equatorward in all seasons (summer, autumn, winter, and spring), with velocities of about 114 to 213 m/s. Their maximum occurrence was in winter, between 15:00 and 16:00 LT. The daytime large-scale TIDs (LSTIDs) propagated equatorward with velocities of approximately 455 to 767 m/s. Their highest occurrence was in summer, between 12:00 and 13:00 LT. Most of these TIDs (about 78%) were observed during the passage of the morning solar terminator. This implied that the morning terminator was more effective at instigating TIDs. Only a few nighttime TIDs were observed and therefore their behavior could not be statistically inferred. The TIDs that occurred during magnetically disturbed conditions propagated equatorward. This indicated that their source mechanism was atmospheric gravity waves generated at the onset of geomagnetic storms.
- Full Text:
- Date Issued: 2019
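The PIV step described above amounts to cross-correlating successive TEC maps and converting the correlation-peak displacement into a velocity. A minimal sketch, assuming an invented grid spacing and cadence and a synthetic pattern in place of real TEC data:

```python
import numpy as np

def correlation_shift(frame_a, frame_b):
    """Integer-pixel shift of frame_b relative to frame_a via FFT cross-correlation."""
    f = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap cyclic peak positions into the range [-n/2, n/2)
    return np.array([p if p < s // 2 else p - s
                     for p, s in zip(peak, corr.shape)])

rng = np.random.default_rng(2)
tec0 = rng.standard_normal((64, 64))               # TEC map at t0 (stand-in)
tec1 = np.roll(tec0, shift=(3, -2), axis=(0, 1))   # same pattern, displaced

dx_km, dt_s = 50.0, 300.0                 # pixel size and map cadence (assumed)
shift_pix = correlation_shift(tec0, tec1)
velocity = shift_pix * dx_km * 1000.0 / dt_s       # m/s per grid axis
speed = np.hypot(*velocity)
print(f"pixel shift: {shift_pix}, speed: {speed:.0f} m/s")
```

With the assumed 50 km pixels and 5-minute cadence, a few pixels of displacement corresponds to the hundreds of m/s quoted for the MSTIDs and LSTIDs.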
The dispersion measure in broadband data from radio pulsars
- Authors: Rammala, Isabella
- Date: 2019
- Subjects: Pulsars , Radio astrophysics , Astrophysics , Broadband communication systems
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/67857 , vital:29157
- Description: Modern-day radio telescopes make use of wideband receivers to take advantage of the broadband nature of radio pulsar emission. We ask how the use of such broadband pulsar data affects the measured pulsar dispersion measure (DM). Previous works have shown that, although the exact pulsar radio emission processes are not well understood, observations reveal evidence of a possible frequency dependence of the emission altitudes in the pulsar magnetosphere, a phenomenon known as radius-to-frequency mapping (RFM). This frequency dependence due to RFM can be embedded in the dispersive delay of the pulse profiles, which is normally interpreted as an interstellar effect (DM). We therefore interpret this intrinsic effect as an additional component δDM on top of the interstellar DM, and investigate how it can be statistically attributed to intrinsic profile evolution as well as to profile scattering. We make use of Monte Carlo simulations of beam models to generate realistic pulsar beams of various geometries, from which we produce intrinsic profiles in various frequency bands. The results show that the excess DM due to intrinsic profile evolution is more pronounced at high frequencies, whereas scattering dominates the excess DM at low frequencies. The implications of these results for broadband pulsar timing are presented.
- Full Text:
- Date Issued: 2019
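The DM measurement underlying this analysis is a fit of the cold-plasma dispersion law, Δt ∝ DM/ν², to per-channel arrival times across the band. A minimal sketch, with an invented DM, band, and noise level standing in for the simulated profiles:

```python
import numpy as np

# Recover a pulsar's DM by least-squares fitting t(nu) = t0 + k * DM / nu^2
# to per-channel arrival delays (k = 4.149e3 s MHz^2 cm^3 pc^-1).
K_DM = 4.149e3                 # dispersion constant (s MHz^2 pc^-1 cm^3)
dm_true = 30.0                 # injected DM (pc cm^-3), invented

rng = np.random.default_rng(3)
freqs = np.linspace(400.0, 1600.0, 32)          # band in MHz (assumed)
toas = K_DM * dm_true / freqs ** 2              # dispersive delays (s)
toas += 1e-5 * rng.standard_normal(freqs.size)  # per-channel timing noise

# Linear model in x = 1/nu^2: t = t0 + (K_DM * DM) * x.
x = freqs ** -2.0
slope, t0 = np.polyfit(x, toas, 1)
dm_fit = slope / K_DM
print(f"fitted DM: {dm_fit:.3f} pc cm^-3")
```

Any frequency-dependent profile shift from RFM or scattering adds to `toas` and is absorbed into `dm_fit`, which is exactly the δDM bias the thesis quantifies.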