Ionospheric disturbances during magnetic storms at SANAE
- Authors: Hiyadutuje, Alicreance
- Date: 2017
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/54956 , vital:26639
- Description: Coronal mass ejections (CMEs) and solar flares associated with extreme solar activity may strike the Earth's magnetosphere and give rise to geomagnetic storms. During geomagnetic storms, polar plasma dynamics may influence the middle- and low-latitude ionosphere via travelling ionospheric disturbances (TIDs). These are wave-like electron density disturbances caused by atmospheric gravity waves propagating in the ionosphere. TIDs focus and defocus SuperDARN signals, producing a characteristic pattern of ground backscattered power (Samson et al., 1989). Geomagnetic storms may cause a decrease of total electron content (TEC), i.e. a negative storm effect, and/or an increase of TEC, i.e. a positive storm effect. The aim of this project was to investigate the ionospheric response to strong storms (Dst < -100 nT) between 2011 and 2015, using TEC and scintillation measurements derived from GPS receivers as well as SuperDARN power, Doppler velocity and convection maps. In this study, the ionospheric response was found to depend on the magnitude and time of occurrence of the geomagnetic storm. The TEC results show that most of the observed storm effects were a combination of negative and positive phases per storm per station (77.8%), while only 8.9% and 13.3% of the effects on TEC were purely negative and purely positive respectively. The highest number of storm effects occurred in autumn (36.4%), while 31.6%, 28.4% and 3.6% occurred in winter, spring and summer respectively. Of the storms studied, 71.4% produced phase scintillation in the range 0.7 - 1 radians, and only 14.3% produced amplitude scintillations near 0.4. The storms studied at SANAE station generated TIDs with periods of less than an hour and amplitudes in the range 0.2 - 5 TECU. These TIDs were found to originate from high-velocity plasma flows, some of which are visible in SuperDARN convection maps. Early studies concluded that likely sources of such disturbances correspond to ionospheric current surges (Bristow et al., 1994) in the dayside auroral zone (Huang et al., 1998).
- Full Text:
- Date Issued: 2017
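The Dst < -100 nT criterion used above to select strong storms can be applied programmatically to an hourly Dst series. A minimal sketch; the sample values below are invented for illustration, not real Dst measurements:

```python
# Flag geomagnetic storms in an hourly Dst series using the Dst < -100 nT
# threshold the study uses for "strong" storms.

def find_storms(dst, threshold=-100):
    """Return (start, end) index pairs of contiguous runs with Dst < threshold."""
    storms, start = [], None
    for i, v in enumerate(dst):
        if v < threshold and start is None:
            start = i
        elif v >= threshold and start is not None:
            storms.append((start, i - 1))
            start = None
    if start is not None:
        storms.append((start, len(dst) - 1))
    return storms

# illustrative hourly Dst values (nT), not real data
dst = [-20, -60, -110, -150, -130, -90, -40, -105, -70]
print(find_storms(dst))  # → [(2, 4), (7, 7)]
```

Each returned pair brackets one storm interval; the TEC and scintillation data would then be examined within and around those intervals.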
Advanced radio interferometric simulation and data reduction techniques
- Authors: Makhathini, Sphesihle
- Date: 2018
- Subjects: Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57348 , vital:26875
- Description: This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with the MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam and, consequently, a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
- Full Text:
- Date Issued: 2018
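The quoted dynamic range follows directly from the two numbers in the abstract: it is the ratio of the peak flux density of 3C147 to the 1σ image noise. A quick check:

```python
# Dynamic range of the 3C147 reduction: peak flux density over image rms.
# Both values are taken from the abstract.

peak_jy = 22.58      # flux density of 3C147, Jy/beam
rms_jy = 2.87e-6     # 1-sigma noise level, Jy/beam

dynamic_range = peak_jy / rms_jy
print(f"{dynamic_range:.2e}")  # ≈ 7.87e+06, i.e. the ~8×10⁶:1 quoted
```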
Behaviour of quiet time ionospheric disturbances at African equatorial and midlatitude regions
- Authors: Orford, Nicola Diane
- Date: 2018
- Subjects: Ionospheric storms , Ionospheric storms -- Africa , Ionosphere , Plasmasphere , Q-disturbances , Total electron content (TEC)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62672 , vital:28228
- Description: Extreme ionospheric and geomagnetic disturbances affect technology adversely. Pre-storm enhancements, considered a potential predictor of geomagnetic storms, occur during quiet conditions prior to geomagnetic disturbances. The ionosphere also experiences disturbances during quiet geomagnetic conditions, and these Q-disturbances remain unexplored over Africa. This study used TEC data to characterize the morphology of Q-disturbances over Africa, exploring variations with solar cycle, season, time of occurrence and latitude. Observations from 10 African GPS stations in the equatorial and midlatitude regions show that Q-disturbances in the equatorial region are predominantly driven by E × B variations, while multiple mechanisms affect the midlatitude region. Q-disturbances occur more frequently during nighttime than during daytime, and no seasonal trend is observed. Midlatitude Q-disturbance mechanisms are explored in depth, considering substorm activity, the plasmaspheric contribution to GPS TEC and plasma transfer between conjugate points. Substorm activity is not a dominant mechanism, although Q-disturbances occurring under elevated substorm conditions tend to have longer duration and larger amplitude than general Q-disturbances. Many observed Q-disturbances become non-significant once the plasmaspheric contribution to the TEC measurements is removed, indicating that these disturbances occur within the plasmasphere, and not the ionosphere. Transfer of plasma between conjugate points does not seem to be a mechanism driving Q-disturbances, as the corresponding nighttime behaviour expected between depletions in the summer hemisphere and enhancements in the winter hemisphere is not observed. Pre-storm enhancements occur infrequently, rendering them a poor predictor of geomagnetic disturbances. Pre-storm enhancement morphology does not differ significantly from general quiet-time enhancement morphology, suggesting pre-storms are not a special case of Q-disturbances.
- Full Text:
- Date Issued: 2018
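TEC disturbances of the kind studied above are typically flagged as the relative deviation of observed TEC from a quiet-time reference such as a monthly median. A minimal sketch; the ±25% threshold and the sample values are illustrative assumptions, not the thesis's actual criteria:

```python
# Classify a TEC sample as a positive or negative disturbance relative to a
# quiet-time reference. Threshold and values are assumed for illustration.

def relative_deviation(tec, quiet_median):
    """Percentage deviation of observed TEC from the quiet-time median."""
    return 100.0 * (tec - quiet_median) / quiet_median

def classify(tec, quiet_median, threshold=25.0):
    dev = relative_deviation(tec, quiet_median)
    if dev > threshold:
        return "positive"
    if dev < -threshold:
        return "negative"
    return "quiet"

print(classify(26.0, 20.0))  # +30% deviation → "positive"
print(classify(14.0, 20.0))  # -30% deviation → "negative"
```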
Combined spectral and stimulated luminescence study of charge trapping and recombination processes in α-Al2O3:C
- Authors: Nyirenda, Angel Newton
- Date: 2018
- Subjects: Luminescence , Thermoluminescence , Luminescence spectroscopy , Carbon-doped aluminium oxide , Radioluminescence , Time-resolved X-ray excited optical luminescence
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62683 , vital:28235
- Description: The main objective of this project was to gain a deeper understanding of the luminescence processes in α-Al₂O₃:C, a highly sensitive dosimetric material, using a combined spectral and stimulated luminescence study. The spectral studies concentrated on the emission spectra obtained using X-ray excited radioluminescence (XERL), X-ray excited thermoluminescence (XETL) and time-resolved X-ray excited optical luminescence (TR-XEOL) techniques. The stimulated luminescence studies were based on thermoluminescence (TL), optically stimulated luminescence (OSL) and phototransferred TL (PTTL) methods, which were used to study the radiation-induced defects at high beta doses and the deep traps, that is, traps with thermal depths beyond 500°C. The spectral and stimulated luminescence measurements were carried out using a high-sensitivity luminescence spectrometer and a Risø TL/OSL Model DA-20 Reader, respectively. The XERL emission spectrum measured at room temperature shows seven Gaussian peaks associated with F-centres (420 nm), F+-centres (334 nm), F2+-centres (559 nm), the Stokes vibronic band of Cr3+ (671 nm), the Cr3+ R-line emission (694 nm), the anti-Stokes vibronic band of Cr3+ (710 nm) and an unidentified emission band (260-300 nm) which we associate with hole recombination at a luminescence centre. The 694-nm R-line emission from Cr3+ impurity ions is most likely due to recombination of holes at Cr2+ during stimulated luminescence, and to intracentre excitation of Cr3+ in photoluminescence (PL) following photon absorption. The Cr3+ emission decreases in intensity with repeated XERL measurements, whereas the intensity of the F-centre emission band remains almost constant. Depending on the X-ray irradiation dose, holes and/or electrons may participate in the emission processes of peaks I (30-80°C), II (90-250°C) and III (250-320°C) during a TL readout, although electron recombination is dominant regardless of dose. At higher doses, the XETL emission spectra indicate that the dominant band associated with TL peak III (250-320°C) shifts from the F-centre to Cr3+. Using deep-trap OSL, it has been confirmed that the main TL trap is also the main OSL trap, whereas the TL traps lying in the temperature range 400-550°C constitute the secondary OSL traps. There is evidence of strong retrapping at the main trap during optical stimulation of charges from the secondary OSL traps and the deep traps, and this retrapping occurs via the delocalized bands. At high beta doses, aggregate defect centres which significantly alter the TL and OSL properties are induced in the material. The induced aggregate centres are completely removed by heating a sample to 700°C. The radiation-induced defects cause the main TL peak to shift towards higher temperatures, increase its FWHM, reduce its maximum intensity and cause an underestimation of both the activation energy and the order of kinetics of the peak. On the other hand, the OSL response of the material is enhanced following a high irradiation dose. During sample storage in the dark at ambient temperature, charges migrate from the deep traps (donors) to the main and intermediate traps (acceptors); the major donor traps in this charge transfer phenomenon lie between 500 and 600°C.
- Full Text:
- Date Issued: 2018
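The XERL spectrum described above is decomposed into Gaussian emission bands. A minimal model of such a sum-of-Gaussians decomposition, using three of the seven band centres from the abstract; the amplitudes and widths are invented for illustration:

```python
# Toy sum-of-Gaussians emission spectrum. Band centres (nm) are from the
# abstract; amplitudes and widths are assumed values, not fitted results.
import math

def gaussian(x, amp, centre, sigma):
    return amp * math.exp(-0.5 * ((x - centre) / sigma) ** 2)

# (amplitude, centre in nm, width in nm)
bands = [(1.0, 420, 25),   # F-centre
         (0.4, 334, 20),   # F+-centre
         (0.2, 559, 30)]   # F2+-centre (subset of the seven bands)

def spectrum(x):
    """Total modelled emission at wavelength x (nm)."""
    return sum(gaussian(x, *b) for b in bands)

# the modelled spectrum peaks at the dominant F-centre band
peak_nm = max(range(300, 700), key=spectrum)
print(peak_nm)  # → 420
```

A real decomposition would fit all seven band parameters to the measured spectrum (e.g. by nonlinear least squares) rather than assume them.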
Long-term analysis of ionospheric response during geomagnetic storms in mid, low and equatorial latitudes
- Authors: Matamba, Tshimangadzo Merline
- Date: 2018
- Subjects: Ionospheric storms , Coronal mass ejections , Corotating interaction regions , Solar flares , Global Positioning System , Ionospheric critical frequencies , Equatorial Ionization Anomaly (EIA)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63991 , vital:28517
- Description: Understanding changes in the ionosphere is important for High Frequency (HF) communications and navigation systems. Ionospheric storms are disturbances in the Earth’s upper atmosphere due to solar activity such as Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and solar flares. This thesis reports for the first time on an investigation of the ionospheric response to great geomagnetic storms (Disturbance storm time, Dst ≤ −350 nT) that occurred during solar cycle 23. The storm periods analysed were 29 March - 02 April 2001, 27 - 31 October 2003, 18 - 23 November 2003 and 06 - 11 November 2004. Global Navigation Satellite System (GNSS) Total Electron Content (TEC) and ionosonde F2-layer critical frequency (foF2) data over northern hemisphere (European sector) and southern hemisphere (African sector) mid-latitudes were used to study the ionospheric responses within 15°E - 40°E longitude and ±31° - ±46° geomagnetic latitude. Mid-latitude regions within the same longitude sector in both hemispheres were selected in order to assess the contribution of low-latitude changes, especially the expansion of the Equatorial Ionization Anomaly (EIA), also known as the dayside ionospheric super-fountain effect, during these storms. In all storm periods, both negative and positive ionospheric responses were observed in both hemispheres. Negative ionospheric responses were mainly due to changes in neutral composition, while the expansion of the EIA led to pronounced positive ionospheric storm effects at mid-latitudes for some storm periods. In other cases (e.g. 29 October 2003), Prompt Penetration Electric Fields (PPEF), EIA expansion and large-scale Traveling Ionospheric Disturbances (TIDs) were found to be present during the positive storm effect at mid-latitudes in both hemispheres. An increase in TEC on 28 October 2003 was due to the large solar flare with a previously determined intensity of X45 ± 5. A statistical analysis of ionospheric storm effects due to Corotating Interaction Region (CIR)- and Coronal Mass Ejection (CME)-driven storms was also performed. The storm periods analysed occurred during 2001 - 2015, which covers parts of solar cycles 23 and 24. Dst ≤ -30 nT and Kp ≥ 3 indices were used to identify the storm periods considered. Ionospheric TEC derived from IGS stations that lie within 30°E - 40°E geographic longitude at mid, low and equatorial latitudes over the African sector was used. The statistical analysis of ionospheric storm effects was compared across mid, low and equatorial latitudes in the African sector for the first time. Positive ionospheric storm effects were more prevalent during both CME-driven and CIR-driven storms over all stations considered in this study. Negative ionospheric storm effects occurred only during CME-driven storms over mid-latitude stations and were more prevalent in summer. Another interesting finding is that, for the stations considered over mid, low and equatorial latitudes, negative-positive ionospheric responses were only observed over low and equatorial latitudes. A significant number of cases where the electron density changes remained within the background variability during storm conditions were observed over the low-latitude stations compared to other latitude regions.
- Full Text:
- Date Issued: 2018
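The Dst ≤ -30 nT and Kp ≥ 3 criteria used in the statistical study can be expressed as a simple joint filter over paired index samples. A minimal sketch with invented values:

```python
# Select times satisfying the study's joint storm criterion:
# Dst <= -30 nT AND Kp >= 3. Sample values are illustrative, not real indices.

def storm_times(dst, kp, dst_max=-30, kp_min=3):
    """Indices where both the Dst and Kp criteria are met."""
    return [i for i, (d, k) in enumerate(zip(dst, kp))
            if d <= dst_max and k >= kp_min]

dst = [-10, -45, -60, -25, -80]
kp  = [  2,   4,   2,   5,   6]
print(storm_times(dst, kp))  # → [1, 4]
```

Note that index 2 is excluded despite Dst = -60 nT, because Kp stays below 3 there; both conditions must hold simultaneously.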
Modelling Ionospheric vertical drifts over the African low latitude region
- Authors: Dubazane, Makhosonke Berthwell
- Date: 2018
- Subjects: Ionospheric drift , Magnetometers , Functions, Orthogonal , Neural networks (Computer science) , Ionospheric electron density -- Africa , Communication and Navigation Outage Forecasting Systems (C/NOFS)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63356 , vital:28396
- Description: Low- and equatorial-latitude vertical plasma drifts and electric fields govern the formation and evolution of ionospheric density structures, which affect space-based systems such as communications, navigation and positioning. Dynamical and electrodynamical processes play important roles in plasma distribution at different altitudes. Because of the high variability of the E × B drift in low-latitude regions, coupled with various processes that sometimes originate from high latitudes, especially during geomagnetic storm conditions, it is challenging to develop accurate vertical drift models. The challenge is compounded by the fact that very few instruments are dedicated to providing electric field, and hence E × B drift, data in low/equatorial latitude regions; in particular, no ground-based instrument exists for direct measurement of E × B drift in the African sector. This study presents the first investigation aimed at modelling the long-term variability of the low-latitude vertical E × B drift over the African sector, using a combination of Communication and Navigation Outage Forecasting Systems (C/NOFS) and ground-based magnetometer observations during 2008-2013. Because the approach is based on estimating the equatorial electrojet from ground-based magnetometer observations, the developed models are only valid for local daytime. Three modelling techniques were considered. Empirical Orthogonal Functions and partial least squares were applied to vertical E × B drift modelling for the first time. Artificial neural networks, which have the advantage of learning the underlying relationship between a set of inputs and a known output, were also used. Due to the lack of E × B drift data over the African sector, the developed models were validated using satellite data and the climatological Scherliess-Fejer model incorporated within the International Reference Ionosphere model. A maximum correlation coefficient of ∼0.8 was achieved when validating the developed models against C/NOFS E × B drift observations that were not used in any model development. For most of the time, the climatological model overestimates the local daytime vertical E × B drift velocities. The methods and approach presented in this study provide a background for constructing vertical E × B drift databases in longitude sectors that lack radar instrumentation. This will in turn make it possible to study the day-to-day variability of the vertical E × B drift and, hopefully, lead to the development of regional and global models that incorporate local-time information in different longitude sectors.
- Full Text:
- Date Issued: 2018
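One of the three modelling techniques above, Empirical Orthogonal Functions, can be obtained from an SVD of the mean-removed data matrix. A minimal sketch on a synthetic (days × local-time) drift field standing in for the C/NOFS-derived E × B drifts:

```python
# EOF decomposition via SVD of a mean-removed (days x local-time) matrix.
# The drift field below is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
hours = np.linspace(7, 17, 11)            # local daytime, as in the models
days = 50
base = np.sin(np.pi * (hours - 7) / 10)   # idealised daytime drift shape
data = np.outer(1 + 0.3 * rng.standard_normal(days), base)

anom = data - data.mean(axis=0)           # remove the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                 # rows are the spatial EOF patterns
var_frac = s**2 / np.sum(s**2)            # variance explained per mode

# a single mode captures essentially all variance of this rank-1 field
print(round(float(var_frac[0]), 3))  # → 1.0
```

In the thesis's setting, the leading EOFs of the drift matrix would form the basis functions of the model, with their time coefficients regressed against the magnetometer-derived electrojet strength.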
Tomographic imaging of East African equatorial ionosphere and study of equatorial plasma bubbles
- Authors: Giday, Nigussie Mezgebe
- Date: 2018
- Subjects: Ionosphere -- Africa, Central , Tomography -- Africa, Central , Global Positioning System , Neural networks (Computer science) , Space environment , Multi-Instrument Data Analysis System (MIDAS) , Equatorial plasma bubbles
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63980 , vital:28516
- Description: Although the African ionospheric equatorial region has the largest ground footprint along the geomagnetic equator, it has not been well studied, owing to the absence of adequate ground-based instruments. This thesis presents research on both tomographic imaging of the African equatorial ionosphere and the study of ionospheric irregularities/equatorial plasma bubbles (EPBs) under varying geomagnetic conditions. The Multi-Instrument Data Analysis System (MIDAS), an inversion algorithm, was investigated for its validity and ability as a tool to reconstruct multi-scale ionospheric structures under different geomagnetic conditions. This was done for the narrow East African longitude sector with data from the available ground Global Positioning System (GPS) receivers. The MIDAS results were compared to the results of two models, namely the IRI and GIM. MIDAS results compared more favourably with the observed vertical total electron content (VTEC), with a computed maximum correlation coefficient (r) of 0.99 and minimum root-mean-square error (RMSE) of 2.91 TECU, than did the results of the IRI-2012 and GIM models, with maximum r of 0.93 and 0.99 and minimum RMSE of 13.03 TECU and 6.52 TECU, respectively, over all the test stations and validation days. The ability of MIDAS to reconstruct storm-time TEC was also compared with the results produced by an Artificial Neural Network (ANN) for the African low- and mid-latitude regions. In terms of latitude, on average, MIDAS performed 13.44% better than the ANN at mid-latitudes, while MIDAS underperformed at low latitudes. This thesis also reports on the effects of moderate geomagnetic conditions on the evolution of EPBs and/or ionospheric irregularities during their season of occurrence, using measurements from space- and ground-based instruments for the East African equatorial sector. 
The study showed that the strength of the daytime equatorial electrojet (EEJ), the steepness of the TEC peak-to-trough gradient and/or the meridional/transequatorial thermospheric winds sometimes have collective/interwoven effects, while at other times one mechanism dominates. In summary, this research offered tomographic results that outperform those of the commonly used ("standard") global models (i.e. IRI and GIM) for a longitude sector of importance to space weather, one that has not been adequately studied due to a lack of sufficient instrumentation.
- Full Text:
- Date Issued: 2018
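The two skill scores quoted above, correlation coefficient (r) and RMSE in TECU, are simple to compute once reconstructed and observed VTEC are co-located. A minimal sketch, using illustrative values rather than the thesis's station data:

```python
import numpy as np

# Sketch: validation metrics for a tomographic TEC reconstruction --
# Pearson correlation coefficient and RMSE (TECU) against observations.
# These arrays are illustrative, not values from the study.
obs = np.array([12.0, 15.5, 20.1, 25.3, 30.2, 28.7, 22.4, 16.8])  # TECU
rec = np.array([11.4, 16.2, 19.5, 26.0, 29.1, 29.5, 21.8, 17.3])  # TECU

r = np.corrcoef(obs, rec)[0, 1]                 # correlation coefficient
rmse = np.sqrt(np.mean((rec - obs) ** 2))       # RMSE in TECU
```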
A pilot wide-field VLBI survey of the GOODS-North field
- Authors: Akoto-Danso, Alexander
- Date: 2019
- Subjects: Radio astronomy , Very long baseline interferometry , Radio interometers , Imaging systems in astronomy , Hubble Space Telescope (Spacecraft) -- Observations
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/72296 , vital:30027
- Description: Very Long Baseline Interferometry (VLBI) has significant advantages in disentangling active galactic nuclei (AGN) from star formation, particularly at intermediate to high redshift, due to its high angular resolution and insensitivity to dust. Surveys with VLBI arrays are only now becoming practical over wide areas, thanks to numerous developments and innovations (such as multi-phase-centre techniques) in observation and data-analysis techniques. However, fully automated pipelines for VLBI data analysis are based on old software packages and are unable to incorporate new calibration and imaging algorithms. In this work, the researcher developed a pipeline for VLBI data analysis which integrates a recent wide-field imaging algorithm, RFI excision, and a source-finding algorithm purpose-built for 64k × 64k wide-field VLBI images. The researcher used this novel pipeline to process 6% (∼9 arcmin² of the total 160 arcmin²) of the data from the CANDELS GOODS-North extragalactic field at 1.6 GHz. The milliarcsecond-scale images have an average rms of ∼10 μJy/beam. Forty-four (44) candidate sources were detected, most of which are at sub-mJy flux densities, having brightness temperatures and luminosities of >5×10⁵ K and >6×10²¹ W Hz⁻¹ respectively. This work demonstrates that automated post-processing pipelines for wide-field, uniform-sensitivity VLBI surveys are feasible and indeed made more efficient with new software, wide-field imaging algorithms and purpose-built source-finders. This broadens the discovery space for future wide-field surveys with upcoming arrays such as the African VLBI Network (AVN), MeerKAT and the Square Kilometre Array (SKA).
- Full Text:
- Date Issued: 2019
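The luminosities quoted above follow from the measured flux densities via L_ν = 4π D_L² S_ν. A minimal sketch, neglecting the K-correction and assuming an illustrative flux density and luminosity distance (not values taken from the survey):

```python
import numpy as np

# Sketch: spectral luminosity from a flux density,
#   L_nu = 4 * pi * D_L**2 * S_nu        (K-correction neglected).
# Both input numbers below are assumptions for illustration.
MPC_M = 3.0857e22            # metres per megaparsec
S_nu = 60e-6 * 1e-26         # 60 uJy in W m^-2 Hz^-1  (1 Jy = 1e-26 W m^-2 Hz^-1)
D_L = 6600 * MPC_M           # assumed luminosity distance (~ z of 1)

L_nu = 4 * np.pi * D_L**2 * S_nu   # W Hz^-1
```

Even a sub-mJy source at such a distance lands well above the 6×10²¹ W Hz⁻¹ scale quoted in the abstract.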
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing-error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks, whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the measurement set, the most common format for interferometer data, and is well integrated with Montblanc, a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding the source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor.
- Full Text:
- Date Issued: 2019
Foreground simulations for observations of the global 21-cm signal
- Authors: Klutse, Diana
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Electromagnetic waves , Radiation, Background
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76398 , vital:30557
- Description: The sky-averaged (global) spectrum of the redshifted 21-cm line promises to be a direct probe of the Dark Ages, the period before the first luminous sources formed, and of the Epoch of Reionization, during which these sources produced enough ionizing photons to ionize the neutral intergalactic medium. However, observations of this signal are contaminated both by astrophysical foregrounds, which are orders of magnitude brighter than the cosmological signal, and by non-astrophysical, non-ideal instrumental effects. It is therefore crucial to understand all these data components and their impact on the cosmological signal for successful signal extraction. With this in view, we investigated the impact that the small-scale spatial structure of the diffuse Galactic foreground has on the foreground spectrum as observed by a global 21-cm experiment. We simulated two sets of observations, using a realistic dipole beam model, of two synchrotron foreground templates that differ from each other in their small-scale structure: the original 408 MHz all-sky map of Haslam et al. (1982) and a version in which the calibration was improved to remove artifacts and point sources (Remazeilles et al., 2015). We generated simulated foreground spectra and modelled them using a polynomial expansion in frequency. We found that the different foreground templates have a modest impact on the simulated spectra, generating differences of up to 2% in the root mean square of the residual spectra after the best-fit log-polynomial was subtracted.
- Full Text:
- Date Issued: 2019
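The foreground-modelling step described above, fitting a polynomial in log(frequency)-log(temperature) space and examining the residual rms, can be sketched in a few lines. The spectrum below is synthetic (a power law with an assumed spectral index of -2.5 plus small added structure), not a simulated PAPER-style observation:

```python
import numpy as np

# Sketch: log-polynomial foreground fit and residual rms, the style of
# model used for global 21-cm foreground subtraction.  Synthetic data.
nu = np.linspace(50, 100, 51)                  # frequency grid (MHz)
T = 300.0 * (nu / 150.0) ** -2.5               # idealised synchrotron spectrum (K)
T += 1.0 * np.sin(nu / 3.0)                    # small structure the fit cannot absorb

logx, logy = np.log10(nu / 75.0), np.log10(T)
coeffs = np.polyfit(logx, logy, 4)             # 4th-order polynomial in log-log space
fit = 10 ** np.polyval(coeffs, logx)

resid_rms = np.sqrt(np.mean((T - fit) ** 2))   # residual rms (K)
```

A pure power law would be removed exactly by such a fit; the residual rms measures exactly the non-smooth structure, which is why template differences at small scales show up there.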
Machine learning methods for calibrating radio interferometric data
- Authors: Zitha, Simphiwe Nhlanhla
- Date: 2019
- Subjects: Calibration , Radio astronomy -- Data processing , Radio astronomy -- South Africa , Karoo Array Telescope (South Africa) , Radio telescopes -- South Africa , Common Astronomy Software Application (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97096 , vital:31398
- Description: The applications of machine learning have created an opportunity to deal with complex problems currently encountered in radio astronomy data processing. Calibration is one of the most important data-processing steps required to produce high dynamic range images. This process involves the determination of calibration parameters, both instrumental and astronomical, to correct the collected data. Typically, astronomers use a package such as Common Astronomy Software Applications (CASA) to compute the gain solutions based on regular observations of a known calibrator source. In this work we present applications of machine learning to first-generation calibration (1GC), using the environmental and pointing sensor data recorded by the KAT-7 telescope during observations. Applying machine learning to 1GC, as opposed to calculating the gain solutions in CASA, has shown evidence of reducing computation, as well as accurately predicting the 1GC gain solutions that represent the behaviour of the antenna during an observation. These methods are computationally less expensive; however, they have not fully learned to generalise in predicting accurate 1GC solutions from environmental and pointing sensors. We call this multi-output regression model ZCal; it is based on the random forest, decision tree, extremely randomized trees and k-nearest-neighbour algorithms. The prediction error obtained when testing our model on held-out data is 0.01 < RMSE < 0.09 for gain amplitude per antenna, and 0.2 rad < RMSE < 0.5 rad for gain phase. This shows that the instrumental parameters used to train our model correlate more strongly with gain amplitude effects than with phase.
- Full Text:
- Date Issued: 2019
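The multi-output regression idea behind ZCal, predicting gain amplitude and phase jointly from sensor readings, can be illustrated with one of the listed algorithms, k-nearest neighbours, written directly in NumPy. The "sensor" features, "gain" targets, and the per-output RMSE evaluation below are synthetic illustrations, not the thesis's data or code:

```python
import numpy as np

# Sketch: multi-output k-NN regression -- predict (gain amplitude,
# gain phase) jointly from sensor-like features.  Synthetic data.
rng = np.random.default_rng(2)
X_train = rng.uniform(0, 1, (200, 3))           # e.g. temperature, wind, elevation
y_train = np.column_stack([
    1.0 + 0.1 * X_train[:, 0],                  # gain amplitude
    0.3 * X_train[:, 1] - 0.1 * X_train[:, 2],  # gain phase (rad)
])

def knn_predict(X_new, k=5):
    """Mean of the k nearest training targets (Euclidean distance)."""
    d = np.linalg.norm(X_train[None, :, :] - X_new[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

X_test = rng.uniform(0, 1, (50, 3))
y_true = np.column_stack([
    1.0 + 0.1 * X_test[:, 0],
    0.3 * X_test[:, 1] - 0.1 * X_test[:, 2],
])
rmse = np.sqrt(np.mean((knn_predict(X_test) - y_true) ** 2, axis=0))  # per output
```

Evaluating RMSE separately per output, as in the last line, is what yields distinct amplitude and phase error figures like those quoted in the abstract.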
Modelling storm-time TEC changes using linear and non-linear techniques
- Authors: Uwamahoro, Jean Claude
- Date: 2019
- Subjects: Magnetic storms , Astronomy -- Computer programs , Imaging systems in astronomy , Ionospheric storms , Electrons -- Measurement , Magnetosphere -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92908 , vital:30762
- Description: Statistical models based on empirical orthogonal function (EOF) analysis and non-linear regression analysis (NLRA) were developed for the purpose of estimating the ionospheric total electron content (TEC) during geomagnetic storms. The well-known least squares method (LSM) and the Metropolis-Hastings algorithm (MHA) were used as optimization techniques to determine the unknown coefficients of the developed analytical expressions. Artificial Neural Networks (ANNs), the International Reference Ionosphere (IRI) model, and the Multi-Instrument Data Analysis System (MIDAS) tomographic inversion algorithm were also applied to storm-time TEC modelling/reconstruction for various latitudes of the African sector and surrounding areas. This work presents some of the first statistical modelling of the mid-latitude and low-latitude ionosphere during geomagnetic storms that includes solar, geomagnetic and neutral wind drivers. Development and validation of the empirical models were based on storm-time TEC data derived from global positioning system (GPS) measurements over ground receivers within Africa and surrounding areas. The storm criterion applied was Dst ≤ −50 nT and/or Kp > 4. The performance evaluation of MIDAS compared with ANNs in reconstructing storm-time TEC over the African low- and mid-latitude regions showed that MIDAS and ANNs provide comparable results. Their respective mean absolute error (MAE) values were 4.81 and 4.18 TECU. The ANN model was, however, found to perform 24.37% better than MIDAS at estimating storm-time TEC for low latitudes, while MIDAS was 13.44% more accurate than the ANN for the mid-latitudes. When their performances are compared with the IRI model, both MIDAS and the ANN model were found to provide more accurate storm-time TEC reconstructions for the African low- and mid-latitude regions. 
A comparative study of the performance of the EOF, NLRA, ANN and IRI models in estimating TEC during geomagnetic storm conditions over various latitudes showed that the ANN model is about 10%, 26% and 58% more accurate than the EOF, NLRA and IRI models, respectively, while EOF was found to perform 15% and 44% better than NLRA and IRI, respectively. It was further found that the NLRA model is 25% more accurate than the IRI model. We have also investigated, for the first time, the contribution of meridional neutral winds (from the Horizontal Wind Model) to storm-time TEC modelling in the low-latitude and the northern and southern hemisphere mid-latitude regions of the African sector, based on ANN models. Statistics showed that the inclusion of the meridional wind velocity in TEC modelling during geomagnetic storms leads to percentage improvements of about 5% for the low-latitude region, and 10% and 5% for the northern and southern hemisphere mid-latitude regions, respectively. High-latitude storm-induced winds and the inter-hemispheric flow of the meridional winds from the summer to the winter hemisphere are suggested to be associated with these improvements.
- Full Text:
- Date Issued: 2019
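The model comparisons above rest on mean absolute error (MAE, in TECU) and a percentage-improvement figure. A minimal sketch of both quantities, with illustrative TEC values rather than the thesis's data:

```python
import numpy as np

# Sketch: MAE (TECU) of two TEC models against the same observations,
# and the percentage improvement of model A over model B.  Values are
# illustrative only.
tec_obs = np.array([18.2, 22.5, 30.1, 35.4, 28.9, 21.3])   # observed TEC (TECU)
tec_a   = np.array([17.0, 23.9, 28.5, 33.8, 30.2, 20.1])   # model A estimates
tec_b   = np.array([15.1, 25.8, 26.0, 38.9, 32.5, 18.0])   # model B estimates

mae_a = np.mean(np.abs(tec_a - tec_obs))
mae_b = np.mean(np.abs(tec_b - tec_obs))
improvement = 100 * (mae_b - mae_a) / mae_b    # % by which A beats B
```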
- Authors: Uwamahoro, Jean Claude
- Date: 2019
- Subjects: Magnetic storms , Astronomy -- Computer programs , Imaging systems in astronomy , Ionospheric storms , Electrons -- Measurement , Magnetosphere -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92908 , vital:30762
- Description: Statistical models based on empirical orthogonal functions (EOF) analysis and non-linear regression analysis (NLRA) were developed for the purpose of estimating the ionospheric total electron content (TEC) during geomagnetic storms. The well-known least squares method (LSM) and Metropolis-Hastings algorithm (MHA) were used as optimization techniques to determine the unknown coefficients of the developed analytical expressions. Artificial Neural Networks (ANNs), the International Reference Ionosphere (IRI) model, and the Multi-Instrument Data Analysis System (MIDAS) tomographic inversion algorithm were also applied to storm-time TEC modelling/reconstruction for various latitudes of the African sector and surrounding areas. This work presents some of the first statistical modeling of the mid-latitude and low-latitude ionosphere during geomagnetic storms that includes solar, geomagnetic and neutral wind drivers.Development and validation of the empirical models were based on storm-time TEC data derived from the global positioning system (GPS) measurements over ground receivers within Africa and surrounding areas. The storm criterion applied was Dst 6 −50 nT and/or Kp > 4. The performance evaluation of MIDAS compared with ANNs to reconstruct storm-time TEC over the African low- and mid-latitude regions showed that MIDAS and ANNs provide comparable results. Their respective mean absolute error (MAE) values were 4.81 and 4.18 TECU. The ANN model was, however, found to perform 24.37 % better than MIDAS at estimating storm-time TEC for low latitudes, while MIDAS is 13.44 % more accurate than ANN for the mid-latitudes. When their performances are compared with the IRI model, both MIDAS and ANN model were found to provide more accurate storm-time TEC reconstructions for the African low- and mid-latitude regions. 
A comparative study of the performances of EOF, NLRA, ANN, and IRI models to estimate TEC during geomagnetic storm conditions over various latitudes showed that the ANN model is about 10 %, 26 %, and 58 % more accurate than EOF, NLRA, and IRI models, respectively, while EOF was found to perform 15 %, and 44 % better than NLRA and IRI, respectively. It was further found that the NLRA model is 25 % more accurate than the IRI model. We have also investigated for the first time, the role of meridional neutral winds (from the Horizontal Wind Model) to storm-time TEC modelling in the low latitude, northern and southern hemisphere mid-latitude regions of the African sector, based on ANN models. Statistics have shown that the inclusion of the meridional wind velocity in TEC modelling during geomagnetic storms leads to percentage improvements of about 5 % for the low latitude, 10 % and 5 % for the northern and southern hemisphere mid-latitude regions, respectively. High-latitude storm-induced winds and the inter-hemispheric blows of the meridional winds from summer to winter hemisphere have been suggested to be associated with these improvements.
- Full Text:
- Date Issued: 2019
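The Metropolis-Hastings optimisation mentioned in the abstract above can be illustrated with a minimal sketch. The linear model, synthetic data, proposal step size and chain length below are invented for illustration; the thesis fits far richer analytical TEC expressions to real GPS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for an invented linear "analytical expression"
# y = a + b*x (illustrative only; not the thesis's storm-time TEC model).
x = np.linspace(0.0, 1.0, 50)
true_a, true_b, sigma = 10.0, 5.0, 1.0
y = true_a + true_b * x + rng.normal(0.0, sigma, x.size)

def log_likelihood(theta):
    a, b = theta
    resid = y - (a + b * x)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis-Hastings over the coefficients (a, b).
theta = np.array([0.0, 0.0])
logp = log_likelihood(theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.1, size=2)  # symmetric proposal
    logp_prop = log_likelihood(proposal)
    # Accept with probability min(1, exp(logp_prop - logp)).
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = proposal, logp_prop
    chain.append(theta)

chain = np.array(chain[5000:])   # discard burn-in
a_hat, b_hat = chain.mean(axis=0)
```

After burn-in, the chain mean recovers the coefficients; the spread of the samples also gives their uncertainties, which a point optimiser such as plain least squares does not provide directly.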
Observing cosmic reionization with PAPER: polarized foreground simulations and all sky images
- Authors: Nunhokee, Chuneeta Devi
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Epoch of reionization -- Research , Hydrogen -- Spectra , Radio interferometers
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/68203 , vital:29218
- Description: The Donald C. Backer Precision Array to Probe the Epoch of Reionization (PAPER, Parsons et al., 2010) was built with the aim of detecting the redshifted 21 cm Hydrogen line, which is likely the best probe of the thermal evolution of the intergalactic medium and the reionization of neutral Hydrogen in our Universe. Observations of the 21 cm signal are challenged by bright astrophysical foregrounds and systematics that require precise modeling in order to extract the cosmological signal. In particular, the instrumental leakage of polarized foregrounds may contaminate the 21 cm power spectrum. In this work, we developed a formalism to describe the leakage due to instrumental wide-field effects in visibility-based power spectra and used it to predict contamination in observations. We find the leakage due to a population of point sources to be higher than that of the diffuse Galactic emission, for which we can predict minimal contamination at k > 0.3 h Mpc⁻¹. We also analyzed data from the last observing season of PAPER via all-sky imaging with a view to characterizing the foregrounds. We generated an all-sky catalogue of 88 sources down to a flux density of 5 Jy. Moreover, we measured both the polarized point-source emission and the Galactic diffuse emission, and used these measurements to constrain our model of polarization leakage. We find the leakage due to a population of point sources to be 12% lower than the prediction from our polarized model.
- Full Text:
- Date Issued: 2019
Statistical Analysis of the Radio-Interferometric Measurement Equation, a derived adaptive weighting scheme, and applications to LOFAR-VLBI observation of the Extended Groth Strip
- Authors: Bonnassieux, Etienne
- Date: 2019
- Subjects: Radio astronomy , Astrophysics , Astrophysics -- Instruments -- Calibration , Imaging systems in astronomy , Radio interferometers , Radio telescopes , Astronomy -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/93789 , vital:30942
- Description: J.R.R. Tolkien wrote, in his Mythopoeia, that “He sees no stars who does not see them first, of living silver made that sudden burst, to flame like flowers beneath the ancient song”. In his defense of myth-making, he formulates the argument that the attribution of meaning is an act of creation - that “trees are not ‘trees’ until so named and seen” - and that this capacity for creation defines the human creature. The scientific endeavour, in this context, can be understood as a social expression of a fundamental feature of humanity, and from this endeavour flows much understanding. This thesis, one thread among many, focuses on the study of astronomical objects as seen by the radio waves they emit. What are radio waves? Electromagnetic waves were theorised by James Clerk Maxwell (Maxwell 1864) in his great theoretical contribution to modern physics, their speed matching the speed of light as measured by Ole Christensen Rømer and, later, James Bradley. It was not until Heinrich Rudolf Hertz’s 1887 experiment that these waves were measured in a laboratory, leading to the dawn of radio communications - and, later, radio astronomy. The link between radio waves and light was one of association: light is known to behave as a wave (Young’s double-slit experiment), with the same propagation speed as electromagnetic radiation. Light “proper” is also known to exist beyond the optical regime: Herschel’s experiment showed that when dispersed through a prism, sunlight warms even those parts of a desk which are not observed to be lit (the first evidence of infrared light). The link between optical light and unseen electromagnetic radiation is then an easy step to make, and one confirmed through countless technological applications (e.g. optical fiber, to name but one). And as soon as this link is established, a question immediately comes to the mind of the astronomer: what does the sky, our Universe, look like to the radio “eye”? 
Radio astronomy has a short but storied history: from Karl Jansky’s serendipitous observation of the centre of the Milky Way, which outshines our Sun in the radio regime, in 1933, to Grote Reber’s hand-built back-yard radio antenna in 1937, which successfully detected radio emission from the Milky Way itself, to such monumental projects as the Square Kilometre Array and its multiple pathfinders, it has led to countless discoveries and the opening of a truly new window on the Universe. The work presented in this thesis is a contribution to this discipline - the culmination of three years of study, which is a rather short time to get a firm grasp of radio interferometry both in theory and in practice. The need for robust, automated methods - which are improving daily, thanks to the tireless labour of the scientists in the field - is becoming ever stronger as the SKA approaches, looming large on the horizon; but even today, in the precursor era of LOFAR, MeerKAT and other pathfinders, it is keenly felt. When I started my doctorate, the sheer scale of the task at hand felt overwhelming - to actually be able to contribute to its resolution seemed daunting indeed! Thankfully, as the saying goes, no society sets for itself material goals which it cannot achieve. This thesis took place at an exciting time for radio interferometry: at the start of my doctorate, the LOFAR international stations were - to my knowledge - only beginning to be used, and even then, only tentatively; MeerKAT had not yet shown its first light; the techniques used throughout my work were still being developed. At the time of writing, great strides have been made. One of the greatest technical challenges of LOFAR - imaging using the international stations - is starting to become reality. This technical challenge is the key problem that this thesis set out to address. While we have only achieved partial success so far, it is a testament to the difficulty of the task that it is not yet truly resolved. 
One of the major results of this thesis is a model of a bright resolved source near a famous extragalactic field: properly modeling this source not only allows the use of international LOFAR stations, but also grants deeper access to the extragalactic field itself, which is otherwise polluted by the 3C source’s sidelobes. This result was only achieved thanks to the other major result of this thesis: the development of a theoretical framework with which to better understand the effect of calibration errors on images made from interferometric data, and an algorithm to strongly mitigate them. The structure of this manuscript is as follows: we begin with an introduction to radio interferometry, LOFAR, and the emission mechanisms which dominate for our field of interest. These introductions are primarily intended to give a brief overview of the technical aspects of the data reduced in this thesis. We follow with an overview of the Measurement Equation formalism, which underpins our theoretical work. This is the keystone of this thesis. We then show the theoretical work that was developed as part of the research work done during the doctorate - which was published in Astronomy & Astrophysics. Its practical application - a quality-based weighting scheme - is used throughout our data reduction. This data reduction is the next topic of this thesis: we contextualise the scientific interest of the data we reduce, and explain both the methods and the results we achieve.
- Full Text:
- Date Issued: 2019
Statistical study of traveling ionospheric disturbances over South Africa
- Authors: Mahlangu, Daniel Fiso
- Date: 2019
- Subjects: Ionosphere -- Research , Sudden ionospheric disturbances , Gravity waves , Magnetic storms
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76387 , vital:30556
- Description: This thesis provides a statistical analysis of traveling ionospheric disturbances (TIDs) over South Africa. The velocities of the TIDs were determined from total electron content (TEC) maps using particle image velocimetry (PIV). The periods were determined using the Morlet function in wavelet analysis. The TIDs were grouped into four categories: daytime, twilight, and nighttime TIDs, and those that occurred during magnetic storms. It was found that daytime medium-scale TIDs (MSTIDs) propagated equatorward in all seasons (summer, autumn, winter, and spring), with velocities of about 114 to 213 m/s. Their maximum occurrence was in winter between 15:00 and 16:00 LT. The daytime large-scale TIDs (LSTIDs) propagated equatorward with velocities of approximately 455 to 767 m/s. Their highest occurrence was in summer, between 12:00 and 13:00 LT. Most of these TIDs (about 78%) were observed during the passing of the morning solar terminator. This implied that the morning terminator was more effective in instigating TIDs. Only a few nighttime TIDs were observed and therefore their behavior could not be statistically inferred. The TIDs that occurred during magnetically disturbed conditions propagated equatorward. This indicated that their source mechanism was atmospheric gravity waves generated at the onset of geomagnetic storms.
- Full Text:
- Date Issued: 2019
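The Morlet-wavelet period estimation described in the abstract above can be sketched as follows. The synthetic 30-minute oscillation, the trial-period grid and the scale-to-period conversion are illustrative assumptions, not the thesis's TEC data or code.

```python
import numpy as np

# Synthetic TEC perturbation: a 30-minute wave sampled every minute
# (illustrative; the thesis analyses real TEC map time series).
dt = 60.0                                 # sampling interval [s]
t = np.arange(0.0, 6 * 3600.0, dt)        # six hours of samples
period_true = 30.0 * 60.0                 # 30-minute TID-like oscillation
signal = np.sin(2.0 * np.pi * t / period_true)

omega0 = 6.0                              # standard Morlet central frequency
trial_periods = np.arange(5, 120) * 60.0  # trial periods: 5..119 minutes
power = np.zeros(trial_periods.size)

for i, p in enumerate(trial_periods):
    s = omega0 * p / (2.0 * np.pi)        # scale whose oscillation matches p
    u = (t - t.mean()) / s
    # Complex Morlet wavelet centred on the middle of the series.
    wavelet = np.exp(1j * omega0 * u) * np.exp(-0.5 * u**2) / np.sqrt(s)
    power[i] = np.abs(np.sum(signal * np.conj(wavelet)) * dt) ** 2

best_period_min = trial_periods[np.argmax(power)] / 60.0
```

The wavelet power peaks at the trial period matching the oscillation, which is how a dominant TID period is read off a TEC time series.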
The dispersion measure in broadband data from radio pulsars
- Authors: Rammala, Isabella
- Date: 2019
- Subjects: Pulsars , Radio astrophysics , Astrophysics , Broadband communication systems
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/67857 , vital:29157
- Description: Modern-day radio telescopes make use of wideband receivers to take advantage of the broadband nature of radio pulsar emission. We ask how the use of such broadband pulsar data affects the measured pulsar dispersion measure (DM). Previous works have shown that, although the exact pulsar radio emission processes are not well understood, observations reveal evidence of a possible frequency dependence of the emission altitudes in the pulsar magnetosphere, a phenomenon known as radius-to-frequency mapping (RFM). This frequency dependence due to RFM can be embedded in the dispersive delay of the pulse profiles, normally interpreted as an interstellar effect (DM). Thus we interpret this intrinsic effect as an additional component δDM to the interstellar DM, and investigate how it can be statistically attributed to intrinsic profile evolution, as well as to profile scattering. We make use of Monte Carlo simulations of beam models to simulate realistic pulsar beams of various geometries, from which we generate intrinsic profiles at various frequency bands. The results show that the excess DM due to intrinsic profile evolution is more pronounced at high frequencies, whereas scattering dominates the excess DM at low frequencies. The implications of these results are presented in relation to broadband pulsar timing.
- Full Text:
- Date Issued: 2019
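The dispersive delay underlying the DM measurements discussed above follows the standard cold-plasma relation t = D · DM / f², with dispersion constant D ≈ 4.148808 × 10³ MHz² pc⁻¹ cm³ s. A minimal sketch (the band edges chosen are illustrative, not a specific receiver):

```python
# Standard cold-plasma dispersion: delay t = D * DM / f^2, with
# D = 4.148808e3 MHz^2 pc^-1 cm^3 s, DM in pc cm^-3, f in MHz.
D = 4.148808e3

def dispersive_delay(dm, f_lo_mhz, f_hi_mhz):
    """Arrival delay [s] of the low-frequency band edge relative to the high."""
    return D * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# A DM = 100 pc cm^-3 pulsar seen across an illustrative 900-1670 MHz band:
delay_s = dispersive_delay(100.0, 900.0, 1670.0)   # ≈ 0.36 s
```

Any frequency-dependent intrinsic profile shift mimics a small change δDM in this relation, which is why it is so easily absorbed into the interstellar DM.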
TiRiFiG, a graphical 3D kinematic modelling tool
- Authors: Twum, Samuel Nyarko
- Date: 2019
- Subjects: Tilted Ring Fitting GUI , Astronomy -- Observations , Galaxies -- Observations , Galaxies -- Measurement , Galaxies -- Measurement -- Data processing , Kinematics
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76409 , vital:30558
- Description: Galaxy kinematics is of crucial importance to understanding the structure, formation and evolution of galaxies. Studies of the mass distributions giving rise to the missing mass problem, first raised by Zwicky (1933), give us an insight into dark matter distributions, which are tightly linked to cosmology. Neutral hydrogen (H I) has been widely used as a tracer in kinematic studies of galaxies. The Square Kilometre Array (SKA) and its precursors will produce large H I datasets, which will require kinematic modelling tools to extract kinematic parameters such as rotation curves. TiRiFiC (Józsa et al., 2007) is an example of such a tool for 3D kinematic modelling of resolved spectroscopic observations of rotating disks in terms of the tilted-ring model with varying complexities. TiRiFiC can be used to model a large number (20+) of parameters, which are set in a configuration file (.def) for its execution. However, manually editing these parameters in a text editor is cumbersome. In this work, we present TiRiFiG, the Tilted Ring Fitting GUI, a graphical user interface that provides an easy way to modify parameter inputs interactively.
- Full Text:
- Date Issued: 2019
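The .def-file editing that TiRiFiG wraps in a GUI can be sketched as a simple read-modify-write round trip. The keywords below (NUR, VROT, INCL) follow TiRiFiC's naming convention, but the file content and helper functions are invented for illustration and are not TiRiFiG's actual code.

```python
# Read-modify-write round trip for a TiRiFiC-style .def parameter file.

def parse_def(text):
    """Parse 'KEY= value value ...' lines into a dict of token lists."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.split()
    return params

def render_def(params):
    """Serialise the dict back to .def-style text."""
    return "\n".join(f"{key}= {' '.join(values)}"
                     for key, values in params.items())

example = """\
NUR= 3
VROT= 100 110 120
INCL= 60 60 60
"""
params = parse_def(example)
params["VROT"] = ["105", "115", "125"]   # e.g. the result of a GUI edit
updated = render_def(params)
```

A GUI built on this round trip can expose each ring's value as an editable widget while guaranteeing the written file stays syntactically valid.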
A 150 MHz all sky survey with the Precision Array to Probe the Epoch of Reionization
- Authors: Chege, James Kariuki
- Date: 2020
- Subjects: Epoch of reionization -- Research , Astronomy -- Observations , Radio interferometers
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/117733 , vital:34556
- Description: The Precision Array to Probe the Epoch of Reionization (PAPER) was built to measure the redshifted 21 cm line of hydrogen from cosmic reionization. Such low-frequency observations promise to be the best means of understanding the cosmic dawn, when the first galaxies in the universe formed, and the Epoch of Reionization, when the intergalactic medium changed from neutral to ionized. The major challenge to these observations is the presence of astrophysical foregrounds that are much brighter than the cosmological signal. Here, I present an all-sky survey at 150 MHz obtained from the analysis of 300 hours of PAPER observations. Particular focus is given to the calibration and imaging techniques needed to deal with the wide field of view of a non-tracking instrument. The survey covers ~7000 square degrees of the southern sky. From a sky area of 4400 square degrees out of the total survey area, I extract a catalogue of sources brighter than 4 Jy, whose accuracy was tested against the published GLEAM catalogue, yielding a fractional difference rms better than 20%. The catalogue provides an accurate all-sky model of the extragalactic foreground to be used for the calibration of future Epoch of Reionization observations and to be subtracted from the PAPER observations themselves in order to mitigate foreground contamination.
- Full Text:
- Date Issued: 2020
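The catalogue validation quoted above (fractional difference rms better than 20% against GLEAM) reduces to a simple statistic over matched sources. The flux densities below are synthetic stand-ins, not survey values.

```python
import numpy as np

# Fractional flux-density difference between matched sources in two
# catalogues, summarised by its rms (illustrative values only).
s_survey = np.array([12.0, 8.5, 22.0, 5.4, 31.0])   # survey fluxes [Jy]
s_ref = np.array([11.0, 9.0, 20.5, 5.0, 33.0])      # reference fluxes [Jy]

frac_diff = (s_survey - s_ref) / s_ref
rms = np.sqrt(np.mean(frac_diff ** 2))
```

A small rms of the fractional difference indicates the survey flux scale is consistent with the reference catalogue.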
A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilizing the tilted-ring model can be grouped into two-dimensional and three-dimensional software. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation, because beam smearing only increases the uncertainty of the parameters, rather than introducing the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is software for constructing parameterised models of high-resolution data cubes of rotating galaxies. It uses the tilted-ring model, and with that a combination of parameters such as surface brightness, position angle, rotation velocity and inclination, to describe galaxies. TiRiFiC works by directly fitting tilted-ring models to spectroscopic data cubes and hence is not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is unavoidable as an analytic method in future H I surveys. In the current implementation, though, there are several drawbacks. The implemented optimisers search for local solutions in parameter space only, do not quantify correlations between parameters, and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, implemented in MultiNest (Feroz et al. 2008), as it allows for sampling a posterior distribution irrespective of its multimodal nature, resulting in parameter samples that correspond to the maximum in the posterior distribution. 
These parameter samples can also be used to quantify correlations and to find the errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values.
- Full Text:
- Date Issued: 2020
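The claim above, that posterior samples directly yield single-parameter errors and inter-parameter correlations, can be illustrated with synthetic samples standing in for MultiNest output; the parameter names and covariance below are invented for illustration.

```python
import numpy as np

# Synthetic posterior samples over two tilted-ring parameters, standing in
# for the output of a nested-sampling run.
rng = np.random.default_rng(1)
mean = np.array([60.0, 120.0])             # e.g. inclination [deg], vrot [km/s]
cov = np.array([[4.0, -3.0],
                [-3.0, 9.0]])              # deliberately anti-correlated
samples = rng.multivariate_normal(mean, cov, size=20000)

errors = samples.std(axis=0)               # 1-sigma error of each parameter
corr = np.corrcoef(samples.T)[0, 1]        # inclination-vrot correlation
```

Local optimisers return only a single best-fit point; sample statistics like these are exactly what the Bayesian approach adds.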
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The neutral hydrogen (H I) gas in most disk galaxies moves on near-circular orbits and often exhibits long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software based on the tilted-ring model can be grouped into two-dimensional and three-dimensional approaches. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited to galaxy parametrisation: beam smearing merely increases the uncertainty of the fitted parameters, rather than introducing the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is software for constructing parametrised models of high-resolution data cubes of rotating galaxies. It describes galaxies using the tilted-ring model, i.e. through a combination of per-ring parameters such as surface brightness, position angle, rotation velocity and inclination. TiRiFiC fits tilted-ring models directly to spectroscopic data cubes and is therefore not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of this, the method is indispensable as an analysis tool for future H I surveys. The current implementation, though, has several drawbacks: the implemented optimisers search only for local solutions in parameter space, do not quantify correlations between parameters, and cannot determine the errors of single parameters. In principle, these drawbacks can be overcome by using Bayesian statistics, as implemented in MultiNest (Feroz et al. 2008), which allows a posterior distribution to be sampled irrespective of its multimodal nature, yielding parameter samples that correspond to the maximum of the posterior distribution. These samples can also be used to quantify correlations between parameters and to estimate the errors of individual parameters. Since the method employs Bayesian statistics, it further allows the user to incorporate any prior information they may have on the parameter values.
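The projection at the heart of the tilted-ring model can be sketched as follows. This is an illustrative calculation only, assuming a single flat ring and a simplified sky-plane convention; the function name `los_velocity` is hypothetical and does not reflect TiRiFiC's actual interface:

```python
import numpy as np

def los_velocity(x, y, v_sys=0.0, v_rot=150.0, inc_deg=60.0, pa_deg=30.0):
    """Line-of-sight velocity of one flat tilted ring at sky position (x, y).

    Implements the standard tilted-ring relation
        v_los = v_sys + v_rot * cos(theta) * sin(i),
    where i is the inclination and theta is the azimuth in the ring
    plane, measured from the receding major axis.
    """
    inc = np.radians(inc_deg)
    pa = np.radians(pa_deg)
    # Rotate sky coordinates so the major axis lies along the x' axis.
    xp = x * np.cos(pa) + y * np.sin(pa)
    yp = -x * np.sin(pa) + y * np.cos(pa)
    # Deproject the minor-axis coordinate to recover the in-plane azimuth.
    theta = np.arctan2(yp / np.cos(inc), xp)
    return v_sys + v_rot * np.cos(theta) * np.sin(inc)

# On the major axis (theta = 0) the full projected rotation is seen,
# v_sys + v_rot * sin(i); on the minor axis only v_sys remains.
v_major = los_velocity(np.cos(np.radians(30.0)), np.sin(np.radians(30.0)))
```

A full tilted-ring model evaluates this relation ring by ring, with each ring carrying its own rotation velocity, inclination and position angle, which is what lets the model follow a warp.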
- Full Text:
- Date Issued: 2020
A study of why some physics concepts in the South African Physical Science curriculum are poorly understood in order to develop a targeted action-research intervention for Newton’s second law
- Authors: Cobbing, Kathleen Margaret
- Date: 2020
- Subjects: Physics -- Study and teaching (Secondary) -- South Africa , Physics -- Examinations, questions, etc. -- South Africa , Motion -- Study and teaching (Secondary) -- South Africa
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146903 , vital:38575
- Description: Globally, many students show a poor understanding of concepts in high school physics and lack the necessary problem-solving skills that the course demands. The application of Newton’s second law was found to be particularly problematic through document analysis of South African examination feedback reports, as well as from an analysis of the physics examinations at a pair of well-resourced South African independent schools that follow the Independent Examination Board curriculum. Through an action-research approach, a resource for use by students was designed and modified to improve students’ understanding of this concept, while modelling problem-solving methods. The resource consisted of brief revision notes, worked examples and scaffolded exercises. The design of the resource was influenced by the theory of cognitive apprenticeship, cognitive load theory and conceptual change theory. One of the aims of the resource was to encourage students to translate between the different representations of a problem situation: symbolic, abstract, model and concrete. The impact of this resource was evaluated at a pair of schools using a mixed-methods approach, which incorporated pre- and post-tests for a quantitative assessment, qualitative student evaluations and the analysis of examination scripts. There was an improvement from pre- to post-test for all four iterations of the intervention, and these improvements were shown to be significant. The use of the resource led to an increase in the quality and quantity of diagrams drawn by students in subsequent assessments.
- Full Text:
- Date Issued: 2020