Third generation calibrations for MeerKAT observation of the Saraswati Supercluster
- Authors: Kincaid, Robert Daniel
- Date: 2022-10-14
- Subjects: Square Kilometre Array (Project) , Superclusters , Saraswati Supercluster , Radio astronomy , MeerKAT , Calibration
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/362916 , vital:65374
- Description: The international Square Kilometre Array (SKA) collaboration, one of the largest and most challenging science projects of the 21st century, will bring a revolution in radio astronomy in terms of sensitivity and resolution. The recent commissioning of several new radio instruments, combined with subsequent developments in calibration and imaging techniques, has dramatically advanced the field over the past few years, enhancing our knowledge of the radio universe. Various SKA pathfinders around the world have been developed (and more are planned for construction) that have laid a firm foundation for the SKA in terms of science, while also giving insight into the technology required to make the projected data outputs manageable. South Africa has recently built the new MeerKAT telescope, an SKA precursor forming an integral part of the SKA-mid component. The MeerKAT instrument has unprecedented sensitivity that can cater for the science goals of the current and future SKA era. It is evident from MeerKAT and other precursors that the data produced by these instruments are significantly challenging to calibrate and image. Calibration-related artefacts intrinsic to bright sources are a major concern, since they limit the dynamic range (DR) and fidelity of the resulting images and cause flux suppression of extended sources. Diffuse radio sources in galaxy clusters, in the form of halos, relics and, most recently, Mpc-scale bridges, are particularly good candidates for testing different calibration approaches because of their diffuse nature combined with wide field of view (FoV) observations. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-10-14
Dynamics of charge movement in α-Al2O3:C,Mg using thermoluminescence, phototransferred and optically stimulated luminescence
- Authors: Lontsi Sob, Aaron Joel
- Date: 2022-04-08
- Subjects: Thermoluminescence , Optically stimulated luminescence , Phototransfer , Deep traps , Phototransferred thermoluminescence (PTTL)
- Language: English
- Type: Academic theses , Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/294607 , vital:57237 , DOI 10.21504/10962/294607
- Description: The dosimetric features of α-Al2O3:C,Mg have been investigated for unannealed and annealed samples. The unannealed sample is referred to as sample A, whereas the samples annealed at 700, 900 and 1200°C for 15 minutes each are referred to as samples B, C and D respectively. A glow curve of unannealed α-Al2O3:C,Mg measured at 1°C/s after irradiation to 2.0 Gy consists of peaks at 43, 73, 164, 195, 246, 284, 336 and 374°C. For sample B (annealed at 700°C), a glow curve measured at 1°C/s after irradiation to 3.0 Gy has peaks at 46, 76, 100, 170, 199, 290, 330 and 375°C, whereas the glow curve of sample C (annealed at 900°C), recorded under the same conditions, consists of peaks at 49, 80, 100, 174, 206, 235, 290, 335 and 375°C. Sample D (annealed at 1200°C) is the most sensitive of the four samples. A glow curve of sample D measured at 1°C/s after irradiation to 0.2 Gy has peaks at 52, 82, 102, 174, 234, 288 and 384°C. The peaks are labelled I–VIII in order of appearance. The 100°C peak, labelled IIa, is induced by annealing at or above 700°C. The dose response of these peaks was studied for doses within 0.1–8.2 Gy. The reported peaks follow first-order kinetics irrespective of annealing temperature. Peaks I–III of each sample are reproduced under phototransfer for preheating up to 400°C. For the unannealed sample, the reproduced peaks are labelled A1–A3, whereas for the annealed samples they are labelled B1–B3, C1–C3 and D1–D3 respectively. The annealing-induced peak at 100°C is reproduced as B2a, C2a and D2a for samples B, C and D respectively. A PTTL peak, labelled C2b or D2b, is also observed near 140°C in samples C and D. In addition to these PTTL peaks, a PTTL peak corresponding to peak IV is also found for sample D and for the unannealed sample. Like the corresponding conventional peaks, the PTTL peaks of each sample follow first-order kinetics. 
Peak I and its corresponding PTTL peak for each sample are unstable and fade to a minimal level after 300 s of storage time. On the other hand, peak II of each sample and its corresponding PTTL peak could still be observed with delays of up to 5000 s. Peak III of the unannealed sample remains stable with storage time up to 48 hours. Irrespective of annealing, the trap corresponding to peak III is the most sensitive to optical stimulation. Time-dependent profiles of PTTL from unannealed and annealed α-Al2O3:C,Mg were also studied. The mathematical analysis of the PTTL time-response profiles is based on experimental results. The role of various electron traps in PTTL was determined by using pulse annealing and by monitoring the dependence of peak intensity on the duration of illumination for peaks not removed by preheating. The presence and role of deep traps were further demonstrated with thermally assisted optically stimulated luminescence. For the unannealed sample, the activation energy for thermal assistance is 0.033 ± 0.001 eV and the activation energy for thermal quenching is 1.043 ± 0.001 eV. For sample C, the activation energy for thermal assistance is 0.044 ± 0.003 eV, whereas that for thermal quenching is 1.110 ± 0.006 eV. The values for the activation energy for thermal assistance are lower than those reported in the literature. Only the values for the activation energy for thermal quenching are somewhat comparable to values reported elsewhere. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-08
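The first-order kinetics reported for these glow peaks follow the Randall-Wilkins model, in which the TL intensity during a linear temperature ramp is I(T) = n₀ s exp(−E/kT) exp(−(s/β)∫exp(−E/kT′)dT′). A minimal numerical sketch, using an illustrative trap depth and frequency factor rather than values fitted in the thesis:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

k = 8.617e-5        # Boltzmann constant, eV/K
E, s = 1.0, 1e12    # illustrative trap depth (eV) and frequency factor (1/s)
beta = 1.0          # heating rate, K/s (matches the 1 °C/s used in the measurements)
n0 = 1.0            # initial trapped-charge population, arbitrary units

T = np.linspace(300.0, 700.0, 2000)                  # temperature ramp, K
p = s * np.exp(-E / (k * T))                         # escape probability per second
depletion = cumulative_trapezoid(p, T, initial=0.0) / beta
I = n0 * p * np.exp(-depletion)                      # Randall-Wilkins first-order glow peak

T_peak = T[np.argmax(I)]                             # peak temperature of the glow curve
```

For first-order kinetics the peak position is independent of n₀, i.e. of dose, which is the diagnostic used in the abstracts in this list.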
Neutral winds and tides over South Africa
- Authors: Ojo, Taiwo Theophilus
- Date: 2022-04-08
- Subjects: Atmospheric tides , Ionosondes , Fabry-Perot interferometers , Thermospheric winds , Servomechanisms , Climatology , Neutral winds , Horizontal Wind Model (HWM)
- Language: English
- Type: Doctoral thesis , text
- Identifier: http://hdl.handle.net/10962/232459 , vital:49993 , DOI 10.21504/10962/232459
- Description: This thesis presents the first results of a climatology of nighttime thermospheric neutral winds between February 2018 and January 2019 measured by a Fabry-Perot interferometer (FPI) in Sutherland, South Africa (32.2°S, 20.48°E; geomagnetic latitude: 40.7°S). This FPI measures the nighttime oxygen airglow emission at 630.0 nm, which has a peak intensity at an altitude of roughly 250 km. The performance of the Horizontal Wind Model (HWM14) was evaluated by comparing results from HWM14 with the FPI measurements. The results showed that the model agreed better with the measurements for the meridional component than for the zonal component. In addition, the HWM14 zonal wind consistently peaked several hours (~3 h) before the measured wind, creating what looks like a phase shift relative to the measured wind. An investigation of this apparent phase shift revealed it to be a consequence of a difference in the phase of the terdiurnal tide. Since ionosondes are more numerous, with wider temporal and spatial coverage than FPIs, nighttime meridional winds aligned to the magnetic meridian were inferred from the peak height (hmF2) of ionospheric data taken from the South African ionosonde network using the servo model during February 2018-June 2019. These were compared with the FPI-measured meridional wind and benchmarked against the HWM14 and Magnetic mEridional NeuTrAl Thermospheric (MENTAT) models. The amplitudes and trends of the calculated meridional winds across all four ionosonde stations agreed relatively well with the observed data, especially during the summer months. Furthermore, the results confirmed that the ionosonde station located closest to the FPI, i.e. the Hermanus station, agreed better with the measurements than the stations located farther away. 
The extraction and analysis of atmospheric tides, namely the diurnal, semidiurnal, terdiurnal and 6-hour components, from the FPI measurements, as well as the long-term variations of the tidal winds, were also investigated. The results showed that the semidiurnal component mostly had the highest amplitude across all the months, indicating that semidiurnal tides dominate the dynamic structure of the upper mesosphere at midlatitudes, consistent with previous observations at midlatitudes. Furthermore, the signature of the diurnal tide in the meridional (zonal) wind was stronger in winter (summer) and weaker in summer (winter). The semidiurnal tide did not show any trend with season, while the terdiurnal tide was dominant in summer (zonal) and winter (meridional). Lastly, the 6-hour tide was detected intermittently during the period of the study and had the weakest signature (i.e. the lowest amplitudes). , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-08
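The tidal components described in this abstract are typically isolated by spectral analysis of irregularly sampled nighttime wind series, for which the Lomb-Scargle periodogram is well suited. A sketch with synthetic wind data (the tidal amplitudes, noise level and sampling are hypothetical, chosen only so that the semidiurnal component dominates, as reported):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 240.0, 500))      # hours: uneven sampling over ~10 days
wind = (20.0 * np.sin(2 * np.pi * t / 24.0)    # diurnal tide (24 h)
        + 35.0 * np.sin(2 * np.pi * t / 12.0)  # semidiurnal tide (12 h), dominant here
        + 10.0 * np.sin(2 * np.pi * t / 8.0)   # terdiurnal tide (8 h)
        + rng.normal(0.0, 3.0, t.size))        # measurement noise, m/s

periods = np.linspace(5.0, 30.0, 2000)         # trial periods, hours
power = lombscargle(t, wind - wind.mean(), 2 * np.pi / periods)

dominant_period = periods[np.argmax(power)]    # expected near 12 h (semidiurnal)
```

Unlike an FFT, the Lomb-Scargle estimator needs no interpolation onto a regular grid, which matters for FPI data with weather- and daylight-driven gaps.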
An investigation of traveling ionospheric disturbances (TIDs) in the SANAE HF radar data
- Authors: Atilaw, Tsige Yared
- Date: 2022-04-07
- Subjects: Ionospheric storms Antarctica , Radar Antarctica , Range time-intensity (RTI) , South African National Antarctic Expedition (SANAE) , Super Dual Auroral Radar Network (SuperDARN)
- Language: English
- Type: Doctoral thesis , text
- Identifier: http://hdl.handle.net/10962/232377 , vital:49986 , DOI 10.21504/10962/232377
- Description: This thesis aims to study the characteristics of traveling ionospheric disturbances (TIDs) as identified in the radar data of the South African National Antarctic Expedition (SANAE) Super Dual Auroral Radar Network (SuperDARN) radar located in Antarctica. For this project, 22 TIDs were identified from visual inspection of range time-intensity (RTI) plots of the backscattered power and Doppler velocity parameters of the SANAE radar between 2005 and 2015. These events were studied to determine their characteristics and driving mechanisms. Where good quality data were available, the SANAE HF radar data were supplemented by Halley radar data, which has a large overlapping field of view (FOV) with the SANAE radar, and also by GPS TEC data. This provided a multi-instrument analysis of some TID events. Different spectral analysis methods, namely the multitaper method (MTM), the fast Fourier transform (FFT) and the Lomb-Scargle periodogram, were used to obtain spectral information on the observed waves. The advantage of using multiple windowing in MTM over the traditional windowing method was illustrated using one of the TID events. In addition, the analytic signal of the wave from the MTM method was used to estimate the instantaneous phase velocity and propagation azimuth of the wave, which was able to track the change in the characteristics of the medium-scale TID (MSTID) efficiently throughout the duration of the event. This is a clear advantage over other windowing techniques. The energy contribution by this MSTID through Joule heating was estimated over the region where spectral analysis of both SANAE and Halley data showed it to be present. The majority of the TIDs (65.4%) could be classified as MSTIDs with periods of 20–60 minutes, velocities of 50–333 m s−1 and wavelengths of 129–833 km. The TID occurrence rate was high around the March equinox, with 12 out of the 16 event days falling during March–May. 
March had a particularly high number of TID occurrences (46%). The majority of the TIDs observed during this month propagated northward or southeastward. In terms of prevailing geomagnetic conditions, 6 out of the 16 event days were geomagnetically quiet, while 10 occurred during geomagnetic storms and substorms. During quiet conditions, TIDs could be linked to Es layers and polarised electric fields in 2 of these events. The other quiet-time events could not be related to Es instabilities and polarised electric fields, either because their exact propagation direction could not be determined or because the quality of the Es-region scatter data was too poor to perform spectral analysis. The storm-/substorm-related TIDs are possibly generated through Joule heating, the Lorentz force and energetic particle precipitation. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-07
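The multitaper method (MTM) cited above reduces spectral leakage and variance by averaging periodograms computed with orthogonal DPSS (Slepian) tapers instead of a single window. A minimal sketch on a synthetic MSTID-like oscillation (a ~40-minute wave; all parameters are illustrative, not taken from the radar data):

```python
import numpy as np
from scipy.signal import windows

fs = 1.0 / 60.0                        # one sample per minute, in Hz
n = 512
t = np.arange(n) / fs                  # time, seconds (~8.5 hours of data)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * t / 2400.0) + rng.normal(0.0, 0.5, n)    # 40-min wave + noise

nw, kmax = 4.0, 7                      # time-bandwidth product and number of tapers
tapers = windows.dpss(n, nw, Kmax=kmax)                  # DPSS tapers, shape (kmax, n)
spectra = np.abs(np.fft.rfft(tapers * x, axis=-1)) ** 2  # one periodogram per taper
psd = spectra.mean(axis=0)                               # multitaper estimate
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

peak_period_min = 1.0 / freqs[np.argmax(psd[1:]) + 1] / 60.0    # skip the DC bin
```

Averaging over the roughly 2·NW − 1 well-concentrated tapers trades a small loss of frequency resolution for a large reduction in the variance of the estimate, which is the advantage over single-window techniques noted in the abstract.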
Influence of argon ion implantation on the thermoluminescence properties of aluminium oxide
- Authors: Khabo, Bokang
- Date: 2022-04-06
- Subjects: Aluminum oxide , Thermoluminescence , Ion implantation , Kinetic analysis , Oxygen vacancies , Argon , Irradiation
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/234220 , vital:50173
- Description: The influence of argon ion implantation on the thermoluminescence (TL) properties of aluminium oxide (alumina) was investigated. Aluminium oxide (Al2O3) samples were implanted with 80 keV Ar ions. An unimplanted sample and samples implanted at fluences of 1×10¹⁴, 5×10¹⁴, 1×10¹⁵, 5×10¹⁵ and 1×10¹⁶ Ar⁺/cm² were irradiated at a dose of 40 Gy and heated at a rate of 1°C/s using a Risø TL/OSL-DA-20 reader equipped with a Hoya U-340 filter. The thermoluminescence glow curves showed five distinct peaks, with main peaks at 178°C, 188°C, 176°C, 208°C, 216°C and 204°C for the unimplanted sample and the implanted samples respectively. The peak positions were independent of the irradiation dose, suggesting that the samples are characterised by first-order kinetics. This was also confirmed by Tm-Tstop analysis. It was observed that the TL intensity decreases with implantation fluence. This observation suggests that the concentration of the electron traps responsible for thermoluminescence decreases with ion implantation. The decrease in electron concentration might be due to the formation of non-radiative transition bands or the creation of defect clusters and extended defects as the ion fluence increases. The Stopping and Range of Ions in Matter (SRIM) program was used to correlate the TL response of Al2O3 with defects under ion implantation. It was found that, following ion implantation, the number of oxygen vacancies, which are related to electron traps, is higher than the number of aluminium vacancies. Kinetic analysis was carried out using the initial rise, Chen's peak shape, various heating rates, whole glow curve, glow curve fitting and isothermal decay methods. The activation energy was found to be around 0.8 eV and the frequency factor of the order of 10⁸ s⁻¹ regardless of the implantation fluence. This means that argon ion implantation did not affect the nature of the electron traps. 
The dosimetric features of the samples were also investigated at doses in the range 40–200 Gy. The samples generally showed a superlinear response at doses below 140 Gy and a sublinear response at doses above 160 Gy. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-06
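The initial rise method used in this kinetic analysis exploits the fact that, on the low-temperature side of a glow peak, the TL intensity is approximately I(T) ∝ exp(−E/kT), so the slope of ln I against 1/T gives −E/k. A sketch with synthetic tail data generated from the ~0.8 eV activation energy reported above (the temperature range and prefactor are illustrative):

```python
import numpy as np

k = 8.617e-5                           # Boltzmann constant, eV/K
E_true = 0.8                           # activation energy used to generate the data, eV

# Synthetic initial-rise region: low-temperature tail of a glow peak,
# where trap depletion is still negligible and I(T) ~ C * exp(-E/kT).
T = np.linspace(380.0, 420.0, 40)      # K, illustrative tail region
I = 1e7 * np.exp(-E_true / (k * T))    # arbitrary-units TL intensity

# Initial rise method: linear fit of ln(I) versus 1/T; the slope is -E/k.
slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
E_fit = -slope * k                     # recovered activation energy, eV
```

With real data the fit is restricted to the tail where the signal is below a few percent of the peak maximum, so that the exponential approximation holds.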
Neutral Atomic Hydrogen in Gravitationally Lensed Systems
- Authors: Blecher, Tariq Dylan
- Date: 2021-10-29
- Subjects: Uncatalogued
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/192776 , vital:45263
- Description: Thesis (PhD) -- Faculty of Law, Law, 2021
- Full Text:
- Date Issued: 2021-10-29
On the gravitational dual to strongly coupled fluids
- Authors: Shawa, Mark Musonda Webster
- Date: 2021-10-29
- Subjects: Quantum gravity , String models , Gauge fields (Physics) , Scattering amplitude (Nuclear physics) , Quark-gluon plasma , Anti-de Sitter/Conformal Field Theory (AdS/CFT) , Gauge/gravity duality
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/192933 , vital:45280 , 10.21504/10962/192933
- Description: This thesis discusses the prospect of finding the gravitational dual to strongly coupled conformal fluids, with a special interest in the quark-gluon plasma. Such a task can be achieved by matching certain physical observables of two apparently different theories that are dually related, because the same string theory can be viewed in two different ways. This is particularly useful when one of the theories is intractable while its dual is manageable. We begin by postulating a particular type of gravitational theory, from which we determine graviton scattering amplitudes in a special regime of high momentum. Using the gauge-gravity duality dictionary, the graviton scattering amplitudes can be mapped to stress-tensor correlation functions in the gauge theory. One of the outcomes of high-energy scattering experiments involving the quark-gluon plasma is stress-tensor correlator data. This thesis provides an algorithm for matching graviton scattering amplitudes with stress-tensor correlator data which, in principle, can be used to identify the gravitational dual to the quark-gluon plasma. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-10-29
Night-time gravity waves detected with multi-frequency airglow imager
- Authors: Machubeng, Karabo Pebane
- Date: 2021-04
- Subjects: Gravity waves , Airglow , Gravity waves -- Seasonal variations , All Sky Imager
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178341 , vital:42931
- Description: This thesis shows the statistics of atmospheric gravity waves (AGWs) observed in the OI emission at 557.7 nm at ~97 km altitude using an all-sky imager based in Sutherland, South Africa (32.37°S, 20.81°E) in the year 2017. The wavelengths were determined using the propagation vector method, the velocity was determined using cross-correlation of 1D FFTs, and the period was determined using the equation that relates wavelength and velocity. It was found that the horizontal wavelength in summer was almost evenly distributed between 10 and 40 km, while for autumn, winter and spring it was mostly between 10 and 30 km. The favoured speeds were between 40 and 50 m/s in autumn, and between 30 and 50 m/s in summer, but the AGWs in winter had a bimodal speed distribution of 20 - 40 and 50 - 70 m/s. The majority of periods observed in all seasons were less than 20 minutes, with a dominant peak of 5 - 10 minutes in autumn and spring. There was no favoured propagation direction in spring, but AGWs favoured a southeastward propagation in summer, and a southward propagation in autumn and winter. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-04
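The period determination described in the abstract above uses the relation between horizontal wavelength and phase speed, T = λ/v. A minimal sketch of that arithmetic, with illustrative values that are not taken from the thesis:

```python
def agw_period_minutes(wavelength_km: float, speed_m_s: float) -> float:
    """Period of a horizontally propagating gravity wave, T = lambda / v."""
    period_s = wavelength_km * 1000.0 / speed_m_s
    return period_s / 60.0

# e.g. a 30 km wave moving at 50 m/s has a 10-minute period,
# consistent with the sub-20-minute periods reported above
print(agw_period_minutes(30.0, 50.0))  # → 10.0
```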
Observations of cosmic re-ionisation with the Hydrogen Epoch of Reionization Array: simulations of closure phase spectra
- Authors: Charles, Ntsikelelo
- Date: 2021-04
- Subjects: Epoch of reionization , Space interferometry , Astronomy -- Observations , Closure phase spectra
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/174470 , vital:42480
- Description: The 21 cm transition from neutral Hydrogen promises to be the best observational probe of the Epoch of Reionisation. It has driven the construction of the new generation of low frequency radio interferometric arrays, including the Hydrogen Epoch of Reionization Array (HERA). The main difficulty in measuring the 21 cm signal is the presence of bright foregrounds that require very accurate interferometric calibration. Thyagarajan et al. (2018) proposed the use of closure phase quantities as a means to detect the 21 cm signal, which has the advantage of being independent (to first order) of calibration errors and therefore bypasses the need for accurate calibration. Closure phases are, however, affected by so-called direction-dependent effects, e.g. the fact that the dishes - or antennas - of an interferometric array are not identical to each other and, therefore, yield different antenna primary beam responses. In this thesis, we investigate the impact of direction-dependent effects on closure quantities and simulate the impact that primary antenna beams affected by mutual coupling have on the foreground closure phase and its power spectrum, i.e. the power spectrum of the bispectrum phase (Thyagarajan et al., 2020). Our simulations show that primary beams affected by mutual coupling lead to an overall leakage of foreground power into the so-called EoR window (power from smooth-spectrum foregrounds is otherwise confined to low k modes). We quantified this effect and found that the leakage is up to ~8 orders of magnitude higher than in the case of an ideal beam at k∥ > 0.5 h Mpc⁻¹. We also found that the foreground leakage is worse when edge antennas are included, as their primary beams differ more from those of antennas at the centre of the array. The leakage magnitude is worse when bright foregrounds appear in the antenna sidelobes, as expected. Our simulations provide a useful framework to interpret observations and assess which power spectrum region is expected to be most contaminated by foreground power leakage. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-04
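The property the abstract above relies on — that the closure phase around a triangle of antennas is independent, to first order, of antenna-based calibration errors — can be verified with a few lines of arithmetic. A toy sketch (phase values are illustrative, not HERA data):

```python
# True source visibility phases on the three baselines of an antenna triangle
phi_12, phi_23, phi_31 = 0.3, -0.7, 0.4   # radians (toy values)

# Antenna-based calibration phase errors
g = {1: 0.25, 2: -0.6, 3: 1.1}

# Observed baseline phase is corrupted as phi_ij_obs = phi_ij + g_i - g_j
obs_12 = phi_12 + g[1] - g[2]
obs_23 = phi_23 + g[2] - g[3]
obs_31 = phi_31 + g[3] - g[1]

# Summing around the closed loop, every g_i enters once with + and once with -,
# so the antenna errors cancel exactly
closure_true = phi_12 + phi_23 + phi_31
closure_obs = obs_12 + obs_23 + obs_31
```

Direction-dependent effects such as non-identical primary beams break this cancellation, which is the leakage mechanism quantified in the thesis.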
The development of an ionospheric storm-time index for the South African region
- Authors: Tshisaphungo, Mpho
- Date: 2021-04
- Subjects: Ionospheric storms -- South Africa , Global Positioning System , Neural networks (Computer science) , Regression analysis , Ionosondes , Auroral electrojet , Geomagnetic indexes , Magnetic storms -- South Africa
- Language: English
- Type: thesis , text , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/178409 , vital:42937 , 10.21504/10962/178409
- Description: This thesis presents the development of a regional ionospheric storm-time model which forms the foundation of an index to provide a quick view of ionospheric storm effects over the South African mid-latitude region. The model is based on foF2 measurements from four South African ionosonde stations. The data coverage for the model development over Grahamstown (33.3°S, 26.5°E), Hermanus (34.42°S, 19.22°E), Louisvale (28.50°S, 21.20°E), and Madimbo (22.39°S, 30.88°E) is 1996-2016, 2009-2016, 2000-2016, and 2000-2016 respectively. Data from the Global Positioning System (GPS) and the radio occultation (RO) technique were used during validation. As the measure of either a positive or negative storm effect, the variation of the critical frequency of the F2 layer (foF2) from the monthly median values (denoted as ΔfoF2) is modeled. The modeling of ΔfoF2 is based on storm-time data only, with the criteria Dst ≤ -50 nT and Kp > 4. The modeling methods used in the study were artificial neural networks (ANN), linear regression (LR) and polynomial functions. The approach taken was to first test the modeling techniques on a single station before expanding the study to cover the regional aspect. The single-station model was developed based on ionosonde data over Grahamstown. Model inputs related to seasonal variation, diurnal variation, geomagnetic activity and solar activity were considered. For the geomagnetic activity, three indices, namely the symmetric disturbance in the horizontal component of the Earth's magnetic field (SYM-H), the Auroral Electrojet (AE) index and the local geomagnetic index A, were included as inputs. The performance of the single-station model revealed that, of the three geomagnetic indices, the SYM-H index has the largest contribution: 41% and 54% based on the ANN and LR techniques respectively. 
The average correlation coefficient (R) for both ANN and LR models was 0.8 when validated on selected storms falling within the period of model development. When validated using storms falling outside the period of model development, the model gave R values of 0.6 and 0.5 for ANN and LR respectively. In addition, GPS total electron content (TEC) derived measurements were used to estimate foF2 data, because there are more GPS receivers than ionosonde locations and utilising these data increases the spatial coverage of the regional model. The estimation of foF2 from GPS TEC was done at GPS-ionosonde co-locations using polynomial functions. Average R values of 0.69 and 0.65 were obtained between actual and derived ΔfoF2 over the co-locations and other GPS stations respectively. Validation of GPS TEC derived foF2 with RO data over regions outside the ionospheric pierce point coverage with respect to ionosonde locations gave R greater than 0.9 for the selected storm period of 4-8 August 2011. The regional storm-time model was then developed based on the ANN technique using the four South African ionosonde stations. Maximum and minimum R values of 0.6 and 0.5 were obtained over ionosonde and GPS locations respectively. This model forms the basis of the regional ionospheric storm-time index. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-04
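One of the modeling methods named in the abstract above is linear regression of ΔfoF2 against geomagnetic-activity inputs. A minimal illustrative sketch of that idea using a least-squares fit on synthetic data — the index values and the linear coefficients below are invented for the demonstration and are not the thesis's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic storm-time samples of the three indices used as inputs
sym_h = rng.normal(-80, 30, n)    # SYM-H (nT), negative during storms
ae = rng.normal(800, 200, n)      # Auroral Electrojet index
a_local = rng.normal(40, 10, n)   # local geomagnetic index A

# Hypothetical linear response of dfoF2 to the indices (for the demo only)
true_w = np.array([0.02, -0.001, -0.03])
dfof2 = sym_h * true_w[0] + ae * true_w[1] + a_local * true_w[2]

# Least-squares fit recovers the coefficients from the samples
X = np.column_stack([sym_h, ae, a_local])
w, *_ = np.linalg.lstsq(X, dfof2, rcond=None)
```

The thesis's ANN variant replaces this linear map with a trained network, and adds seasonal, diurnal and solar-activity inputs.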
Accelerated implementations of the RIME for DDE calibration and source modelling
- Authors: Van Staden, Joshua
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration , Radio astronomy -- Data processing , Radio interferometers -- Data processing , Radio interferometers -- Calibration -- Data processing
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172422 , vital:42199
- Description: Second- and third-generation calibration methods filter out subtle effects in interferometer data, and therefore yield significantly higher dynamic ranges. The basis of these calibration techniques relies on building a model of the sky and corrupting it with models of the effects acting on the sources. The sensitivities of modern instruments call for more elaborate models to capture the level of detail required to achieve accurate calibration. This thesis implements two types of models to be used in second- and third-generation calibration. The first is shapelets, which can be used to model radio source morphologies directly in uv space. The second is Zernike polynomials, which can be used to represent the primary beam of the antenna. We implement these models in the CODEX-AFRICANUS package and provide a set of unit tests for each model. Additionally, we compare our implementations against other methods of representing these objects and instrumental effects, namely the NIFTY-GRIDDER against shapelets and a FITS-interpolation method against the Zernike polynomials. We find that, to achieve sufficient accuracy, our implementation of the shapelet model has a higher runtime than that of the NIFTY-GRIDDER. However, the NIFTY-GRIDDER cannot simulate a component-based sky model while the shapelet model can. Additionally, the shapelet model is fully parametric, which allows for integration into a parameterised solver. We find that, while having a smaller memory footprint, our Zernike model has a greater computational complexity than the FITS-interpolated method. However, the Zernike implementation retains floating-point accuracy in its modelling, while the FITS-interpolated model loses some accuracy through the discretisation of the beam.
- Full Text:
- Date Issued: 2021
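The Zernike polynomials mentioned in the abstract above are built from radial polynomials R_n^m on the unit disc. A self-contained sketch of the standard textbook formula — this is not the CODEX-AFRICANUS implementation, just the underlying function it evaluates:

```python
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Radial Zernike polynomial R_n^m(rho), 0 <= rho <= 1 (textbook formula)."""
    m = abs(m)
    if (n - m) % 2:          # R_n^m vanishes when n - m is odd
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# R_2^0(rho) = 2 rho^2 - 1 is the familiar defocus term
print(zernike_radial(2, 0, 0.5))  # → -0.5
```

A full beam model multiplies each radial term by an azimuthal cos(mθ) or sin(mθ) factor and fits the coefficients to the measured primary beam.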
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science just waiting for discovery are already being generated, and these instruments will produce far more data than has ever been collected before. However, with every new instrument the data rates grow to unprecedented quantities, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by a scientist who lacks formal training in software development, resulting in software that never progresses beyond prototype quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio astronomy pipelines since the inception of the field in the 1940s. In doing so, the steps required to create a radio image are investigated. We used the lessons learned to identify patterns in the challenges experienced, and the solutions created to address them over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs properly, with all items coexisting on the same system. Where required and possible, bugs and portability issues were fixed and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases. 
The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines, and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bioinformatics scientists. CommonWL is already supported by a wide range of tools, among them schedulers, visualisers and editors. Consequently, a pipeline made with CommonWL can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional: applying a generative adversarial network based on deep learning techniques to the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. It is shown that this method performs the restoration and matches the performance of a single clean cycle. In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest that a smaller scope should be the starting point for further studies, and that optimising the training of the neural network could produce the desired results.
- Full Text:
- Date Issued: 2021
Parametrised gains for direction-dependent calibration
- Authors: Russeeaeon, Cyndie
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172400 , vital:42196
- Description: Calibration in radio interferometry describes the process of estimating and correcting for instrumental errors in data. Direction-dependent (DD) calibration entails correcting for corruptions which vary across the sky. For small field of view observations, DD corruptions can be ignored, but for wide-field observations it is crucial to account for them. Traditional maximum likelihood calibration is not necessarily efficient in low signal-to-noise ratio (SNR) scenarios, and this can lead to overfitting. This can bias continuum subtraction and hence restrict spectral line studies. Since DD effects are expected to vary smoothly across the sky, the gains can be parametrised as a smooth function of the sky coordinates. Hence, we implement a solver where the atmosphere is modelled using a time-variant 2-dimensional phase screen with an arbitrary known frequency dependence. We assume arbitrary linear basis functions for the gains over the phase screen. The implemented solver is optimised using the diagonal approximation of the Hessian, as shown in previous studies. We present a few simulations to illustrate the performance of the solver.
- Full Text:
- Date Issued: 2021
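The parametrisation described in the abstract above expresses the gain phase as a linear combination of basis functions of the sky coordinates, scaled by a known frequency dependence. A toy sketch of that idea — the low-order polynomial basis and the ionospheric-style 1/ν scaling are illustrative choices, since the thesis leaves both arbitrary:

```python
import numpy as np

def screen_phase(coeffs: np.ndarray, l: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Phase screen as a linear combination of low-order 2-D polynomial
    basis functions of the sky coordinates (l, m)."""
    basis = np.stack([np.ones_like(l), l, m, l * m, l**2, m**2])
    return np.tensordot(coeffs, basis, axes=1)

def dd_gain(coeffs, l, m, freq_hz, ref_hz=1.4e9):
    """Direction-dependent gain with an assumed ionosphere-like 1/nu
    frequency scaling of the phase (illustrative, not the thesis's choice)."""
    return np.exp(1j * screen_phase(coeffs, l, m) * ref_hz / freq_hz)

# Evaluate the gain along a strip of directions
l = np.linspace(-0.05, 0.05, 4)
m = np.zeros_like(l)
g = dd_gain(np.array([0.1, 2.0, -1.0, 0.0, 5.0, 5.0]), l, m, 1.4e9)
```

Because the phase is linear in the coefficients, the solver can fit them directly, and the diagonal Hessian approximation mentioned above keeps each update cheap.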
A 150 MHz all sky survey with the Precision Array to Probe the Epoch of Reionization
- Authors: Chege, James Kariuki
- Date: 2020
- Subjects: Epoch of reionization -- Research , Astronomy -- Observations , Radio interferometers
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/117733 , vital:34556
- Description: The Precision Array to Probe the Epoch of Reionization (PAPER) was built to measure the redshifted 21 cm line of hydrogen from cosmic reionization. Such low frequency observations promise to be the best means of understanding the cosmic dawn, when the first galaxies in the universe formed, and the Epoch of Reionization, when the intergalactic medium changed from neutral to ionized. The major challenge to these observations is the presence of astrophysical foregrounds that are much brighter than the cosmological signal. Here, I present an all-sky survey at 150 MHz obtained from the analysis of 300 hours of PAPER observations. Particular focus is given to the calibration and imaging techniques that need to deal with the wide field of view of a non-tracking instrument. The survey covers ~ 7000 square degrees of the southern sky. From a sky area of 4400 square degrees out of the total survey area, I extract a catalogue of sources brighter than 4 Jy whose accuracy was tested against the published GLEAM catalogue, leading to a fractional difference rms better than 20%. The catalogue provides an all-sky accurate model of the extragalactic foreground to be used for the calibration of future Epoch of Reionization observations and to be subtracted from the PAPER observations themselves in order to mitigate the foreground contamination.
- Full Text:
- Date Issued: 2020
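The catalogue-accuracy test described above boils down to computing the rms of fractional flux differences over cross-matched sources. A minimal sketch (with made-up fluxes, not PAPER or GLEAM data):

```python
import numpy as np

def fractional_difference_rms(s_meas, s_ref):
    """RMS of (S_measured - S_reference) / S_reference for cross-matched sources."""
    frac = (np.asarray(s_meas) - np.asarray(s_ref)) / np.asarray(s_ref)
    return np.sqrt(np.mean(frac ** 2))

ref = np.array([5.0, 8.0, 12.0, 20.0])          # hypothetical reference fluxes (Jy)
meas = ref * np.array([1.1, 0.95, 1.05, 0.9])   # 10%, 5%, 5%, 10% flux offsets
rms = fractional_difference_rms(meas, ref)      # ~0.079, i.e. well under 20%
```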
A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilising the tilted-ring model can be grouped into two-dimensional and three-dimensional approaches. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation because beam smearing only increases the uncertainty of its parameters, without the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is a software package that constructs parametrised models of high-resolution data cubes of rotating galaxies. It uses the tilted-ring model, and with that a combination of parameters such as surface brightness, position angle, rotation velocity and inclination, to describe galaxies. TiRiFiC works by directly fitting tilted-ring models to spectroscopic data cubes and hence is not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is indispensable as an analytic method in future H I surveys. In the current implementation, though, there are several drawbacks. The implemented optimisers search for local solutions in parameter space only, do not quantify correlations between parameters and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, implemented in MultiNest (Feroz et al. 2008), as it allows for sampling a posterior distribution irrespective of its multimodal nature, resulting in parameter samples that correspond to the maximum in the posterior distribution.
These parameter samples can be used as well to quantify correlations and find errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values.
- Full Text:
- Date Issued: 2020
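The role the Bayesian sampler plays here — drawing samples from a possibly multimodal posterior so that parameter errors and correlations come out of the sample cloud — can be illustrated with a deliberately simpler sampler. The sketch below fits a toy two-parameter rotation-curve model with a plain Metropolis random walk (a stand-in for MultiNest's nested sampling, not the thesis's method); the model, priors and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "rotation curve" v(r) = v0 * (1 - exp(-r/h)) with parameters (v0, h).
def model(theta, r):
    v0, h = theta
    return v0 * (1.0 - np.exp(-r / h))

r = np.linspace(0.5, 10, 40)
true = np.array([200.0, 2.0])
data = model(true, r) + 5.0 * rng.standard_normal(r.size)

def log_post(theta):
    if theta[0] <= 0 or theta[1] <= 0:            # flat positive priors
        return -np.inf
    resid = data - model(theta, r)
    return -0.5 * np.sum((resid / 5.0) ** 2)      # Gaussian likelihood, sigma = 5

# Metropolis random walk over the posterior.
theta = np.array([150.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.standard_normal(2) * np.array([2.0, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])                # discard burn-in
v0_est, h_est = samples.mean(axis=0)
v0_err, h_err = samples.std(axis=0)               # per-parameter uncertainties
```

The point of the exercise is the last line: unlike a local optimiser, the sampler yields parameter uncertainties and correlations directly from the posterior samples, which is exactly the gap in the current TiRiFiC optimisers that the thesis addresses.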
A study of why some physics concepts in the South African Physical Science curriculum are poorly understood in order to develop a targeted action-research intervention for Newton’s second law
- Authors: Cobbing, Kathleen Margaret
- Date: 2020
- Subjects: Physics -- Study and teaching (Secondary) -- South Africa , Physics -- Examinations, questions, etc. -- South Africa , Motion -- Study and teaching (Secondary) -- South Africa
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146903 , vital:38575
- Description: Globally, many students show a poor understanding of concepts in high school physics and lack the necessary problem-solving skills that the course demands. The application of Newton’s second law was found to be particularly problematic through document analysis of South African examination feedback reports, as well as from an analysis of the physics examinations at a pair of well-resourced South African independent schools that follow the Independent Examination Board curriculum. Through an action-research approach, a resource for use by students was designed and modified to improve students’ understanding of this concept, while modelling problem-solving methods. The resource consisted of brief revision notes, worked examples and scaffolded exercises. The design of the resource was influenced by the theory of cognitive apprenticeship, cognitive load theory and conceptual change theory. One of the aims of the resource was to encourage students to translate between the different representations of a problem situation: symbolic, abstract, model and concrete. The impact of this resource was evaluated at a pair of schools using a mixed-methods approach. This incorporated pre- and post-tests for a quantitative assessment, qualitative student evaluations and the analysis of examination scripts. There was an improvement from pre- to post-test for all four iterations of the intervention, and these improvements were shown to be significant. The use of the resource led to an increase in the quality and quantity of diagrams drawn by students in subsequent assessments.
- Full Text:
- Date Issued: 2020
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this due to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver and describe different scenarios where it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of the RFI and the solution interval employed during radio interferometric calibration.
Hence, we investigate the effects of solution intervals and the different factors to consider in order to select adequate solution intervals. Furthermore, we propose a practical brute force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
- Full Text:
- Date Issued: 2020
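The heavy-tailed-likelihood idea can be illustrated on a toy scalar-gain problem: an ordinary least-squares gain solve is dragged off by RFI-like outliers, while iteratively reweighting residuals with Student's t weights down-weights them. This is a simplified stand-in for the behaviour of a robust solver, not CubiCal's actual implementation; all data and parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Model visibilities and data corrupted by one complex gain plus outliers.
m = rng.standard_normal(100) + 1j * rng.standard_normal(100)
g_true = 1.2 * np.exp(1j * 0.3)
d = g_true * m + 0.05 * (rng.standard_normal(100) + 1j * rng.standard_normal(100))
d[:5] += 10.0                            # a few strong RFI-like outliers

def robust_gain(d, m, nu=2.0, niter=20):
    """IRLS gain solve with Student's t weights: residuals with large
    magnitude are progressively down-weighted instead of dragging the
    solution toward the outliers."""
    g = np.sum(np.conj(m) * d) / np.sum(np.abs(m) ** 2)   # least-squares start
    for _ in range(niter):
        r = d - g * m
        s2 = np.mean(np.abs(r) ** 2)                      # crude scale estimate
        w = (nu + 1.0) / (nu + np.abs(r) ** 2 / s2)       # Student's t weights
        g = np.sum(w * np.conj(m) * d) / np.sum(w * np.abs(m) ** 2)
    return g

g_ls = np.sum(np.conj(m) * d) / np.sum(np.abs(m) ** 2)    # plain LS solution
g_rb = robust_gain(d, m)                                  # robust solution
```

With a Gaussian likelihood the five corrupted visibilities bias the gain noticeably; the t-weighted solve recovers the true gain to within a few percent, which is the same mechanism that reduces flux suppression from unmodelled sources.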
Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view, which states that our experience of time is only an illusion. The isolated Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks their potential influence. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. In aid of our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several of these theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that the accounts of time are unified in a more fundamental theory. An analytical treatment of the CPI, incorporating two different clock choices, and a qualitative analysis both confirm that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the results of the toy models and qualitative argument respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal interventions between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which forms a pressing problem in explaining our experience of time.
- Full Text:
- Date Issued: 2020
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as quartz annealed at 800 and 1000 °C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72 °C when measured at a heating rate of 1 °C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and kinetic analysis was carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for the sample annealed at 800 °C for 1 hour, peak I shows an inverse thermal quenching behaviour. We demonstrate that a peak affected by an inverse thermal quenching-like behaviour can still show the effect of thermal quenching when the dose the sample is irradiated to is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a certain temperature, were exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, corresponding to various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
- Full Text:
- Date Issued: 2020
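The donor/acceptor model mentioned at the end — coupled first-order differential equations driven by illumination — can be sketched with a minimal two-trap system. All rates, fractions and populations below are illustrative placeholders, not the fitted values from the thesis.

```python
import numpy as np

# Minimal donor/acceptor sketch of the PTTL kinetics: illumination empties
# a donor trap at rate lam, and a fraction f of the freed charge is
# retrapped at the acceptor responsible for the PTTL peak.
lam, f = 0.1, 0.4          # photoeviction rate (s^-1) and retrapping fraction
dt, T = 0.01, 60.0         # time step and illumination time (s)
nD, nA = 1.0, 0.0          # normalised donor and acceptor populations

for _ in range(int(T / dt)):   # forward-Euler integration of the coupled ODEs
    dnD = -lam * nD * dt       # dn_D/dt = -lam * n_D
    nA += -f * dnD             # dn_A/dt = +f * lam * n_D
    nD += dnD
```

Running this for a grid of illumination times T traces out a PTTL growth curve nA(T), which is the quantity the thesis fits against the measured PTTL intensities at different preheating temperatures.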
Finite precision arithmetic in Polyphase Filterbank implementations
- Authors: Myburgh, Talon
- Date: 2020
- Subjects: Radio interferometers , Interferometry , Radio telescopes , Gate array circuits , Floating-point arithmetic , Python (Computer program language) , Polyphase Filterbank , Finite precision arithmetic , MeerKAT
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146187 , vital:38503
- Description: The MeerKAT is the most sensitive radio telescope in its class, and it is important that systematic effects do not limit the dynamic range of the instrument, preventing this sensitivity from being harnessed for deep integrations. During commissioning, spurious artefacts were noted in the MeerKAT passband and the root cause was attributed to systematic errors in the digital signal path. Finite precision arithmetic used by the Polyphase Filterbank (PFB) was one of the main factors contributing to the spurious responses, together with bugs in the firmware. This thesis describes a software PFB simulator that was built to mimic the MeerKAT PFB and allow investigation into the origin and mitigation of the effects seen on the telescope. This simulator was used to investigate the effects on signal integrity of various rounding techniques, overflow strategies and dual polarisation processing in the PFB. Using the simulator to investigate a number of different signal levels, bit-width and algorithmic scenarios, it gave insight into how the periodic dips occurring in the MeerKAT passband were the result of the implementation using an inappropriate rounding strategy. It further indicated how to select the best strategy for preventing overflow while maintaining high quantization efficiency in the FFT. This practice of simulating the design behaviour in the PFB independently of the tools used to design the DSP firmware is a step towards an end-to-end simulation of the MeerKAT system (or any radio telescope using finite precision digital signal processing systems). This would be useful for design, diagnostics, signal analysis and prototyping of the overall instrument.
- Full Text:
- Date Issued: 2020
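Why the rounding strategy matters can be demonstrated without any PFB at all: truncation of fixed-point values introduces a systematic negative bias (which, accumulated through FFT stages, can show up as passband artefacts), while rounding to nearest is essentially unbiased. A toy quantisation comparison, unrelated to the actual MeerKAT firmware:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 100_000)   # full-precision samples in [-1, 1)
scale = 2 ** 7                    # quantise onto an 8-bit-like grid

trunc = np.floor(x * scale) / scale   # truncation (round toward -inf)
rne = np.round(x * scale) / scale     # round-half-to-even ("banker's rounding")

bias_trunc = np.mean(trunc - x)   # systematic offset, about -1/(2*scale)
bias_rne = np.mean(rne - x)       # near zero on average
```

The truncation bias is tiny per sample, but it is deterministic and signal-independent, so it accumulates coherently through a multi-stage fixed-point FFT; unbiased rounding turns the same error into noise that averages out instead.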