Combined spectral and stimulated luminescence study of charge trapping and recombination processes in α-Al2O3:C
- Authors: Nyirenda, Angel Newton
- Date: 2018
- Subjects: Luminescence , Thermoluminescence , Luminescence spectroscopy , Carbon-doped aluminium oxide , Radioluminescence , Time-resolved X-ray excited optical luminescence
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62683 , vital:28235
- Description: The main objective of this project was to gain a deeper understanding of the luminescence processes in α-Al₂O₃:C, a highly sensitive dosimetric material, using a combined spectral and stimulated luminescence study. The spectral studies concentrated on the emission spectra obtained using X-ray induced radioluminescence (XERL), X-ray excited thermoluminescence (XETL) and time-resolved X-ray excited optical luminescence (TR-XEOL) techniques. The stimulated luminescence studies were based on thermoluminescence (TL), optically stimulated luminescence (OSL) and phototransferred TL (PTTL) methods, which were used to study the radiation-induced defects at high beta doses and the deep traps, that is, traps with thermal depths beyond 500°C. The spectral and stimulated luminescence measurements were carried out using a high-sensitivity luminescence spectrometer and a Risø TL/OSL Model DA-20 Reader, respectively. The XERL emission spectrum measured at room temperature shows seven Gaussian peaks associated with F-centres (420 nm), F+-centres (334 nm), F2+-centres (559 nm), the Stokes vibronic band of Cr3+ (671 nm), the Cr3+ R-line emission (694 nm), the anti-Stokes vibronic band of Cr3+ (710 nm) and an unidentified emission band (260-300 nm) which we associate with hole recombination at a luminescence centre. The 694-nm R-line emission from Cr3+ impurity ions is most likely due to recombination of holes at Cr2+ during stimulated luminescence, and to intracentre excitation of Cr3+ by photon absorption in photoluminescence (PL). The Cr3+ emission decreases in intensity with repeated XERL measurements, whereas the intensity of the F-centre emission band remains almost constant. Depending on the X-ray irradiation dose, both holes and electrons may take part in the emission processes of peaks I (30-80°C), II (90-250°C) and III (250-320°C) during a TL readout, although electron recombination is dominant regardless of dose. 
At higher doses, the XETL emission spectra indicate that the dominant band associated with TL peak III (250-320°C) in the material shifts from the F-centre to Cr3+. Using deep-trap OSL, it has been confirmed that the main TL trap is also the main OSL trap, whereas the TL traps lying in the temperature range 400-550°C constitute the secondary OSL traps. There is evidence of strong retrapping at the main trap during optical stimulation of charges from the secondary OSL traps and the deep traps, and this retrapping occurs via the delocalized bands. At high beta doses, aggregate defect centres, which significantly alter the TL and OSL properties, are induced in the material. The induced aggregate centres are completely removed by heating a sample to 700°C. The radiation-induced defects cause the main TL peak to shift towards higher temperatures, increase its FWHM, reduce its maximum intensity and cause an underestimation of both the activation energy and the order of kinetics of the peak. On the other hand, the OSL response of the material is enhanced following a high irradiation dose. During sample storage in the dark at ambient temperature, charges migrate from the deep traps (donors) to the main and intermediate traps (acceptors); the major donor traps during this charge-transfer process lie between 500 and 600°C.
- Full Text:
- Date Issued: 2018
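The TL glow-peak properties discussed in the abstract above (peak position, FWHM, activation energy and order of kinetics) rest on standard glow-curve kinetics. A minimal sketch of a first-order (Randall-Wilkins) glow peak follows; the trap depth `E`, frequency factor `s`, initial concentration `n0` and heating rate `beta` are illustrative values, not parameters fitted in the thesis.

```python
# First-order (Randall-Wilkins) TL glow peak, I(T) = n0 * s * exp(-E/kT)
#   * exp(-(s/beta) * integral_{T0}^{T} exp(-E/kT') dT').
# All parameter values below are illustrative, not from the thesis.
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def tl_intensity(E, s, n0, beta, T_start, T_end, dT=0.1):
    """Return (temperatures, intensities) for a first-order glow peak.

    E: trap depth (eV); s: frequency factor (1/s); n0: initial trapped
    charge; beta: heating rate (K/s); temperatures in kelvin.
    """
    temps, intens = [], []
    integral = 0.0  # running trapezoid-free Riemann sum of exp(-E/kT') dT'
    T = T_start
    while T <= T_end:
        boltz = math.exp(-E / (K_B * T))
        integral += boltz * dT
        intensity = n0 * s * boltz * math.exp(-(s / beta) * integral)
        temps.append(T)
        intens.append(intensity)
        T += dT
    return temps, intens

# Example: a ~1.2 eV trap read out at 1 K/s peaks in the mid-400s K
temps, intens = tl_intensity(E=1.2, s=1e12, n0=1.0, beta=1.0,
                             T_start=300.0, T_end=600.0)
T_max = temps[intens.index(max(intens))]
```

Shifting `E` or `s` moves `T_max` and reshapes the peak, which is the kind of change the abstract attributes to radiation-induced aggregate defects.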
Long-term analysis of ionospheric response during geomagnetic storms in mid, low and equatorial latitudes
- Matamba, Tshimangadzo Merline
- Authors: Matamba, Tshimangadzo Merline
- Date: 2018
- Subjects: Ionospheric storms , Coronal mass ejections , Corotating interaction regions , Solar flares , Global Positioning System , Ionospheric critical frequencies , Equatorial Ionization Anomaly (EIA)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63991 , vital:28517
- Description: Understanding changes in the ionosphere is important for High Frequency (HF) communications and navigation systems. Ionospheric storms are disturbances in the Earth’s upper atmosphere caused by solar activity such as Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and solar flares. This thesis reports for the first time on an investigation of the ionospheric response to great geomagnetic storms (Disturbance storm time, Dst ≤ −350 nT) that occurred during solar cycle 23. The storm periods analysed were 29 March - 02 April 2001, 27 - 31 October 2003, 18 - 23 November 2003 and 06 - 11 November 2004. Global Navigation Satellite System (GNSS) Total Electron Content (TEC) and ionosonde critical frequency of the F2 layer (foF2) data over northern hemisphere (European sector) and southern hemisphere (African sector) mid-latitudes were used to study the ionospheric responses within 15°E - 40°E longitude and ±31° - ±46° geomagnetic latitude. Mid-latitude regions within the same longitude sector in both hemispheres were selected in order to assess the contribution of low-latitude changes, especially the expansion of the Equatorial Ionization Anomaly (EIA), also known as the dayside ionospheric super-fountain effect, during these storms. In all storm periods, both negative and positive ionospheric responses were observed in both hemispheres. Negative ionospheric responses were mainly due to changes in neutral composition, while the expansion of the EIA led to pronounced positive ionospheric storm effects at mid-latitudes for some storm periods. In other cases (e.g. 29 October 2003), Prompt Penetration Electric Fields (PPEF), EIA expansion and large-scale Travelling Ionospheric Disturbances (TIDs) were found to be present during the positive storm effect at mid-latitudes in both hemispheres. An increase in TEC on 28 October 2003 was attributed to the large solar flare with a previously determined intensity of X45 ± 5. 
A further statistical analysis of ionospheric storm effects due to Corotating Interaction Region (CIR)- and Coronal Mass Ejection (CME)-driven storms was performed. The storm periods analysed occurred during 2001 - 2015, which covers parts of solar cycles 23 and 24. The criteria Dst ≤ −30 nT and Kp ≥ 3 were used to identify the storm periods considered. Ionospheric TEC derived from IGS stations lying within 30°E - 40°E geographic longitude at mid, low and equatorial latitudes over the African sector was used. The statistical analysis of ionospheric storm effects was compared over mid, low and equatorial latitudes in the African sector for the first time. Positive ionospheric storm effects were more prevalent during both CME- and CIR-driven storms over all stations considered in this study. Negative ionospheric storm effects occurred only during CME-driven storms over mid-latitude stations and were more prevalent in summer. Another interesting finding is that, among the stations considered over mid, low and equatorial latitudes, negative-positive ionospheric responses were observed only over low and equatorial latitudes. A significant number of cases where the electron density changes remained within the background variability during storm conditions were observed over the low-latitude stations compared to the other latitude regions.
- Full Text:
- Date Issued: 2018
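The storm-selection criterion quoted above (Dst ≤ −30 nT and Kp ≥ 3) amounts to a simple joint filter over the two index series. A minimal sketch, with synthetic hourly values rather than real Dst/Kp data:

```python
# Flag hours where both geomagnetic-index criteria are met.
# The sample series below are synthetic, not actual index data.
def find_storm_hours(dst_series, kp_series, dst_thresh=-30.0, kp_thresh=3.0):
    """Return indices (hours) where Dst <= dst_thresh and Kp >= kp_thresh.

    dst_series: hourly Dst values (nT); kp_series: Kp values, with each
    3-hourly Kp value repeated for every hour it covers.
    """
    return [i for i, (dst, kp) in enumerate(zip(dst_series, kp_series))
            if dst <= dst_thresh and kp >= kp_thresh]

dst = [-5.0, -12.0, -40.0, -85.0, -60.0, -25.0, -10.0]
kp  = [ 1.0,   2.0,   4.0,   6.0,   5.0,   3.0,   2.0]
storm_hours = find_storm_hours(dst, kp)
# hours 2-4 satisfy both criteria; hour 5 fails the Dst threshold
```

In practice the flagged hours would then be grouped into contiguous storm periods before extracting the corresponding TEC.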
Modelling Ionospheric vertical drifts over the African low latitude region
- Dubazane, Makhosonke Berthwell
- Authors: Dubazane, Makhosonke Berthwell
- Date: 2018
- Subjects: Ionospheric drift , Magnetometers , Functions, Orthogonal , Neural networks (Computer science) , Ionospheric electron density -- Africa , Communication and Navigation Outage Forecasting Systems (C/NOFS)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63356 , vital:28396
- Description: Low- and equatorial-latitude vertical plasma drifts and electric fields govern the formation and evolution of ionospheric density structures which affect space-based systems such as communications, navigation and positioning. Dynamical and electrodynamical processes play important roles in plasma distribution at different altitudes. Because of the high variability of the E × B drift in low-latitude regions, coupled with various processes that sometimes originate from high latitudes, especially during geomagnetic storm conditions, it is challenging to develop accurate vertical drift models. This is compounded by the fact that very few instruments are dedicated to providing electric field, and hence E × B drift, data in low/equatorial-latitude regions. In particular, there exists no ground-based instrument for direct measurement of E × B drift in the African sector. This study presents the first investigation aimed at modelling the long-term variability of the low-latitude vertical E × B drift over the African sector using a combination of Communication and Navigation Outage Forecasting System (C/NOFS) and ground-based magnetometer observations during 2008-2013. Because the approach is based on the estimation of the equatorial electrojet from ground-based magnetometer observations, the developed models are valid only for local daytime. Three modelling techniques have been considered. Empirical Orthogonal Functions and partial least squares have been applied to vertical E × B drift modelling for the first time. Artificial neural networks, which have the advantage of learning underlying relationships between a set of inputs and a known output, were also used in vertical E × B drift modelling. Due to the lack of E × B drift data over the African sector, the developed models were validated using satellite data and the climatological Scherliess-Fejer model incorporated within the International Reference Ionosphere model. 
A maximum correlation coefficient of ∼0.8 was achieved when validating the developed models against C/NOFS E × B drift observations that were not used in any model development. Most of the time, the climatological model overestimates the local daytime vertical E × B drift velocities. The methods and approach presented in this study provide a background for constructing vertical E × B drift databases in longitude sectors that lack radar instrumentation. This will in turn make it possible to study the day-to-day variability of the vertical E × B drift and, hopefully, lead to the development of regional and global models that incorporate local time information in different longitude sectors.
- Full Text:
- Date Issued: 2018
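The magnetometer-based equatorial electrojet estimation that underlies the daytime-only validity noted above is commonly done with a delta-H technique: subtracting the baseline-corrected horizontal field H of an off-equatorial station from that of an equatorial station removes large-scale contributions (e.g. the ring current) and leaves the electrojet signature. A minimal sketch, with synthetic hourly values in nT; the station series and quiet-time baseline choice are illustrative, not the thesis's actual processing chain.

```python
# Delta-H sketch of magnetometer-based EEJ estimation.
# All field values are synthetic, in nT.
def remove_baseline(h, night_indices=(0, 1)):
    """Subtract a quiet nighttime baseline (mean over night_indices) from H."""
    base = sum(h[i] for i in night_indices) / len(night_indices)
    return [v - base for v in h]

def eej_proxy(h_equatorial, h_off_equatorial, night_indices=(0, 1)):
    """EEJ estimate: baseline-corrected equatorial H minus off-equatorial H."""
    he = remove_baseline(h_equatorial, night_indices)
    ho = remove_baseline(h_off_equatorial, night_indices)
    return [a - b for a, b in zip(he, ho)]

# Both stations share a common background variation, but only the
# equatorial station sees the daytime electrojet enhancement.
h_eq  = [28000.0, 28000.0, 28010.0, 28060.0, 28090.0, 28040.0]
h_off = [29500.0, 29500.0, 29510.0, 29520.0, 29525.0, 29515.0]
eej = eej_proxy(h_eq, h_off)
```

A series like `eej`, paired with C/NOFS drift observations, is the kind of daytime input/target pair the EOF, partial-least-squares and neural-network models described above could be trained on.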
Tomographic imaging of East African equatorial ionosphere and study of equatorial plasma bubbles
- Authors: Giday, Nigussie Mezgebe
- Date: 2018
- Subjects: Ionosphere -- Africa, Central , Tomography -- Africa, Central , Global Positioning System , Neural networks (Computer science) , Space environment , Multi-Instrument Data Analysis System (MIDAS) , Equatorial plasma bubbles
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63980 , vital:28516
- Description: Although the African equatorial ionospheric region has the largest ground footprint along the geomagnetic equator, it has not been well studied due to the absence of adequate ground-based instruments. This thesis presents research on both tomographic imaging of the African equatorial ionosphere and the study of ionospheric irregularities/equatorial plasma bubbles (EPBs) under varying geomagnetic conditions. The Multi-Instrument Data Analysis System (MIDAS), an inversion algorithm, was investigated for its validity and ability as a tool to reconstruct multi-scale ionospheric structures under different geomagnetic conditions. This was done for the narrow East African longitude sector with data from the available ground Global Positioning System (GPS) receivers. The MIDAS results were compared to the results of two models, namely the IRI and GIM. MIDAS results compared more favourably with the observed vertical total electron content (VTEC), with a computed maximum correlation coefficient (r) of 0.99 and minimum root-mean-square error (RMSE) of 2.91 TECU, than did the results of the IRI-2012 and GIM models, with maximum r of 0.93 and 0.99, and minimum RMSE of 13.03 TECU and 6.52 TECU, respectively, over all the test stations and validation days. The ability of MIDAS to reconstruct storm-time TEC was also compared with the results produced by an Artificial Neural Network (ANN) for the African low- and mid-latitude regions. In terms of latitude, on average, MIDAS performed 13.44% better than the ANN at African mid-latitudes, while MIDAS underperformed at low latitudes. This thesis also reports on the effects of moderate geomagnetic conditions on the evolution of EPBs and/or ionospheric irregularities during their season of occurrence, using measurements from space- and ground-based instruments for the East African equatorial sector. 
The study showed that the strength of the daytime equatorial electrojet (EEJ), the steepness of the TEC peak-to-trough gradient and the meridional/transequatorial thermospheric winds sometimes have collective, interwoven effects, while at other times one mechanism dominates. In summary, this research offered tomographic results that outperform those of the commonly used (“standard”) global models (i.e. IRI and GIM) for a longitude sector of importance to space weather which has not been adequately studied due to a lack of sufficient instrumentation.
- Full Text:
- Date Issued: 2018
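The validation metrics quoted above, the correlation coefficient (r) and RMSE in TECU between reconstructed and observed VTEC, can be computed as in the following sketch; the model and observed TEC series are synthetic, not values from the thesis.

```python
# Pearson correlation and RMSE between model and observed VTEC.
# The five TEC values (in TECU) below are synthetic.
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(x, y):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

model_vtec = [10.2, 15.1, 22.8, 30.5, 25.0]
obs_vtec   = [11.0, 14.6, 23.5, 29.8, 26.1]
r_val = pearson_r(model_vtec, obs_vtec)
rmse_val = rmse(model_vtec, obs_vtec)
```

Applied per station and per validation day, the maxima of `r_val` and minima of `rmse_val` give summary figures of the kind reported above.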
Automation of source-artefact classification
- Sebokolodi, Makhuduga Lerato Lydia
- Authors: Sebokolodi, Makhuduga Lerato Lydia
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4920 , vital:20743
- Description: The high sensitivities of modern radio telescopes will enable the detection of very faint astrophysical sources in the distant Universe. However, these high sensitivities also imply that calibration artefacts, which were below the noise for less sensitive instruments, will emerge above the noise and may limit the dynamic range capabilities of these instruments. Detecting faint emission will require detection thresholds close to the noise and this may cause some of the artefacts to be incorrectly detected as real emission. The current approach is to manually remove the artefacts, or set high detection thresholds in order to avoid them. The former will not be possible given the large quantities of data that these instruments will produce, and the latter results in very shallow and incomplete catalogues. This work uses the negative detection method developed by Serra et al. (2012) to distinguish artefacts from astrophysical emission in radio images. We also present a technique that automates the identification of sources subject to severe direction-dependent (DD) effects and thus allows them to be flagged for DD calibration. The negative detection approach is shown to provide high reliability and high completeness catalogues for simulated data, as well as a JVLA observation of the 3C147 field (Mitra et al., 2015). We also show that our technique correctly identifies sources that require DD calibration for datasets from the KAT-7, LOFAR, JVLA and GMRT instruments.
- Full Text:
- Date Issued: 2017
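The negative-detection idea used above exploits the fact that thermal noise and calibration artefacts are roughly symmetric about zero, while real emission is strictly positive: detections in the sign-inverted image estimate the false-positive rate, and hence the reliability, of positive detections at a given threshold. A simplified sketch; the peak lists and `reliability` helper are illustrative, not the Serra et al. (2012) implementation.

```python
# Reliability estimate from negative detections.
# Peak values below are synthetic (e.g. in Jy/beam).
def reliability(positive_peaks, negative_peaks, threshold):
    """Fraction of positive detections above `threshold` expected to be real.

    Negative detections (peaks in the sign-inverted image) proxy the number
    of spurious positive detections at the same threshold.
    """
    n_pos = sum(1 for p in positive_peaks if p > threshold)
    n_neg = sum(1 for p in negative_peaks if -p > threshold)
    if n_pos == 0:
        return 0.0
    return max(0.0, 1.0 - n_neg / n_pos)

pos_peaks = [5.0, 3.2, 1.1, 0.9, 0.8]
neg_peaks = [-0.9, -0.85]
rel_strict = reliability(pos_peaks, neg_peaks, threshold=1.0)
rel_loose = reliability(pos_peaks, neg_peaks, threshold=0.7)
```

Lowering the threshold raises completeness but, as `rel_loose` shows, lets more artefacts through; sweeping the threshold trades the two off.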
Calibration and imaging with variable radio sources
- Authors: Mbou Sob, Ulrich Armel
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/37977 , vital:24721
- Description: Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration (with inaccurate knowledge of the sky and instruments) leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not included in sky models during calibration. It investigates both long-term and short-term variability, and uses the root mean square (rms) and the power spectrum as metrics to evaluate the “quality” of the residual visibilities obtained through calibration. We show that overestimation and underestimation of source flux density during calibration produce similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed, which prevents them from being removed by advanced techniques such as stacking. The power spectra measured from the residuals with a variable source were significantly higher than those from residuals without a variable source. This implies that advanced calibration techniques and complete sky models will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below the thermal noise.
- Full Text:
- Date Issued: 2017
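As an illustrative aside, the two residual metrics named in the abstract, the rms and the power spectrum of the residual visibilities, can be sketched in a few lines of NumPy. The toy residuals, the unmodelled-source term and all numerical values below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def residual_rms(residuals):
    """RMS amplitude of complex residual visibilities."""
    return np.sqrt(np.mean(np.abs(residuals) ** 2))

def residual_power_spectrum(residuals):
    """Naive 1-D power spectrum of the residual series on one baseline."""
    return np.abs(np.fft.fft(residuals)) ** 2 / residuals.size

# Toy residuals: thermal noise only, versus noise plus an unmodelled
# variable-source term left behind by calibration.
rng = np.random.default_rng(0)
t = np.arange(512)
noise = rng.normal(0, 1, t.size) + 1j * rng.normal(0, 1, t.size)
variable = 0.5 * np.cos(2 * np.pi * t / 64)   # stand-in for source variability
rms_clean = residual_rms(noise)
rms_var = residual_rms(noise + variable)
spec_clean = residual_power_spectrum(noise)
spec_var = residual_power_spectrum(noise + variable)
```

Consistent with the abstract's finding, the residuals containing the unmodelled variable source yield both a higher rms and a higher integrated power spectrum than the noise-only residuals.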
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. 
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
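A minimal sketch of the central idea above, that plain averaging is convolution by a boxcar-like window and that other baseline-dependent windows change the smearing response, might look as follows. The fringe rate, compression factor and all values are invented toy numbers, not the thesis implementation:

```python
import numpy as np

def window_average(vis, width, window=None):
    """Compress visibilities by a factor `width` using a window function;
    a boxcar (window=None) reproduces plain averaging."""
    if window is None:
        window = np.ones(width)                 # traditional boxcar
    w = window / window.sum()                   # preserve a constant signal
    n = vis.size // width
    return (vis[: n * width].reshape(n, width) * w).sum(axis=1)

# A source at the phase centre is a constant visibility: unattenuated.
centre = window_average(np.ones(64, dtype=complex), 8)

# An off-centre source appears as a winding fringe (faster on longer
# baselines) and is smeared: the boxcar gives a sinc-like amplitude loss.
t = np.arange(64)
fringe = np.exp(2j * np.pi * 0.05 * t)
smeared = window_average(fringe, 8)
attenuation = np.abs(smeared).mean()            # < 1, i.e. smearing
```

Swapping the boxcar for another window function reshapes this attenuation-versus-fringe-rate response, which is the degree of freedom the thesis exploits.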
Ionospheric disturbances during magnetic storms at SANAE
- Authors: Hiyadutuje, Alicreance
- Date: 2017
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/54956 , vital:26639
- Description: Coronal mass ejections (CMEs) and solar flares associated with extreme solar activity may strike the Earth's magnetosphere and give rise to geomagnetic storms. During geomagnetic storms, the polar plasma dynamics may influence the middle- and low-latitude ionosphere via travelling ionospheric disturbances (TIDs). These are wave-like electron density disturbances caused by atmospheric gravity waves propagating in the ionosphere. TIDs focus and defocus SuperDARN signals, producing a characteristic pattern of ground backscattered power (Samson et al., 1989). Geomagnetic storms may cause a decrease of total electron content (TEC), i.e. a negative storm effect, and/or an increase of TEC, i.e. a positive storm effect. The aim of this project was to investigate the ionospheric response to strong storms (Dst < -100 nT) between 2011 and 2015, using TEC and scintillation measurements derived from GPS receivers as well as SuperDARN power, Doppler velocity and convection maps. In this study, the ionosphere's response to geomagnetic storms was found to depend on the magnitude and time of occurrence of the storm. The ionospheric TEC results of this study show that most of the storm effects observed were a combination of both negative and positive per storm per station (77.8%), and only 8.9% and 13.3% of effects on TEC were negative and positive respectively. The highest number of storm effects occurred in autumn (36.4%), while 31.6%, 28.4% and 3.6% occurred in winter, spring and summer respectively. During the storms studied, 71.4% had phase scintillation in the range of 0.7 - 1 radians, and only 14.3% of the storms had amplitude scintillations near 0.4. The storms studied at SANAE station generated TIDs with periods of less than an hour and amplitudes in the range 0.2 - 5 TECU. These TIDs were found to originate from the high-velocity plasma flows, some of which are visible in SuperDARN convection maps. 
Early studies concluded that likely sources of these disturbances correspond to ionospheric current surges (Bristow et al., 1994) in the dayside auroral zone (Huang et al., 1998).
- Full Text:
- Date Issued: 2017
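As a hedged illustration of how a sub-hour TID period can be extracted from GPS TEC, the sketch below detrends a synthetic TEC series with a running mean and locates the dominant period by FFT. The cadence, the diurnal trend and the 40-minute, 1.5-TECU wave are invented values, not the SANAE data or the thesis pipeline:

```python
import numpy as np

dt = 30.0                                  # s, hypothetical GPS TEC cadence
t = np.arange(0.0, 6 * 3600, dt)           # a 6-hour synthetic series
background = 20 + 5 * np.sin(2 * np.pi * t / (24 * 3600))  # slow diurnal trend
tid = 1.5 * np.sin(2 * np.pi * t / (40 * 60))              # 40-min, 1.5-TECU TID
tec = background + tid

# Detrend with a 2-hour running mean, then drop the edges the smoothing
# distorts, and locate the dominant TID period in the residual by FFT.
kernel = int(2 * 3600 / dt)
trend = np.convolve(tec, np.ones(kernel) / kernel, mode="same")
dtec = (tec - trend)[kernel:-kernel]

freqs = np.fft.rfftfreq(dtec.size, d=dt)
power = np.abs(np.fft.rfft(dtec)) ** 2
peak_period_min = 1.0 / freqs[1:][np.argmax(power[1:])] / 60.0
```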
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of nearby supermassive black holes, Sgr A* and M87, on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~10 µarcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; as well as antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the meqsilhouette software package, which is focused on simulating realistic EHT data. It has the capability to simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere and the interstellar medium, as well as primary beams and associated antenna pointing errors. We demonstrate through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as to gain insight into possible systematic uncertainties.
- Full Text:
- Date Issued: 2017
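The style of corruption the simulator applies can be illustrated with a toy radio interferometer measurement equation: antenna-based complex gains (a stand-in for tropospheric phase noise and pointing-induced amplitude errors) multiplied onto model visibilities. The gain statistics below are invented, and this is not the meqsilhouette API:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant = 4
model = np.ones((n_ant, n_ant), dtype=complex)   # point source at phase centre

# Hypothetical antenna-based corruptions: tropospheric phase noise plus a
# small per-station amplitude error (e.g. from pointing offsets).
phase = rng.normal(0.0, 0.3, n_ant)              # radians
amp = 1.0 + rng.normal(0.0, 0.05, n_ant)
g = amp * np.exp(1j * phase)

# Measurement equation per baseline pq: V_obs = g_p * V_model * conj(g_q)
observed = g[:, None] * model * np.conj(g)[None, :]

# Antenna-based terms cancel in the closure phase, which is why closure
# quantities are robust observables for mm-VLBI arrays like the EHT.
closure = np.angle(observed[0, 1] * observed[1, 2] * np.conj(observed[0, 2]))
```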
Nonlinear optical responses of phthalocyanines in the presence of nanomaterials or when embedded in polymeric materials
- Authors: Bankole, Owolabi Mutolib
- Date: 2017
- Subjects: Phthalocyanines , Phthalocyanines -- Optical properties , Alkynes , Triazoles , Nonlinear optics , Photochemistry , Complex compounds , Amines , Mercaptopyridine
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/45794 , vital:25548
- Description: This work describes the synthesis, photophysical and nonlinear optical characterizations of alkynyl Pcs (1, 2, 3, 8 and 9), 1,2,3-triazole ZnPc (4), mercaptopyridine Pcs (5, 6 and 7) and amino Pcs (10 and 11). Complexes 1, 2, 4, 7, 8, 9 and 11 were newly synthesized and characterized using techniques including 1H-NMR, MALDI-TOF, UV-visible spectrophotometry, FTIR and elemental analysis. The results of the characterizations were in good agreement with their molecular structures, and confirmed the purity of the new molecules. Complex 10 was covalently linked to pristine (GQDs), nitrogen-doped (NGQDs), and sulfur-nitrogen co-doped (SNGQDs) graphene quantum dots; gold nanoparticles (AuNPs); poly(acrylic acid) (PAA); and Fe3O4@Ag core-shell and Fe3O4-Ag hybrid nanoparticles. Complex 11 was linked to AgxAuy alloy nanoparticles via NH2-Au and/or Au-S bonding, while 2 and 3 were linked to gold nanoparticles (AuNPs) via click reactions. Evidence of successful conjugation of 2, 3, 10 and 11 to nanomaterials was revealed in the UV-vis, EDS, TEM, XRD and XPS spectra. Optical limiting (OL) responses of the samples were evaluated using the open-aperture Z-scan technique at 532 nm with 10 ns radiation, in solution or when embedded in polymer mixtures. The Z-scan data for the studied samples fitted a two-photon absorption (2PA) mechanism, but the Pcs and Pc-nanomaterial or polymer composites also exhibit multi-photon absorption mechanisms, aided by the triplet-triplet population, giving rise to reverse saturable absorption (RSA). Phthalocyanines doped in polymer matrices showed larger nonlinear absorption coefficients (βeff), third-order susceptibility (Im[χ(3)]) and second-order hyperpolarizability (γ), with an accompanying lower intensity threshold (Ilim), than in solution. Aggregation in DMSO negatively affected the NLO behaviour of Pcs (8 as a case study) at low laser power, which improved at relatively higher laser power. 
Heavy-atom-substituted Pcs (6) showed enhanced NLO and OL properties compared with lighter-atom analogues such as 5 and 7. A direct relationship between enhanced photophysical properties and the nonlinear effects favoured by excited triplet absorption of 2, 3, 10 and 11 in the presence of nanomaterials was established. The major factors responsible for the enhanced nonlinearities of 10 in the presence of NGQDs and SNGQDs were described and attributed to the surface defects caused by the presence of heteroatoms such as nitrogen and sulfur. The studies showed that phthalocyanine-nanomaterial composites are useful in applications such as optical switching, pulse compression and laser pulse narrowing.
- Full Text:
- Date Issued: 2017
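For orientation, the open-aperture Z-scan analysis mentioned above reduces, at lowest order for 2PA, to fitting the standard transmittance curve T(z) = 1 - q0 / (2^(3/2) (1 + z²/z0²)), where q0 is proportional to βeff. The sketch below recovers q0 from a synthetic noiseless scan; the scan range and parameter values are invented, not the measured data:

```python
import numpy as np

def open_aperture_T(z, q0, z0):
    """Lowest-order open-aperture Z-scan transmittance for 2PA (valid q0 < 1)."""
    x = z / z0
    return 1.0 - q0 / (2 ** 1.5 * (1.0 + x ** 2))

# Synthetic scan with invented parameters (q0 is proportional to beta_eff).
z = np.linspace(-30, 30, 121)          # mm, hypothetical scan positions
true_q0, z0 = 0.4, 5.0
data = open_aperture_T(z, true_q0, z0)

# For fixed z0 the model is linear in q0: T = 1 - q0 * b(z), so a one-
# parameter least-squares fit recovers q0 directly.
b = 1.0 / (2 ** 1.5 * (1.0 + (z / z0) ** 2))
q0_fit = np.sum(b * (1.0 - data)) / np.sum(b * b)
```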
Real-time audio spectrum analyser research, design, development and implementation using the 32-bit ARM® Cortex-M4 microcontroller
- Authors: Just, Stefan Antonio
- Date: 2017
- Subjects: Spectrum analyzers , Sound -- Recording and reproducing -- Digital techniques , Real-time data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/50536 , vital:25997
- Description: This thesis describes the design and testing of a low-cost hand-held real-time audio analyser (RTAA). This includes the design of an embedded system, the development of the firmware executed by the embedded system, and the implementation of real-time signal processing algorithms. One of the objectives of this project was to design a low-cost alternative to the currently available commercial audio analysers. The device was tested with the standard audio test signal (pink noise) and compared to the expected flat-spectrum response corresponding to a balanced audio system. The design makes use of a 32-bit Reduced Instruction Set Computer (RISC) processor core (ARM Cortex-M4), namely the STM32F4 family of microcontrollers. Due to the pin compatibility of the microcontroller (designed and manufactured by STMicroelectronics), the development board can also be upgraded with the newly released Cortex-M7 core, namely the STM32F7 family of microcontrollers. Moreover, the low-cost hardware design features 256 kB of Random Access Memory (RAM); an on-board Micro-Electro-Mechanical System (MEMS) microphone; on-chip 12-bit Analogue-to-Digital (A/D) and Digital-to-Analogue (D/A) converters; a 3.2" Thin-Film-Transistor Liquid-Crystal Display (TFT-LCD) with a resistive touch screen sensor; and an SD-card socket. Furthermore, two additional expansion modules were designed to extend the functionality of the real-time audio analyser. The first is an audio/video module featuring a professional 24-bit, 192 kHz sampling rate audio CODEC; a balanced microphone input; an unbalanced line output; three MEMS microphone inputs; a headphone output; and a Video Graphics Array (VGA) controller allowing the display of the analysed audio spectrum on either a projector or a monitor. The second expansion module features two external memories: 1 MB of Static Random Access Memory (SRAM) and 16 MB of Synchronous Dynamic Random Access Memory (SDRAM). 
While the two additional expansion modules were not fully utilised by the firmware presented in this thesis, future firmware revisions will provide a higher-performing and more accurate analysis of the audio spectrum. The full research and design process for the real-time audio analyser is discussed; problems and pitfalls with the final implemented design are highlighted and possible resolutions investigated. The development costs (excluding labour) are given in the form of a bill of materials (BOM), with the total cost averaging around R1000. Moreover, the additional VGA controller could further decrease the overall cost by allowing the removal of the TFT-LCD screen from the audio analyser, provided the external display is not counted in the BOM.
- Full Text:
- Date Issued: 2017
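The core of the pink-noise test described above, namely that pink noise should read approximately flat when power is summed per octave band, can be sketched offline in NumPy. The synthetic pink-noise generator, sample rate and band edges are invented illustrative choices, not the Cortex-M4 firmware:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 48000, 1 << 16
# Approximate pink noise: shape white noise by 1/sqrt(f) in the frequency
# domain so that power density falls as 1/f.
spec = np.fft.rfft(rng.normal(size=n))
f = np.fft.rfftfreq(n, 1 / fs)
spec[1:] /= np.sqrt(f[1:])
spec[0] = 0.0
pink = np.fft.irfft(spec, n)

# An RTA displays power per octave band; for 1/f noise each octave carries
# equal power, so the display should be roughly flat.
power = np.abs(np.fft.rfft(pink)) ** 2
edges = 62.5 * 2.0 ** np.arange(9)              # 62.5 Hz ... 16 kHz
band_power = [power[(f >= lo) & (f < hi)].sum()
              for lo, hi in zip(edges[:-1], edges[1:])]
spread_db = 10 * np.log10(max(band_power) / min(band_power))
```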
Thermoluminescence of synthetic quartz annealed beyond its second phase inversion temperature
- Authors: Mthwesi, Zuko
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/46077 , vital:25577
- Description: Thermoluminescence of synthetic quartz annealed at 1000 ºC for 10 minutes has been studied. The aim was to study the mechanisms of thermoluminescence in annealed synthetic quartz and to discuss the results in terms of the physics of point defects. The sample was irradiated with a dose of 10 Gy of beta radiation and then heated at a linear heating rate of 1 ºC·s⁻¹ up to 500 ºC. The thermoluminescence (TL) glow curve consists of three glow peaks: peak I at 74 ºC (the main peak), which is more intense than the other two, and peak II at 144 ºC, which is more intense than peak III at 180 ºC. This study concentrated on the main peak at 74 ºC and peak III at 180 ºC. Kinetic analysis was carried out to determine the trap depth E, the frequency factor s and the order of kinetics b of both peaks using the initial rise, peak shape, variable heating rate, glow curve deconvolution and isothermal TL methods. The kinetic parameters obtained were around 0.7 to 1.0 eV for the trap depth and in the interval 10⁸ to 10¹⁵ s⁻¹ for the frequency factor, for both peaks. The effect of heating rate, from 0.5 to 5 ºC·s⁻¹, on the TL peak intensity and peak temperature was observed, and thermal quenching was observed at high heating rates. Since the TL glow curve has overlapping peaks, the Tm-Tstop method (from 54 ºC up to 64 ºC) and the E-Tstop method were applied, and a first-order single peak was observed. Phototransferred thermoluminescence (PTTL) was investigated and is characterized by three peaks: PTTL peak I at 72 ºC, peak II at 134 ºC and peak III at 176 ºC. Analysis was carried out on peaks I and III for the effect of dose dependence from 20-200 Gy. Thermal fading of PTTL peaks I and III was observed after a storage time of 30 minutes.
- Full Text:
- Date Issued: 2017
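As a worked illustration of one kinetic-analysis method listed above, the sketch below generates a first-order (Randall-Wilkins) glow peak and recovers the trap depth E by the initial-rise method, using the fact that ln I is proportional to -E/kT on the low-temperature tail. The parameter values (E = 0.95 eV, s = 10¹² s⁻¹) are plausible invented numbers, not the thesis results:

```python
import numpy as np

k = 8.617e-5                       # Boltzmann constant, eV/K

def first_order_tl(T, E, s, beta, n0=1.0):
    """Randall-Wilkins first-order glow curve I(T) at heating rate beta (K/s)."""
    # n(T) = n0 * exp(-(s/beta) * integral of exp(-E/kT') dT'); trapezoidal rule.
    p = np.exp(-E / (k * T))
    cumint = np.concatenate(
        ([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(T))))
    return n0 * s * p * np.exp(-(s / beta) * cumint)

T = np.arange(300.0, 500.0, 0.1)   # K
E_true, s, beta = 0.95, 1e12, 1.0  # eV, 1/s, K/s (invented, quartz-like)
I = first_order_tl(T, E_true, s, beta)

# Initial-rise method: fit ln I against 1/T on the rising tail (< 10% of max),
# where trap depletion is still negligible; the slope is -E/k.
tail = (I < 0.1 * I.max()) & (T < T[np.argmax(I)])
slope = np.polyfit(1.0 / T[tail], np.log(I[tail]), 1)[0]
E_est = -slope * k
```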
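The initial-rise method named above exploits the fact that, on the low-temperature side of a glow peak, the intensity grows as I ∝ exp(−E/kT), so the trap depth E follows from the slope of ln I against 1/kT. A minimal sketch on synthetic data with an assumed trap depth (not the thesis measurements):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_energy(T_celsius, intensity):
    """Estimate trap depth E (eV) from the initial-rise region of a glow peak.

    Fits ln(I) against 1/(kT); the slope of that line is -E.
    """
    T = np.asarray(T_celsius) + 273.15           # convert to kelvin
    x = 1.0 / (K_B * T)
    slope, _ = np.polyfit(x, np.log(intensity), 1)
    return -slope

# Synthetic initial-rise data generated for an assumed trap depth of 0.85 eV
E_true = 0.85
T_demo = np.linspace(30.0, 55.0, 20)             # degrees C, low-T tail only
I_demo = 1e12 * np.exp(-E_true / (K_B * (T_demo + 273.15)))

E_est = initial_rise_energy(T_demo, I_demo)
```

Because the synthetic tail is exactly exponential, the fit recovers the assumed depth; on real data the method is applied only to the first few percent of the peak's rise, where retrapping is negligible.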
Beta decay of ¹⁰⁰Zr produced in neutron-induced fission of natural uranium
- Authors: Kamoto, Thokozani
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3024 , vital:20353
- Description: Fission fragments, produced by neutron bombardment of natural uranium at the Physics Department, Jyväskylä, Finland, are studied in this work. The data had been sorted into 25 γ-γ coincidence matrices which were then analysed. In this work we aimed to identify the fission products using γ-γ coincidence analysis and then study the beta decay of some of the fission products. Sixteen fission products ranging from A = 94 to A = 136 were identified. Of these fission products, the beta decay of the A = 100 (¹⁰⁰Zr – ¹⁰⁰Nb – ¹⁰⁰Mo) chain was studied in greater detail. We have also studied the variation of the relative intensities as a function of time of the 159-, 528-, 600-, 768-, 928- and 1502-keV γ-ray lines in ¹⁰⁰Mo, and the profiles of the relative intensities have been modelled with the variation of the activity of ¹⁰⁰Nb against time. Configuration assignments of ¹⁰⁰Zr and ¹⁰⁰Mo are discussed.
- Full Text:
- Date Issued: 2016
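Modelling line intensities against the activity of ¹⁰⁰Nb, as described above, amounts to the two-member Bateman equation for a parent-daughter decay chain. A sketch with assumed illustrative decay constants (not fitted values from the thesis):

```python
import numpy as np

def daughter_activity(t, n_parent0, lam1, lam2):
    """Activity of the daughter in a parent -> daughter decay chain (Bateman).

    lam1, lam2 are the decay constants (1/s) of parent and daughter;
    n_parent0 is the initial number of parent nuclei.
    """
    n2 = n_parent0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return lam2 * n2

# Assumed illustrative half-lives for the parent and daughter (seconds)
lam_parent = np.log(2) / 7.1
lam_daughter = np.log(2) / 1.5

t = np.linspace(0.0, 60.0, 601)
activity = daughter_activity(t, 1.0e6, lam_parent, lam_daughter)
```

The daughter activity starts at zero, grows as the parent feeds it, and then decays away with the chain; the relative γ-line intensity profiles follow the same shape.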
Calibration and wide field imaging with PAPER: a catalogue of compact sources
- Authors: Philip, Liju
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2397 , vital:20285
- Description: Observations of the redshifted 21 cm HI line promise to be a formidable tool for cosmology, allowing the investigation of the end of the so-called dark ages, when the first galaxies formed, and the subsequent Epoch of Reionization when the intergalactic medium transitioned from neutral to ionized. Such observations are plagued by foreground emission which is a few orders of magnitude brighter than the 21 cm line. In this thesis I analyzed data from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in order to improve the characterization of the extragalactic foreground component. I derived a catalogue of unresolved radio sources down to a 5 Jy flux density limit at 150 MHz and derived their spectral index distribution using literature data at 408 MHz. I implemented advanced techniques to calibrate radio interferometric data that led to a few percent accuracy on the flux density scale of the derived catalogue. This work, therefore, represents a further step towards creating an accurate, global sky model that is crucial to improve calibration of Epoch of Reionization observations.
- Full Text:
- Date Issued: 2016
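The spectral-index derivation above, using flux densities at 150 MHz and 408 MHz, rests on the power-law assumption S ∝ ν^α, so α follows from two measurements. A sketch with a hypothetical source (these flux values are not catalogue entries):

```python
import math

def spectral_index(s1_jy, nu1_mhz, s2_jy, nu2_mhz):
    """Two-point spectral index alpha, defined via S proportional to nu**alpha."""
    return math.log(s1_jy / s2_jy) / math.log(nu1_mhz / nu2_mhz)

# Hypothetical source: 12 Jy at 150 MHz and 6 Jy at 408 MHz
alpha = spectral_index(12.0, 150.0, 6.0, 408.0)
```

A negative α, as here, is the steep spectrum typical of the synchrotron-dominated extragalactic sources that make up the foreground discussed above.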
Classical and quantum picture of the interior of two-dimensional black holes
- Authors: Shawa, Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3629 , vital:20531
- Description: A quantum-mechanical description of black holes would represent the final step in our understanding of the nature of space-time. However, any progress towards that end is usually foiled by the persistent space-time singularities that exist at the center of black holes. From the four-dimensional point of view, black holes seem to resist quantization. Under highly symmetric conditions, all higher-dimensional black holes are effectively two-dimensional. Unlike their higher-dimensional counterparts, two-dimensional black holes may not resist quantization. A non-trivial description of gravity in two dimensions is not possible using Einstein’s theory of gravity alone. However, we may still arrive at a consistent description of gravity by introducing a scalar field known as the dilaton. In this thesis, we study both the classical and quantum aspects of the interior of two-dimensional black holes using a generalized dilaton-gravity theory. Classically, we will find that the interior of most two-dimensional black holes is not much different from that of four-dimensional black holes. But by introducing quantized matter into the theory, the fluctuations in space-time will give a different picture of the interior structure of black holes. Using a low-energy effective field theory, we will show that it is indeed possible to identify quantum modes in the interior of black holes and perform quantum-mechanical calculations near the singularity.
- Full Text:
- Date Issued: 2016
Single station TEC modelling during storm conditions
- Authors: Uwamahoro, Jean Claude
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3812 , vital:20545
- Description: Modelling total electron content (TEC) during storm conditions is known to be a major challenge in ionospheric research. In this study, mathematical equations were developed to estimate TEC over Sutherland (32.38°S, 20.81°E) during storm conditions, using Empirical Orthogonal Function (EOF) analysis combined with regression analysis. TEC was derived from GPS observations and a geomagnetic storm was defined by Dst ≤ -50 nT. The inputs for the model were chosen based on the factors that influence TEC variation, namely diurnal, seasonal, solar and geomagnetic activity variation, represented by hour of the day, day number of the year, F10.7 and the A index respectively. The EOF model was developed using GPS TEC data from 1999 to 2013 and tested on different storms. For model validation (interpolation), three storms were chosen in 2000 (solar maximum period) and three others in 2006 (solar minimum period), while for extrapolation six storms were chosen, three in 2014 and three in 2015. Before building the model, TEC values for the selected 2000 and 2006 storms were removed from the dataset used to construct the model, in order to make the validation independent of the training data. A comparison of the observed and modelled TEC showed that the EOF model works well for storms with a non-significant ionospheric TEC response and for storms that occurred during periods of low solar activity. High correlation coefficients between the observed and modelled TEC were obtained, showing that the model captures most of the information contained in the observed TEC. Furthermore, it was shown that the EOF model developed for a specific station may be used to estimate TEC over other locations within a latitudinal and longitudinal coverage of 8.7° and 10.6° respectively. This is an important result as it reduces the data dimensionality problem for computational purposes.
It may therefore not be necessary for regional storm-time TEC modelling to compute TEC data for all the closest GPS receiver stations since most of the needed information can be extracted from measurements at one location.
- Full Text:
- Date Issued: 2016
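EOF analysis of the kind used above is commonly implemented as a singular value decomposition of the mean-removed data matrix; the singular values give each mode's share of the variance. A sketch on synthetic data (the station data and regression inputs above are not reproduced here):

```python
import numpy as np

def eof_decompose(tec, n_modes):
    """Decompose a (day x hour) TEC matrix into its leading EOF modes via SVD.

    Returns the temporal coefficients, singular values, spatial patterns,
    and the fraction of total variance carried by each retained mode.
    """
    anomaly = tec - tec.mean(axis=0)             # remove the mean diurnal pattern
    u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    return u[:, :n_modes], s[:n_modes], vt[:n_modes], var_frac[:n_modes]

# Synthetic example: 365 "days" x 24 "hours" with one dominant diurnal mode
rng = np.random.default_rng(0)
hours = np.arange(24)
base = 10.0 * np.sin(np.pi * hours / 24.0)       # idealised diurnal TEC shape
amps = 1.0 + 0.3 * rng.standard_normal(365)      # day-to-day amplitude variation
tec = np.outer(amps, base) + 0.1 * rng.standard_normal((365, 24))

u, s, vt, var_frac = eof_decompose(tec, 3)
```

Because the synthetic field has a single dominant mode, the first EOF absorbs most of the variance; in the regression step described above, the temporal coefficients would then be fitted against the chosen solar and geomagnetic drivers.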
The EPR paradox: back from the future
- Authors: Bryan, Kate Louise Halse
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2881 , vital:20338
- Description: The Einstein-Podolsky-Rosen (EPR) thought experiment produced a problem regarding the interpretation quantum mechanics provides for entangled systems. Although the thought experiment was reformulated mathematically in Bell's Theorem, the conclusion regarding entanglement correlations is still debated today. In an attempt to explain how entangled systems maintain their correlations, this thesis investigates the theory of post-state teleportation as a possible interpretation of how information moves between entangled systems without resorting to nonlocal action. Post-state teleportation describes a method of communicating to the past via a quantum information channel. The resulting picture of the EPR thought experiment relied on information propagating backward from a final boundary condition to ensure all correlations were maintained. Similarities were found between this resolution of the EPR paradox and the final-state solution to the black hole information paradox and the closely related firewall problem. The latter refers to an apparent conflict between unitary evaporation of a black hole and the strong subadditivity condition. The use of observer complementarity allows this solution of the black hole problem to be shown to be the same as a seemingly different solution known as “ER=EPR”, where ‘ER’ refers to an Einstein-Rosen bridge or wormhole.
- Full Text:
- Date Issued: 2016
Thermoluminescence of annealed synthetic quartz
- Authors: Atang, Elizabeth Fende Midiki
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/420 , vital:19957
- Description: The kinetic and dosimetric features of the main thermoluminescent peak of synthetic quartz have been investigated in quartz ordinarily annealed at 500 ºC as well as in quartz annealed at 500 ºC for 10 minutes. The main peak is found at 78 ºC for the samples annealed at 500 ºC for 10 minutes, irradiated to 10 Gy and heated at 1.0 ºC/s. For the samples ordinarily annealed at 500 ºC, the main peak is found at 106 ºC after the sample has been irradiated to 30 Gy and heated at 5.0 ºC/s. In these samples, the intensity of the main peak is enhanced with repetitive measurement whereas its maximum temperature is unaffected. The peak position of the main peak is independent of the irradiation dose and this, together with its fading characteristics, is consistent with first-order kinetics. For doses between 5 and 25 Gy, the dose response of the main peak of the annealed sample is superlinear. The half-life of the main TL peak of the annealed sample is about 1 h. The activation energy E of the main peak is around 0.90 eV. For a heating rate of 0.4 ºC/s, its order of kinetics b derived from the whole-curve method of analysis is 1.0. Following irradiation, preheating and illumination with 470 nm blue light, the main peak in the annealed sample is regenerated during heating. The resulting phototransferred peak occurs at the same temperature as the original peak and has similar kinetic and dosimetric features, with a half-life of about 1 h. For a preheat temperature of 200 ºC, the intensity of the phototransferred peak increases with illumination time up to a maximum and decreases thereafter. At longer illumination times, no further decrease in the intensity of the phototransferred peak is observed. The traps associated with the 325 ºC peak are the main source of the electrons responsible for the regenerated peak.
- Full Text:
- Date Issued: 2016
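A first-order glow peak of the kind described above follows the Randall-Wilkins expression I(T) = n₀ s exp(−E/kT) exp(−(s/β)∫exp(−E/kT′)dT′). A numerical sketch with assumed parameters near the reported activation energy of about 0.90 eV (the frequency factor below is an illustrative guess, not a thesis value):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def first_order_glow(T_kelvin, E, s, beta, n0=1.0):
    """Randall-Wilkins first-order glow curve I(T) for heating rate beta (K/s)."""
    boltz = np.exp(-E / (K_B * T_kelvin))
    # cumulative trapezoidal integral of exp(-E/kT') from the start of the ramp
    dT = np.diff(T_kelvin)
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * dT)))
    return n0 * s * boltz * np.exp(-(s / beta) * integral)

# Assumed parameters: E = 0.90 eV, s = 1e12 1/s, heating rate 0.4 K/s
T = np.linspace(300.0, 450.0, 1500)              # kelvin
I = first_order_glow(T, E=0.90, s=1.0e12, beta=0.4)
T_max = T[np.argmax(I)]
```

The asymmetric shape (steep fall above the maximum) and the dose-independent peak position are the signatures of first-order kinetics noted in the abstract.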
A light-emitting-diode pulsing system for measurement of time-resolved luminescence
- Authors: Uriri, Solomon Akpore
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20976 , http://hdl.handle.net/10962/5788
- Description: A new light-emitting-diode-based pulsing system for measurement of time-resolved luminescence has been developed. The light-emitting diodes are pulsed at various pulse widths by a 555 timer operated as a monostable multivibrator. The diodes are arranged in a dural holder and connected in parallel in sets of four, each set containing four diodes in series. The output pulse from the 555 timer is fed into a 2N7000 MOSFET to produce a pulse current of 500 mA to drive the set of 16 light-emitting diodes. This current is sufficient to drive each diode at a pulse current of 90 mA, with a possible maximum of 110 mA per diode. A multichannel scaler is used to trigger the pulsing system and to record data at selectable dwell times. The system is capable of generating pulse widths from the microsecond range upwards.
- Full Text:
- Date Issued: 2015
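For a 555 timer in monostable mode, as used above, the output pulse width is set by the external timing resistor and capacitor through t = 1.1RC. A trivial sketch with hypothetical component values (the abstract does not quote R and C):

```python
def monostable_pulse_width(r_ohms, c_farads):
    """Output pulse width (s) of a 555 timer in monostable mode: t = 1.1*R*C."""
    return 1.1 * r_ohms * c_farads

# Hypothetical components for a pulse of roughly 50 microseconds:
# R = 4.7 kilohm, C = 10 nF
width = monostable_pulse_width(4.7e3, 10e-9)
```

Swapping the timing capacitor or resistor therefore gives the range of selectable pulse widths, from microseconds upwards, that the abstract describes.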
Assignment of spin and parity to states in the nucleus ¹⁹⁶Tl
- Authors: Uwitonze, Pierre Celestin
- Date: 2015
- Subjects: Nuclear spin , Particles (Nuclear physics) -- Chirality
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5558 , http://hdl.handle.net/10962/d1017903
- Description: This work presents a study of high-spin states in the nucleus ¹⁹⁶Tl via γ-spectroscopy. ¹⁹⁶Tl was produced via the ¹⁹⁷Au(⁴He,5n)¹⁹⁶Tl reaction at a beam energy of 63 MeV. The γ-γ coincidence measurements were performed using the AFRODITE γ-spectrometer array at iThemba LABS. The previous level scheme of ¹⁹⁶Tl has been extended up to an excitation energy of 4071 keV, including 24 new γ-ray transitions. Spins and parities were assigned to levels from the directional correlation of oriented nuclei (DCO) and linear polarization anisotropy ratios. An analysis of the B(M1)/B(E2) ratios was found to be consistent with the πh₉/₂ ⊗ νi₁₃/₂ configuration for the ground-state band. No chiral bands were found in ¹⁹⁶Tl and ¹⁹⁸Tl.
- Full Text:
- Date Issued: 2015