Foreground simulations for observations of the global 21-cm signal
- Authors: Klutse, Diana
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Electromagnetic waves , Radiation, Background
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76398 , vital:30557
- Description: The sky-averaged (global) spectrum of the redshifted 21-cm line promises to be a direct probe of the Dark Ages, the period before the first luminous sources formed, and of the Epoch of Reionization, during which these sources produced enough ionizing photons to ionize the neutral intergalactic medium. However, observations of this signal are contaminated both by astrophysical foregrounds, which are orders of magnitude brighter than the cosmological signal, and by non-astrophysical and non-ideal instrumental effects. It is therefore crucial to understand all these data components and their impact on the cosmological signal for successful signal extraction. With this in mind, we investigated the impact that the small-scale spatial structure of the diffuse Galactic foreground has on the foreground spectrum as seen by a global 21-cm observation. We simulated two different sets of observations using a realistic dipole beam model and two synchrotron foreground templates that differ from each other in their small-scale structure: the original 408 MHz all-sky map by Haslam et al. (1982) and a version whose calibration was improved to remove artifacts and point sources (Remazeilles et al., 2015). We generated simulated foreground spectra and modeled them using a polynomial expansion in frequency. We found that the different foreground templates have a modest impact on the simulated spectra, generating differences of up to 2% in the root mean square of the residual spectra after the log-polynomial best fit was subtracted.
- Full Text:
- Date Issued: 2019
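As an illustration of the log-polynomial foreground fit described in the abstract above, here is a minimal sketch (not the thesis code; the power-law index, band, and small-scale modulation are invented for the example) that fits a polynomial in log-frequency to a synthetic synchrotron-like spectrum and reports the rms of the residuals:

```python
import numpy as np

# Synthetic foreground: a power law with a small (0.1%) spectral ripple
# standing in for unmodelled small-scale structure. Values are illustrative.
freq = np.linspace(50.0, 100.0, 256)            # MHz
T_fg = 300.0 * (freq / 150.0) ** -2.5           # brightness temperature (K)
T_fg = T_fg * (1.0 + 0.001 * np.sin(freq / 3.0))

# Log-polynomial model: log10(T) = sum_i a_i * log10(nu)^i
x, y = np.log10(freq), np.log10(T_fg)
coeffs = np.polyfit(x, y, deg=4)
residual = T_fg - 10.0 ** np.polyval(coeffs, x)  # residual spectrum (K)
rms = np.sqrt(np.mean(residual ** 2))
print(f"rms of residual spectrum: {rms:.3f} K")
```

The polynomial absorbs the smooth power law exactly (it is linear in log-log space), so the residual rms is dominated by the spectral structure the fit cannot follow.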
A global ionospheric F2 region peak electron density model using neural networks and extended geophysically relevant inputs
- Authors: Oyeyemi, Elijah Oyedola
- Date: 2006
- Subjects: Neural networks (Computer science) Ionospheric electron density Ionosphere Ionosphere -- Mathematical models
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5470 , http://hdl.handle.net/10962/d1005255
- Description: This thesis presents my research on the development of a neural network (NN) based global empirical model of the ionospheric F2 region peak electron density using extended geophysically relevant inputs. The main principle behind this approach has been to utilize parameters other than simple geographic co-ordinates, on which the F2 peak electron density is known to depend, and to exploit the technique of NNs, thereby establishing and modeling the non-linear dynamic processes (both in space and time) associated with the F2 region electron density on a global scale. Four different models have been developed in this work: the foF2 NN model, the M(3000)F2 NN model, a short-term forecasting foF2 NN model, and a near-real time foF2 NN model. Data used in the training of the NNs were obtained from worldwide ionosonde stations spanning the period 1964 to 1986, based on availability, and included all periods of calm and disturbed magnetic activity. Common input parameters used in the training of all four models are day number (day of the year, DN), Universal Time (UT), a 2-month running mean of the sunspot number (R2), a 2-day running mean of the 3-hour planetary magnetic index ap (A16), solar zenith angle (CHI), geographic latitude (q), magnetic dip angle (I), angle of magnetic declination (D), and angle of meridian relative to subsolar point (M). For the short-term and near-real time foF2 models, additional input parameters related to recent past observations of foF2 itself were included in the training of the NNs. The results of the foF2 NN model and M(3000)F2 NN model presented in this work, which compare favourably with the IRI (International Reference Ionosphere) model, successfully demonstrate the potential of NNs for spatial and temporal modeling of the ionospheric parameters foF2 and M(3000)F2 globally.
The results obtained from the short-term foF2 NN model and near-real time foF2 NN model reveal that, in addition to the temporal and spatial input variables, short-term forecasting of foF2 is much improved by including past observations of foF2 itself. Results obtained from the near-real time foF2 NN model also reveal that there exists a correlation between measured foF2 values at different locations across the globe. Again, comparisons of the foF2 NN model and M(3000)F2 NN model predictions with the IRI model predictions and observed values at selected high-latitude stations suggest that the NN technique can successfully be employed to model the complex irregularities associated with the high-latitude regions. Based on the results obtained in this research and the comparison made with the IRI model (URSI and CCIR coefficients), these results justify consideration of the NN technique for the prediction of global ionospheric parameters. I believe that, after consideration by the IRI community, these models will prove to be valuable to both the high frequency (HF) communication and worldwide ionospheric communities.
- Full Text:
- Date Issued: 2006
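Temporal NN inputs such as day number and Universal Time are cyclic quantities, and in the ionospheric NN literature they are commonly split into sine/cosine pairs before training so that, for example, day 365 and day 1 map to nearby points in input space. A minimal sketch of that preprocessing step (an assumed common practice, not code from this thesis):

```python
import numpy as np

def cyclic_pair(value, period):
    """Encode a cyclic quantity as its (sin, cos) components so that values
    near the ends of the period are close together in input space."""
    angle = 2.0 * np.pi * value / period
    return np.sin(angle), np.cos(angle)

# Hypothetical sample: day number 355 of a 365.25-day year, 14.5 h UT.
dn_s, dn_c = cyclic_pair(355, 365.25)
ut_s, ut_c = cyclic_pair(14.5, 24.0)
print(dn_s, dn_c, ut_s, ut_c)
```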
An analysis of sources and predictability of geomagnetic storms
- Authors: Uwamahoro, Jean
- Date: 2011
- Subjects: Ionospheric storms Solar flares Interplanetary magnetic fields Magnetospheric substorms Coronal mass ejections Space environment Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5451 , http://hdl.handle.net/10962/d1005236
- Description: Solar transient eruptions are the main cause of the interplanetary-magnetospheric disturbances leading to the phenomena known as geomagnetic storms (GMS). Eruptive solar events such as coronal mass ejections (CMEs) are currently considered the main cause of GMS, which are strong perturbations of the Earth’s magnetic field that can affect space-borne and ground-based technological systems. The solar-terrestrial impact on modern technological systems is commonly known as space weather. Part of the research described in this thesis was to investigate and establish a relationship between GMS (periods with Dst ≤ −50 nT) and their associated solar and interplanetary (IP) properties during solar cycle (SC) 23. Geoeffective solar and IP properties, whether or not associated with CMEs, were investigated and used to qualitatively characterise both intense and moderate storms. The results of this analysis specifically provide an estimate of the main sources of GMS during an average 11-year solar activity period. This study indicates that during SC 23 the majority of intense GMS (83%) were associated with CMEs, while storms not associated with CMEs were dominant among moderate storms. GMS phenomena are the result of a complex, non-linear chaotic system involving the Sun, the IP medium, the magnetosphere and the ionosphere, which makes the prediction of these phenomena challenging. This thesis also explored the predictability of both the occurrence and the strength of GMS. Due to their non-linear driving mechanisms, the prediction of GMS was attempted by the use of neural network (NN) techniques, known for their non-linear modelling capabilities. To predict the occurrence of storms, a combination of solar and IP parameters was used as input to a NN model, which predicted the occurrence of GMS with a probability of 87%.
Using solar wind (SW) and IP magnetic field (IMF) parameters, a separate NN-based model was developed to predict storm-time strength as measured by the global Dst and ap geomagnetic indices, as well as by the locally measured K-index. The performance of the models was tested on data sets which were not part of the NN training process. The results obtained indicate that NN models provide a reliable alternative method for empirically predicting the occurrence and strength of GMS on the basis of solar and IP parameters. The demonstrated ability to predict the geoeffectiveness of solar and IP transient events is a key step towards improving space weather modelling and prediction.
- Full Text:
- Date Issued: 2011
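The storm-selection criterion quoted above (periods with Dst ≤ −50 nT) can be sketched as a simple scan over an hourly Dst series. The series and the intense/moderate split below (intense: minimum Dst ≤ −100 nT) are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Synthetic hourly Dst series (nT), for illustration only.
dst = np.array([-10, -25, -60, -120, -80, -45, -20, -5])

storm_mask = dst <= -50.0              # the Dst <= -50 nT storm criterion
storm_hours = np.flatnonzero(storm_mask)
min_dst = dst.min()

# A common (assumed) classification: moderate if -100 < min Dst <= -50 nT,
# intense if min Dst <= -100 nT.
if min_dst <= -100.0:
    category = "intense"
elif min_dst <= -50.0:
    category = "moderate"
else:
    category = "quiet"
print(storm_hours, category)  # hours 2-4 satisfy the criterion; storm is intense
```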
Combined spectral and stimulated luminescence study of charge trapping and recombination processes in α-Al2O3:C
- Authors: Nyirenda, Angel Newton
- Date: 2018
- Subjects: Luminescence , Thermoluminescence , Luminescence spectroscopy , Carbon-doped aluminium oxide , Radioluminescence , Time-resolved X-ray excited optical luminescence
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62683 , vital:28235
- Description: The main objective of this project was to gain a deeper understanding of the luminescence processes in α-Al₂O₃:C, a highly sensitive dosimetric material, using a combined spectral and stimulated luminescence study. The spectral studies concentrated on the emission spectra obtained using X-ray induced radioluminescence (XERL), X-ray excited thermoluminescence (XETL) and time-resolved X-ray excited optical luminescence (TR-XEOL) techniques. The stimulated luminescence studies were based on thermoluminescence (TL), optically stimulated luminescence (OSL) and phototransferred TL (PTTL) methods, which were used to study the radiation-induced defects at high beta doses and the deep traps, that is, traps with thermal depths beyond 500°C. The spectral and stimulated luminescence measurements were carried out using a high-sensitivity luminescence spectrometer and a Risø TL/OSL Model DA-20 Reader, respectively. The XERL emission spectrum measured at room temperature shows seven Gaussian peaks associated with F-centres (420 nm), F+-centres (334 nm), F2+-centres (559 nm), the Stokes vibronic band of Cr3+ (671 nm), the Cr3+ R-line emission (694 nm), the anti-Stokes vibronic band of Cr3+ (710 nm) and an unidentified emission band (260-300 nm) which we associate with hole recombination at a luminescence centre. The 694-nm R-line emission from Cr3+ impurity ions is most likely due to recombination of holes at Cr2+ during stimulated luminescence, and to intracentre excitation of Cr3+ by photon absorption in photoluminescence (PL). The Cr3+ emission decreases in intensity with repeated XERL measurements, whereas the intensity of the F-centre emission band is almost constant. Depending on the X-ray irradiation dose, holes and/or electrons may take part in the emission processes of peaks I (30-80°C), II (90-250°C) and III (250-320°C) during a TL readout, although electron recombination is dominant regardless of dose.
At higher doses, the XETL emission spectra indicate that the dominant band associated with TL peak III (250-320°C) in the material shifts from the F-centre to Cr3+. Using deep-trap OSL, it has been confirmed that the main TL trap is also the main OSL trap, whereas the TL traps lying in the temperature range 400-550°C constitute the secondary OSL traps. There is evidence of strong retrapping at the main trap during optical stimulation of charges from the secondary OSL traps and the deep traps, and this retrapping occurs via the delocalized bands. At high beta doses, aggregate defect centres, which significantly alter the TL and OSL properties, are induced in the material. The induced aggregate centres are completely removed by heating a sample to 700°C. The radiation-induced defects cause the main TL peak to shift towards higher temperatures, increase its FWHM, reduce its maximum intensity and cause an underestimation of both the activation energy and the order of kinetics of the peak. On the other hand, the OSL response of the material is enhanced following a high irradiation dose. During sample storage in the dark at ambient temperature, charges migrate from the deep traps (donors) to the main and intermediate traps (acceptors); the major donor traps in this charge transfer phenomenon lie between 500 and 600°C.
- Full Text:
- Date Issued: 2018
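The activation energy and order of kinetics mentioned in the abstract above are defined through glow-curve models; as background, here is a minimal sketch of the simplest such model, the first-order Randall-Wilkins glow peak. The parameter values are illustrative assumptions, and this is not the analysis actually performed in the thesis:

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant (eV/K)

def randall_wilkins(T, E, s, beta, n0=1.0):
    """First-order Randall-Wilkins TL glow peak: intensity versus temperature
    T (K) for trap depth E (eV), frequency factor s (1/s), and linear heating
    rate beta (K/s)."""
    integrand = np.exp(-E / (kB * T))
    dT = T[1] - T[0]
    integral = np.cumsum(integrand) * dT          # simple quadrature on a uniform grid
    return n0 * s * integrand * np.exp(-(s / beta) * integral)

# Hypothetical trap: E = 1.3 eV, s = 1e13 /s, heated at 1 K/s.
T = np.linspace(300.0, 700.0, 2000)
I = randall_wilkins(T, E=1.3, s=1e13, beta=1.0)
T_max = T[np.argmax(I)]
print(f"glow peak maximum near {T_max:.0f} K")
```

Fitting such a model to a measured glow curve is what yields the activation energy and kinetic order whose dose dependence the abstract describes.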
The dispersion measure in broadband data from radio pulsars
- Authors: Rammala, Isabella
- Date: 2019
- Subjects: Pulsars , Radio astrophysics , Astrophysics , Broadband communication systems
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/67857 , vital:29157
- Description: Modern radio telescopes make use of wideband receivers to take advantage of the broadband nature of radio pulsar emission. We ask how the use of such broadband pulsar data affects the measured pulsar dispersion measure (DM). Previous works have shown that, although the exact pulsar radio emission processes are not well understood, observations reveal evidence of a possible frequency dependence of the emission altitudes in the pulsar magnetosphere, a phenomenon known as radius-to-frequency mapping (RFM). This frequency dependence due to RFM can be embedded in the dispersive delay of the pulse profiles, which is normally interpreted as an interstellar effect (DM). We therefore interpret this intrinsic effect as an additional component δDM on top of the interstellar DM, and investigate how it can be statistically attributed to intrinsic profile evolution as well as to profile scattering. We make use of Monte Carlo simulations of beam models to simulate realistic pulsar beams of various geometries, from which we generate intrinsic profiles in various frequency bands. The results show that the excess DM due to intrinsic profile evolution is more pronounced at high frequencies, whereas scattering dominates the excess DM at low frequencies. The implications of these results are presented in relation to broadband pulsar timing.
- Full Text:
- Date Issued: 2019
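The dispersive delay underlying the DM discussion above follows the standard cold-plasma result: the extra arrival time between two frequencies scales as DM times the difference of the inverse squared frequencies. A short sketch with illustrative numbers (the DM value and band edges are assumptions, not from the thesis):

```python
# Standard dispersion constant, in MHz^2 pc^-1 cm^3 s.
K_DM = 4.148808e3

def dispersion_delay(dm, f_lo_mhz, f_hi_mhz):
    """Extra delay (s) of the low-frequency band edge relative to the high
    one, for dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# e.g. DM = 50 pc cm^-3 across an 800-1600 MHz band:
dt = dispersion_delay(50.0, 800.0, 1600.0)
print(f"{dt * 1e3:.2f} ms")  # about 243 ms
```

Any frequency-dependent profile shift that mimics this 1/f² signature, such as the RFM effect described above, is absorbed into the fitted DM as the δDM component.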
Calibration and imaging with variable radio sources
- Authors: Mbou Sob, Ulrich Armel
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/37977 , vital:24721
- Description: Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration (with inaccurate knowledge of the sky and instruments) leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not included in sky models during calibration. This work investigates both long-term and short-term variability and uses the root mean square (rms) and the power spectrum as metrics to evaluate the “quality” of the residual visibilities obtained through calibration. We show that overestimation and underestimation of source flux density during calibration produce similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed, which prevents them from being removed by advanced techniques such as stacking. The power spectra measured from the residuals with a variable source were significantly higher than those from residuals without a variable source. This implies that advanced calibration techniques and sky model completeness will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below the thermal noise.
- Full Text:
- Date Issued: 2017
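The two residual metrics named in the abstract above, the rms of the residual visibilities and their power spectrum, can be sketched as follows. The residual series here is synthetic (white noise plus a periodic term standing in for an un-modelled variable source), and the periodogram is one simple power-spectrum estimator, assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Synthetic residuals: unit-variance noise plus a weak periodic component
# at 0.05 cycles/sample, mimicking an un-modelled variable source.
residuals = rng.normal(0.0, 1.0, n) + 0.5 * np.cos(2 * np.pi * 0.05 * np.arange(n))

rms = np.sqrt(np.mean(np.abs(residuals) ** 2))
power = np.abs(np.fft.rfft(residuals)) ** 2 / n   # periodogram estimate
peak_bin = int(np.argmax(power[1:]) + 1)          # skip the DC term
print(rms, peak_bin)
```

The un-modelled periodic term raises the power spectrum far above the noise floor at its frequency, which is the signature the thesis uses to compare residuals with and without a variable source.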
An investigation into improved ionospheric F1 layer predictions over Grahamstown, South Africa
- Authors: Jacobs, Linda
- Date: 2005
- Subjects: Ionosphere , Ionospheric electron density -- South Africa -- Grahamstown , Neural networks (Computer Science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5511 , http://hdl.handle.net/10962/d1008094 , Ionosphere , Ionospheric electron density -- South Africa -- Grahamstown , Neural networks (Computer Science)
- Description: This thesis describes an analysis of the F1 layer data obtained from the Grahamstown (33.32°S, 26.50°E), South Africa ionospheric station and the use of these data in improving a Neural Network (NN) based model of the F1 layer of the ionosphere. An application for real-time ray tracing through the South African ionosphere was identified, and for this application real-time evaluation of the electron density profile is essential. Raw real-time virtual height data are provided by a Lowell Digisonde (DPS), which employs the automatic scaling software ARTIST, whose output includes the virtual-to-real height data conversion. Experience has shown that there are times when the ray tracing performance is degraded because of difficulties surrounding the real-time characterization of the F1 region by ARTIST. Therefore available DPS data from the archives of the Grahamstown station were re-scaled manually in order to establish the extent of the problem and the times and conditions under which most inaccuracies occur. The re-scaled data were used to update the F1 contribution of an existing NN based ionospheric model, the LAM model, which predicts the values of the parameters required to produce an electron density profile. This thesis describes the development of the three separate NNs required to predict the ionospheric characteristics and coefficients that describe the F1 layer profile. Inputs to the NNs include day number, hour, and measures of solar and magnetic activity. Outputs include the value of the critical frequency of the F1 layer, foF1, the real height of reflection at the peak, hmF1, as well as information on the state of the F1 layer. All data from the Grahamstown station from 1973 to 2003 were used to train these NNs. Tests show that the predictive ability of the LAM model has been improved by incorporating the re-scaled data.
- Full Text:
- Date Issued: 2005
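The abstract above lists the NN inputs as day number, hour, and measures of solar and magnetic activity. A common trick in such ionospheric NN models is to encode the cyclic inputs as sine/cosine pairs; the sketch below is a hypothetical illustration of that encoding (the thesis's exact input scheme is not specified here, and `encode_inputs` is an invented name).

```python
import numpy as np

def encode_inputs(day_of_year, hour, f107, a_index):
    # Cyclic variables are split into sine/cosine pairs so that, e.g.,
    # day 365 and day 1, or hour 23 and hour 0, map to nearby points.
    return np.array([
        np.sin(2 * np.pi * day_of_year / 365.25),
        np.cos(2 * np.pi * day_of_year / 365.25),
        np.sin(2 * np.pi * hour / 24.0),
        np.cos(2 * np.pi * hour / 24.0),
        f107,     # solar activity measure (10.7 cm radio flux)
        a_index,  # magnetic activity measure
    ])

# Example: local noon on 1 January, moderate solar activity
x = encode_inputs(day_of_year=1, hour=12, f107=150.0, a_index=7.0)
```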
Challenges in topside ionospheric modelling over South Africa
- Authors: Sibanda, Patrick
- Date: 2010
- Subjects: Ionospheric electron density -- South Africa , Neural networks (Computer science) , Atmosphere, Upper , Ionosphere
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5453 , http://hdl.handle.net/10962/d1005238
- Description: This thesis creates a basic framework and provides the information necessary to create a more accurate description of the topside ionosphere in terms of the altitude variation of the electron density (Ne) over the South African region. A detailed overview of various topside ionospheric modelling techniques, with specific emphasis on their implications for efforts to model the South African topside, provides a starting point towards achieving these goals. The novelty of the thesis lies in the investigation of the applicability of three different techniques to model the South African topside ionosphere: (1) The possibility of using Artificial Neural Network (ANN) techniques for empirical modelling of the topside ionosphere based on the available, albeit irregularly sampled, topside sounder measurements. The goal of this model was to test the ability of ANN techniques to capture the complex relationships between the various ionospheric variables using irregularly distributed measurements. While this technique is promising, the method did not show significant improvement over the International Reference Ionosphere (IRI) model results when compared with the actual measurements. (2) Application of diffusive equilibrium theory. Although based on sound physical foundations, the method operates only on a generalised level, leading to results that are not necessarily unique. Furthermore, the approach relies on many ionospheric variables as inputs which are derived from other models whose accuracy has not been verified. (3) Attempts to complement the standard functional techniques (Chapman, Epstein, Exponential and Parabolic) with Global Positioning System (GPS) and ionosonde measurements in an effort to provide deeper insight into the actual conditions within the ionosphere.
The vertical Ne distribution is reconstructed by linking together the different aspects of the constituent ions and their transition height, considering how they influence the shape of the profile. While this approach has not been tested against actual measurements, results show that the method could be useful for topside ionospheric studies. Given the limitations of each technique reviewed, this thesis observes that an approach incorporating both theoretical considerations and empirical aspects has the potential to lead to a more accurate characterisation of topside ionospheric behaviour, resulting in models with improved reliability and forecasting ability. The point is made that a topside sounder mission for South Africa would provide the required measured topside ionospheric data and answer the many science questions that this region poses, as well as address a number of the limitations set out in this thesis.
- Full Text:
- Date Issued: 2010
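The Chapman layer is one of the standard functional forms named in the abstract. As an illustrative sketch (not the thesis's code), the alpha-Chapman electron density profile can be written as a short function, with peak density Nm, peak height hm, and scale height H as its parameters:

```python
import numpy as np

def chapman_profile(h_km, nm, hm_km, h_scale_km):
    # Alpha-Chapman layer: Ne(h) = Nm * exp(0.5 * (1 - z - exp(-z))),
    # where z = (h - hm) / H is the reduced height.
    z = (h_km - hm_km) / h_scale_km
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# The profile peaks at h = hm with value Nm and decays above the peak.
```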
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (Computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science just waiting for discovery are already being generated, and these instruments will produce far more data than has ever been collected before. However, with every new instrument the data rates grow to unprecedented levels, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by scientists who lack formal training in software development, with the result that the software does not progress beyond prototype quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio astronomy pipelines since the inception of the field in the 1940s, investigating the steps required to create a radio image. We used the lessons learned to identify patterns in the challenges experienced and the solutions created to address them over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs correctly, with all packages coexisting on the same system. Where required and possible, bugs and portability issues were fixed and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases.
The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bioinformatics scientists. CommonWL is already supported by a wide range of tools, among them schedulers, visualisers and editors. Consequently, when a pipeline is made with CommonWL, it can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional: applying a generative adversarial network based on deep learning techniques to the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. It is shown that this method is useful for performing the restoration and matches the performance of a single CLEAN cycle. In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest that a smaller scope should be the starting point for further studies, and that optimising the training of the neural network could produce the desired results.
- Full Text:
- Date Issued: 2021
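The simulation pipeline described above pairs dirty images with sky models for training. At its core, a dirty image is the model sky convolved with the instrument's point spread function; the function below is a minimal circular-convolution sketch of that single step, not the thesis's actual CommonWL pipeline.

```python
import numpy as np

def make_dirty_image(sky, psf):
    # Convolve the model sky with the point spread function via the FFT.
    # Circular convolution is used for brevity; a real pipeline would pad.
    assert sky.shape == psf.shape
    return np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(psf)))
```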
Influence of argon ion implantation on the thermoluminescence properties of aluminium oxide
- Authors: Khabo, Bokang
- Date: 2022-04-06
- Subjects: Aluminum oxide , Thermoluminescence , Ion implantation , Kinetic analysis , Oxygen vacancies , Argon , Irradiation
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/234220 , vital:50173
- Description: The influence of argon ion implantation on the thermoluminescence (TL) properties of aluminium oxide (alumina) was investigated. Aluminium oxide (Al2O3) samples were implanted with 80 keV Ar ions. An unimplanted sample and samples implanted at fluences of 1×10¹⁴, 5×10¹⁴, 1×10¹⁵, 5×10¹⁵ and 1×10¹⁶ Ar⁺/cm² were irradiated at a dose of 40 Gy and heated at a rate of 1°C/s using a Risø reader model TL/OSL-DA-20 equipped with a Hoya U-340 filter. The thermoluminescence glow curves showed five distinct peaks, with main peaks at 178°C, 188°C, 176°C, 208°C, 216°C and 204°C for the unimplanted sample and the implanted samples respectively. The peak positions were independent of the irradiation dose, suggesting that the samples are characterised by first-order kinetics. This was also confirmed by the TM-TSTOP analysis. It was observed that the TL intensity decreases with implantation fluence. This observation suggests that the concentration of electron traps responsible for thermoluminescence decreases with ion implantation. The decrease in trap concentration might be due to the formation of non-radiative transition bands or the creation of defect clusters and extended defects as the ion fluence increases. The Stopping and Range of Ions in Matter (SRIM) program was used to correlate the TL response of Al2O3 with defects under ion implantation. It was found that, subsequent to ion implantation, the number of oxygen vacancies, which are related to electron traps, is higher than the number of aluminium vacancies. Kinetic analysis was carried out using the initial rise, Chen's peak shape, various heating rates, whole glow curve, glow curve fitting and isothermal decay methods. The activation energy was found to be around 0.8 eV and the frequency factor of the order of 10⁸ s⁻¹, regardless of the implantation fluence. This means that argon ion implantation did not affect the nature of the electron traps.
The dosimetric features of the samples were also investigated at doses in the range 40–200 Gy. The samples generally showed a superlinear response at doses below 140 Gy and a sublinear response at doses above 160 Gy. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-06
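Of the kinetic-analysis methods listed in the abstract, the initial-rise method is the simplest to sketch: on the low-temperature tail of a glow peak, I(T) ∝ exp(−E/kT), so the activation energy E follows from a straight-line fit of ln I against 1/(kT). The code below is an illustrative sketch of that fit, not the thesis's analysis code.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_energy(temps_k, intensities):
    # On the initial rise, ln(I) = const - E / (k * T), so the slope of
    # ln(I) versus 1/(k*T) is -E.
    x = 1.0 / (K_B * np.asarray(temps_k, dtype=float))
    slope, _ = np.polyfit(x, np.log(np.asarray(intensities, dtype=float)), 1)
    return -slope
```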
Single station TEC modelling during storm conditions
- Authors: Uwamahoro, Jean Claude
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3812 , vital:20545
- Description: It has been shown in ionospheric research that modelling total electron content (TEC) during storm conditions is a significant challenge. In this study, mathematical equations were developed to estimate TEC over Sutherland (32.38°S, 20.81°E) during storm conditions, using Empirical Orthogonal Function (EOF) analysis combined with regression analysis. TEC was derived from GPS observations, and a geomagnetic storm was defined by Dst ≤ -50 nT. The inputs for the model were chosen based on the factors that influence TEC variation, such as diurnal, seasonal, solar and geomagnetic activity variation; these were represented by hour of the day, day number of the year, F10.7 and the A index, respectively. The EOF model was developed using GPS TEC data from 1999 to 2013 and tested on different storms. For the model validation (interpolation), three storms were chosen in 2000 (solar maximum) and three others in 2006 (solar minimum), while for extrapolation six storms were chosen, three in 2014 and three in 2015. Before building the model, TEC values for the selected 2000 and 2006 storms were removed from the dataset used to construct the model, so that the validation was independent of the training data. A comparison of the observed and modelled TEC showed that the EOF model works well for storms with a non-significant ionospheric TEC response and for storms that occurred during periods of low solar activity. High correlation coefficients between the observed and modelled TEC were obtained, showing that the model captures most of the information contained in the observed TEC. Furthermore, it was shown that the EOF model developed for a specific station may be used to estimate TEC over other locations within a latitudinal and longitudinal coverage of 8.7° and 10.6° respectively. This is an important result, as it reduces the data dimensionality problem for computational purposes.
It may therefore not be necessary for regional storm-time TEC modelling to compute TEC for all the closest GPS receiver stations, since most of the needed information can be extracted from measurements at one location.
- Full Text:
- Date Issued: 2016
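The EOF analysis described above can be sketched with a singular value decomposition: a (days × hours) TEC matrix is split into a mean diurnal pattern, time-varying mode amplitudes, and diurnal basis patterns, after which the amplitudes would be fitted by regression on day number, hour, F10.7 and the A index. This is an illustrative sketch (the regression step is omitted, and the function name is invented), not the thesis's code.

```python
import numpy as np

def eof_decompose(tec, n_modes):
    # Rows are days, columns are hours of day. Subtract the mean diurnal
    # pattern, then take the leading SVD modes as the EOF basis.
    mean = tec.mean(axis=0)
    u, s, vt = np.linalg.svd(tec - mean, full_matrices=False)
    amplitudes = u[:, :n_modes] * s[:n_modes]  # time coefficients per day
    modes = vt[:n_modes]                       # diurnal EOF patterns
    return mean, amplitudes, modes

# Reconstruction: mean + amplitudes @ modes approximates the TEC matrix.
```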
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable with MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this, owing to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student's t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios in which it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of RFI, and the solution interval employed during radio interferometric calibration.
Hence, we investigate the effects of solution intervals and the factors to consider when selecting adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
- Full Text:
- Date Issued: 2020
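The intuition behind the robust solver can be illustrated with the weights a Student's t likelihood implies in an iteratively-reweighted-least-squares view: large residuals (unmodelled sources, low-level RFI) are down-weighted rather than allowed to drag the gain solutions. This is a hedged sketch with nu and sigma as assumed shape and scale parameters; CubiCal's actual implementation differs.

```python
import numpy as np

def student_t_weights(residuals, nu=2.0, sigma=1.0):
    # Weight implied by a Student's t likelihood with nu degrees of
    # freedom: w = (nu + 1) / (nu + |r|^2 / sigma^2). Near-constant for
    # small residuals, strongly down-weighted for outliers.
    r2 = np.abs(residuals) ** 2 / sigma ** 2
    return (nu + 1.0) / (nu + r2)
```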
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this effect owing to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios in which it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of the RFI, and the solution interval employed during radio interferometric calibration.
Hence, we investigate the effects of solution intervals and the different factors to consider in order to select adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
- Full Text:
- Date Issued: 2020
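The down-weighting of outliers that underlies a Student’s t likelihood can be illustrated with a generic iteratively reweighted least-squares fit on a toy linear model. This is a minimal sketch of the statistical idea only, not CubiCal’s robust solver (which operates on complex visibilities with Wirtinger derivatives); all names and the toy data are illustrative.

```python
import numpy as np

# Toy data: a straight line with a few strong outliers standing in for
# unmodelled sources / RFI in the calibration analogy.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
A = np.vstack([x, np.ones_like(x)]).T        # design matrix for y = a*x + b
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(x.size)
y[::20] += 5.0                               # inject outliers

v = 2.0                                      # Student's t degrees of freedom
u = np.ones_like(y)                          # start from ordinary least squares
for _ in range(30):
    sw = np.sqrt(u)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    r = y - A @ beta                         # residuals of current fit
    sigma2 = np.sum(u * r**2) / y.size       # weighted noise-variance estimate
    u = (v + 1.0) / (v + r**2 / sigma2)      # t-likelihood weights: large
                                             # residuals get small weight

# beta[0] (slope) and beta[1] (intercept) recover ~2.0 and ~1.0 despite
# the outliers, whereas a plain least-squares fit is biased by them.
```

The same mechanism explains why a heavy-tailed likelihood reduces flux suppression: visibilities dominated by unmodelled emission or RFI receive small weights instead of dragging the solution.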
Reconstructing ionospheric TEC over South Africa using signals from a regional GPS network
- Authors: Opperman, B D L
- Date: 2008
- Subjects: Global Positioning System Global Positioning System -- Data processing Electrons -- South Africa Ionosphere -- South Africa Ionospheric radio wave propagation -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5487 , http://hdl.handle.net/10962/d1005273
- Description: Radio signals transmitted by GPS satellites orbiting the Earth are modulated as they propagate through the electrically charged plasmasphere and ionosphere in the near-Earth space environment. Through a linear combination of GPS range and phase measurements observed on two carrier frequencies by terrestrial-based GPS receivers, the ionospheric total electron content (TEC) along oblique GPS signal paths may be quantified. Simultaneous observations of signals transmitted by multiple GPS satellites, observed from a network of South African dual-frequency GPS receivers, constitute a spatially dense ionospheric measurement source over the region. A new methodology, based on an adjusted spherical harmonic (ASHA) expansion, was developed to estimate diurnal vertical TEC over the region from GPS observations. The performance of the ASHA methodology in estimating diurnal TEC and satellite and receiver differential clock biases (DCBs) for a single GPS receiver was first tested with simulation data and subsequently applied to observed GPS data. The resulting diurnal TEC profiles estimated from GPS observations compared favourably to measurements from three South African ionosondes and two other GPS-based methodologies for 2006 solstice and equinox dates. The ASHA methodology was applied to calculating diurnal two-dimensional TEC maps from multiple receivers in the South African GPS network. The space physics application of the newly developed methodology was demonstrated by investigating the ionosphere’s behaviour during a severe geomagnetic storm and by investigating the long-term ionospheric stability in support of the proposed Square Kilometre Array (SKA) radio astronomy project. The feasibility of employing the newly developed technique in an operational near real-time system for estimating and disseminating TEC values over Southern Africa, using observations from a regional GPS receiver network, was investigated.
- Full Text:
- Date Issued: 2008
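The dual-frequency linear combination mentioned above is standard: because ionospheric group delay scales as 1/f², the difference of the pseudoranges on the two GPS carriers isolates the slant TEC along the signal path. The sketch below shows this geometry-free combination only; it is not the thesis's ASHA pipeline, and it ignores the differential code biases (DCBs) that the ASHA method also estimates.

```python
# GPS carrier frequencies (from the GPS interface specification).
F1 = 1575.42e6   # L1 [Hz]
F2 = 1227.60e6   # L2 [Hz]

def slant_tec(p1_m: float, p2_m: float) -> float:
    """Slant TEC in TECU (1 TECU = 1e16 electrons/m^2) from the L1/L2
    pseudoranges in metres. The ionosphere delays the lower-frequency L2
    signal more, so p2 - p1 carries the differential ionospheric delay.
    Satellite/receiver DCBs are neglected in this sketch."""
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))  # e-/m^2 per metre of delay
    return k * (p2_m - p1_m) / 1e16

# e.g. a 5 m extra group delay on L2 relative to L1 corresponds to
# roughly 48 TECU of slant TEC.
```

Maps like those in the thesis are then built by converting many such slant values to vertical TEC and fitting a spatial expansion (here, adjusted spherical harmonics) over the receiver network.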
Investigation into the extended capabilities of the new DPS-4D ionosonde
- Authors: Ssessanga, Nicholas
- Date: 2011
- Subjects: Ionosondes , Ionosphere , Ionosphere -- Observations -- South Africa -- Hermanus (Cape of Good Hope) , Ionosphere -- Research -- South Africa -- Hermanus (Cape of Good Hope)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5472 , http://hdl.handle.net/10962/d1005257 , Ionosondes , Ionosphere , Ionosphere -- Observations -- South Africa -- Hermanus (Cape of Good Hope) , Ionosphere -- Research -- South Africa -- Hermanus (Cape of Good Hope)
- Description: The DPS-4D is the latest version of the digital ionosonde developed by UMLCAR (University of Massachusetts Lowell Center for Atmospheric Research) in 2008. This new ionosonde incorporates advances in both hardware and software that enable its promised advanced capabilities. The aim of this thesis was to present results from an experiment undertaken using the Hermanus DPS-4D (34.4°S 19.2°E, South Africa), the first of this version to be installed globally, to answer a science question outside the normally expected capabilities of an ionosonde. The science question posed focused on the ability of the DPS-4D to provide information on day-time Pc3 pulsations evident in the ionosphere. Day-time Pc3 ULF waves propagating down through the ionosphere cause oscillations in the Doppler shift of High Frequency (HF) radio transmissions that are correlated with the magnetic pulsations recorded on the ground. Evidence is presented which shows that no correlation exists between the ground magnetic pulsation data and the DPS-4D ionospheric data. The conclusion was reached that although the DPS-4D is more advanced in its field of technology than its predecessors, it may not be used to observe Pc3 pulsations.
- Full Text:
- Date Issued: 2011
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of nearby supermassive black holes, Sgr A* and M87, on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~10 µ-arcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; as well as antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the meqsilhouette software package, which is focused on simulating realistic EHT data. It has the capability to simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere, the interstellar medium, as well as primary beams and associated antenna pointing errors. We have demonstrated through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as to gain insight into possible systematic uncertainties.
- Full Text:
- Date Issued: 2017
Expanding the capabilities of the DPS ionosonde system
- Authors: Magnus, Lindsay Gerald
- Date: 2001
- Subjects: Ionosondes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5560 , http://hdl.handle.net/10962/d1018243
- Description: The Digisonde Portable Sounder (DPS) is a low-power pulse ionosonde capable of recording a wealth of scientific information about the ionosphere. The routine vertical-incidence mode, which produces the scaled ionospheric parameters, records only limited Doppler and no precise angle of arrival (AoA) information. The drift mode produces precise scientific information but only limited range information. This thesis explains the operation of the DPS and then examines the drift data, first by showing the Doppler velocities (V*) calculated for a fixed-frequency ionogram as well as the velocities calculated from an interesting ionospheric disturbance measured with a stepped-frequency ionogram, and second by illustrating the presence of a variation in the AoA of ionospheric echoes at sunrise. The thesis concludes that a drift vertical-incidence mode should be developed to allow the simultaneous measurement of the scaled ionospheric parameters together with the precise AoA and full Doppler spectrum information.
- Full Text:
- Date Issued: 2001
Modelling storm-time TEC changes using linear and non-linear techniques
- Authors: Uwamahoro, Jean Claude
- Date: 2019
- Subjects: Magnetic storms , Astronomy -- Computer programs , Imaging systems in astronomy , Ionospheric storms , Electrons -- Measurement , Magnetosphere -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92908 , vital:30762
- Description: Statistical models based on empirical orthogonal function (EOF) analysis and non-linear regression analysis (NLRA) were developed for the purpose of estimating the ionospheric total electron content (TEC) during geomagnetic storms. The well-known least squares method (LSM) and the Metropolis-Hastings algorithm (MHA) were used as optimization techniques to determine the unknown coefficients of the developed analytical expressions. Artificial Neural Networks (ANNs), the International Reference Ionosphere (IRI) model, and the Multi-Instrument Data Analysis System (MIDAS) tomographic inversion algorithm were also applied to storm-time TEC modelling/reconstruction for various latitudes of the African sector and surrounding areas. This work presents some of the first statistical modelling of the mid-latitude and low-latitude ionosphere during geomagnetic storms that includes solar, geomagnetic, and neutral wind drivers. Development and validation of the empirical models were based on storm-time TEC data derived from global positioning system (GPS) measurements over ground receivers within Africa and surrounding areas. The storm criterion applied was Dst ≤ −50 nT and/or Kp > 4. The performance evaluation of MIDAS compared with ANNs in reconstructing storm-time TEC over the African low- and mid-latitude regions showed that MIDAS and ANNs provide comparable results, with respective mean absolute error (MAE) values of 4.81 and 4.18 TECU. The ANN model was, however, found to perform 24.37 % better than MIDAS at estimating storm-time TEC for low latitudes, while MIDAS is 13.44 % more accurate than the ANN for the mid-latitudes. When their performances are compared with the IRI model, both the MIDAS and ANN models were found to provide more accurate storm-time TEC reconstructions for the African low- and mid-latitude regions.
A comparative study of the performances of the EOF, NLRA, ANN, and IRI models in estimating TEC during geomagnetic storm conditions over various latitudes showed that the ANN model is about 10 %, 26 %, and 58 % more accurate than the EOF, NLRA, and IRI models, respectively, while EOF was found to perform 15 % and 44 % better than NLRA and IRI, respectively. It was further found that the NLRA model is 25 % more accurate than the IRI model. We have also investigated, for the first time, the role of meridional neutral winds (from the Horizontal Wind Model) in storm-time TEC modelling in the low-latitude and the northern and southern hemisphere mid-latitude regions of the African sector, based on ANN models. Statistics show that the inclusion of the meridional wind velocity in TEC modelling during geomagnetic storms leads to percentage improvements of about 5 % for the low-latitude region, and 10 % and 5 % for the northern and southern hemisphere mid-latitude regions, respectively. High-latitude storm-induced winds and the inter-hemispheric flow of the meridional winds from the summer to the winter hemisphere are suggested to be associated with these improvements.
- Full Text:
- Date Issued: 2019
The selection and evaluation of grey-level thresholds applied to digital images
- Authors: Brink, Anton David
- Date: 1988
- Subjects: Image processing -- Digital techniques
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5443 , http://hdl.handle.net/10962/d1001996
- Description: Many applications of image processing require the initial segmentation of the image by means of grey-level thresholding. In this thesis, the problems of automatic threshold selection and evaluation are addressed in order to find a universally applicable thresholding method. Three previously proposed threshold selection techniques are investigated, and two new methods are introduced. The results of applying these methods to several different images are evaluated using two threshold evaluation techniques, one subjective and one quantitative. It is found that no threshold selection technique is universally acceptable, as different methods work best with different images and applications.
- Full Text:
- Date Issued: 1988
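To make the idea of automatic grey-level threshold selection concrete, here is a sketch of one classical selector, Otsu's between-class variance criterion. It is offered only as an example of the class of techniques the thesis evaluates; it is not necessarily among the methods the thesis compares.

```python
import numpy as np

def otsu_threshold(image: np.ndarray, levels: int = 256) -> int:
    """Return the grey level that maximises the between-class variance
    of the two classes (background/foreground) induced by the threshold."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                    # grey-level probabilities
    omega = np.cumsum(p)                     # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(levels))    # cumulative first moment
    mu_t = mu[-1]                            # global mean grey level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes contribute nothing
    return int(np.argmax(sigma_b))

# A bimodal toy "image" with modes at 40 and 200 separates cleanly:
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)                      # falls between the two modes
```

As the thesis finds for threshold selectors generally, this criterion works well on clearly bimodal histograms but can fail on images whose grey-level distributions lack distinct modes.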
Progress towards the development and implementation of an unambiguous copper wire fingerprinting system
- Authors: Poole, Martin
- Date: 2003
- Subjects: Electroplating , Copper , Telecommunication , Theft -- Prevention
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5484 , http://hdl.handle.net/10962/d1005270 , Electroplating , Copper , Telecommunication , Theft -- Prevention
- Description: The telecommunications industry in Southern Africa is faced with the problem of theft of signal-carrying copper wire, both from the ground and from telephone poles. In many cases, if the offenders are caught, the prosecuting party has no way of proving that the wire is the property of any one telecommunication company, as any inked markings on the insulating sheaths have been burned off along with the insulation and protective coatings themselves. Through this work we (i) describe the problem; (ii) specify the necessary and preferred technical properties of a viable solution; (iii) report the preliminary investigations into devising an unambiguous "fingerprinting" of the 0.5 mm wires, including some solutions that, upon investigation, proved non-viable; (iv) describe the development and implementation of an electrochemical marker with a detection mechanism, which has been shown to work in proof of principle; and (v) outline a road map of necessary future work.
- Full Text:
- Date Issued: 2003
Observations of glitches in PSR 0833-45 and 1641-45
- Authors: Flanagan, Claire Susan
- Date: 1996
- Subjects: Pulsars Neutron stars
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5480 , http://hdl.handle.net/10962/d1005266
- Description: An eleven-year series of radio timing observations of PSR 0833-45 (Vela) and PSR 1641-45 is presented. During this time, five large spin-ups ("glitches") were observed in 0833-45 and one in 1641-45. The stellar response to these events is investigated, and the three relatively long complete inter-glitch intervals in 0833-45 are modeled. The results are of relevance to studies of the interiors of neutron stars. The initial aim of the project - to obtain good observational coverage of large glitches in the Vela pulsar - was successfully achieved, and high-quality observations of the periods between glitches were obtained as a by-product. The results of the analysis presented here provide support for the existence of both linear and non-linear coupling in the Vela pulsar, and put a limit on the former in PSR 1641-45. The recently observed existence of a rapidly recovering component of part of a glitch in Vela was verified in the subsequent glitch, although there is now evidence to contradict the suggestion that this component involves a particular region of the star that is implicated in every glitch. Observations of a recent glitch in the same pulsar have resolved a small component of the spin-up; such a component has not been reported for any other large glitch.
- Full Text:
- Date Issued: 1996