A 150 MHz all sky survey with the Precision Array to Probe the Epoch of Reionization
- Authors: Chege, James Kariuki
- Date: 2020
- Subjects: Epoch of reionization -- Research , Astronomy -- Observations , Radio interferometers
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/117733 , vital:34556
- Description: The Precision Array to Probe the Epoch of Reionization (PAPER) was built to measure the redshifted 21 cm line of hydrogen from cosmic reionization. Such low frequency observations promise to be the best means of understanding the cosmic dawn, when the first galaxies in the universe formed, and the Epoch of Reionization, when the intergalactic medium changed from neutral to ionized. The major challenge to these observations is the presence of astrophysical foregrounds that are much brighter than the cosmological signal. Here, I present an all-sky survey at 150 MHz obtained from the analysis of 300 hours of PAPER observations. Particular focus is given to the calibration and imaging techniques needed to deal with the wide field of view of a non-tracking instrument. The survey covers ~7000 square degrees of the southern sky. From a sky area of 4400 square degrees out of the total survey area, I extract a catalogue of sources brighter than 4 Jy whose accuracy was tested against the published GLEAM catalogue, yielding a fractional difference rms better than 20%. The catalogue provides an accurate all-sky model of the extragalactic foreground to be used for the calibration of future Epoch of Reionization observations and to be subtracted from the PAPER observations themselves in order to mitigate foreground contamination.
- Full Text:
- Date Issued: 2020
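The accuracy test described in this abstract compares cross-matched flux densities between two catalogues through a fractional difference rms. A minimal sketch of that metric, using entirely hypothetical flux values rather than the actual survey data:

```python
import numpy as np

# Hypothetical flux densities (Jy) for sources cross-matched between
# the new 150 MHz catalogue and a reference catalogue such as GLEAM.
survey_flux = np.array([5.2, 8.1, 4.6, 12.3, 6.7])
reference_flux = np.array([5.0, 8.4, 4.4, 12.0, 7.1])

# Fractional flux difference per source, relative to the reference.
frac_diff = (survey_flux - reference_flux) / reference_flux

# Root-mean-square of the fractional differences: the accuracy metric
# the abstract reports as better than 20% against GLEAM.
rms = np.sqrt(np.mean(frac_diff**2))
print(f"fractional difference rms: {rms:.3f}")
```

With these invented numbers the rms comes out around 4%, comfortably inside the 20% bound quoted above.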
A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilizing the tilted-ring model can be grouped into two-dimensional and three-dimensional approaches. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation because beam smearing only increases its parameter uncertainties, without the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is software for constructing parameterised models of high-resolution data cubes of rotating galaxies. It uses the tilted-ring model, and with that a combination of parameters such as surface brightness, position angle, rotation velocity and inclination, to describe galaxies. TiRiFiC works by directly fitting tilted-ring models to spectroscopic data cubes and hence is not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is indispensable as an analysis method for future H I surveys. In the current implementation, though, there are several drawbacks. The implemented optimisers search for local solutions in parameter space only, do not quantify correlations between parameters and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, implemented in MultiNest (Feroz et al. 2008), as it allows sampling of a posterior distribution irrespective of its multimodal nature, resulting in parameter samples that correspond to the maximum of the posterior distribution.
These parameter samples can be used as well to quantify correlations and find errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values.
- Full Text:
- Date Issued: 2020
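The tilted-ring model mentioned above describes a galaxy as a set of concentric rings, each with its own inclination, position angle and rotation velocity. A minimal sketch of the standard projection a single ring contributes to the observed velocity field (a textbook formula, not TiRiFiC's actual implementation; all parameter values are hypothetical):

```python
import numpy as np

def ring_los_velocity(theta, v_sys, v_rot, inclination):
    """Line-of-sight velocity of a single tilted ring.

    theta       : azimuthal angle in the ring plane (rad), 0 at the
                  receding major axis
    v_sys       : systemic velocity (km/s)
    v_rot       : circular rotation velocity of the ring (km/s)
    inclination : inclination of the ring (rad); 0 = face-on
    """
    return v_sys + v_rot * np.sin(inclination) * np.cos(theta)

# Hypothetical ring: v_sys = 1000 km/s, v_rot = 200 km/s, i = 60 deg.
theta = np.linspace(0.0, 2.0 * np.pi, 9)
v = ring_los_velocity(theta, 1000.0, 200.0, np.deg2rad(60.0))

# Along the major axis (theta = 0) the full projected rotation is seen;
# along the minor axis (theta = pi/2) only the systemic velocity remains.
print(v[0], v[2])
```

TiRiFiC fits many such rings (plus surface brightness and geometry) directly to the data cube; a MultiNest-style approach would place priors on these ring parameters and sample their joint posterior rather than optimising them locally.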
A study of why some physics concepts in the South African Physical Science curriculum are poorly understood in order to develop a targeted action-research intervention for Newton’s second law
- Authors: Cobbing, Kathleen Margaret
- Date: 2020
- Subjects: Physics -- Study and teaching (Secondary) -- South Africa , Physics -- Examinations, questions, etc. -- South Africa , Motion -- Study and teaching (Secondary) -- South Africa
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146903 , vital:38575
- Description: Globally, many students show a poor understanding of concepts in high school physics and lack the necessary problem-solving skills that the course demands. The application of Newton’s second law was found to be particularly problematic through document analysis of South African examination feedback reports, as well as from an analysis of the physics examinations at a pair of well-resourced South African independent schools that follow the Independent Examination Board curriculum. Through an action-research approach, a resource for use by students was designed and modified to improve students’ understanding of this concept, while modelling problem-solving methods. The resource consisted of brief revision notes, worked examples and scaffolded exercises. The design of the resource was influenced by the theory of cognitive apprenticeship, cognitive load theory and conceptual change theory. One of the aims of the resource was to encourage students to translate between the different representations of a problem situation: symbolic, abstract, model and concrete. The impact of this resource was evaluated at a pair of schools using a mixed-methods approach. This incorporated pre- and post-tests for a quantitative assessment, qualitative student evaluations and the analysis of examination scripts. There was an improvement from pre- to post-test for all four iterations of the intervention, and these improvements were shown to be significant. The use of the resource led to an increase in the quality and quantity of diagrams drawn by students in subsequent assessments.
- Full Text:
- Date Issued: 2020
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this due to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver and describe different scenarios where it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of RFI and the employed solution interval during radio interferometric calibration.
Hence, we investigate the effects of solution intervals and the different factors to consider in order to select adequate solution intervals. Furthermore, we propose a practical brute force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
- Full Text:
- Date Issued: 2020
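The robustness argument in this abstract, that a Student's t likelihood downweights outliers which would otherwise bias a Gaussian (least-squares) solution, can be illustrated with a toy iteratively reweighted estimator. This is only a one-dimensional sketch of the idea, not CubiCal's actual Wirtinger-derivative solver; the data and the degrees-of-freedom value are hypothetical:

```python
import numpy as np

def robust_mean(data, nu=2.0, iterations=20):
    """Estimate a mean with Student's t style IRLS weights.

    Residuals far from the current estimate receive weights
    w = (nu + 1) / (nu + r**2 / sigma**2), so outliers (e.g. low-level
    RFI) are downweighted instead of dragging the solution.
    """
    mu = np.median(data)                    # robust starting point
    for _ in range(iterations):
        r = data - mu
        sigma2 = np.mean(r**2) + 1e-12      # crude scale estimate
        w = (nu + 1.0) / (nu + r**2 / sigma2)
        mu = np.sum(w * data) / np.sum(w)   # weighted least squares
    return mu

# Well-behaved samples around 10, plus one strong "RFI" spike.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10.0, 0.1, 100), [1000.0]])

print(np.mean(data))      # least-squares mean, pulled toward the spike
print(robust_mean(data))  # stays close to 10
```

The same weighting idea, applied per visibility inside a gain solver, is what makes a heavy-tailed likelihood resistant to unmodelled sources and low-level RFI.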
Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view, which states that our experience of time is only an illusion. The isolated Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks their potential influence. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. To aid our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several of these theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that the accounts of time are unified in a more fundamental theory. An analytic treatment of the CPI, incorporating two different clock choices, and a qualitative analysis both confirm that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the results of the toy models and the qualitative argument respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal interventions between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which remains a pressing problem in explaining our experience of time.
- Full Text:
- Date Issued: 2020
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as quartz annealed at 800 and 1000 °C. The samples were annealed for 10 minutes and for 1 hour. In both its unannealed and annealed states, the material has its main peak between 68 and 72 °C when measured at a heating rate of 1 °C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and kinetic analysis was carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for the sample annealed at 800 °C for 1 hour we observe an inverse thermal quenching behaviour for peak I. We demonstrate that a peak affected by an inverse thermal quenching-like behaviour can still show the effect of thermal quenching when the dose the sample is irradiated to is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a certain temperature, are exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, corresponding to various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
- Full Text:
- Date Issued: 2020
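The coupled first-order differential equations mentioned at the end of this abstract can be illustrated with a minimal donor-acceptor phototransfer model: illumination optically empties a deep donor trap, and a fraction of the freed charge refills the shallow acceptor trap whose population the PTTL signal traces. All rates and fractions below are hypothetical, and the model in the thesis involves more levels than this two-trap sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates for a two-trap phototransfer sketch.
f = 0.05   # optical detrapping rate of the donor (1/s)
p = 0.3    # fraction of freed charge captured by the acceptor

def rates(t, y):
    """Coupled first-order equations for the trap populations."""
    donor, acceptor = y
    freed = f * donor
    return [-freed, p * freed]

# Start with a full donor trap and an empty acceptor trap.
sol = solve_ivp(rates, (0.0, 200.0), [1.0, 0.0], dense_output=True)

# The PTTL signal is proportional to the acceptor population: it grows
# with illumination time and saturates once the donor is exhausted.
t = np.array([10.0, 50.0, 200.0])
acceptor = sol.sol(t)[1]
print(acceptor)
```

For this linear sketch the acceptor population follows p(1 - e^(-ft)) analytically; the thesis fits richer versions of such systems, with level counts and roles depending on the preheating temperature.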
Finite precision arithmetic in Polyphase Filterbank implementations
- Authors: Myburgh, Talon
- Date: 2020
- Subjects: Radio interferometers , Interferometry , Radio telescopes , Gate array circuits , Floating-point arithmetic , Python (Computer program language) , Polyphase Filterbank , Finite precision arithmetic , MeerKAT
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146187 , vital:38503
- Description: The MeerKAT is the most sensitive radio telescope in its class, and it is important that systematic effects do not limit the dynamic range of the instrument, preventing this sensitivity from being harnessed for deep integrations. During commissioning, spurious artefacts were noted in the MeerKAT passband, and the root cause was attributed to systematic errors in the digital signal path. Finite precision arithmetic used by the Polyphase Filterbank (PFB) was one of the main factors contributing to the spurious responses, together with bugs in the firmware. This thesis describes a software PFB simulator that was built to mimic the MeerKAT PFB and allow investigation into the origin and mitigation of the effects seen on the telescope. This simulator was used to investigate the effects on signal integrity of various rounding techniques, overflow strategies and dual polarisation processing in the PFB. Using the simulator to investigate a number of different signal levels, bit-widths and algorithmic scenarios gave insight into how the periodic dips occurring in the MeerKAT passband were the result of the implementation using an inappropriate rounding strategy. It further indicated how to select the best strategy for preventing overflow while maintaining high quantization efficiency in the FFT. This practice of simulating the design behaviour of the PFB independently of the tools used to design the DSP firmware is a step towards an end-to-end simulation of the MeerKAT system (or any radio telescope using finite precision digital signal processing systems). This would be useful for design, diagnostics, signal analysis and prototyping of the overall instrument.
- Full Text:
- Date Issued: 2020
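The role of the rounding strategy flagged in this abstract can be demonstrated with a toy requantization experiment: truncation (always rounding down) injects a systematic negative bias, while round-half-to-even is unbiased on average. The bit-width and test signal below are illustrative only and do not reflect the MeerKAT PFB's actual fixed-point signal path:

```python
import numpy as np

# Requantize a sinusoid to 8 fractional bits with two strategies and
# compare the DC bias each one introduces.
bits = 8
scale = 2.0**bits

# One full period of a sinusoid, kept below full scale.
x = np.sin(2 * np.pi * np.linspace(0, 1, 4096, endpoint=False)) * 0.9

truncated = np.floor(x * scale) / scale   # always rounds down
rounded = np.round(x * scale) / scale     # NumPy rounds half to even

# Truncation leaves a systematic offset of about -1/(2*scale);
# round-half-to-even errors cancel on this symmetric signal.
print(np.mean(truncated - x))
print(np.mean(rounded - x))
```

Scaled up through many filterbank stages, a biased rounding strategy like the first one is the kind of implementation detail that can imprint structured artefacts on an otherwise clean passband.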
Modelling and investigating primary beam effects of reflector antenna arrays
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (like electromagnetic and optical simulations) have been used to model it. The cos³ function, Jacobi-Bessel pattern, characteristic basis function patterns (CBFP) and the Cassbeam software (which uses optical ray-tracing with antenna parameters) have also been used to model it. These models capture the basic PB effects. Real-life PB patterns differ from these models due to various subtle effects such as mechanical deformation and effects introduced into the PB by standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference and other measurement errors. In our approach, we use principal component analysis and Zernike polynomials to model the PBs of the Very Large Array (VLA) and the MeerKAT telescopes from their holography-measured data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays. We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) based on holography-measured data. Analysis of the VLA measured PBs revealed that the beam sizes (and centre offset positions) follow a fast oscillating trend (superimposed on a slow trend) with frequency. This spectral behaviour we termed ripple or characteristic effects. Most existing PB models that are used in calibrating VLA data do not incorporate these direction-dependent effects (DDEs).
We investigate the impact of using PB models that ignore this DDE in continuum calibration and imaging via simulations. Our experiments show that, although these effects translate into less than 10% errors in source flux recovery, they do lead to 30% reduction in the dynamic range. To prepare data for Hi and radio halo (faint emissions) science analysis requires carrying out foreground subtraction of bright (continuum) sources. We investigate the impact of using beam models that ignore these ripple effects during continuum subtraction. These show that using PB models which completely ignore the ripple effects in continuum subtraction could translate to error of more to 30% in the recovered Hi spectral properties. This implies that science inferences drawn from the results for Hi studies could have errors of the same magnitude.
- Full Text:
- Date Issued: 2020
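As an illustration of the Zernike-based beam modelling described in the abstract above, the sketch below least-squares projects a toy, rotationally symmetric "primary beam" (a Gaussian taper, not VLA or MeerKAT holography data) onto a few radially symmetric Zernike polynomials and reports the fractional reconstruction error; all values are illustrative assumptions, not the thesis pipeline.

```python
import numpy as np

# Sample a unit-disc grid (the domain on which Zernike polynomials are orthogonal)
n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
rm = r[r <= 1.0]

# Toy "primary beam": a circular Gaussian taper (illustrative, not holography data)
beam = np.exp(-4.0 * rm**2)

# Radially symmetric Zernike polynomials: piston, defocus, primary and
# secondary spherical terms
basis = np.column_stack([
    np.ones_like(rm),
    np.sqrt(3) * (2 * rm**2 - 1),
    np.sqrt(5) * (6 * rm**4 - 6 * rm**2 + 1),
    np.sqrt(7) * (20 * rm**6 - 30 * rm**4 + 12 * rm**2 - 1),
])

# Least-squares projection of the beam onto the truncated Zernike basis
coeffs, *_ = np.linalg.lstsq(basis, beam, rcond=None)
model = basis @ coeffs

# Fractional rms reconstruction error of the compressed representation
err = np.sqrt(np.mean((beam - model) ** 2)) / np.sqrt(np.mean(beam ** 2))
print(f"reconstruction error: {err:.1%}")
```

Even this four-term basis reconstructs the smooth toy beam to within a few per cent, which is the sense in which a small number of coefficients can act as a highly compressed beam model.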
Observations of diffuse radio emission in the Abell 773 galaxy cluster
- Authors: Sichone, Gift L
- Date: 2020
- Subjects: Galaxies -- Clusters -- Observations , Radio astronomy -- Observations , Astrophysics -- South Africa , Westerbork Radio Telescope , A773 galaxy cluster , Astronomy -- Observations , Radio sources (Astronomy)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/144945 , vital:38394
- Description: In this thesis, we present 18 and 21 cm observations of the A773 galaxy cluster taken with the Westerbork radio telescope. The final 18 and 21 cm images achieve noise levels of 0.018 mJy beam⁻¹ and 0.025 mJy beam⁻¹ respectively. After subtracting the compact sources, the low-resolution images show evidence of a radio halo at 18 cm, whereas its presence is more uncertain in the low-resolution 21 cm images due to the presence of residual sidelobes from bright sources. In the joint analysis of both frequencies, the radio halo has a 5.37 arcmin² area with a 6.76 mJy flux density. Further observations and analysis are, however, required to fully characterize its properties.
- Full Text:
- Date Issued: 2020
Observations of diffuse radio emission in the Perseus Galaxy Cluster
- Authors: Mungwariri, Clemence
- Date: 2020
- Subjects: Galaxies -- Clusters , Radio sources (Astronomy) , Radio interferometers , Perseus Galaxy Cluster , Diffuse radio emission
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143325 , vital:38233
- Description: In this thesis we analysed Westerbork observations of the Perseus Galaxy Cluster at 1380 MHz. The observations consist of two different pointings, covering a total of ∼0.5 square degrees, one including the known mini halo and the source 3C 84, the other centred on the source 3C 83.1B. We obtained images with 83 μJy beam⁻¹ and 240 μJy beam⁻¹ noise rms for the two pointings respectively. We achieved a 60000:1 dynamic range in the image containing the bright 3C 84 source. We imaged the mini halo surrounding 3C 84 at high sensitivity, measuring its diameter to be ∼140 kpc and its power to be 4 × 10²⁴ W Hz⁻¹. Its morphology agrees quite well with that observed at 240 MHz (e.g. Gendron-Marsolais et al., 2017). We measured the flux density of 3C 84 to be 20.5 ± 0.4 Jy at the 2007 epoch, consistent with a factor of ∼2 increase since the 1960s.
- Full Text:
- Date Issued: 2020
Thermoluminescence and phototransferred thermoluminescence of synthetic quartz
- Authors: Dawam, Robert Rangmou
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/145849 , vital:38472
- Description: The main aim of this investigation is to study the thermoluminescence and phototransferred thermoluminescence of synthetic quartz. Thermoluminescence was one of the tools used in characterising the electron trap parameters. Samples of quartz annealed at various temperatures up to 900 °C, as well as unannealed samples, were used. The thermoluminescence glow curve, measured at 1 °C s⁻¹ following beta irradiation to 40 Gy, consists of a main peak at 70 °C and secondary peaks at 110, 180 and 310 °C for the samples annealed at 500 °C and the unannealed samples. In comparison, the thermoluminescence glow curve for the sample annealed at 900 °C has a main peak at 86 °C and secondary ones at 170 and 310 °C. The kinetic analysis was carried out only on the main peak in each case. The activation energy was found to decrease with increasing annealing temperature. The samples annealed at 500 °C and the unannealed samples were found to be affected by thermal quenching, while the sample annealed at 900 °C shows an inverse quenching for an irradiation dose of 40 Gy. However, when the dose was reduced to 3 Gy, the effects of thermal quenching were manifested. The activation energy of thermal quenching was also found to decrease with increasing annealing temperature. Thermally assisted optically stimulated luminescence measurements were carried out using continuous-wave optically stimulated luminescence (CW-OSL). The samples studied were those annealed at 500 °C for 10 minutes, at 900 °C for 10, 30 and 60 minutes, and at 1000 °C for 10 minutes prior to use. The CW-OSL is stimulated using 470 nm blue LEDs at sample temperatures between 30 and 200 °C. It is measured after preheating to either 300 or 500 °C. When the integrated OSL intensity is plotted as a function of measurement temperature, the intensity goes through a peak. The increase in OSL intensity as a function of temperature is associated with thermal assistance and the decrease with thermal quenching. The kinetic parameters were evaluated by fitting the experimental data.
The values of the activation energy of thermal quenching are the same within experimental uncertainties for all the experimental conditions. This shows that annealing temperature, duration of annealing and irradiation dose have a negligible influence on the recombination site of luminescence probed by OSL. Phototransferred thermoluminescence (PTTL) induced from annealed samples using 470 nm blue light was also investigated. The quartz samples were annealed at 500 °C for 10 minutes, at 900 °C for 10, 30 and 60 minutes, and at 1000 °C for 10 minutes prior to use. The glow curves of conventional TL measured at 1 °C s⁻¹ following irradiation to 200 Gy show six peaks in each case, labelled I-VI for ease of reference, whereas peaks observed under PTTL are referred to as A1 onwards. Only the first three peaks were reproduced under phototransfer for the samples annealed at 900 °C for 60 minutes and at 1000 °C for 10 minutes. Interestingly, for the intermediate annealing duration of 30 minutes, the only peak that appears under phototransfer is A1. For quartz annealed at 900 °C for 10 minutes, the PTTL appears as long as the preheating temperature does not exceed 560 °C. For all other annealing temperatures, PTTL only appears for preheating to 450 °C and below. This shows that the occupancy of deep electron traps at temperatures beyond 450 °C or 560 °C is low. The activation energies for peaks A1, A2 and A3 were calculated. The PTTL peaks were studied for thermal quenching, and peaks A1 and A3 were found to be affected. The activation energies for thermal quenching were determined as 0.62 ± 0.04 eV and 0.65 ± 0.02 eV for peaks A1 and A3 respectively. The experimental dependence of the PTTL intensity on illumination time is modelled using sets of coupled linear differential equations based on systems of donors and acceptors whose number is determined by the preheating temperature.
- Full Text:
- Date Issued: 2020
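The donor-acceptor rate equations mentioned at the end of the abstract can be sketched as a small linear ODE system; the rate constants below are hypothetical, not values fitted in the thesis. Two donor traps release charge under illumination, a fraction of which is captured by an acceptor trap that is itself optically depleted, so the PTTL intensity (taken here as proportional to the acceptor filling) first rises and then falls with illumination time.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical optical rate constants (per second of illumination)
lam1, lam2 = 0.10, 0.02   # release rates from two donor (deep) traps
a1, a2 = 0.6, 0.4         # fractions of released charge captured by the acceptor
mu = 0.05                 # optical depletion rate of the acceptor (shallow) trap

def rhs(t, y):
    """Coupled linear rate equations for donor fillings n1, n2 and acceptor m."""
    n1, n2, m = y
    return [-lam1 * n1,
            -lam2 * n2,
            a1 * lam1 * n1 + a2 * lam2 * n2 - mu * m]

t = np.linspace(0.0, 300.0, 601)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.5, 0.0], t_eval=t, rtol=1e-8)
pttl = sol.y[2]           # PTTL intensity ~ acceptor filling

peak = t[np.argmax(pttl)]
print(f"PTTL peaks after ~{peak:.0f} s of illumination")
```

Because the system is linear, the solution is a sum of exponentials: the rise is set by the donor release rates and the fall by the acceptor depletion, reproducing the characteristic rise-then-fall dependence on illumination time.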
A pilot wide-field VLBI survey of the GOODS-North field
- Authors: Akoto-Danso, Alexander
- Date: 2019
- Subjects: Radio astronomy , Very long baseline interferometry , Radio interferometers , Imaging systems in astronomy , Hubble Space Telescope (Spacecraft) -- Observations
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/72296 , vital:30027
- Description: Very Long Baseline Interferometry (VLBI) has significant advantages in disentangling active galactic nuclei (AGN) from star formation, particularly at intermediate to high redshift, due to its high angular resolution and insensitivity to dust. Surveys with VLBI arrays are only just becoming practical over wide areas thanks to numerous developments and innovations (such as multi-phase-centre techniques) in observation and data analysis. However, fully automated pipelines for VLBI data analysis are based on old software packages and are unable to incorporate new calibration and imaging algorithms. In this work, the researcher developed a pipeline for VLBI data analysis which integrates a recent wide-field imaging algorithm, RFI excision, and a purpose-built source-finding algorithm specifically developed for the 64k × 64k wide-field VLBI images. The researcher used this novel pipeline to process 6% (~9 arcmin² of the total 160 arcmin²) of the data from the CANDELS GOODS-North extragalactic field at 1.6 GHz. The milliarcsecond-scale images have an average rms of ~10 μJy/beam. Forty-four (44) candidate sources were detected, most of which are at sub-mJy flux densities, having brightness temperatures and luminosities of >5×10⁵ K and >6×10²¹ W Hz⁻¹ respectively. This work demonstrates that automated post-processing pipelines for wide-field, uniform-sensitivity VLBI surveys are feasible and indeed made more efficient with new software, wide-field imaging algorithms and purpose-built source-finders. This broadens the discovery space for future wide-field surveys with upcoming arrays such as the African VLBI Network (AVN), MeerKAT and the Square Kilometre Array (SKA).
- Full Text:
- Date Issued: 2019
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the measurement set, the most common format for interferometer data, and is well integrated with Montblanc, a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding the source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor.
- Full Text:
- Date Issued: 2019
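A minimal sketch of per-antenna complex-gain calibration in the spirit of the framework described above: an alternating least-squares update with damping (similar in flavour to StEFCal-type solvers, and not CubiCal's actual implementation). Simulated noise-free visibilities for a unit point-source model are solved for diagonal gains, which are recoverable up to the usual unconstrained overall phase.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ant = 7

# Simulated true per-antenna complex gains and a unit point-source model
g_true = rng.uniform(0.8, 1.2, n_ant) * np.exp(1j * rng.uniform(-0.5, 0.5, n_ant))
model = np.ones((n_ant, n_ant), complex)
vis = np.outer(g_true, g_true.conj()) * model   # noise-free "observed" visibilities

# Alternating per-antenna least-squares update with damping for convergence
g = np.ones(n_ant, complex)
for _ in range(200):
    w = g.conj()[None, :] * model               # w[p, q] = g_q* M_pq, held fixed per sweep
    g_new = (vis * w.conj()).sum(axis=1) / (np.abs(w) ** 2).sum(axis=1)
    g = 0.5 * (g + g_new)                       # damped (averaged) update

# Remove the unconstrained overall phase before comparing with the truth
g_ref = g * np.exp(-1j * np.angle(g[0]))
gt_ref = g_true * np.exp(-1j * np.angle(g_true[0]))
print("max gain error:", np.max(np.abs(g_ref - gt_ref)))
```

Each sweep solves a linear least-squares problem for every antenna with the other gains held fixed, which is the property of the bilinear calibration problem that complex-optimisation treatments exploit to avoid a full non-linear solver.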
Foreground simulations for observations of the global 21-cm signal
- Authors: Klutse, Diana
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Electromagnetic waves , Radiation, Background
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76398 , vital:30557
- Description: The sky-averaged (global) spectrum of the redshifted 21-cm line promises to be a direct probe of the Dark Ages, the period before the first luminous sources formed, and of the Epoch of Reionization, during which these sources produced enough ionizing photons to ionize the neutral intergalactic medium. However, observations of this signal are contaminated both by astrophysical foregrounds, which are orders of magnitude brighter than the cosmological signal, and by non-astrophysical and non-ideal instrumental effects. It is therefore crucial to understand all these data components and their impact on the cosmological signal for successful signal extraction. In this view, we investigated the impact that the small-scale spatial structure of the diffuse Galactic foreground has on the foreground spectrum as seen by a global 21-cm observation. We simulated two different sets of observations using a realistic dipole beam model and two synchrotron foreground templates that differ from each other in their small-scale structure: the original 408 MHz all-sky map by Haslam et al. (1982) and a version whose calibration was improved to remove artifacts and point sources (Remazeilles et al., 2015). We generated simulated foreground spectra and modeled them using a polynomial expansion in frequency. We found that the different foreground templates have a modest impact on the simulated spectra, generating differences of up to 2% in the root mean square of the residual spectra after the best-fit log-polynomial was subtracted.
- Full Text:
- Date Issued: 2019
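The log-polynomial foreground modelling described above can be sketched as follows: a synthetic power-law spectrum with mild spectral curvature (illustrative amplitude and spectral index, not the values used in the thesis) is fitted by a polynomial in log-frequency, and the fractional residual rms is computed.

```python
import numpy as np

# Synthetic diffuse-foreground spectrum: power law with mild spectral curvature
nu = np.linspace(50e6, 100e6, 256)            # 50-100 MHz band [Hz]
nu0 = 75e6                                     # reference frequency
lognu = np.log(nu / nu0)
t_fg = 1500.0 * np.exp(-2.55 * lognu + 0.02 * lognu**2)   # [K], illustrative values

# Log-polynomial foreground model: polynomial in log(nu) fitted to log(T)
order = 4
coeffs = np.polyfit(lognu, np.log(t_fg), order)
t_fit = np.exp(np.polyval(coeffs, lognu))

# Fractional rms of the residual spectrum after subtracting the best fit
resid = t_fg - t_fit
rms_frac = np.sqrt(np.mean(resid**2)) / np.sqrt(np.mean(t_fg**2))
print(f"fractional residual rms: {rms_frac:.2e}")
```

A smooth foreground is absorbed almost perfectly by a low-order log-polynomial, which is why the analysis focuses on the residual rms after the fit: any template-dependent structure that survives the fit is what contaminates a global 21-cm measurement.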
Machine learning methods for calibrating radio interferometric data
- Authors: Zitha, Simphiwe Nhlanhla
- Date: 2019
- Subjects: Calibration , Radio astronomy -- Data processing , Radio astronomy -- South Africa , Karoo Array Telescope (South Africa) , Radio telescopes -- South Africa , Common Astronomy Software Application (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97096 , vital:31398
- Description: The applications of machine learning have created an opportunity to deal with complex problems currently encountered in radio astronomy data processing. Calibration is one of the most important data processing steps required to produce high dynamic range images. This process involves the determination of calibration parameters, both instrumental and astronomical, to correct the collected data. Typically, astronomers use a package such as Common Astronomy Software Applications (CASA) to compute the gain solutions based on regular observations of a known calibrator source. In this work we present applications of machine learning to first-generation calibration (1GC), using the KAT-7 telescope environmental and pointing sensor data recorded during observations. Applying machine learning to 1GC, as opposed to calculating the gain solutions in CASA, has shown evidence of reducing computation, as well as accurately predicting the 1GC gain solutions that represent the behaviour of the antenna during an observation. These methods are computationally less expensive; however, they have not fully learned to generalise in predicting accurate 1GC solutions from environmental and pointing sensors alone. We call this multi-output regression model ZCal; it is based on random forest, decision tree, extremely randomized trees and k-nearest neighbour algorithms. The prediction error obtained when testing our model on held-out data is 0.01 < rmse < 0.09 for gain amplitude per antenna, and 0.2 rad < rmse < 0.5 rad for gain phase. This shows that the instrumental parameters used to train our model correlate more strongly with gain amplitude effects than with phase.
- Full Text:
- Date Issued: 2019
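The multi-output regression setup described above can be sketched with scikit-learn on purely synthetic data: hypothetical "sensor" features map smoothly to gain amplitude and phase targets, and a single random forest predicts both jointly (toy data, not KAT-7 telemetry; the actual ZCal model combines several algorithms).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "sensor" features (e.g. temperature, wind, pointing offsets) and
# gain targets (amplitude, phase) with an assumed smooth dependence plus noise
n = 2000
sensors = rng.uniform(-1, 1, size=(n, 4))
amp = 1.0 + 0.05 * sensors[:, 0] + 0.02 * sensors[:, 1] ** 2
phase = 0.3 * sensors[:, 2] - 0.1 * sensors[:, 3]
targets = np.column_stack([amp, phase]) + rng.normal(0, 0.005, size=(n, 2))

x_tr, x_te, y_tr, y_te = train_test_split(sensors, targets, random_state=1)

# Multi-output regression: one forest predicts amplitude and phase jointly
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(x_tr, y_tr)
pred = model.predict(x_te)

# Per-target rmse on the held-out split
rmse = np.sqrt(np.mean((pred - y_te) ** 2, axis=0))
print(f"rmse (amp, phase): {rmse[0]:.3f}, {rmse[1]:.3f}")
```

`RandomForestRegressor` handles multi-output targets natively, so one fitted model returns both the amplitude and phase predictions, mirroring the per-antenna amplitude and phase rmse figures reported in the abstract.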
Modelling storm-time TEC changes using linear and non-linear techniques
- Authors: Uwamahoro, Jean Claude
- Date: 2019
- Subjects: Magnetic storms , Astronomy -- Computer programs , Imaging systems in astronomy , Ionospheric storms , Electrons -- Measurement , Magnetosphere -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92908 , vital:30762
- Description: Statistical models based on empirical orthogonal function (EOF) analysis and non-linear regression analysis (NLRA) were developed for the purpose of estimating the ionospheric total electron content (TEC) during geomagnetic storms. The well-known least squares method (LSM) and the Metropolis-Hastings algorithm (MHA) were used as optimization techniques to determine the unknown coefficients of the developed analytical expressions. Artificial Neural Networks (ANNs), the International Reference Ionosphere (IRI) model, and the Multi-Instrument Data Analysis System (MIDAS) tomographic inversion algorithm were also applied to storm-time TEC modelling/reconstruction for various latitudes of the African sector and surrounding areas. This work presents some of the first statistical modelling of the mid-latitude and low-latitude ionosphere during geomagnetic storms that includes solar, geomagnetic and neutral wind drivers. Development and validation of the empirical models were based on storm-time TEC data derived from global positioning system (GPS) measurements over ground receivers within Africa and surrounding areas. The storm criterion applied was Dst ≤ −50 nT and/or Kp > 4. The performance evaluation of MIDAS compared with ANNs in reconstructing storm-time TEC over the African low- and mid-latitude regions showed that MIDAS and ANNs provide comparable results, with respective mean absolute error (MAE) values of 4.81 and 4.18 TECU. The ANN model was, however, found to perform 24.37 % better than MIDAS at estimating storm-time TEC for the low latitudes, while MIDAS was 13.44 % more accurate than the ANN for the mid-latitudes. When their performances were compared with the IRI model, both MIDAS and the ANN model were found to provide more accurate storm-time TEC reconstructions for the African low- and mid-latitude regions.
A comparative study of the performance of the EOF, NLRA, ANN, and IRI models at estimating TEC during geomagnetic storm conditions over various latitudes showed that the ANN model is about 10 %, 26 %, and 58 % more accurate than the EOF, NLRA, and IRI models, respectively, while EOF was found to perform 15 % and 44 % better than NLRA and IRI, respectively. It was further found that the NLRA model is 25 % more accurate than the IRI model. We have also investigated, for the first time, the contribution of meridional neutral winds (from the Horizontal Wind Model) to storm-time TEC modelling in the low latitude, northern hemisphere mid-latitude, and southern hemisphere mid-latitude regions of the African sector, based on ANN models. Statistics have shown that including the meridional wind velocity in TEC modelling during geomagnetic storms leads to percentage improvements of about 5 % for the low latitude region, and 10 % and 5 % for the northern and southern hemisphere mid-latitude regions, respectively. High-latitude storm-induced winds and the inter-hemispheric flow of meridional winds from the summer to the winter hemisphere have been suggested to be associated with these improvements.
- Full Text:
- Date Issued: 2019
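The model comparisons above rest on the mean absolute error (in TEC units) and on percentage-improvement figures between pairs of models. A minimal sketch of these metrics, with made-up stand-ins for the GPS-derived TEC and two model reconstructions (the abstract does not spell out its exact improvement formula, so the one below is a common convention and an assumption):

```python
# MAE and percentage-improvement comparison for two hypothetical
# storm-time TEC reconstructions against "observed" GPS TEC.
import numpy as np

tec_obs = np.array([25.0, 30.0, 42.0, 18.0, 55.0])      # observed TEC (TECU)
tec_model_a = np.array([23.5, 31.0, 40.0, 19.0, 52.0])  # e.g. an ANN model
tec_model_b = np.array([21.0, 34.0, 37.0, 22.0, 49.0])  # e.g. a tomographic model

def mae(obs, model):
    """Mean absolute error in TECU."""
    return float(np.mean(np.abs(obs - model)))

mae_a = mae(tec_obs, tec_model_a)
mae_b = mae(tec_obs, tec_model_b)

# Percentage improvement of model A over model B.
improvement = 100.0 * (mae_b - mae_a) / mae_b
print(f"MAE A: {mae_a:.2f} TECU, MAE B: {mae_b:.2f} TECU, "
      f"A better than B by {improvement:.1f}%")
```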
Observing cosmic reionization with PAPER: polarized foreground simulations and all sky images
- Authors: Nunhokee, Chuneeta Devi
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Epoch of reionization -- Research , Hydrogen -- Spectra , Radio interferometers
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/68203 , vital:29218
- Description: The Donald C. Backer Precision Array to Probe the Epoch of Reionization (PAPER, Parsons et al., 2010) was built with the aim of detecting the redshifted 21 cm Hydrogen line, which is likely the best probe of the thermal evolution of the intergalactic medium and the reionization of neutral Hydrogen in our Universe. Observations of the 21 cm signal are challenged by bright astrophysical foregrounds and systematics that require precise modeling in order to extract the cosmological signal. In particular, the instrumental leakage of polarized foregrounds may contaminate the 21 cm power spectrum. In this work, we developed a formalism to describe the leakage due to instrumental wide-field effects in visibility-based power spectra and used it to predict contamination in observations. We find the leakage due to a population of point sources to be higher than that of the diffuse Galactic emission, for which we predict minimal contamination at k > 0.3 h Mpc⁻¹. We also analyzed data from the last observing season of PAPER via all-sky imaging in order to characterize the foregrounds. We generated an all-sky catalogue of 88 sources down to a flux density of 5 Jy. Moreover, we measured both the polarized point source emission and the polarized Galactic diffuse emission, and used these measurements to constrain our model of polarization leakage. We find the leakage due to a population of point sources to be 12% lower than the prediction from our polarized model.
- Full Text:
- Date Issued: 2019
Statistical Analysis of the Radio-Interferometric Measurement Equation, a derived adaptive weighting scheme, and applications to LOFAR-VLBI observation of the Extended Groth Strip
- Authors: Bonnassieux, Etienne
- Date: 2019
- Subjects: Radio astronomy , Astrophysics , Astrophysics -- Instruments -- Calibration , Imaging systems in astronomy , Radio interferometers , Radio telescopes , Astronomy -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/93789 , vital:30942
- Description: J.R.R. Tolkien wrote, in his Mythopoeia, that “He sees no stars who does not see them first, of living silver made that sudden burst, to flame like flowers beneath the ancient song”. In his defense of myth-making, he formulates the argument that the attribution of meaning is an act of creation - that “trees are not ‘trees’ until so named and seen” - and that this capacity for creation defines the human creature. The scientific endeavour, in this context, can be understood as a social expression of a fundamental feature of humanity, and from this endeavour flows much understanding. This thesis, one thread among many, focuses on the study of astronomical objects as seen by the radio waves they emit. What are radio waves? Electromagnetic waves were theorised by James Clerk Maxwell (Maxwell 1864) in his great theoretical contribution to modern physics, their speed matching the speed of light as measured by Ole Christensen Rømer and, later, James Bradley. It was not until Heinrich Rudolf Hertz’s 1887 experiment that these waves were measured in a laboratory, leading to the dawn of radio communications - and, later, radio astronomy. The link between radio waves and light was one of association: light is known to behave as a wave (Young double-slit experiment), with the same propagation speed as electromagnetic radiation. Light “proper” is also known to exist beyond the optical regime: Herschel’s experiment shows that when diffracted through a prism, sunlight warms even those parts of a desk which are not observed to be lit (first evidence of infrared light). The link between optical light and unseen electromagnetic radiation is then an easy step to make, and one confirmed through countless technological applications (e.g. optical fiber, to name but one). And as soon as this link is established, a question immediately comes to the mind of the astronomer: what does the sky, our Universe, look like to the radio “eye”?
Radio astronomy has a short but storied history: from Karl Jansky’s serendipitous observation of the centre of the Milky Way, which outshines our Sun in the radio regime, in 1933, to Grote Reber’s hand-built back-yard radio antenna in 1937, which successfully detected radio emission from the Milky Way itself, to such monumental projects as the Square Kilometer Array and its multiple pathfinders, it has led to countless discoveries and the opening of a truly new window on the Universe. The work presented in this thesis is a contribution to this discipline - the culmination of three years of study, which is a rather short time to get a firm grasp of radio interferometry both in theory and in practice. The need for robust, automated methods - which are improving daily, thanks to the tireless labour of the scientists in the field - is becoming ever stronger as the SKA approaches, looming large on the horizon; but even today, in the precursor era of LOFAR, MeerKAT and other pathfinders, it is keenly felt. When I started my doctorate, the sheer scale of the task at hand felt overwhelming - to actually be able to contribute to its resolution seemed daunting indeed! Thankfully, as the saying goes, no society sets for itself material goals which it cannot achieve. This thesis took place at an exciting time for radio interferometry: at the start of my doctorate, the LOFAR international stations were - to my knowledge - only beginning to be used, and even then, only tentatively; MeerKAT had not yet shown its first light; the techniques used throughout my work were still being developed. At the time of writing, great strides have been made. One of the greatest technical challenges of LOFAR - imaging using the international stations - is starting to become reality. This technical challenge is the key problem that this thesis set out to address. While we only achieved partial success so far, it is a testament to the difficulty of the task that it is not yet truly resolved. 
One of the major results of this thesis is a model of a bright resolved source near a famous extragalactic field: properly modeling this source not only allows the use of international LOFAR stations, but also grants deeper access to the extragalactic field itself, which is otherwise polluted by the 3C source’s sidelobes. This result was only achieved thanks to the other major result of this thesis: the development of a theoretical framework with which to better understand the effect of calibration errors on images made from interferometric data, and an algorithm to strongly mitigate them. The structure of this manuscript is as follows: we begin with an introduction to radio interferometry, LOFAR, and the emission mechanisms which dominate for our field of interest. These introductions are primarily intended to give a brief overview of the technical aspects of the data reduced in this thesis. We follow with an overview of the Measurement Equation formalism, which underpins our theoretical work. This is the keystone of this thesis. We then show the theoretical work that was developed as part of the research work done during the doctorate - which was published in Astronomy & Astrophysics. Its practical application - a quality-based weighting scheme - is used throughout our data reduction. This data reduction is the next topic of this thesis: we contextualise the scientific interest of the data we reduce, and explain both the methods and the results we achieve.
- Full Text:
- Date Issued: 2019
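The quality-based weighting scheme this abstract refers to down-weights visibilities whose calibration residuals are statistically noisier. The sketch below is a deliberate simplification (per-baseline inverse-variance weights on simulated residuals), offered only to illustrate the idea; it is not the covariance-based estimator developed in the published work:

```python
# Illustrative per-baseline inverse-variance weighting of visibilities,
# estimated from calibration residuals (corrected data minus model).
import numpy as np

rng = np.random.default_rng(1)

n_baselines, n_times = 4, 200
# Simulated complex residuals; baseline 0 is deliberately the "bad" one.
noise_levels = np.array([1.0, 0.2, 0.3, 0.25])
residuals = noise_levels[:, None] * (
    rng.normal(size=(n_baselines, n_times))
    + 1j * rng.normal(size=(n_baselines, n_times))
)

# Estimate the residual variance per baseline and weight by its inverse,
# so noisier baselines contribute less to the image.
var = np.mean(np.abs(residuals) ** 2, axis=1)
weights = 1.0 / var
weights /= weights.sum()  # normalise the weights to sum to 1

print("relative weights:", np.round(weights, 3))
```

As expected, the noisiest baseline receives the smallest relative weight.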
Statistical study of traveling ionospheric disturbances over South Africa
- Authors: Mahlangu, Daniel Fiso
- Date: 2019
- Subjects: Ionosphere -- Research , Sudden ionospheric disturbances , Gravity waves , Magnetic storms
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76387 , vital:30556
- Description: This thesis provides a statistical analysis of traveling ionospheric disturbances (TIDs) over South Africa. The velocities of the TIDs were determined from total electron content (TEC) maps using particle image velocimetry (PIV). The periods were determined using the Morlet wavelet function in a wavelet analysis. The TIDs were grouped into four categories: daytime, twilight, and nighttime TIDs, and those TIDs that occurred during magnetic storms. It was found that daytime medium scale TIDs (MSTIDs) propagated equatorward in all seasons (summer, autumn, winter, and spring), with velocities of about 114 to 213 m/s. Their maximum occurrence was in winter between 15:00 and 16:00 LT. The daytime large scale TIDs (LSTIDs) propagated equatorward with velocities of approximately 455 to 767 m/s. Their highest occurrence was in summer, between 12:00 and 13:00 LT. Most of these TIDs (about 78%) were observed during the passing of the morning solar terminator, implying that the morning terminator was more effective at instigating TIDs. Only a few nighttime TIDs were observed and therefore their behavior could not be statistically inferred. The TIDs that occurred during magnetically disturbed conditions propagated equatorward, indicating that their source mechanism was atmospheric gravity waves generated at the onset of geomagnetic storms.
- Full Text:
- Date Issued: 2019
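The Morlet-based period estimation mentioned above can be illustrated with a minimal numpy-only stand-in: convolve a simulated TID-like TEC perturbation with Morlet wavelets of trial periods and pick the period of maximum power. The sampling interval, wavelet parameter w0, and the simulated signal are illustrative assumptions, not the thesis' actual data or pipeline:

```python
# Estimate the dominant period of a simulated TID oscillation by
# scanning Morlet wavelets over trial periods (simplified illustration).
import numpy as np

dt = 30.0                           # sample interval (s)
t = np.arange(0, 6 * 3600, dt)      # 6 hours of "TEC perturbation" samples
true_period = 1800.0                # a 30-minute TID oscillation
signal = (np.sin(2 * np.pi * t / true_period)
          + 0.3 * np.random.default_rng(2).normal(size=t.size))

def morlet_power(signal, dt, period, w0=6.0):
    """Mean |convolution|^2 of the signal with a Morlet wavelet of the given period."""
    scale = period * w0 / (2 * np.pi)  # scale chosen so the carrier has this period
    tau = np.arange(-4 * scale, 4 * scale + dt, dt)
    wavelet = np.exp(1j * w0 * tau / scale) * np.exp(-0.5 * (tau / scale) ** 2)
    conv = np.convolve(signal, wavelet.conj(), mode="same")
    return float(np.mean(np.abs(conv) ** 2))

trial_periods = np.arange(600.0, 3601.0, 120.0)
power = np.array([morlet_power(signal, dt, p) for p in trial_periods])
best = trial_periods[np.argmax(power)]
print(f"dominant period ~ {best / 60:.0f} minutes")
```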
The dispersion measure in broadband data from radio pulsars
- Authors: Rammala, Isabella
- Date: 2019
- Subjects: Pulsars , Radio astrophysics , Astrophysics , Broadband communication systems
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/67857 , vital:29157
- Description: Modern day radio telescopes make use of wideband receivers to take advantage of the broadband nature of radio pulsar emission. We ask how the use of such broadband pulsar data affects the measured pulsar dispersion measure (DM). Previous works have shown that, although the exact pulsar radio emission processes are not well understood, observations reveal evidence of a possible frequency dependence of the emission altitude in the pulsar magnetosphere, a phenomenon known as radius-to-frequency mapping (RFM). This frequency dependence due to RFM can be embedded in the dispersive delay of the pulse profiles, which is normally interpreted as an interstellar effect (DM). We therefore interpret this intrinsic effect as an additional component δDM on top of the interstellar DM, and investigate how it can be statistically attributed to intrinsic profile evolution, as well as to profile scattering. We make use of Monte Carlo simulations of beam models to simulate realistic pulsar beams of various geometries, from which we generate intrinsic profiles in various frequency bands. The results show that the excess DM due to intrinsic profile evolution is more pronounced at high frequencies, whereas scattering dominates the excess DM at low frequencies. The implications of these results are presented in relation to broadband pulsar timing.
- Full Text:
- Date Issued: 2019
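The bias the abstract describes can be made concrete with the standard cold-plasma dispersion delay, t = K · DM / ν² (K ≈ 4.149 × 10³ s MHz² pc⁻¹ cm³), fitted by least squares over a set of band frequencies. The intrinsic delay term below is a hypothetical stand-in for an RFM-like effect, not the thesis' model:

```python
# How a frequency-dependent intrinsic delay biases a broadband DM fit.
import numpy as np

K = 4.14881e3  # dispersion constant, s MHz^2 pc^-1 cm^3

freqs = np.array([400.0, 800.0, 1200.0, 1600.0])  # observing frequencies (MHz)
true_dm = 30.0                                    # pc cm^-3
delays = K * true_dm / freqs**2                   # purely interstellar delays (s)

# Delays are linear in K/nu^2, so DM follows from a least-squares fit.
A = (K / freqs**2)[:, None]
dm_fit = np.linalg.lstsq(A, delays, rcond=None)[0][0]

# Adding a hypothetical intrinsic, frequency-dependent delay (an RFM-like
# power law) to the arrival times shifts the fitted DM by an excess dDM.
intrinsic = 1e-4 * (freqs / 1000.0) ** -1         # seconds, illustrative only
dm_biased = np.linalg.lstsq(A, delays + intrinsic, rcond=None)[0][0]

print(f"true DM: {true_dm}, fitted: {dm_fit:.4f}, "
      f"with intrinsic delay: {dm_biased:.4f} pc cm^-3")
```

Because the illustrative intrinsic delay grows toward lower frequencies, as the dispersive delay does, part of it is absorbed into the fitted DM, which comes out slightly above the true value.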