A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilising the tilted-ring model can be grouped into two-dimensional and three-dimensional approaches. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation because beam smearing merely increases the uncertainty of its parameters, without the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is software that constructs parameterised models of high-resolution data cubes of rotating galaxies. It describes galaxies using the tilted-ring model, parameterised by quantities such as surface brightness, position angle, rotation velocity and inclination. TiRiFiC works by fitting tilted-ring models directly to spectroscopic data cubes and is hence not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is indispensable as an analysis method for future H I surveys. The current implementation, though, has several drawbacks: the implemented optimisers search only for local solutions in parameter space, do not quantify correlations between parameters, and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, as implemented in MultiNest (Feroz et al. 2008), which allows a posterior distribution to be sampled irrespective of its multimodal nature, yielding parameter samples that correspond to the maxima of the posterior distribution. These parameter samples can also be used to quantify correlations and to find errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values.
- Full Text:
- Date Issued: 2020
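The sampling idea the abstract describes can be illustrated with a toy model. The sketch below uses a plain Metropolis sampler (not MultiNest's nested sampling, and not TiRiFiC itself) to fit a flat rotation curve v_obs = v_rot · sin(i); all parameter names and values are invented for illustration. The posterior samples yield both single-parameter errors and the well-known rotation-velocity/inclination degeneracy as a correlation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "observed" flat rotation curve: v_obs = v_rot * sin(i) + noise.
# v_rot (km/s) and inclination i (rad) play the role of ring parameters.
true_vrot, true_inc = 120.0, np.radians(60.0)
sigma = 5.0
v_obs = true_vrot * np.sin(true_inc) + rng.normal(0.0, sigma, 20)

def log_posterior(theta):
    vrot, inc = theta
    if not (0.0 < vrot < 500.0 and 0.0 < inc < np.pi / 2):
        return -np.inf                      # flat priors over physical ranges
    resid = v_obs - vrot * np.sin(inc)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(start, steps, scale):
    """Plain Metropolis random walk; the chain traces the posterior."""
    chain = [np.asarray(start, dtype=float)]
    logp = log_posterior(chain[-1])
    for _ in range(steps):
        prop = chain[-1] + rng.normal(0.0, scale)
        lp = log_posterior(prop)
        if np.log(rng.uniform()) < lp - logp:
            chain.append(prop)
            logp = lp
        else:
            chain.append(chain[-1])
    return np.array(chain)

chain = metropolis([100.0, np.radians(45.0)], 20000, [4.0, 0.02])[5000:]
vrot_err = chain[:, 0].std()                        # error on a single parameter
corr = np.corrcoef(chain[:, 0], chain[:, 1])[0, 1]  # vrot-inclination degeneracy
```

Because only the product v_rot · sin(i) is constrained, the samples lie on a ridge and `corr` comes out strongly negative, which is exactly the kind of correlation information the local optimisers in the current implementation cannot provide.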
A study of why some physics concepts in the South African Physical Science curriculum are poorly understood, in order to develop a targeted action-research intervention for Newton’s second law
- Authors: Cobbing, Kathleen Margaret
- Date: 2020
- Subjects: Physics -- Study and teaching (Secondary) -- South Africa , Physics -- Examinations, questions, etc. -- South Africa , Motion -- Study and teaching (Secondary) -- South Africa
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146903 , vital:38575
- Description: Globally, many students show a poor understanding of concepts in high school physics and lack the necessary problem-solving skills that the course demands. The application of Newton’s second law was found to be particularly problematic through document analysis of South African examination feedback reports, as well as from an analysis of the physics examinations at a pair of well-resourced South African independent schools that follow the Independent Examination Board curriculum. Through an action-research approach, a resource for use by students was designed and modified to improve students’ understanding of this concept, while modelling problem-solving methods. The resource consisted of brief revision notes, worked examples and scaffolded exercises. The design of the resource was influenced by the theory of cognitive apprenticeship, cognitive load theory and conceptual change theory. One of the aims of the resource was to encourage students to translate between the different representations of a problem situation: symbolic, abstract, model and concrete. The impact of this resource was evaluated at a pair of schools using a mixed-methods approach, incorporating pre- and post-tests for quantitative assessment, qualitative student evaluations and the analysis of examination scripts. There was an improvement from pre- to post-test for all four iterations of the intervention, and these improvements were shown to be statistically significant. The use of the resource led to an increase in the quality and quantity of diagrams drawn by students in subsequent assessments.
- Full Text:
- Date Issued: 2020
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already evident from MeerKAT and other precursors, the volumes of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this, owing to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform a statistical analysis of the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios in which it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of RFI, and the solution interval employed during radio interferometric calibration. Hence, we investigate the effects of solution intervals and the factors to consider when selecting adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
- Full Text:
- Date Issued: 2020
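The robustness mechanism described above can be sketched in a few lines: a Student's t likelihood, solved here by iteratively reweighted least squares, down-weights large residuals that would drag a Gaussian (least-squares) estimate. This scalar toy problem is illustrative only and is unrelated to CubiCal's actual Wirtinger-derivative solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gain-estimation problem: data = true_gain + noise, with a few
# strong outliers standing in for low-level RFI / unmodelled flux.
true_gain = 1.0
data = true_gain + rng.normal(0.0, 0.05, 200)
data[:10] += 5.0                           # 5% of the samples corrupted

# Gaussian (least-squares) estimate: the plain mean, dragged by the outliers.
gauss_est = float(data.mean())

def student_t_mean(x, v=2.0, iters=50):
    """Student's t location estimate via iteratively reweighted least squares.
    The weight (v+1)/(v + r^2/s^2) shrinks as the residual r grows."""
    mu, s2 = float(np.median(x)), float(x.var())
    for _ in range(iters):
        w = (v + 1.0) / (v + (x - mu) ** 2 / s2)
        mu = float(np.sum(w * x) / np.sum(w))
        s2 = float(np.sum(w * (x - mu) ** 2) / x.size)
    return mu

robust_est = student_t_mean(data)
```

With 5% of the samples corrupted, the plain mean lands near 1.25 while the Student's t estimate stays close to the true gain of 1.0, mirroring the reduced flux suppression reported for the robust solver.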
Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view, which states that our experience of time is only an illusion. The isolated-Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks the potential influence of interactions. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. In aid of our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several such theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that their accounts of time are unified in a more fundamental theory. An analytic treatment of the CPI, incorporating two different clock choices, and a qualitative analysis both confirm that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the results of the toy models and the qualitative argument, respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal intervention between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which remains a pressing problem in explaining our experience of time.
- Full Text:
- Date Issued: 2020
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as on quartz annealed at 800 and 1000 °C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72 °C when measured at 1 °C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and a kinetic analysis were carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for the sample annealed at 800 °C for 1 hour, we observe an inverse thermal quenching behaviour for peak I. We demonstrate that a peak affected by such an inverse thermal-quenching-like behaviour can still show the effect of thermal quenching when the irradiation dose is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II, and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a certain temperature, were exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, for the various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
- Full Text:
- Date Issued: 2020
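A first-order glow peak of the kind attributed to peak I follows the standard Randall-Wilkins expression and can be simulated directly. The trap depth E and frequency factor s below are assumed illustrative values, not parameters fitted in the thesis; they are chosen so that the peak lands near the measured 68-72 °C range at a 1 °C s⁻¹ heating rate:

```python
import numpy as np

# Randall-Wilkins first-order TL glow curve:
#   I(T) = n0 * s * exp(-E/kT) * exp(-(s/beta) * ∫_{T0}^{T} exp(-E/kT') dT')
k = 8.617e-5            # Boltzmann constant, eV/K
E, s = 0.9, 1e12        # trap depth (eV) and frequency factor (1/s) -- assumed
beta = 1.0              # heating rate, K/s (matches 1 °C/s in the measurements)
n0 = 1.0                # initial trapped-charge population, normalised

T = np.linspace(300.0, 450.0, 3000)        # temperature ramp, K
boltz = np.exp(-E / (k * T))

# Cumulative trapezoid rule for the exponent integral.
integral = np.concatenate(([0.0], np.cumsum(
    0.5 * (boltz[1:] + boltz[:-1]) * np.diff(T))))

I = n0 * s * boltz * np.exp(-(s / beta) * integral)
T_peak = float(T[np.argmax(I)])            # glow-peak temperature, K
```

With these values the peak maximum falls around 347 K (about 74 °C): the intensity first rises as thermal release accelerates, then collapses as the trap empties, which is the characteristic asymmetric first-order glow-peak shape referred to in the kinetic analysis.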
Finite precision arithmetic in Polyphase Filterbank implementations
- Authors: Myburgh, Talon
- Date: 2020
- Subjects: Radio interferometers , Interferometry , Radio telescopes , Gate array circuits , Floating-point arithmetic , Python (Computer program language) , Polyphase Filterbank , Finite precision arithmetic , MeerKAT
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146187 , vital:38503
- Description: MeerKAT is the most sensitive radio telescope in its class, and it is important that systematic effects do not limit the dynamic range of the instrument, preventing this sensitivity from being harnessed for deep integrations. During commissioning, spurious artefacts were noted in the MeerKAT passband, and the root cause was attributed to systematic errors in the digital signal path. Finite precision arithmetic used by the Polyphase Filterbank (PFB) was one of the main factors contributing to the spurious responses, together with bugs in the firmware. This thesis describes a software PFB simulator that was built to mimic the MeerKAT PFB and allow investigation into the origin and mitigation of the effects seen on the telescope. This simulator was used to investigate the effects on signal integrity of various rounding techniques, overflow strategies and dual-polarisation processing in the PFB. Applying the simulator to a number of different signal levels, bit widths and algorithmic scenarios gave insight into how the periodic dips occurring in the MeerKAT passband resulted from an implementation using an inappropriate rounding strategy. It further indicated how to select the best strategy for preventing overflow while maintaining high quantisation efficiency in the FFT. This practice of simulating the design behaviour of the PFB independently of the tools used to design the DSP firmware is a step towards an end-to-end simulation of the MeerKAT system (or any radio telescope using finite precision digital signal processing systems). This would be useful for design, diagnostics, signal analysis and prototyping of the overall instrument.
- Full Text:
- Date Issued: 2020
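The kind of rounding-strategy bias such a simulator investigates can be reproduced in a few lines. This sketch is not the MeerKAT PFB implementation; the test signal and bit width are arbitrary. It compares truncation (round toward minus infinity), which injects a systematic half-LSB offset, against round-half-to-even, which is unbiased on average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Half-scale test tone plus noise, standing in for data leaving an FFT stage.
x = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(4096)) + rng.normal(0.0, 0.05, 4096)

BITS = 8
scale = 2.0 ** (BITS - 1)

def quantize(signal, mode):
    """Quantise to BITS-bit signed fixed point with the chosen rounding mode."""
    y = signal * scale
    if mode == "truncate":
        y = np.floor(y)        # round toward -inf: simply drops the low bits
    else:
        y = np.rint(y)         # round half to even: no systematic offset
    return np.clip(y, -scale, scale - 1) / scale   # saturate, rescale

bias_trunc = float(np.mean(quantize(x, "truncate") - x))
bias_even = float(np.mean(quantize(x, "round_even") - x))
```

Truncation shows a bias of about -0.5 LSB (-0.004 at 8 bits); accumulated through the many arithmetic stages of a filterbank, an offset of this kind is the sort of systematic error that surfaces as passband artefacts, whereas the round-to-even bias sits at the noise floor.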
Modelling and investigating primary beam effects of reflector antenna arrays
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (such as electromagnetic and optical simulations) have been used to model it. The cos³ function, the Jacobi-Bessel pattern, characteristic basis function patterns (CBFPs) and the Cassbeam software (which uses optical ray tracing with antenna parameters) have also been used to model it. These models capture the basic PB effects. Real-life PB patterns differ from these models due to various subtle effects, such as mechanical deformation and effects introduced into the PB by standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference and other measurement errors. In our approach, we use principal component analysis and Zernike polynomials to model the PBs of the Very Large Array (VLA) and MeerKAT telescopes from their holography-measured data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays. We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) based on holography-measured data. Analysis of the VLA measured PBs revealed that the beam sizes (and centre offset positions) show a fast oscillation superimposed on a slow trend with frequency. We term this spectral behaviour the ripple, or characteristic, effect. Most existing PB models used in calibrating VLA data do not incorporate these direction-dependent effects (DDEs). We investigate, via simulations, the impact of using PB models that ignore this DDE in continuum calibration and imaging. Our experiments show that, although these effects translate into errors of less than 10% in source flux recovery, they do lead to a 30% reduction in dynamic range. Preparing data for H I and radio halo (faint emission) science analysis requires foreground subtraction of bright (continuum) sources. We therefore also investigate the impact of using beam models that ignore these ripple effects during continuum subtraction. The results show that using PB models which completely ignore the ripple effects in continuum subtraction can translate into errors of more than 30% in the recovered H I spectral properties. This implies that science inferences drawn from such results for H I studies could have errors of the same magnitude.
- Full Text:
- Date Issued: 2020
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (like electromagnetic and optical simulations) have been used to model it. The cos³ function, Jacobi-Bessel pattern, characteristic basis function patterns (CBFP) and Cassbeam software (which uses optical ray-tracing with antenna parameters) have also been used to model it. These models capture the basic PB effects. Real-life PB patterns differ from these models due to various subtle effects such as mechanical deformation and effects introduced into the PB due to standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference, and other measurement errors. In our approach, we use principal component analysis and Zernike polynomials to model the PBs of the Very Large Array (VLA) and the MeerKAT telescopes from their holography measured data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays. We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) based on holography measured data. Analysis of the VLA measured PBs revealed that the graph of the beam sizes (and centre offset positions) have a fast oscillating trend (superimposed on a slow trend) with frequency. This spectral behaviour we termed ripple or characteristic effects. Most existing PB models that are used in calibrating VLA data do not incorporate these direction dependent effects (DDEs). 
We investigate the impact of using PB models that ignore this DDE in continuum calibration and imaging via simulations. Our experiments show that, although these effects translate into less than 10% errors in source flux recovery, they do lead to a 30% reduction in the dynamic range. Preparing data for H I and radio halo (faint emission) science analysis requires foreground subtraction of bright (continuum) sources. We investigate the impact of using beam models that ignore these ripple effects during continuum subtraction. The results show that using PB models which completely ignore the ripple effects in continuum subtraction could translate into errors of more than 30% in the recovered H I spectral properties. This implies that science inferences drawn from the results of H I studies could have errors of the same magnitude.
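The PCA-based compression described above can be sketched in a few lines; the beam data, shapes and component count below are toy assumptions, not the VLA/MeerKAT holography pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fake "holography" data: 100 frequency channels, each a flattened 32x32
# beam built from a few smooth underlying patterns plus noise.
x = np.linspace(-1, 1, 32)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2
patterns = np.stack([np.exp(-4 * r2), np.exp(-8 * r2) * xx, np.exp(-8 * r2) * yy])
coeffs = rng.normal(size=(100, 3))
beams = coeffs @ patterns.reshape(3, -1) + 0.01 * rng.normal(size=(100, 1024))

# PCA via SVD of the mean-subtracted data matrix.
mean = beams.mean(axis=0)
u, s, vt = np.linalg.svd(beams - mean, full_matrices=False)

# Keep only k principal components: compression factor ~ 1 - k/100.
k = 3
recon = mean + (u[:, :k] * s[:k]) @ vt[:k]

# Relative reconstruction error (Frobenius norm).
err = np.linalg.norm(beams - recon) / np.linalg.norm(beams)
print(f"relative reconstruction error with {k} components: {err:.3f}")
```

With smooth beams dominated by a few modes, a handful of components reconstructs the stack to within a few percent, which is the kind of trade-off the text quotes.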
- Full Text:
- Date Issued: 2020
Observations of diffuse radio emission in the Abell 773 galaxy cluster
- Authors: Sichone, Gift L
- Date: 2020
- Subjects: Galaxies -- Clusters -- Observations , Radio astronomy -- Observations , Astrophysics -- South Africa , Westerbork Radio Telescope , A773 galaxy cluster , Astronomy -- Observations , Radio sources (Astronomy)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/144945 , vital:38394
- Description: In this thesis, we present 18 and 21 cm observations of the A773 galaxy cluster observed with the Westerbork radio telescope. The final 18 and 21 cm images achieve noise levels of 0.018 mJy beam⁻¹ and 0.025 mJy beam⁻¹ respectively. After subtracting the compact sources, the low-resolution images show evidence of a radio halo at 18 cm, whereas its presence is more uncertain in the low-resolution 21 cm images due to the presence of residual sidelobes from bright sources. In the joint analysis of both frequencies, the radio halo has a 5.37 arcmin² area with a 6.76 mJy flux density. Further observations and analysis are, however, required to fully characterize its properties.
- Full Text:
- Date Issued: 2020
Observations of diffuse radio emission in the Perseus Galaxy Cluster
- Authors: Mungwariri, Clemence
- Date: 2020
- Subjects: Galaxies -- Clusters , Radio sources (Astronomy) , Radio interferometers , Perseus Galaxy Cluster , Diffuse radio emission
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143325 , vital:38233
- Description: In this thesis we analysed Westerbork observations of the Perseus Galaxy Cluster at 1380 MHz. Observations consist of two different pointings, covering a total of ∼ 0.5 square degrees, one including the known mini halo and the source 3C 84, the other centred on the source 3C 83.1 B. We obtained images with 83 μJy beam⁻¹ and 240 μJy beam⁻¹ noise rms for the two pointings respectively. We achieved a 60 000:1 dynamic range in the image containing the bright 3C 84 source. We imaged the mini halo surrounding 3C 84 at high sensitivity, measuring its diameter to be ∼140 kpc and its power 4 × 10²⁴ W Hz⁻¹. Its morphology agrees quite well with that observed at 240 MHz (e.g. Gendron-Marsolais et al., 2017). We measured the flux density of 3C 84 to be 20.5 ± 0.4 Jy at the 2007 epoch, consistent with a factor of ∼2 increase since the 1960s.
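A power quoted in W Hz⁻¹ follows from a measured flux density and the source distance via the standard relation P = 4πD²S (k-correction neglected at low redshift); a minimal sketch with illustrative numbers, where both the flux density and the ∼75 Mpc Perseus distance are assumptions of this example, not the thesis measurements:

```python
import math

MPC_M = 3.0857e22      # metres per megaparsec
JY_SI = 1e-26          # W m^-2 Hz^-1 per jansky

def radio_power(flux_jy, distance_mpc):
    """Monochromatic radio power in W/Hz from flux density and distance."""
    d_m = distance_mpc * MPC_M
    return 4.0 * math.pi * d_m**2 * flux_jy * JY_SI

# A hypothetical ~6 Jy diffuse source at ~75 Mpc gives a power of
# order 10^24 W/Hz, the scale quoted for the mini halo.
p = radio_power(6.0, 75.0)
print(f"{p:.2e} W/Hz")
```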
- Full Text:
- Date Issued: 2020
Thermoluminescence and phototransferred thermoluminescence of synthetic quartz
- Authors: Dawam, Robert Rangmou
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/145849 , vital:38472
- Description: The main focus of this investigation is the thermoluminescence and phototransferred thermoluminescence of synthetic quartz. Thermoluminescence was one of the tools used in characterising the electron trap parameters. Samples of quartz annealed at various temperatures up to 900 °C, as well as unannealed samples, were used. The thermoluminescence glow curves, measured at 1 °C s⁻¹ following beta irradiation to 40 Gy, consist of a main peak at 70 °C and secondary peaks at 110, 180 and 310 °C for the samples annealed at 500 °C and the unannealed samples. In comparison, the thermoluminescence glow curve for the sample annealed at 900 °C has a main peak at 86 °C and secondary ones at 170 and 310 °C. Kinetic analysis was carried out only on the main peak in each case. The activation energy was found to decrease with increasing annealing temperature. The samples annealed at 500 °C and the unannealed samples were found to be affected by thermal quenching, while the sample annealed at 900 °C showed inverse quenching for an irradiation dose of 40 Gy. However, when the dose was reduced to 3 Gy, the effects of thermal quenching were manifested. The activation energy of thermal quenching was also found to decrease with increasing annealing temperature. Thermally assisted optically stimulated luminescence measurements were carried out using continuous-wave optically stimulated luminescence (CW-OSL). The samples studied were annealed at 500 °C for 10 minutes, at 900 °C for 10, 30 or 60 minutes, or at 1000 °C for 10 minutes prior to use. The CW-OSL is stimulated using 470 nm blue LEDs at sample temperatures between 30 and 200 °C, and is measured after preheating to either 300 or 500 °C. When the integrated OSL intensity is plotted as a function of measurement temperature, the intensity goes through a peak. The increase in OSL intensity as a function of temperature is attributed to thermal assistance and the decrease to thermal quenching. The kinetic parameters were evaluated by fitting the experimental data. 
The values of the activation energy of thermal quenching are the same within experimental uncertainties for all experimental conditions. This shows that the annealing temperature, the duration of annealing and the irradiation dose have a negligible influence on the recombination site of the luminescence measured with OSL. Phototransferred thermoluminescence (PTTL) induced from annealed samples using 470 nm blue light was also investigated. The quartz samples were annealed at 500 °C for 10 minutes, at 900 °C for 10, 30 or 60 minutes, or at 1000 °C for 10 minutes prior to use. The glow curves of conventional TL measured at 1 °C s⁻¹ following irradiation to 200 Gy show six peaks in each case, labelled I-VI for ease of reference, whereas peaks observed under PTTL are referred to as A1 onwards. Only the first three peaks were reproduced under phototransfer for the samples annealed at 900 °C for 60 minutes and at 1000 °C for 10 minutes. Interestingly, for the intermediate annealing duration of 30 minutes, the only peak that appears under phototransfer is A1. For quartz annealed at 900 °C for 10 minutes, the PTTL appears as long as the preheating temperature does not exceed 560 °C. For all other annealing temperatures, PTTL only appears for preheating to 450 °C and below. This shows that the occupancy of deep electron traps at temperatures beyond 450 °C or 560 °C is low. The activation energies for peaks A1, A2 and A3 were calculated. The PTTL peaks were studied for thermal quenching, and peaks A1 and A3 were found to be affected. The activation energies for thermal quenching were determined as 0.62 ± 0.04 eV and 0.65 ± 0.02 eV for peaks A1 and A3 respectively. The experimental dependence of the PTTL intensity on illumination time is modelled using sets of coupled linear differential equations based on systems of donors and acceptors whose number is determined by the preheating temperature.
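Thermal quenching of this kind is commonly described by the Mott-Seitz efficiency formula η(T) = 1/(1 + C·exp(−W/kT)); a minimal sketch using the W ≈ 0.62 eV value quoted above, with the dimensionless constant C assumed since it is not given here:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def quenching_efficiency(temp_k, w_ev, c):
    """Mott-Seitz luminescence efficiency: 1 / (1 + C * exp(-W / kT))."""
    return 1.0 / (1.0 + c * math.exp(-w_ev / (K_B_EV * temp_k)))

# With W = 0.62 eV and an assumed C = 1e7, the efficiency falls steeply
# as the sample heats, i.e. high-temperature TL peaks lose intensity.
for t_c in (50, 150, 250):
    t_k = t_c + 273.15
    print(f"{t_c:3d} C  ->  efficiency {quenching_efficiency(t_k, 0.62, 1e7):.3f}")
```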
- Full Text:
- Date Issued: 2020
Accelerated implementations of the RIME for DDE calibration and source modelling
- Authors: Van Staden, Joshua
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration , Radio astronomy -- Data processing , Radio interferometers -- Data processing , Radio interferometers -- Calibration -- Data processing
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172422 , vital:42199
- Description: Second- and third-generation calibration methods filter out subtle effects in interferometer data, and therefore yield significantly higher dynamic ranges. The basis of these calibration techniques relies on building a model of the sky and corrupting it with models of the effects acting on the sources. The sensitivities of modern instruments call for more elaborate models to capture the level of detail required to achieve accurate calibration. This thesis implements two types of models to be used in second- and third-generation calibration. The first model implemented is shapelets, which can be used to model radio source morphologies directly in uv space. The second model implemented is Zernike polynomials, which can be used to represent the primary beam of the antenna. We implement these models in the CODEX-AFRICANUS package and provide a set of unit tests for each model. Additionally, we compare our implementations against other methods of representing these objects and instrumental effects, namely the NIFTY-GRIDDER against shapelets and a FITS-interpolation method against the Zernike polynomials. We find that, to achieve sufficient accuracy, our implementation of the shapelet model has a higher runtime than the NIFTY-GRIDDER. However, the NIFTY-GRIDDER cannot simulate a component-based sky model while the shapelet model can. Additionally, the shapelet model is fully parametric, which allows for integration into a parameterised solver. We find that, while having a smaller memory footprint, our Zernike model has a greater computational complexity than the FITS-interpolated method. However, the Zernike implementation has floating-point accuracy in its modelling, while the FITS-interpolated model loses some accuracy through the discretisation of the beam.
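A minimal sketch of evaluating Zernike polynomials from their standard radial-polynomial definition on the unit disk (this is an illustration, not the CODEX-AFRICANUS implementation):

```python
import math

def zernike_radial(n, m, rho):
    """Radial part R_n^m of the Zernike polynomial on the unit disk."""
    m = abs(m)
    total = 0.0
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        total += c * rho ** (n - 2 * k)
    return total

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m(rho, theta); cosine terms for m >= 0."""
    r = zernike_radial(n, m, rho)
    return r * (math.cos(m * theta) if m >= 0 else math.sin(-m * theta))

# Sanity check against a closed form: Z_2^0 = 2*rho^2 - 1 (defocus),
# so Z_2^0(0.5, 0) = 2*0.25 - 1 = -0.5.
print(zernike(2, 0, 0.5, 0.0))
```

A beam model is then a linear combination of such terms over the aperture, which is what makes the representation compact and fully parametric.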
- Full Text:
- Date Issued: 2021
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors, such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science just waiting for discovery is already being generated, and these instruments will produce far more data than has ever been collected before. However, with every new instrument the data rates grow to unprecedented quantities, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by a scientist who lacks formal training in software development, resulting in the software not progressing beyond a prototype stage in quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio astronomy pipelines since the inception of the field in the 1940s. There, the steps required to create a radio image are investigated. We used the lessons learned to identify patterns in the challenges experienced, and the solutions created to address these over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, which is a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs properly, with all items alongside one another on the same system. Where required and possible, bugs and portability issues were fixed and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases. 
The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bioinformatics scientists. CommonWL is already supported by a wide range of tools, among them schedulers, visualisers and editors. Consequently, when a pipeline is made with CommonWL, it can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional, applying a generative adversarial network based on deep learning techniques to perform the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. It is shown that this method is able to perform the restoration and matches the performance of a single clean cycle. In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest that a smaller scope should be the starting point for further studies, and that optimising the training of the neural network could produce the desired results.
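The forward model behind such a dirty-image/sky-model training pair is, to first order, the true sky convolved with the point spread function; the sketch below uses a toy sky and a toy PSF, not the ASTRODECONV2019 generation pipeline:

```python
import numpy as np

n = 64
sky = np.zeros((n, n))
sky[20, 20] = 1.0          # two point sources
sky[40, 45] = 0.5

# Toy PSF with sidelobes, centred in the image.
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.sinc(xx / 8.0) * np.sinc(yy / 8.0)

# Circular convolution via FFTs, with the PSF peak moved to the origin.
dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))

# Each source reappears at its position, scaled by the PSF peak (1.0 here),
# plus sidelobe contamination from the other source.
print(dirty[20, 20], dirty[40, 45])
```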
- Full Text:
- Date Issued: 2021
Parametrised gains for direction-dependent calibration
- Authors: Russeeaeon, Cyndie
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172400 , vital:42196
- Description: Calibration in radio interferometry describes the process of estimating and correcting for instrumental errors in data. Direction-Dependent (DD) calibration entails correcting for corruptions which vary across the sky. For small field-of-view observations, DD corruptions can be ignored, but for wide-field observations it is crucial to account for them. Traditional maximum-likelihood calibration is not necessarily efficient in low signal-to-noise ratio (SNR) scenarios, and this can lead to overfitting. This can bias continuum subtraction and hence restrict spectral-line studies. Since DD effects are expected to vary smoothly across the sky, the gains can be parametrised as a smooth function of the sky coordinates. Hence, we implement a solver where the atmosphere is modelled using a time-variant 2-dimensional phase screen with an arbitrary known frequency dependence. We assume arbitrary linear basis functions for the gains over the phase screen. The implemented solver is optimised using the diagonal approximation of the Hessian, as shown in previous studies. We present a few simulations to illustrate the performance of the solver.
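The parametrisation described above can be sketched as a linear combination of basis functions of the sky coordinates; the low-order polynomial basis and the ν⁻¹ (ionospheric-like) frequency scaling below are illustrative assumptions, not the thesis solver:

```python
import numpy as np

def phase_screen(coeffs, l, m):
    """Phase from a low-order polynomial basis {1, l, m, l*m} over (l, m)."""
    basis = np.stack([np.ones_like(l), l, m, l * m])
    return np.tensordot(coeffs, basis, axes=1)

def gains(coeffs, l, m, freq_hz, ref_freq_hz=1.4e9):
    """Complex DD gain with an assumed nu^-1 frequency dependence."""
    phi = phase_screen(coeffs, l, m) * (ref_freq_hz / freq_hz)
    return np.exp(1j * phi)

# Four source directions, one time slot: the solver would fit the four
# basis coefficients rather than a free gain per direction.
l = np.array([0.0, 0.01, -0.02, 0.03])
m = np.array([0.0, -0.01, 0.02, 0.01])
g = gains(np.array([0.1, 5.0, -3.0, 40.0]), l, m, 1.4e9)
print(np.abs(g))   # unit-modulus gains: a pure phase corruption
```

Reducing many per-direction gains to a few basis coefficients is what makes the problem better conditioned at low SNR.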
- Full Text:
- Date Issued: 2021
Night-time gravity waves detected with multi-frequency airglow imager
- Authors: Machubeng, Karabo Pebane
- Date: 2021-04
- Subjects: Gravity waves , Airglow , Gravity waves -- Seasonal variations , All Sky Imager
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178341 , vital:42931
- Description: This thesis presents the statistics of atmospheric gravity waves (AGWs) observed in the OI 557.7 nm emission at ∼97 km altitude using an all-sky imager based in Sutherland, South Africa (32.37° S, 20.81° E) in the year 2017. The wavelengths were determined using the propagation vector method, the velocity was determined using cross-correlation of 1D FFTs, and the period was determined using the equation that relates wavelength and velocity. It was found that the horizontal wavelengths in summer were almost evenly distributed between 10 and 40 km, while those in autumn, winter and spring were mostly between 10 and 30 km. The favoured speeds were between 40 and 50 m/s in autumn, as well as 30 and 50 m/s in summer, but the AGWs in winter had a bimodal speed distribution of 20 - 40 and 50 - 70 m/s. The majority of periods observed in all seasons were less than 20 minutes, with a dominant peak of 5 - 10 minutes in autumn and spring. There was no favoured propagation direction in spring, but AGWs favoured a southeastward propagation in summer, and a southward propagation in autumn and winter. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2021
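The equation relating wavelength and velocity mentioned above is simply T = λ/v; a minimal sketch, with illustrative values inside the reported ranges:

```python
def period_minutes(wavelength_km, speed_m_s):
    """Wave period in minutes from horizontal wavelength (km) and phase speed (m/s)."""
    return wavelength_km * 1000.0 / speed_m_s / 60.0

# A 25 km wave moving at 45 m/s has a period of about 9 minutes, inside
# the dominant 5 - 10 minute band reported for autumn and spring.
print(f"{period_minutes(25.0, 45.0):.1f} min")
```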
- Full Text:
- Date Issued: 2021-04
Observations of cosmic re-ionisation with the Hydrogen Epoch of Reionization Array: simulations of closure phase spectra
- Authors: Charles, Ntsikelelo
- Date: 2021-04
- Subjects: Epoch of reionization , Space interferometry , Astronomy -- Observations , Closure phase spectra
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/174470 , vital:42480
- Description: The 21 cm transition from neutral hydrogen promises to be the best observational probe of the Epoch of Reionisation. It has driven the construction of the new generation of low-frequency radio interferometric arrays, including the Hydrogen Epoch of Reionization Array (HERA). The main difficulty in measuring the 21 cm signal is the presence of bright foregrounds that require very accurate interferometric calibration. Thyagarajan et al. (2018) proposed the use of closure phase quantities as a means to detect the 21 cm signal, which has the advantage of being independent (to first order) of calibration errors and therefore bypasses the need for accurate calibration. Closure phases are, however, affected by so-called direction-dependent effects, e.g. the fact that the dishes - or antennas - of an interferometric array are not identical to each other and, therefore, yield different antenna primary beam responses. In this thesis, we investigate the impact of direction-dependent effects on closure quantities and simulate the impact that primary antenna beams affected by mutual coupling have on the foreground closure phase and its power spectrum, i.e. the power spectrum of the bispectrum phase (Thyagarajan et al., 2020). Our simulations show that primary beams affected by mutual coupling lead to an overall leakage of foreground power into the so-called EoR window, i.e. the region expected to be free of the smooth-spectrum foreground power that is normally confined to low k modes. We quantified this effect and found that the leakage is up to ~8 orders of magnitude higher than in the case of an ideal beam at k∥ > 0.5 h Mpc⁻¹. We also found that the foreground leakage is worse when edge antennas are included, as their primary beams differ more from those of antennas at the centre of the array. The leakage magnitude is worse when bright foregrounds appear in the antenna sidelobes, as expected. 
Our simulations provide a useful framework to interpret observations and assess which power spectrum region is expected to be most contaminated by foreground power leakage.
- Full Text:
- Date Issued: 2021-04
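The first-order calibration-independence of the closure phase mentioned in the abstract can be demonstrated directly: summing visibility phases around an antenna triangle makes per-antenna phase errors telescope to zero. A minimal sketch with arbitrary illustrative values (not HERA data):

```python
import cmath

# Closure phase on an antenna triangle (1,2,3): the phase of the product
# V_12 * V_23 * V_31, i.e. the sum phi_12 + phi_23 + phi_31. Antenna-based
# phase errors e_i enter each baseline as exp(i(e_i - e_j)) and cancel
# around the loop, which is why the quantity bypasses calibration.

def closure_phase(v12: complex, v23: complex, v31: complex) -> float:
    return cmath.phase(v12 * v23 * v31)

# True sky visibilities on the three baselines (arbitrary values).
v12, v23, v31 = cmath.rect(1.0, 0.3), cmath.rect(1.0, -0.7), cmath.rect(1.0, 0.5)

# Corrupt them with per-antenna phase errors e1, e2, e3.
e1, e2, e3 = 0.9, -0.4, 1.7
c12 = v12 * cmath.exp(1j * (e1 - e2))
c23 = v23 * cmath.exp(1j * (e2 - e3))
c31 = v31 * cmath.exp(1j * (e3 - e1))

# The closure phase of the corrupted data equals that of the true data.
print(abs(closure_phase(c12, c23, c31) - closure_phase(v12, v23, v31)) < 1e-9)
```

Direction-dependent effects such as mutual coupling evade this cancellation precisely because they are not antenna-based multiplicative phases, which is the point the thesis quantifies.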
The development of an ionospheric storm-time index for the South African region
- Authors: Tshisaphungo, Mpho
- Date: 2021-04
- Subjects: Ionospheric storms -- South Africa , Global Positioning System , Neural networks (Computer science) , Regression analysis , Ionosondes , Auroral electrojet , Geomagnetic indexes , Magnetic storms -- South Africa
- Language: English
- Type: thesis , text , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/178409 , vital:42937 , 10.21504/10962/178409
- Description: This thesis presents the development of a regional ionospheric storm-time model which forms the foundation of an index to provide a quick view of ionospheric storm effects over the South African mid-latitude region. The model is based on foF2 measurements from four South African ionosonde stations. The data coverage for the model development over Grahamstown (33.3°S, 26.5°E), Hermanus (34.42°S, 19.22°E), Louisvale (28.50°S, 21.20°E), and Madimbo (22.39°S, 30.88°E) is 1996-2016, 2009-2016, 2000-2016, and 2000-2016 respectively. Data from the Global Positioning System (GPS) and the radio occultation (RO) technique were used during validation. As the measure of either positive or negative storm effects, the variation of the critical frequency of the F2 layer (foF2) from the monthly median values (denoted ΔfoF2) is modeled. The modeling of ΔfoF2 is based on storm-time data only, selected by the criteria Dst ≤ -50 nT and Kp > 4. The modeling methods used in the study were artificial neural networks (ANN), linear regression (LR) and polynomial functions. The approach taken was to first test the modeling techniques on a single station before expanding the study to cover the regional aspect. The single-station model was developed from ionosonde data over Grahamstown. Model inputs related to seasonal variation, diurnal variation, geomagnetic activity and solar activity were considered. For the geomagnetic activity, three indices, namely the symmetric disturbance in the horizontal component of the Earth's magnetic field (SYM-H), the Auroral Electrojet (AE) index and the local geomagnetic index A, were included as inputs. The performance of the single-station model revealed that, of the three geomagnetic indices, the SYM-H index has the largest contribution, at 41% and 54% for the ANN and LR techniques respectively. 
The average correlation coefficient (R) for both the ANN and LR models was 0.8 when validated on selected storms falling within the period of model development. When validated using storms that fall outside the period of model development, the model gave R values of 0.6 and 0.5 for ANN and LR respectively. In addition, GPS total electron content (TEC) derived measurements were used to estimate foF2 data. This is because there are more GPS receivers than ionosonde locations, and utilising these data increases the spatial coverage of the regional model. The estimation of foF2 from GPS TEC was done at GPS-ionosonde co-locations using polynomial functions. Average R values of 0.69 and 0.65 were obtained between actual and derived ΔfoF2 over the co-locations and other GPS stations respectively. Validation of GPS TEC derived foF2 against RO data, over regions outside the ionospheric pierce point coverage of the ionosonde locations, gave R greater than 0.9 for the selected storm period of 4-8 August 2011. The regional storm-time model was then developed based on the ANN technique using the four South African ionosonde stations. Maximum and minimum R values of 0.6 and 0.5 were obtained over ionosonde and GPS locations respectively. This model forms the basis of the regional ionospheric storm-time index. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-04
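The storm-time selection and median-referencing step described in the abstract can be sketched as follows. The abstract does not state whether ΔfoF2 is an absolute or a percentage deviation, so the absolute form (in MHz) is an assumption here, and the data are synthetic:

```python
from statistics import median

# Storm-time deltafoF2 in the spirit of the thesis: the deviation of foF2
# from its monthly median, keeping only samples that satisfy the stated
# storm criteria Dst <= -50 nT and Kp > 4. The absolute (MHz) deviation is
# an assumption; the thesis may define a percentage deviation instead.

def storm_time_dfof2(samples):
    """samples: list of (foF2_MHz, Dst_nT, Kp) for one month.
    Returns the list of deviations for storm-time samples only."""
    monthly_median = median(f for f, _, _ in samples)
    return [f - monthly_median
            for f, dst, kp in samples
            if dst <= -50 and kp > 4]

data = [
    (6.1, -10, 2),   # quiet: excluded
    (5.2, -80, 6),   # storm: negative storm effect
    (7.4, -120, 7),  # storm: positive storm effect
    (6.3, -30, 3),   # quiet: excluded
    (6.5, -5, 1),    # quiet: excluded
]
print(storm_time_dfof2(data))  # one negative and one positive deviation
```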
Neutral Atomic Hydrogen in Gravitationally Lensed Systems
- Authors: Blecher, Tariq Dylan
- Date: 2021-10-29
- Subjects: Uncatalogued
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/192776 , vital:45263
- Description: Thesis (PhD) -- Faculty of Law, Law, 2021
- Full Text:
- Date Issued: 2021-10-29
On the gravitational dual to strongly coupled fluids
- Authors: Shawa, Mark Musonda Webster
- Date: 2021-10-29
- Subjects: Quantum gravity , String models , Gauge fields (Physics) , Scattering amplitude (Nuclear physics) , Quark-gluon plasma , Anti-de Sitter/Conformal Field Theory (AdS/CFT) , Gauge/gravity duality
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/192933 , vital:45280 , 10.21504/10962/192933
- Description: This thesis discusses the prospect of finding the gravitational dual to strongly coupled conformal fluids, with a special interest in the quark-gluon plasma. Such a task can be achieved by matching certain physical observables of two apparently different theories that are dually related, owing to the fact that the same string theory can be viewed in two different ways. This is particularly useful when one of the theories is intractable while its dual is manageable. We begin by postulating a particular type of gravitational theory from which we determine graviton scattering amplitudes in a special regime of high momentum. Using the gauge-gravity duality dictionary, the graviton scattering amplitudes can be mapped to stress-tensor correlation functions in the gauge theory. One of the outcomes of high-energy scattering experiments involving the quark-gluon plasma is stress-tensor correlator data. This thesis provides an algorithm for matching graviton scattering amplitudes with stress-tensor correlator data which, in principle, can be used to identify the gravitational dual to the quark-gluon plasma. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-10-29
Influence of argon ion implantation on the thermoluminescence properties of aluminium oxide
- Authors: Khabo, Bokang
- Date: 2022-04-06
- Subjects: Aluminum oxide , Thermoluminescence , Ion implantation , Kinetic analysis , Oxygen vacancies , Argon , Irradiation
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/234220 , vital:50173
- Description: The influence of argon ion implantation on the thermoluminescence (TL) properties of aluminium oxide (alumina) was investigated. Aluminium oxide (Al2O3) samples were implanted with 80 keV Ar ions. An unimplanted sample and samples implanted at fluences of 1×10¹⁴, 5×10¹⁴, 1×10¹⁵, 5×10¹⁵ and 1×10¹⁶ Ar⁺/cm² were irradiated at a dose of 40 Gy and heated at a rate of 1°C/s using a Risø reader model TL/OSL-DA-20 equipped with a Hoya U-340 filter. The thermoluminescence glow curves showed five distinct peaks, with main peaks at 178°C, 188°C, 176°C, 208°C, 216°C and 204°C for the unimplanted and implanted samples respectively. The peak positions were independent of the irradiation dose, suggesting that the samples were characterised by first-order kinetics. This was also confirmed by the TM-TSTOP analysis. It was observed that the TL intensity decreases with implantation fluence. This observation suggests that the concentration of electron traps responsible for thermoluminescence decreases with ion implantation. The decrease in electron concentration might be due to the formation of non-radiative transition bands or the creation of defect clusters and extended defects as the ion fluence increases. The Stopping and Range of Ions in Matter (SRIM) program was used to correlate the TL response of Al2O3 with defects under ion implantation. It was found that after ion implantation the number of oxygen vacancies, which are related to electron traps, is higher than the number of aluminium vacancies. Kinetic analysis was carried out using the initial rise, Chen's peak shape, various heating rates, whole glow curve, glow-curve fitting and isothermal decay methods. The activation energy was found to be around 0.8 eV and the frequency factor of the order of 10⁸ s⁻¹ regardless of the implantation fluence. This means that argon ion implantation did not affect the nature of the electron traps. 
The dosimetric features of the samples were also investigated at doses in the range 40-200 Gy. Samples generally showed a superlinear response at doses below 140 Gy and a sublinear response at doses above 160 Gy. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-06
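The initial-rise method named in the abstract exploits the fact that, for first-order kinetics, the low-temperature tail of a glow peak follows I(T) ∝ exp(-E/kT), so the slope of ln(I) against 1/(kT) yields -E. A sketch on synthetic data using the reported values E ≈ 0.8 eV and s ≈ 10⁸ s⁻¹ (the glow-curve samples are generated, not measured):

```python
import math

# Initial-rise estimate of the TL activation energy for first-order
# kinetics: in the low-temperature tail of a glow peak, I(T) ~ exp(-E/kT),
# so a straight-line fit of ln(I) versus 1/(kT) has slope -E.
# E = 0.8 eV and s = 1e8 s^-1 mirror the reported values; data are synthetic.

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_energy(temps_K, intensities):
    """Least-squares slope of ln(I) vs 1/(kT); returns E in eV."""
    xs = [1.0 / (K_B * t) for t in temps_K]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

E_true, s = 0.8, 1e8
temps = [300 + 5 * i for i in range(8)]               # low-T tail, kelvin
I = [s * math.exp(-E_true / (K_B * t)) for t in temps]
print(round(initial_rise_energy(temps, I), 3))        # → 0.8
```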
An investigation of traveling ionospheric disturbances (TIDs) in the SANAE HF radar data
- Authors: Atilaw, Tsige Yared
- Date: 2022-04-07
- Subjects: Ionospheric storms Antarctica , Radar Antarctica , Range time-intensity (RTI) , South African National Antarctic Expedition (SANAE) , Super Dual Auroral Radar Network (SuperDARN)
- Language: English
- Type: Doctoral thesis , text
- Identifier: http://hdl.handle.net/10962/232377 , vital:49986 , DOI 10.21504/10962/232377
- Description: This thesis aims to study the characteristics of traveling ionospheric disturbances (TIDs) as identified in the radar data of the South African National Antarctic Expedition (SANAE) Super Dual Auroral Radar Network (SuperDARN) radar located in Antarctica. For this project, 22 TIDs were identified from visual inspection of range time-intensity (RTI) plots of the backscattered power and Doppler velocity parameters of the SANAE radar between 2005 and 2015. These events were studied to determine their characteristics and driving mechanisms. Where good-quality data were available, the SANAE HF radar data were supplemented by data from the Halley radar, which has a large area of overlapping field of view (FOV) with the SANAE radar, and also by GPS TEC data. This provided a multi-instrument data analysis of some TID events. Different spectral analysis methods, namely the multitaper method (MTM), the fast Fourier transform (FFT) and the Lomb-Scargle periodogram, were used to obtain spectral information on the observed waves. The advantage of the multiple windowing used in MTM over the traditional windowing method was illustrated using one of the TID events. In addition, the analytic signal of the wave from the MTM method was used to estimate the instantaneous phase velocity and propagation azimuth of the wave, which tracked changes in the characteristics of the medium-scale TID (MSTID) efficiently throughout the duration of the event. This is a clear advantage over other windowing techniques. The energy contribution of this MSTID through Joule heating was estimated over the region where spectral analysis of both the SANAE and Halley data showed it to be present. The majority of the TIDs (65.4%) could be classified as MSTIDs with periods of 20–60 minutes, velocities of 50–333 m s⁻¹ and wavelengths of 129–833 km. The TID occurrence rate was high around the March equinox, with 12 of the 16 event days falling during March–May. 
March had a particularly high occurrence of TIDs (46%). The majority of the TIDs observed during this month propagated northward or southeastward. In terms of prevailing geomagnetic conditions, 6 of the 16 event days were geomagnetically quiet, while 10 occurred during geomagnetic storms and substorms. During quiet conditions, TIDs could be linked to Es and polarised electric fields in 2 of these events. The other quiet-time events could not be related to Es instability and polarised electric fields, either because their exact propagation direction could not be determined or because the quality of the Es-region scatter data was too poor for spectral analysis. The storm-/substorm-related TIDs are possibly generated through Joule heating, the Lorentz force and energetic particle precipitation. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-07
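The simplest of the spectral tools named in the abstract can be sketched as picking the dominant TID period of a backscatter time series from its discrete Fourier transform; the MTM and Lomb-Scargle refinements used in the thesis (and the analytic-signal phase tracking) are not reproduced here, and the series below is synthetic:

```python
import math, cmath

# Dominant-period estimate via a plain discrete Fourier transform: the
# frequency bin with the largest spectral amplitude fixes the period.
# This stands in for the FFT step of the thesis; the multitaper and
# Lomb-Scargle methods it also employs are beyond this sketch.

def dominant_period(series, dt_minutes):
    """Return the period (minutes) of the strongest nonzero DFT component."""
    n = len(series)
    mean = sum(series) / n
    centred = [x - mean for x in series]          # remove the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(centred))
        if abs(coeff) > best_power:
            best_k, best_power = k, abs(coeff)
    return n * dt_minutes / best_k

# A 40-minute MSTID-like oscillation sampled every 2 minutes for 160 minutes.
dt = 2.0
series = [math.sin(2 * math.pi * (i * dt) / 40.0) for i in range(80)]
print(dominant_period(series, dt))  # → 40.0
```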