Systematic effects and mitigation strategies in observations of cosmic re-ionisation with the Hydrogen Epoch of Reionization Array
- Authors: Charles, Ntsikelelo
- Date: 2024
- Subjects: Cosmology , Astrophysics , Radio astronomy , Hydrogen Epoch of Reionization Array , Epoch of reionization
- Language: English
- Type: Academic theses , Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/432605 , vital:72886 , DOI 10.21504/10962/432605
- Description: The 21 cm transition from neutral hydrogen promises to be the best observational probe of the Epoch of Reionisation (EoR). It has driven the construction of a new generation of low-frequency radio interferometric arrays, including the Hydrogen Epoch of Reionization Array (HERA). The main difficulty in measuring the 21 cm signal is the presence of bright foregrounds that require very accurate interferometric calibration. However, the non-smooth instrumental response of the antenna that results from mutual coupling complicates the calibration process by introducing non-smooth calibration errors. Additionally, incomplete sky models are typically used in calibration due to the limited depth and resolution of current source catalogues. Combined with the instrumental response, the use of incomplete sky models during calibration can result in non-smooth calibration errors. These, overall, impart spectral structure on smooth foregrounds, leading to foreground power leakage into the EoR window. In this thesis we explored the use of fringe rate filters (Parsons et al., 2016) as a means to mitigate calibration errors resulting from the effects of mutual coupling and the use of an incomplete sky model during calibration. We found that the use of a simple notch filter mitigates calibration errors, reducing the foreground power leakage into the EoR window by a factor of ∼10². Thyagarajan et al. (2018) proposed the use of closure phase quantities as a means to detect the 21 cm signal, which has the advantage of being independent (to first order) of calibration errors and, therefore, bypasses the need for accurate calibration. In this thesis, we explore the impact of primary beam patterns affected by mutual coupling on the closure phase.
We found that primary beams affected by mutual coupling lead to a leakage of foreground power into the EoR window that can be up to ∼10⁴ times higher and is mainly caused by the non-smooth spectral structure of primary beam sidelobes affected by mutual coupling. This power leakage was confined to k < 0.3 pseudo h Mpc⁻¹. Lastly, we also proposed and demonstrated an analysis technique that can be used to derive a flux scale correction in post-calibrated HERA data. We found that after applying the flux scale correction to calibrated HERA data, the bandpass error reduces significantly, with an improvement of 6%. The derived flux scale correction was antenna-independent, and it can be applied to fix the overall visibility spectrum scale of H4C data post-calibration in a fashion similar to Jacobs et al. (2013). , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2024
- Full Text:
- Date Issued: 2024
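The fringe-rate filtering described above can be sketched as a simple notch filter: Fourier transform a visibility time series into the fringe-rate domain, zero the bins around zero fringe rate (where slowly varying contamination such as mutual-coupling terms tends to sit), and transform back. This is a minimal illustration with invented names, not the thesis pipeline:

```python
import numpy as np

def notch_fringe_rate_filter(vis, width=1):
    """Zero the fringe-rate bins around zero for one visibility time series.

    vis   : complex array of shape (n_times,) for a single baseline/channel.
    width : number of bins to notch on either side of zero fringe rate.
    """
    fr = np.fft.fft(vis)          # time -> fringe-rate domain
    fr[:width + 1] = 0.0          # DC bin plus `width` positive bins
    if width > 0:
        fr[-width:] = 0.0         # mirrored negative bins
    return np.fft.ifft(fr)

# A constant (zero fringe rate) offset riding on a fast fringe:
t = np.arange(256)
vis = 5.0 + np.exp(2j * np.pi * 0.25 * t)
filtered = notch_fringe_rate_filter(vis)
# The notch removes the constant offset and leaves the fast fringe intact.
```

In practice the notch width would be matched to the fringe rates actually occupied by the contaminating signal.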
Semantic segmentation of astronomical radio images: a computer vision approach
- Authors: Kupa, Ramadimetse Sydil
- Date: 2023-03-29
- Subjects: Semantic segmentation , Radio astronomy , Radio telescopes , Deep learning (Machine learning) , Image segmentation
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/422378 , vital:71937
- Description: The new generation of radio telescopes, such as MeerKAT, ASKAP (Australian Square Kilometre Array Pathfinder) and the future Square Kilometre Array (SKA), are expected to produce vast amounts of data and images in the petabyte regime. Therefore, the amount of incoming data at a specific point in time will overwhelm any traditional data analysis method currently deployed. Deep learning architectures have been applied in many fields, such as computer vision, machine vision, natural language processing, social network filtering, speech recognition, machine translation, bioinformatics, medical image analysis, and board game programs, and have produced results comparable to human expert performance. Hence, it is appealing to apply them to radio astronomy data. Image segmentation is one area where deep learning techniques are prominent. The images from the new generation of telescopes have a high density of radio sources, making it difficult to classify the sources in an image. Identifying and segmenting sources from radio images is a pre-processing step that occurs before sources are put into different classes. There is thus a need for automatic segmentation of the sources from the images before they can be classified. This work uses the U-Net architecture (originally developed for biomedical image segmentation) to segment radio sources from radio astronomical images with 99.8% accuracy. After segmenting the sources, we use OpenCV tools to detect the sources on the mask images; the detections are then translated to the original image, where borders are drawn around each detected source. This process automates and simplifies the pre-processing of images for classification tools and any other post-processing tool that requires a specific source as an input. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2023
- Full Text:
- Date Issued: 2023-03-29
Third generation calibrations for MeerKAT observations of the Saraswati Supercluster
- Authors: Kincaid, Robert Daniel
- Date: 2022-10-14
- Subjects: Square Kilometre Array (Project) , Superclusters , Saraswati Supercluster , Radio astronomy , MeerKAT , Calibration
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/362916 , vital:65374
- Description: The international collaboration of the Square Kilometre Array (SKA), one of the largest and most challenging science projects of the 21st century, will bring a revolution in radio astronomy in terms of sensitivity and resolution. The recent commissioning of several new radio instruments, combined with subsequent developments in calibration and imaging techniques, has dramatically advanced this field over the past few years, thus enhancing our knowledge of the radio universe. Various SKA pathfinders around the world have been developed (and more are planned for construction) that have laid down a firm foundation for the SKA in terms of science, while additionally giving insight into the technological requirements needed for the projected data outputs to become manageable. South Africa has recently built the new MeerKAT telescope, an SKA precursor forming an integral part of the SKA-mid component. The MeerKAT instrument has unprecedented sensitivity that can cater for the required science goals of the current and future SKA era. It is noticeable from MeerKAT and other precursors that the data produced by these instruments are significantly challenging to calibrate and image. Calibration-related artefacts intrinsic to bright sources are of major concern since they limit the Dynamic Range (DR) and image fidelity of the resulting images and cause flux suppression of extended sources. Diffuse radio sources from galaxy clusters in the form of halos, relics and, most recently, bridges on the Mpc scale are, because of their diffuse nature combined with wide field of view (FoV) observations, particularly good candidates for testing the different approaches to calibration. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-10-14
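Dynamic Range (DR), the figure of merit mentioned above, is conventionally the ratio of the image peak to the off-source noise rms. A crude sketch, using a percentile cut as a stand-in for a hand-picked off-source region (that cut is an assumption of this example, not standard practice):

```python
import numpy as np

def dynamic_range(image):
    """Peak over the rms of the faintest 90% of pixels (off-source proxy)."""
    cutoff = np.percentile(np.abs(image), 90)
    noise = image[np.abs(image) <= cutoff]   # exclude bright source pixels
    return image.max() / noise.std()

# Synthetic image: a 1 Jy point source on 1 mJy noise -> DR of order 1000
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1e-3, (100, 100))
img[50, 50] = 1.0
dr = dynamic_range(img)
```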
Accelerated implementations of the RIME for DDE calibration and source modelling
- Authors: Van Staden, Joshua
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration , Radio astronomy -- Data processing , Radio interferometers -- Data processing , Radio interferometers -- Calibration -- Data processing
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172422 , vital:42199
- Description: Second- and third-generation calibration methods filter out subtle effects in interferometer data, and therefore yield significantly higher dynamic ranges. The basis of these calibration techniques relies on building a model of the sky and corrupting it with models of the effects acting on the sources. The sensitivities of modern instruments call for more elaborate models to capture the level of detail that is required to achieve accurate calibration. This thesis implements two types of models to be used in second- and third-generation calibration. The first model implemented is shapelets, which can be used to model radio source morphologies directly in uv space. The second model implemented is Zernike polynomials, which can be used to represent the primary beam of the antenna. We implement these models in the CODEX-AFRICANUS package and provide a set of unit tests for each model. Additionally, we compare our implementations against other methods of representing these objects and instrumental effects, namely the NIFTY-GRIDDER against shapelets and a FITS-interpolation method against the Zernike polynomials. We find that, to achieve sufficient accuracy, our implementation of the shapelet model has a higher runtime than that of the NIFTY-GRIDDER. However, the NIFTY-GRIDDER cannot simulate a component-based sky model while the shapelet model can. Additionally, the shapelet model is fully parametric, which allows for integration into a parameterised solver. We find that, while having a smaller memory footprint, our Zernike model has a greater computational complexity than that of the FITS-interpolated method. However, we find that the Zernike implementation has floating-point accuracy in its modelling, while the FITS-interpolated model loses some accuracy through the discretisation of the beam.
- Full Text:
- Date Issued: 2021
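Representing a primary beam with Zernike polynomials, as described above, amounts to evaluating a weighted sum of polynomial terms on the unit disk. A toy example with four low-order Noll-normalised terms (the term selection and function name are simplifications for illustration, not the CODEX-AFRICANUS implementation):

```python
import numpy as np

def zernike_beam(coeffs, l, m):
    """Toy beam: weighted sum of four low-order Zernike terms on the unit disk.

    coeffs : weights for [piston, tilt-l, tilt-m, defocus] (Noll-normalised).
    l, m   : direction-cosine grids; the beam is set to zero where r > 1.
    """
    r2 = l**2 + m**2
    terms = np.stack([
        np.ones_like(l),              # Z(0, 0)  piston
        2.0 * l,                      # Z(1, 1)  tilt in l
        2.0 * m,                      # Z(1,-1)  tilt in m
        np.sqrt(3.0) * (2*r2 - 1),    # Z(2, 0)  defocus
    ])
    beam = np.tensordot(coeffs, terms, axes=1)   # weighted sum of terms
    return np.where(r2 <= 1.0, beam, 0.0)        # support is the unit disk

# Pure piston gives a flat beam inside the unit disk, zero outside:
l, m = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
beam = zernike_beam(np.array([1.0, 0.0, 0.0, 0.0]), l, m)
```

Because the model is a linear function of the coefficients, fitting it to a measured beam reduces to a least-squares problem.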
Parametrised gains for direction-dependent calibration
- Authors: Russeeaeon, Cyndie
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172400 , vital:42196
- Description: Calibration in radio interferometry describes the process of estimating and correcting for instrumental errors in data. Direction-Dependent (DD) calibration entails correcting for corruptions which vary across the sky. For small field of view observations, DD corruptions can be ignored, but for wide field observations it is crucial to account for them. Traditional maximum likelihood calibration is not necessarily efficient in low signal-to-noise ratio (SNR) scenarios, and this can lead to overfitting. This can bias continuum subtraction and hence restrict spectral line studies. Since DD effects are expected to vary smoothly across the sky, the gains can be parametrised as a smooth function of the sky coordinates. Hence, we implement a solver where the atmosphere is modelled using a time-variant 2-dimensional phase screen with an arbitrary known frequency dependence. We assume arbitrary linear basis functions for the gains over the phase screen. The implemented solver is optimised using the diagonal approximation of the Hessian, as shown in previous studies. We present a few simulations to illustrate the performance of the solver.
- Full Text:
- Date Issued: 2021
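The parametrised-gain idea above can be sketched directly: the phase in each direction is a linear combination of basis functions over the sky, scaled by a known frequency dependence, so only the basis coefficients need to be solved for. The names and the example basis below are illustrative assumptions, not the thesis code:

```python
import numpy as np

def screen_gains(coeffs, basis, freq_scaling):
    """Gains from a phase screen: phase = f(nu) * sum_k c_k * B_k(l, m).

    coeffs       : (n_basis,) solvable screen coefficients.
    basis        : (n_basis, n_dir) basis functions evaluated in each
                   source direction (e.g. 1, l, m for a linear screen).
    freq_scaling : (n_freq,) known frequency dependence of the phase.
    Returns complex gains of shape (n_freq, n_dir), all unit modulus.
    """
    phi = coeffs @ basis                         # screen phase per direction
    return np.exp(1j * np.outer(freq_scaling, phi))

# Example: a linear (tip/tilt) screen over three source directions
l = np.array([0.0, 0.01, -0.02])
m = np.array([0.0, -0.01, 0.03])
basis = np.vstack([np.ones_like(l), l, m])       # constant, l and m terms
freqs = np.linspace(100e6, 200e6, 4)
gains = screen_gains(np.array([0.1, 5.0, -3.0]), basis, 150e6 / freqs)
```

The 150e6/freqs scaling mimics the dispersive (1/ν) behaviour of an ionospheric delay; any known frequency dependence could be substituted.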
A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilising the tilted-ring model can be grouped into two- and three-dimensional software. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation, because beam smearing merely increases the uncertainty of the parameters rather than introducing the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is a software package for constructing parameterised models of high-resolution data cubes of rotating galaxies. It uses the tilted-ring model, and with that a combination of parameters such as surface brightness, position angle, rotation velocity and inclination, to describe galaxies. TiRiFiC works by directly fitting tilted-ring models to spectroscopic data cubes and hence is not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is indispensable as an analysis method in future H I surveys. In the current implementation, though, there are several drawbacks. The implemented optimisers search for local solutions in parameter space only, do not quantify correlations between parameters, and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, implemented in MultiNest (Feroz et al. 2008), as it allows for sampling a posterior distribution irrespective of its multimodal nature, resulting in parameter samples that correspond to the maximum in the posterior distribution. These parameter samples can be used as well to quantify correlations and find errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values.
- Full Text:
- Date Issued: 2020
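The geometry behind the tilted-ring model can be illustrated by projecting a single ring onto the sky from its radius, inclination and position angle. This is a minimal sketch of the parametrisation, not TiRiFiC's implementation:

```python
import numpy as np

def ring_on_sky(radius, inc_deg, pa_deg, n=100):
    """Project one tilted ring (a circular orbit) onto the sky plane.

    inc_deg : inclination (0 = face-on, 90 = edge-on).
    pa_deg  : position angle of the major axis on the sky.
    Returns (x, y) sky offsets in the same units as radius.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    inc, pa = np.radians(inc_deg), np.radians(pa_deg)
    xp = radius * np.cos(theta)                  # along the major axis
    yp = radius * np.sin(theta) * np.cos(inc)    # foreshortened minor axis
    x = xp * np.cos(pa) - yp * np.sin(pa)        # rotate by position angle
    y = xp * np.sin(pa) + yp * np.cos(pa)
    return x, y

# A face-on ring keeps its circular shape on the sky:
x, y = ring_on_sky(5.0, inc_deg=0.0, pa_deg=30.0)
```

A full tilted-ring model stacks many such rings, each with its own inclination and position angle, which is what lets it follow warped disks.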
A pilot wide-field VLBI survey of the GOODS-North field
- Authors: Akoto-Danso, Alexander
- Date: 2019
- Subjects: Radio astronomy , Very long baseline interferometry , Radio interferometers , Imaging systems in astronomy , Hubble Space Telescope (Spacecraft) -- Observations
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/72296 , vital:30027
- Description: Very Long Baseline Interferometry (VLBI) has significant advantages in disentangling active galactic nuclei (AGN) from star formation, particularly at intermediate to high redshift, due to its high angular resolution and insensitivity to dust. Surveys using VLBI arrays are only just becoming practical over wide areas thanks to numerous developments and innovations (such as multi-phase centre techniques) in observation and data analysis techniques. However, fully automated pipelines for VLBI data analysis are based on old software packages and are unable to incorporate new calibration and imaging algorithms. In this work, the researcher developed a pipeline for VLBI data analysis which integrates a recent wide-field imaging algorithm, RFI excision, and a purpose-built source finding algorithm specifically developed for the 64k × 64k wide-field VLBI images. The researcher used this novel pipeline to process 6% (∼9 arcmin² of the total 160 arcmin²) of the data from the CANDELS GOODS-North extragalactic field at 1.6 GHz. The milliarcsecond-scale images have an average rms of ∼10 μJy/beam. Forty-four (44) candidate sources were detected, most of which are at sub-mJy flux densities, having brightness temperatures and luminosities of >5×10⁵ K and >6×10²¹ W Hz⁻¹ respectively. This work demonstrates that automated post-processing pipelines for wide-field, uniform-sensitivity VLBI surveys are feasible and indeed made more efficient with new software, wide-field imaging algorithms and more purpose-built source-finders. This broadens the discovery space for future wide-field surveys with upcoming arrays such as the African VLBI Network (AVN), MeerKAT and the Square Kilometre Array (SKA).
- Full Text:
- Date Issued: 2019
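Brightness temperatures like those quoted above follow from the Rayleigh-Jeans relation T_b = S c² / (2 k_B ν² Ω) for a source of flux density S subtending solid angle Ω. A sketch assuming an elliptical Gaussian component (the function name and unit conventions are choices of this example, not the thesis source-finder):

```python
import numpy as np

def brightness_temperature(flux_jy, freq_hz, bmaj_mas, bmin_mas):
    """Rayleigh-Jeans brightness temperature of a Gaussian component.

    flux_jy            : integrated flux density in Jy.
    freq_hz            : observing frequency in Hz.
    bmaj_mas, bmin_mas : FWHM axes of the fitted Gaussian in mas.
    """
    k_B = 1.380649e-23                 # Boltzmann constant, J/K
    c = 2.99792458e8                   # speed of light, m/s
    mas = np.pi / 180 / 3600e3         # 1 mas in radians
    # Solid angle of an elliptical Gaussian: pi * a * b / (4 ln 2)
    omega = np.pi * (bmaj_mas * mas) * (bmin_mas * mas) / (4 * np.log(2))
    s_si = flux_jy * 1e-26             # Jy -> W m^-2 Hz^-1
    return s_si * c**2 / (2 * k_B * freq_hz**2 * omega)

# A 0.5 mJy source unresolved on 10 mas scales at 1.6 GHz:
t_b = brightness_temperature(0.5e-3, 1.6e9, 10.0, 10.0)
```

Even sub-mJy sources exceed ∼10⁶ K when confined to milliarcsecond scales, which is why VLBI detections imply AGN-like brightness temperatures.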
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the measurement set, the most common format for interferometer data, and is well integrated with Montblanc - a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor.
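The calibration problem the abstract describes - solving for per-antenna complex gains given observed and model visibilities - can be illustrated with a toy alternating least-squares solver in the spirit of the complex-optimisation framework. This is a minimal sketch (the function name and setup are illustrative, not CubiCal's API), using the closed-form per-antenna update popularised by StEFCal:

```python
import numpy as np

def solve_gains(vis, model, n_iter=200, tol=1e-10):
    """Estimate per-antenna gains g such that vis[p, q] ≈ g[p] * model[p, q] * conj(g[q]).

    Each antenna's update is a closed-form 1-D least-squares solution
    holding all other gains fixed (alternating least squares)."""
    n_ant = vis.shape[0]
    g = np.ones(n_ant, dtype=complex)
    for _ in range(n_iter):
        g_old = g.copy()
        for p in range(n_ant):
            # Predicted row-p visibilities with g[p] set to 1.
            z = model[p, :] * np.conj(g_old)
            # 1-D least squares: g[p] = <z, vis_p> / <z, z>.
            g[p] = np.vdot(z, vis[p, :]) / np.vdot(z, z)
        g = 0.5 * (g + g_old)  # damping aids convergence
        if np.linalg.norm(g - g_old) <= tol * np.linalg.norm(g):
            break
    return g
```

The solution is only defined up to a global phase, so correctness is checked by reconstructing the visibilities rather than comparing gains directly. CubiCal's actual solvers generalise this to Jones chains and alternative gain parametrisations.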
- Full Text:
- Date Issued: 2019
Statistical Analysis of the Radio-Interferometric Measurement Equation, a derived adaptive weighting scheme, and applications to LOFAR-VLBI observation of the Extended Groth Strip
- Authors: Bonnassieux, Etienne
- Date: 2019
- Subjects: Radio astronomy , Astrophysics , Astrophysics -- Instruments -- Calibration , Imaging systems in astronomy , Radio interferometers , Radio telescopes , Astronomy -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/93789 , vital:30942
- Description: J.R.R. Tolkien wrote, in his Mythopoeia, that “He sees no stars who does not see them first, of living silver made that sudden burst, to flame like flowers beneath the ancient song”. In his defense of myth-making, he formulates the argument that the attribution of meaning is an act of creation - that “trees are not ‘trees’ until so named and seen” - and that this capacity for creation defines the human creature. The scientific endeavour, in this context, can be understood as a social expression of a fundamental feature of humanity, and from this endeavour flows much understanding. This thesis, one thread among many, focuses on the study of astronomical objects as seen by the radio waves they emit. What are radio waves? Electromagnetic waves were theorised by James Clerk Maxwell (Maxwell 1864) in his great theoretical contribution to modern physics, their speed matching the speed of light as measured by Ole Christensen Rømer and, later, James Bradley. It was not until Heinrich Rudolf Hertz’s 1887 experiment that these waves were measured in a laboratory, leading to the dawn of radio communications - and, later, radio astronomy. The link between radio waves and light was one of association: light is known to behave as a wave (Young double-slit experiment), with the same propagation speed as electromagnetic radiation. Light “proper” is also known to exist beyond the optical regime: Herschel’s experiment shows that when diffracted through a prism, sunlight warms even those parts of a desk which are not observed to be lit (first evidence of infrared light). The link between optical light and unseen electromagnetic radiation is then an easy step to make, and one confirmed through countless technological applications (e.g. optical fiber to name but one). And as soon as this link is established, a question immediately comes to the mind of the astronomer: what does the sky, our Universe, look like to the radio “eye”? 
Radio astronomy has a short but storied history: from Karl Jansky’s serendipitous observation of the centre of the Milky Way, which outshines our Sun in the radio regime, in 1933, to Grote Reber’s hand-built back-yard radio antenna in 1937, which successfully detected radio emission from the Milky Way itself, to such monumental projects as the Square Kilometre Array and its multiple pathfinders, it has led to countless discoveries and the opening of a truly new window on the Universe. The work presented in this thesis is a contribution to this discipline - the culmination of three years of study, which is a rather short time to get a firm grasp of radio interferometry both in theory and in practice. The need for robust, automated methods - which are improving daily, thanks to the tireless labour of the scientists in the field - is becoming ever stronger as the SKA approaches, looming large on the horizon; but even today, in the precursor era of LOFAR, MeerKAT and other pathfinders, it is keenly felt. When I started my doctorate, the sheer scale of the task at hand felt overwhelming - to actually be able to contribute to its resolution seemed daunting indeed! Thankfully, as the saying goes, no society sets for itself material goals which it cannot achieve. This thesis took place at an exciting time for radio interferometry: at the start of my doctorate, the LOFAR international stations were - to my knowledge - only beginning to be used, and even then, only tentatively; MeerKAT had not yet shown its first light; the techniques used throughout my work were still being developed. At the time of writing, great strides have been made. One of the greatest technical challenges of LOFAR - imaging using the international stations - is starting to become reality. This technical challenge is the key problem that this thesis set out to address. While we only achieved partial success so far, it is a testament to the difficulty of the task that it is not yet truly resolved. 
One of the major results of this thesis is a model of a bright resolved source near a famous extragalactic field: properly modeling this source not only allows the use of international LOFAR stations, but also grants deeper access to the extragalactic field itself, which is otherwise polluted by the 3C source’s sidelobes. This result was only achieved thanks to the other major result of this thesis: the development of a theoretical framework with which to better understand the effect of calibration errors on images made from interferometric data, and an algorithm to strongly mitigate them. The structure of this manuscript is as follows: we begin with an introduction to radio interferometry, LOFAR, and the emission mechanisms which dominate for our field of interest. These introductions are primarily intended to give a brief overview of the technical aspects of the data reduced in this thesis. We follow with an overview of the Measurement Equation formalism, which underpins our theoretical work. This is the keystone of this thesis. We then show the theoretical work that was developed as part of the research work done during the doctorate - which was published in Astronomy & Astrophysics. Its practical application - a quality-based weighting scheme - is used throughout our data reduction. This data reduction is the next topic of this thesis: we contextualise the scientific interest of the data we reduce, and explain both the methods and the results we achieve.
- Full Text:
- Date Issued: 2019
Advanced radio interferometric simulation and data reduction techniques
- Authors: Makhathini, Sphesihle
- Date: 2018
- Subjects: Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57348 , vital:26875
- Description: This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
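The quoted dynamic range follows directly from the two numbers given in the abstract - peak flux density over image noise - and can be checked in a couple of lines:

```python
# Dynamic range of the 3C147 reduction: peak flux density / 1-sigma noise,
# using the values quoted in the abstract.
peak_jy = 22.58    # Jy/beam (3C147 flux density)
rms_jy = 2.87e-6   # Jy/beam (1-sigma image noise)

dynamic_range = peak_jy / rms_jy
print(f"{dynamic_range:.2e}")  # 7.87e+06, consistent with the quoted ~8x10^6:1
```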
- Full Text:
- Date Issued: 2018
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. 
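The averaging-as-convolution picture described above can be illustrated numerically. For a source offset from the phase centre, the visibility on a given baseline rotates in phase; boxcar-averaging it over an interval attenuates the amplitude by the Fourier transform of the boxcar, a sinc function. This is a toy sketch with an illustrative fringe rate, not the thesis's actual baseline-dependent window functions:

```python
import numpy as np

# Visibility of an off-centre source on one baseline: a phase rotating
# at some fringe rate (illustrative value, in Hz).
fringe_rate = 0.8
T = 1.0  # averaging interval in seconds
t = np.linspace(0.0, T, 1000, endpoint=False)
vis = np.exp(2j * np.pi * fringe_rate * t)

# Boxcar averaging attenuates the amplitude by |sinc(fringe_rate * T)|,
# since the Fourier transform of a boxcar window is a sinc.
averaged_amp = np.abs(vis.mean())
predicted_amp = np.abs(np.sinc(fringe_rate * T))  # np.sinc(x) = sin(pi*x)/(pi*x)
```

A source at the phase centre (fringe rate zero) would be unattenuated, while faster-fringing (longer-baseline or further-offset) sources are suppressed more - which is exactly the baseline- and position-dependent smearing the abstract describes.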
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
PyMORESANE: A Pythonic and CUDA-accelerated implementation of the MORESANE deconvolution algorithm
- Authors: Kenyon, Jonathan
- Date: 2015
- Subjects: Radio astronomy , Imaging systems in astronomy , MOdel REconstruction by Synthesis-ANalysis Estimators (MORESANE)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5563 , http://hdl.handle.net/10962/d1020098
- Description: The inadequacies of the current generation of deconvolution algorithms are rapidly becoming apparent as new, more sensitive radio interferometers are constructed. In light of these inadequacies, there is renewed interest in the field of deconvolution. Many new algorithms are being developed using the mathematical framework of compressed sensing. One such technique, MORESANE, has recently been shown to be a powerful tool for the recovery of faint diffuse emission from synthetic and simulated data. However, the original implementation is not well-suited to large problem sizes due to its computational complexity. Additionally, its use of proprietary software prevents it from being freely distributed and used. This has motivated the development of a freely available Python implementation, PyMORESANE. This thesis describes the implementation of PyMORESANE as well as its subsequent augmentation with MPU and GPGPU code. These additions accelerate the algorithm and thus make it competitive with its legacy counterparts. The acceleration of the algorithm is verified by means of benchmarking tests for varying image size and complexity. Additionally, PyMORESANE is shown to work not only on synthetic data, but on real observational data. This verification means that the MORESANE algorithm, and consequently the PyMORESANE implementation, can be added to the current arsenal of deconvolution tools.
- Full Text:
- Date Issued: 2015
Designing and implementing a new pulsar timer for the Hartebeesthoek Radio Astronomy Observatory
- Authors: Youthed, Andrew David
- Date: 2008
- Subjects: Astronomical observatories , Radio astronomy , Pulsars , Astronomical instruments , Reduced instruction set computers , Random access memory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5458 , http://hdl.handle.net/10962/d1005243 , Astronomical observatories , Radio astronomy , Pulsars , Astronomical instruments , Reduced instruction set computers , Random access memory
- Description: This thesis outlines the design and implementation of a single channel, dual polarization pulsar timing instrument for the Hartebeesthoek Radio Astronomy Observatory (HartRAO). The new timer is designed to be an improved, temporary replacement for the existing device which has been in operation for over 20 years. The existing device is no longer reliable and is difficult to maintain. The new pulsar timer is designed to provide improved functionality, higher sampling speed, greater pulse resolution, more flexibility and easier maintenance over the existing device. The new device is also designed to keep changes to the observation system to a minimum until a full de-dispersion timer can be implemented at the observatory. The design makes use of an 8-bit Reduced Instruction Set Computer (RISC) micro-processor with external Random Access Memory (RAM). The instrument includes an IEEE-488 subsystem for interfacing the pulsar timer to the observation computer system. The microcontroller software is written in assembler code to ensure optimal loop execution speed and deterministic code execution for the system. The design path is discussed and problems encountered during the design process are highlighted. Final testing of the new instrument indicates an improvement in the sampling rate of 13.6 times and a significant reduction in 60 Hz interference over the existing instrument.
- Full Text:
- Date Issued: 2008
Measuring the RFI environment of the South African SKA site
- Authors: Manners, Paul John
- Date: 2007
- Subjects: Radio telescopes , Radio telescopes -- South Africa , Radio astronomy , Radio astronomy -- South Africa , Square Kilometer Array (Spacecraft) , Radio -- Interference -- Measurement
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5474 , http://hdl.handle.net/10962/d1005259 , Radio telescopes , Radio telescopes -- South Africa , Radio astronomy , Radio astronomy -- South Africa , Square Kilometer Array (Spacecraft) , Radio -- Interference -- Measurement
- Description: The Square Kilometre Array (SKA) Project is an international effort to build the world’s largest radio telescope. It will be 100 times more sensitive than any other radio telescope currently in existence and will consist of thousands of dishes placed at baselines up to 3000 km. In addition to its increased sensitivity it will operate over a very wide frequency range (current specification is 100 MHz - 22 GHz) and will use frequency bands not primarily allocated to radio astronomy. Because of this the telescope needs to be located at a site with low levels of radio frequency interference (RFI). This implies a site that is remote and away from human activity. In bidding to host the SKA, South Africa was required to conduct an RFI survey at its proposed site for a period of 12 months. Apart from this core site, where more than half the SKA dishes may potentially be deployed, the measurement of remote sites in Southern Africa was also required. To conduct measurements at these sites, three mobile measurement systems were designed and built by the South African SKA Project. The design considerations, implementation and RFI measurements recorded during this campaign are the focus of this dissertation.
- Full Text:
- Date Issued: 2007
Modeling and measurement of torqued precession in radio pulsars
- Authors: Tiplady, Adrian John
- Date: 2005
- Subjects: Pulsars , Radio telescopes , Radio astronomy , Precession , Hartebeesthoek Radio Astronomy Observatory (HartRAO)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5475 , http://hdl.handle.net/10962/d1005260
- Description: The long term isolated pulsar monitoring program, which commenced in 1984 using the 26 m radio telescope at the Hartebeesthoek Radio Astronomy Observatory (HartRAO), has produced high resolution timing residual data over long timespans. This has enabled the analysis of observed spin down behaviour for 27 braking pulsars, most of which have dataspans longer than 14 years. The phenomenology of observed timing residuals of certain pulsars can be explained by pseudo-periodic effects such as precession. Analytic and numerical models are developed to study the kinematic and dynamic behaviour of isolated but torqued precessing pulsars. The predicted timing residual behaviour of the models is characterised, and confronted with timing data from selected pulsars. Cyclic variations in the observed timing residuals of PSR B1642-03, PSR B1323-58 and PSR B1557-50 are fitted with a torqued precession model. The phenomenology of the observed timing behaviour of these pulsars can be explained by the precession models, but precise model fitting was not possible. This is not surprising given that the complexity of the pulsar systems is not completely described by the model. The extension of the pulsar monitoring program at HartRAO is used as motivation for the design and development of a new low cost, multi-purpose digital pulsar receiver. The instrument is implemented using a hybrid filterbank architecture, consisting of an analogue frontend and digital backend, to perform incoherent dedispersion. The design of a polyphase filtering system, which will consolidate multiple processing units into a single filtering solution, is discussed for future implementation.
- Full Text:
- Date Issued: 2005
Calibration and interpretation of a 2.3 GHz continuum survey
- Authors: Greybe, Andrew
- Date: 1984
- Subjects: Radio astronomy , Astronomical observatories , Galaxies
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5502 , http://hdl.handle.net/10962/d1007210 , Radio astronomy , Astronomical observatories , Galaxies
- Description: This thesis continues the Rhodes 2.3 GHz Survey of the Southern Sky. It consists of two parts: a data processing part and an astronomical analysis part. In the data processing part the data for the regions 4HR to 15HR, -80° to -61° and 12HR to 23HR, -27° to -7° are presented in contour map format. A beam pattern of the Hartebeesthoek telescope at 13 cm is constructed from drift scans of the radio source TAU A. This is used to investigate the data filtering techniques applied to the Rhodes Survey. It is proposed that a set of widely spaced scans which have been referred to the South Celestial Pole can provide a single calibrated baselevel for the Rhodes Survey. The observing technique and the necessary reduction programs to create a coarse grid of antenna temperatures of the Southern Sky using these observations are developed. Preliminary results for this technique are presented as a map of the region 18HR to 6HR, 90° to 30° with a 5° x 5° resolution. On the astronomical side two studies are undertaken. The region 13HR to 23HR, -61° to -7° is searched for large extended areas of emission. Seven features occurring at intermediate galactic latitudes are found. They are interpreted as follows: one of them is the classical HII region surrounding the star Zeta Ophiuchi (l, b) = (6.7°, 22.4°), and the rest are combinations of thermal and nonthermal emission from galactic features. The galactic equator profile for 24° > l > -58° is studied. It is dominated by a plateau of emission for l < -26°. This is interpreted as a combination of thermal and nonthermal radiation emitted by a ring of gas symmetric about the galactic centre with a radius of 4 - 6 kpc.
- Full Text:
- Date Issued: 1984
Observation and processing of 2.3 GHz radio astronomy survey data
- Authors: Jonas, Justin Leonard
- Date: 1983 , 2013-04-15
- Subjects: Radio astronomy , Southern sky (Astronomy)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5505 , http://hdl.handle.net/10962/d1007280 , Radio astronomy , Southern sky (Astronomy)
- Description: The results of the second part of the Rhodes University Southern Sky Survey at 2.3 GHz are presented. The area surveyed extends from 12h00 to 22h00 right ascension between declinations -63º and -24º. The observation technique and data reduction processes are analyzed. Digital data processing techniques used to enhance and display the data are discussed. The results show that the Galactic emission extends as far as 40º latitude. Filamentary and loop-like structures are found superimposed on this general emission. Many of these features are as yet unidentified. A large region of emission is found to coincide with the Sco-Cen stellar association. A lower limit for the ionizing flux from the stars in the association is derived. All of the non-confused extragalactic sources with flux densities greater than 0.5 Jy are listed. The flux densities of these sources have been measured and any possible extended features are noted.
- Full Text:
- Date Issued: 1983