Computer control of a Barry Research Chirpsounder
- Authors: Evans, Geoffrey Philip
- Date: 1985
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5507 , http://hdl.handle.net/10962/d1007495
- Description: This thesis describes the design and development of a computer-based controller together with additional hardware that greatly extends the capabilities of a Barry Research VOS-1 Chirpsounder. The measurement of the virtual height of the ionosphere as a function of frequency using pulse- and frequency-modulated carrier wave (FMCW) techniques is described and the concept of the so-called "digital" ionosonde is introduced. The modifications required for the standard Chirpsounder to perform as a versatile digital chirp ionosonde are discussed. Simplified block diagrams are used to describe the Controller hardware, which is fully described in two comprehensive service manuals included as appendices. Important aspects of the Controller software and data storage formats are described in detail. The emphasis is then placed on system capabilities. An operators' software manual which describes system initialization and operation in terms of system commands is included as an appendix. Results of tests at both Grahamstown, South Africa, and at the SANAE base in the Antarctic are presented.
- Full Text:
- Date Issued: 1985
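The FMCW technique described in the abstract infers the ionosphere's virtual height from the beat note between transmitted and received chirps: the round-trip delay is the beat frequency divided by the sweep rate, and the virtual height is half that delay times the speed of light. A minimal sketch of the relation (the sweep rate and beat frequency below are assumed for illustration, not taken from the thesis):

```python
# Illustrative sketch: how an FMCW chirpsounder infers virtual height.
# The received chirp is delayed by the round trip to the ionosphere;
# mixing it with the transmitted replica yields a beat frequency
# proportional to that delay.
C = 299_792_458.0  # speed of light, m/s

def virtual_height(beat_freq_hz, sweep_rate_hz_per_s):
    """Virtual height h' = c * tau / 2, where tau = f_beat / sweep rate."""
    tau = beat_freq_hz / sweep_rate_hz_per_s   # round-trip delay, s
    return C * tau / 2.0                       # one-way virtual height, m

# e.g. a 100 kHz/s sweep and a 200 Hz beat note imply a ~300 km
# virtual height (tau = 2 ms)
h = virtual_height(200.0, 100_000.0)
```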
Computer control of an HF chirp radar
- Authors: Griggs, Desmond Bryan
- Date: 1991
- Subjects: Radar , Radar meteorology , Computerized instruments , Ionosondes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5455 , http://hdl.handle.net/10962/d1005240 , Radar , Radar meteorology , Computerized instruments , Ionosondes
- Description: This thesis describes the interfacing of an IBM-compatible microcomputer to a BR Communications chirp sounder. The need for this is twofold: firstly, for control of the sounder, including automatic scheduling of operations, and secondly, for data capture. A signal processing card inside the computer performs a Fast Fourier Transform on the sampled data from two phase-matched receivers. The transformed data is then transferred to the host computer for further processing, display and storage on hard disk or magnetic tape, all in real time. Critical timing functions are provided by another card in the microcomputer, the timing controller. The design and operation of this sub-system, built by the author, are discussed in detail. Additional circuitry is required to perform antenna and filter switching, and a possible design thereof is also presented by the author. The completed system, comprising the chirp sounder, the PC environment, and the signal switching circuitry, has a dual purpose. It can operate as either a meteor radar, using a fixed frequency (currently 27.99 MHz), or as an advanced chirp ionosonde allowing frequency sweeps from 1.6 to 30 MHz. In the latter case fixed-frequency Doppler soundings are also possible. Examples of data recorded in the various modes are given.
- Full Text:
- Date Issued: 1991
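The FFT stage described in the abstract turns each echo's delay-induced beat frequency into a spectral peak that the host computer can then pick out. A toy sketch of that step (sample rate, FFT length and beat frequency are invented for illustration; the actual card's parameters are not given here):

```python
import numpy as np

# Illustrative sketch of the signal-processing card's job: FFT the
# dechirped samples so each echo's beat frequency appears as a peak.
fs = 1000.0                         # sample rate, Hz (assumed)
n = 1024
t = np.arange(n) / fs
beat = 150.0                        # beat frequency of a single echo, Hz
x = np.exp(2j * np.pi * beat * t)   # complex samples (one receiver)

spectrum = np.fft.fft(x)
freqs = np.fft.fftfreq(n, d=1/fs)
peak = freqs[np.argmax(np.abs(spectrum))]
# peak recovers the beat frequency to within one FFT bin (fs/n ~ 0.98 Hz)
```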
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the measurement set, the most common format for interferometer data, and is well integrated with Montblanc, a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor.
- Full Text:
- Date Issued: 2019
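As a rough illustration of the kind of alternating least-squares gain update that the complex-optimisation framework makes cheap, the toy solver below recovers per-antenna scalar gains from visibilities V_pq = g_p·conj(g_q)·M_pq. This is a sketch in the spirit of such solvers, not CubiCal's actual algorithm; the array size, model and damping factor are all assumptions:

```python
import numpy as np

# Toy scalar-gain calibration: each antenna's gain has a closed-form
# least-squares update when the other gains are held fixed.
rng = np.random.default_rng(0)
nant = 7
g_true = np.exp(1j * rng.uniform(-1, 1, nant))   # unit-amplitude gains
M = np.ones((nant, nant), dtype=complex)          # point-source model
V = np.outer(g_true, g_true.conj()) * M           # noise-free "data"

g = np.ones(nant, dtype=complex)                  # initial guess
for _ in range(100):
    y = g[None, :].conj() * M                     # y_pq = conj(g_q) M_pq
    g_new = (V * y.conj()).sum(axis=1) / (np.abs(y) ** 2).sum(axis=1)
    g = 0.5 * (g + g_new)                         # damped update
# g now reproduces V up to an overall phase ambiguity
```

The damping (averaging each new estimate with the previous one) is what stabilises the otherwise oscillation-prone alternating updates.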
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a point spread function (PSF) that depends on baseline length and position. In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network.
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
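The decorrelation described in the abstract can be demonstrated in a few lines: boxcar-averaging a visibility whose phase rotates across the averaging bin attenuates its amplitude by a sinc factor of the phase swept. A sketch (the 90° phase swing is an arbitrary illustrative value):

```python
import numpy as np

# Boxcar averaging of a fringe-rotating unit visibility attenuates its
# amplitude by |sinc| of the phase swept during the averaging interval.
n = 1000
phase_swing = np.pi / 2            # total phase rotated across the bin, rad
phi = np.linspace(-phase_swing / 2, phase_swing / 2, n)
avg = np.exp(1j * phi).mean()      # boxcar-averaged visibility

# np.sinc(x) = sin(pi x) / (pi x), so sin(d/2)/(d/2) = np.sinc(d / (2 pi))
predicted = np.sinc(phase_swing / (2 * np.pi))
# |avg| and predicted are both ~0.900: a 10% amplitude loss ("smearing")
```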
Data reduction techniques for Very Long Baseline Interferometric spectropolarimetry
- Authors: Kemball, Athol James
- Date: 1993
- Subjects: Very long baseline interferometry , Radio interferometers , Data reduction -- Research
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5457 , http://hdl.handle.net/10962/d1005242
- Description: This thesis reports the results of an investigation into techniques for the calibration and imaging of spectral line polarization observations in Very Long Baseline Interferometry (VLBI). A review is given of the instrumental and propagation effects which need to be removed in the course of calibrating such observations, with particular reference to their polarization dependence. The removal of amplitude and phase errors and the determination of the instrumental feed response is described. The polarization imaging of such data is discussed with particular reference to the case of poorly sampled cross-polarization data. The software implementation of the algorithms within the Astronomical Image Processing System (AIPS) is discussed and the specific case of spectral line polarization reduction for data observed using the MK3 VLBI system is considered in detail. VLBI observations at two separate epochs of the 1612 MHz OH masers towards the source IRC+10420 are reduced as part of this work. Spectral line polarization maps of the source structure are presented, including a discussion of source morphology and variability. The source is significantly circularly polarized at VLBI resolution, but does not display appreciable linear polarization. A proper motion study of the circumstellar envelope is presented, which supports an ellipsoidal kinematic model with anisotropic radial outflow. Kinematic modelling of the measured proper motions suggests a distance to the source of ~3 kpc. The circumstellar magnetic field strength in the masing regions is determined as 1-3 mG, assuming Zeeman splitting as the polarization mechanism.
- Full Text:
- Date Issued: 1993
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (Computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors, such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science just waiting for discovery is already being generated, and these devices will produce far more data than has ever been collected before. However, with every new instrument the data rates grow to unprecedented quantities, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by a scientist who lacks formal training in software development, with the result that the software never progresses beyond a prototype stage in quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio astronomy pipelines since the inception of the field in the 1940s, investigating the steps required to create a radio image. We used the lessons learned to identify patterns in the challenges experienced, and the solutions created to address these over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs properly, all packages alongside one another on the same system. Where required and possible, bugs and portability fixes were resolved and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases.
The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bioinformatics scientists. CommonWL is already supported by a wide range of tools, including schedulers, visualisers and editors. Consequently, when a pipeline is made with CommonWL, it can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional: applying a generative adversarial network based on deep learning techniques to the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. It is shown that this method performs the restoration and matches the performance of a single clean cycle. In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest a smaller scope as the starting point for further studies; optimising the training of the neural network could produce the desired results.
- Full Text:
- Date Issued: 2021
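As a hedged sketch of the simulation idea behind such training data (not the actual ASTRODECONV2019 pipeline), a dirty image is, to first order, the true sky convolved with the instrument's point spread function. Sources, image size and PSF shape below are invented for illustration:

```python
import numpy as np

# Toy dirty-image simulation: convolve a sparse sky with a PSF via the
# Fourier domain (circular convolution, adequate for a sketch).
npix = 64
sky = np.zeros((npix, npix))
sky[20, 30] = 1.0                      # a 1 Jy point source (illustrative)
sky[40, 10] = 0.5                      # a fainter one

x = np.fft.fftfreq(npix)               # coordinates with origin at [0, 0]
psf = np.fft.fftshift(np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 0.002))
psf /= psf.max()                       # toy Gaussian PSF, peak 1, centred

dirty = np.real(np.fft.ifft2(
    np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))
# each point source now appears as a copy of the PSF scaled by its flux
```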
Designing and implementing a new pulsar timer for the Hartebeesthoek Radio Astronomy Observatory
- Authors: Youthed, Andrew David
- Date: 2008
- Subjects: Astronomical observatories , Radio astronomy , Pulsars , Astronomical instruments , Reduced instruction set computers , Random access memory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5458 , http://hdl.handle.net/10962/d1005243 , Astronomical observatories , Radio astronomy , Pulsars , Astronomical instruments , Reduced instruction set computers , Random access memory
- Description: This thesis outlines the design and implementation of a single channel, dual polarization pulsar timing instrument for the Hartebeesthoek Radio Astronomy Observatory (HartRAO). The new timer is designed to be an improved, temporary replacement for the existing device, which has been in operation for over 20 years, is no longer reliable and is difficult to maintain. The new pulsar timer is designed to provide improved functionality, higher sampling speed, greater pulse resolution, more flexibility and easier maintenance than the existing device. The new device is also designed to keep changes to the observation system to a minimum until a full de-dispersion timer can be implemented at the observatory. The design makes use of an 8-bit Reduced Instruction Set Computer (RISC) microprocessor with external Random Access Memory (RAM). The instrument includes an IEEE-488 subsystem for interfacing the pulsar timer to the observation computer system. The microcontroller software is written in assembler code to ensure optimal loop execution speed and deterministic code execution for the system. The design path is discussed and problems encountered during the design process are highlighted. Final testing of the new instrument indicates an improvement in the sampling rate of 13.6 times and a significant reduction in 60 Hz interference over the existing instrument.
- Full Text:
- Date Issued: 2008
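The abstract does not spell out the timer's processing, but a core operation in pulsar timing instruments of this kind is folding: accumulating the sampled signal at the known pulse period so the pulse builds up in a profile while noise averages down. A purely illustrative sketch (sample rate, period and pulse shape are assumed, not taken from the thesis):

```python
import numpy as np

# Fold a noisy time series at the pulse period: samples are binned by
# pulse phase and averaged, so the pulse emerges from the noise.
fs = 1000.0                  # sample rate, Hz (assumed)
period = 0.25                # pulse period, s (assumed)
nbins = 50                   # phase bins in the folded profile

t = np.arange(int(fs * 10)) / fs                  # 10 s of data
signal = ((t % period) < 0.01).astype(float)      # 10 ms pulse per period
noisy = signal + np.random.default_rng(1).normal(0, 2.0, t.size)

phase_bin = ((t % period) / period * nbins).astype(int)
profile = np.bincount(phase_bin, weights=noisy, minlength=nbins)
counts = np.bincount(phase_bin, minlength=nbins)
profile /= counts                                  # mean per phase bin
# the pulse stands out in the first two phase bins despite the noise
```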
Developing an ionospheric map for South Africa
- Authors: Okoh, Daniel Izuikeninachi
- Date: 2009
- Subjects: Ionosphere -- South Africa , Shortwave radio , Ionospheric electron density -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5459 , http://hdl.handle.net/10962/d1005244 , Ionosphere -- South Africa , Shortwave radio , Ionospheric electron density -- South Africa
- Description: This thesis describes the development of an ionospheric map for the South African region using the currently available resources. The International Reference Ionosphere (IRI) model, the South African Bottomside Ionospheric Model (SABIM), and measurements from ionosondes in the South African Ionosonde Network were incorporated into the map. An accurate ionospheric map depicting the foF2 and hmF2 parameters as well as electron density profiles at any location within South Africa is a useful tool for, amongst others, High Frequency (HF) communicators and space weather centers. A major product of the work is software, written in MATLAB, which produces spatial and temporal representations of the South African ionosphere. The map was validated and demonstrated for practical application, since a significant aim of the project was to make the map as applicable as possible. It is hoped that the map will find wide application in the HF radio communication, research, aviation and other industries that make use of Earth-space systems. A potential user of the map is GrinTek Ewation (GEW), which is currently evaluating it for its purposes.
- Full Text:
- Date Issued: 2009
Development of a neural network based model for predicting the occurrence of spread F within the Brazilian sector
- Authors: Paradza, Masimba Wellington
- Date: 2009
- Subjects: Neural networks (Computer science) , Ionosphere , F region
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5460 , http://hdl.handle.net/10962/d1005245 , Neural networks (Computer science) , Ionosphere , F region
- Description: Spread F is an ionospheric phenomenon in which the pulses returned from the ionosphere are of a much greater duration than the transmitted ones. The occurrence of spread F can be predicted using the technique of Neural Networks (NNs). This thesis presents the development and evaluation of NN-based models (two single-station models and a regional model) for predicting the occurrence of spread F over selected stations within the Brazilian sector. The input space for the NNs included the day number (seasonal variation), hour (diurnal variation), sunspot number (a measure of solar activity), magnetic index (a measure of magnetic activity) and magnetic position (latitude, magnetic declination and inclination). Twelve years of spread F data, measured from 1978 to 1989 inclusive at the equatorial site Fortaleza and the low-latitude site Cachoeira Paulista, were used in the development of an input space and NN architecture for the NN models. Spread F data believed to be related to plasma bubble development (range spread F) were used in the development of the models, while those associated with the narrow-spectrum irregularities that occur near the F layer (frequency spread F) were excluded. The results of the models show the dependence of the probability of spread F occurrence on local time, season and latitude. The models also illustrate some characteristics of spread F, such as its onset and peak occurrence as a function of distance from the equator. Results from these models are presented in this thesis and compared to measured data and to modelled data obtained with an empirical model developed for the same purpose.
- Full Text:
- Date Issued: 2009
Development of an ionospheric map for Africa
- Authors: Ssessanga, Nicholas
- Date: 2014
- Subjects: Ionosondes Ionosphere Ionosphere -- Observations Ionosphere -- Research -- Africa Ionospheric electron density -- Africa Ionospheric critical frequencies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5519 , http://hdl.handle.net/10962/d1011498
- Description: This thesis presents research pertaining to the development of an African Ionospheric Map (AIM). An ionospheric map is a computer program that is able to display spatial and temporal representations of ionospheric parameters, such as electron density and critical plasma frequencies, for every geographical location on the map. The purpose of this development was to make optimum use of all available data sources, namely ionosondes, satellites and models, and to implement error minimisation techniques in order to obtain the best result at any given location on the African continent. The focus was placed on the accurate estimation of three upper-atmosphere parameters which are important for radio communications: the critical frequency of the F2 layer (foF2), Total Electron Content (TEC) and the maximum usable frequency over a distance of 3000 km (M3000F2). The results show that the AIM provided a more accurate estimation of the three parameters than the internationally recognised and recommended ionospheric model (IRI-2012) used on its own. The AIM is therefore a more accurate solution than single independent data sources for applications requiring ionospheric mapping over the African continent.
- Full Text:
- Date Issued: 2014
Distributed control applications using local area networks: a LAN based power control system at Rhodes University
- Authors: Sullivan, Anthony John
- Date: 2002
- Subjects: Embedded computer systems , Local area networks (Computer networks) , Linux
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5461 , http://hdl.handle.net/10962/d1005246 , Embedded computer systems , Local area networks (Computer networks) , Linux
- Description: This thesis describes the design and development of both the hardware and software of an embedded, distributed control system using a LAN infrastructure for communication between nodes. The primary application of this system is power monitoring and control at Rhodes University. Both the hardware and software have been developed to provide a modular and scalable system capable of growing and adapting to meet the changing demands placed on it. The software includes a custom-written Internet Protocol stack for use in the embedded environment, with a small code footprint and low processing overheads. There is also Linux-based control software, which includes a web-based device management interface and graphical output. Problems specific to the application are discussed, as well as their solutions, with particular attention to the constraints of an embedded system.
- Full Text:
- Date Issued: 2002
Dynamics of charge movement in α-Al2O3:C,Mg using thermoluminescence, phototransferred thermoluminescence and optically stimulated luminescence
- Authors: Lontsi Sob, Aaron Joel
- Date: 2022-04-08
- Subjects: Thermoluminescence , Optically stimulated luminescence , Phototransfer , Deep traps , Phototransferred thermoluminescence (PTTL)
- Language: English
- Type: Academic theses , Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/294607 , vital:57237 , DOI 10.21504/10962/294607
- Description: The dosimetric features of α-Al2O3:C,Mg have been investigated for unannealed and annealed samples. The unannealed sample is referred to as sample A, whereas the samples annealed at 700, 900 and 1200°C for 15 minutes each are referred to as samples B, C and D respectively. A glow curve of unannealed α-Al2O3:C,Mg measured at 1°C/s after irradiation to 2.0 Gy consists of peaks at 43, 73, 164, 195, 246, 284, 336 and 374°C. For sample B (annealed at 700°C), a glow curve measured at 1°C/s after irradiation to 3.0 Gy has peaks at 46, 76, 100, 170, 199, 290, 330 and 375°C, whereas the glow curve of sample C (annealed at 900°C) recorded under the same conditions consists of peaks at 49, 80, 100, 174, 206, 235, 290, 335 and 375°C. Sample D (annealed at 1200°C) is the most sensitive of the four samples. A glow curve of sample D measured at 1°C/s after irradiation to 0.2 Gy has peaks at 52, 82, 102, 174, 234, 288 and 384°C. The peaks are labelled I-VIII in order of appearance. The 100°C peak, labelled IIa, is induced by annealing at or above 700°C. The dose response of these peaks was studied for doses within 0.1-8.2 Gy. The reported peaks follow first-order kinetics irrespective of annealing temperature. Peaks I-III of each sample are reproduced under phototransfer for preheating up to 400°C. For the unannealed sample, the reproduced peaks are labelled A1-A3, whereas for the annealed samples they are labelled B1-B3, C1-C3 and D1-D3 respectively. The annealing-induced peak at 100°C is reproduced as B2a, C2a and D2a for samples B, C and D respectively. A PTTL peak labelled C2b or D2b is also observed near 140°C in samples C and D. In addition to these PTTL peaks, a PTTL peak corresponding to peak IV is also found for sample D and for the unannealed sample. Like the corresponding conventional peaks, the PTTL peaks of each sample follow first-order kinetics.
Peak I and its corresponding PTTL peak for each sample are unstable and fade to a minimal level after 300 s of storage time. On the other hand, peak II of each sample and its corresponding PTTL peak could still be observed with delays of up to 5000 s. Peak III of the unannealed sample remains stable with storage time up to 48 hours. Irrespective of annealing, the trap corresponding to peak III is the most sensitive to optical stimulation. Time-dependent profiles of PTTL from unannealed and annealed α-Al2O3:C,Mg were also studied. The mathematical analysis of the PTTL time-response profiles is based on experimental results. The role of various electron traps in PTTL was determined by using pulse annealing and by monitoring the dependence of peak intensity on duration of illumination for peaks not removed by preheating. The presence and role of deep traps were further demonstrated with thermally assisted optically stimulated luminescence. For the unannealed sample, the activation energy for thermal assistance is 0.033 ± 0.001 eV and the activation energy for thermal quenching is 1.043 ± 0.001 eV. For sample C, the activation energy for thermal assistance is 0.044 ± 0.003 eV, whereas that for thermal quenching is 1.110 ± 0.006 eV. The values for the activation energy for thermal assistance are lower than those reported in the literature. Only the values for the activation energy for thermal quenching are somewhat comparable to values reported elsewhere. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-08
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz has remained an important mineral that is of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as on quartz annealed at 800 and 1000°C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72°C when measured at 1°C/s after a dose of 50 Gy. A study of dosimetric features and kinetic analysis was carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for peak I of the sample annealed at 800°C for 1 hour we observe an inverse thermal quenching behaviour. We demonstrate that a peak affected by an inverse thermal quenching-like behaviour can still show the effect of thermal quenching when the dose to which the sample is irradiated is significantly reduced. We ascribe the apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II and III for all the samples were reproduced under phototransfer when the peaks, initially removed by preheating to a certain temperature, are exposed to 470 and 525 nm light. The influence of duration of illumination on the PTTL intensity of these peaks, corresponding to various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
- Full Text:
- Date Issued: 2020
Empirical modelling of the solar wind influence on Pc3 pulsation activity
- Authors: Lotz, Stefanus Ignatius
- Date: 2012
- Subjects: Solar wind -- Research Solar activity -- Research Stellar oscillations -- Research , Magnetospheric radio wave propagation , Interplanetary magnetic fields
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5464 , http://hdl.handle.net/10962/d1005249
- Description: Geomagnetic pulsations are ultra-low frequency (ULF) oscillations of the geomagnetic field that have been observed in the magnetosphere and on the Earth since the 1800s. In the 1960s, in situ observations of the solar wind suggested that the source of pulsation activity must lie beyond the magnetosphere. In this work the influence of several solar wind plasma and interplanetary magnetic field (IMF) parameters on Pc3 pulsations is studied. Pc3 pulsations are a class of geomagnetic pulsations with frequencies ranging between 22 and 100 mHz. A large dataset of solar wind and pulsation measurements is employed to develop two empirical models capable of predicting the Pc3 index (an indication of Pc3 intensity) at one-hour and five-minute time resolution, respectively. The models are based on artificial neural networks, owing to their ability to model highly non-linear interactions between dependent and independent variables. A robust, iterative process is followed to find and rank the set of solar wind input parameters that optimally predict Pc3 activity. According to the parameter selection process, the input parameters to the low-resolution model (1-hour data) are, in order of importance, solar wind speed, a pair of time-based parameters, dynamic solar wind pressure, and the IMF orientation with respect to the Sun-Earth line (i.e. the cone angle). Input parameters to the high-resolution model (5-minute data) are solar wind speed, cone angle, solar wind density and a pair of time-based parameters. Both models accurately predict Pc3 intensity from unseen solar wind data. It is observed that Pc3 activity ceases when the density in the solar wind is very low, even while other conditions are favourable for the generation and propagation of ULF waves. The influence that solar wind density has on Pc3 activity is studied by analysing six years of solar wind and Pc3 measurements at one-minute resolution.
It is suggested that the pause in Pc3 activity occurs for two reasons: firstly, the ULF waves that are generated in the region upstream of the bow shock do not grow efficiently if the solar wind density is very low; and secondly, waves that are generated cannot be convected into the magnetosphere because of the low Mach number of the solar wind plasma due to the decreased density.
- Full Text:
- Date Issued: 2012
Expanding the capabilities of the DPS Ionosonde system
- Authors: Magnus, Lindsay Gerald
- Date: 2001
- Subjects: Ionosondes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5560 , http://hdl.handle.net/10962/d1018243
- Description: The Digisonde Portable Sounder (DPS) is a low-power pulse ionosonde capable of recording a wealth of scientific information about the ionosphere. The routine vertical incidence mode, which produces the scaled ionospheric parameters, records only limited Doppler and no precise angle of arrival (AoA) information. The drift mode produces precise scientific information but only limited range information. This thesis explains the operation of the DPS and then examines the drift data, first by showing the Doppler velocities (V*) calculated for a fixed-frequency ionogram as well as the velocities calculated from an interesting ionospheric disturbance measured with a stepped-frequency ionogram, and second by illustrating the presence of a variation in the AoA of ionospheric echoes at sunrise. The thesis concludes that a drift vertical incidence mode should be developed to allow the simultaneous measurement of the scaled ionospheric parameters together with precise AoA and full Doppler spectrum information.
- Full Text:
- Date Issued: 2001
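The Doppler velocities mentioned in the abstract above follow from the standard monostatic radar relation between Doppler shift and line-of-sight velocity; the sketch below shows that relation only, not the DPS's actual drift-processing chain, which is considerably more involved:

```python
# Line-of-sight velocity implied by a measured Doppler shift for a
# monostatic sounder: v = f_d * c / (2 * f0).  The factor of 2 accounts
# for the two-way (up and back) propagation path.
C = 299_792_458.0  # speed of light, m/s

def doppler_velocity(f_doppler_hz, f_carrier_hz):
    """Radial velocity (m/s) for a Doppler shift at a given sounding frequency.
    A positive shift corresponds to a reflector moving toward the sounder."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz)

# A 0.1 Hz Doppler shift on a 5 MHz sounding is roughly 3 m/s.
v = doppler_velocity(0.1, 5e6)
print(round(v, 2))  # 3.0
```

This illustrates why ionosondes can resolve slow ionospheric motions: at HF sounding frequencies even sub-hertz Doppler shifts correspond to metre-per-second drift velocities.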
Finite element modelling of magma convection and attendant groundwater flow
- Authors: Harrison, Keith
- Date: 1998
- Subjects: Groundwater flow , Magmas
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5467 , http://hdl.handle.net/10962/d1005252 , Groundwater flow , Magmas
- Description: This thesis describes preliminary two- and three-dimensional modelling of mass and heat transport of hot, molten magma in crustal intrusions and of the associated thermally induced flow of groundwater contained in the surrounding country rock. The aim of such modelling is to create a tool with which to predict the location of mineral deposits formed by the transport and subsequent precipitation of minerals dissolved in the convecting groundwater. The momentum equations (Navier-Stokes equations), continuity equation and energy equation are used in conjunction with specially constructed density and viscosity relationships to govern the mass and heat transport processes of magma and groundwater. Finite element methods are used to solve the equations numerically for some simple model geometries. These methods are implemented by a commercial software code driven by a control program constructed by the author for the purpose. The models are of simple two- or three-dimensional geometries which all have an enclosed magma chamber surrounded completely by a shell of country rock through which groundwater is free to move. Modelling begins immediately after the intrusive event, when the magma (in most cases rhyolitic) is at its greatest temperature. Heat is allowed to flow from the magma into the country rock, causing thermal convection of the groundwater contained therein. The effect of the country rock as a porous medium on the flow of groundwater is modelled by including a distributed resistance term in the momentum equation. The computer code that controls the modelling is such that adaptations made to the models to represent real physical intrusive systems are trivial. Results of the research at this stage allow approximate prediction of the location of mineral deposits.
Enhanced predictions can be made by improving the models, for example through a more detailed representation of chemical processes, adaptation of the computer code to allow multiple injections of magma, and the modelling of frozen magma as a porous medium which admits the flow of groundwater.
- Full Text:
- Date Issued: 1998
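The "specially constructed density relationships" mentioned in the abstract above are, in buoyancy-driven convection modelling, commonly a linear (Boussinesq-style) function of temperature; the sketch below shows that standard form with illustrative coefficients, not the thesis's actual fitted relationships:

```python
# Linear density model commonly used for thermal convection:
# rho(T) = rho0 * (1 - alpha * (T - T0)).
# All coefficients here are illustrative placeholders.
def density(T, rho0=2300.0, alpha=5e-5, T0=900.0):
    """Melt density (kg/m^3) as a linear function of temperature (deg C)."""
    return rho0 * (1.0 - alpha * (T - T0))

# The Rayleigh number compares buoyancy forcing to viscous and thermal
# damping; convection sets in once it exceeds a critical value.
def rayleigh(alpha, dT, L, nu, kappa, g=9.81):
    """Dimensionless Rayleigh number for a layer of thickness L (m)."""
    return g * alpha * dT * L**3 / (nu * kappa)

print(density(1000.0) < density(900.0))  # hotter melt is less dense: True
```

The hotter, less dense fluid rises while cooler fluid sinks, which is the mechanism driving both the magma convection and the groundwater convection in the country rock described in the abstract.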
Finite element simulations of shear aggregation as a mechanism to form platinum group elements (PGEs) in dyke-like ore bodies
- Authors: Mbandezi, Mxolisi Louis
- Date: 2002
- Subjects: Platinum group , Magmas , Shear flow , Geophysics , Terrestrial heat flow
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5561 , http://hdl.handle.net/10962/d1018249
- Description: This research describes a two-dimensional modelling effort of heat and mass transport in simplified intrusive models of sills and their feeder dykes. These simplified models resemble a complex intrusive system such as the Great Dyke of Zimbabwe. This study investigated the impact of variable geometry on transport processes in two ways. First, the time evolution of heat and mass transport during cooling was investigated. Then emphasis was placed on convective scavenging as a mechanism that leads to the formation of minerals of economic interest, in particular the Platinum Group Elements (PGEs). The Navier-Stokes equations employed generated regions of high shear within the magma, where enhanced collisions between the immiscible sulphide liquid particles and PGEs are expected. These collisions scavenge PGEs from the primary melt, which aggregate and concentrate to form PGE enrichment in zero-shear zones. The PGEs scavenge, concentrate and 'glue' in zero-shear zones in the early history of convection because of viscosity and dispersive pressure (the Bagnold effect). Increasing the geometry size enhances scavenging and creates larger zero-shear zones with a dilute concentration of PGEs, but produces high shear near the roots of the dyke/sill, where the concentration will not be dilute. The time evolution calculations show that increasing the size of the magma chamber results in stronger initial convection currents for large magma models than for small ones. However, convection takes approximately the same time to cease for both models. The research concludes that the time evolution of convective heat transfer depends on the viscosity rather than on the geometry size. However, conductive heat transfer to the e-folding temperature took almost six times as long for the large model (M4) as for the small one (M2). Variable viscosity as a physical property was applied to models 2 and 4 only.
Video animations that simulate the cooling process for these models are enclosed on a CD at the back of this thesis. These simulations provide information about the emplacement history and distribution of PGE ore bodies, which will assist reserve estimation and the location of economic minerals.
- Full Text:
- Date Issued: 2002
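Locating the zero-shear zones discussed in the abstract above amounts to evaluating the strain rate of the computed velocity field on a grid and thresholding it; the sketch below does this for a synthetic convection-roll flow, not output from the thesis's finite element models, and the threshold and strain-rate measure are illustrative choices:

```python
import numpy as np

# Sketch: find low-shear zones in a 2-D velocity field by computing a
# strain-rate magnitude on a grid.  The field is a synthetic,
# divergence-free convection-roll pattern, not a simulation result.
n = 64
y, x = np.meshgrid(np.linspace(0, np.pi, n), np.linspace(0, np.pi, n),
                   indexing="ij")
u = np.sin(x) * np.cos(y)    # horizontal velocity
v = -np.cos(x) * np.sin(y)   # vertical velocity

dy = dx = np.pi / (n - 1)
du_dy, du_dx = np.gradient(u, dy, dx)  # axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, dy, dx)

# 2-D strain-rate magnitude: normal components plus the shear component.
strain = np.sqrt(du_dx**2 + dv_dy**2 + 0.5 * (du_dy + dv_dx) ** 2)

# Flag cells whose strain rate is within 5% of the field maximum of zero;
# these are the candidate "zero-shear" accumulation zones.
low_shear = strain < 0.05 * strain.max()
print(low_shear.any())  # True
```

In the thesis's workflow the velocity field would come from the finite element solution at each time step, so the low-shear mask would evolve as convection decays during cooling.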
Finite precision arithmetic in Polyphase Filterbank implementations
- Authors: Myburgh, Talon
- Date: 2020
- Subjects: Radio interferometers , Interferometry , Radio telescopes , Gate array circuits , Floating-point arithmetic , Python (Computer program language) , Polyphase Filterbank , Finite precision arithmetic , MeerKAT
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146187 , vital:38503
- Description: The MeerKAT is the most sensitive radio telescope in its class, and it is important that systematic effects do not limit the dynamic range of the instrument, preventing this sensitivity from being harnessed for deep integrations. During commissioning, spurious artefacts were noted in the MeerKAT passband, and the root cause was attributed to systematic errors in the digital signal path. Finite precision arithmetic used by the Polyphase Filterbank (PFB) was one of the main factors contributing to the spurious responses, together with bugs in the firmware. This thesis describes a software PFB simulator that was built to mimic the MeerKAT PFB and allow investigation into the origin and mitigation of the effects seen on the telescope. This simulator was used to investigate the effects on signal integrity of various rounding techniques, overflow strategies and dual-polarisation processing in the PFB. Using the simulator to investigate a number of different signal levels, bit-widths and algorithmic scenarios gave insight into how the periodic dips occurring in the MeerKAT passband were the result of the implementation using an inappropriate rounding strategy. It further indicated how to select the best strategy for preventing overflow while maintaining high quantization efficiency in the FFT. This practice of simulating the design behaviour in the PFB independently of the tools used to design the DSP firmware is a step towards an end-to-end simulation of the MeerKAT system (or any radio telescope using finite precision digital signal processing systems). This would be useful for design, diagnostics, signal analysis and prototyping of the overall instrument.
- Full Text:
- Date Issued: 2020
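The sensitivity of fixed-point arithmetic to the choice of rounding strategy, which the abstract above identifies as the cause of the passband artefacts, can be demonstrated in a few lines. This is a generic illustration of the bias difference between truncation and round-half-even, not a reproduction of the thesis's PFB simulator or the MeerKAT bit-widths:

```python
import numpy as np

# Sketch: quantize a random signal to 8 fractional bits with two rounding
# strategies and compare the mean (DC) error.  Truncation always rounds
# toward -inf and so introduces a systematic negative bias; round-half-
# to-even ("banker's rounding", NumPy's default) is nearly unbiased.
rng = np.random.default_rng(1)
frac_bits = 8
scale = 2 ** frac_bits

x = rng.uniform(-1, 1, 100_000)

truncated = np.floor(x * scale) / scale   # truncation toward -inf
round_even = np.round(x * scale) / scale  # round half to even

bias_trunc = np.mean(truncated - x)
bias_even = np.mean(round_even - x)

print(abs(bias_trunc) > abs(bias_even))  # truncation is more biased: True
```

Inside an FFT this kind of per-stage bias is applied repeatedly and can accumulate into structured, signal-level-dependent errors, which is why the choice of rounding strategy matters for passband flatness.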
First H F Doppler soundings of the ionosphere at SANAE
- Authors: De Kock, Errol James
- Date: 1980
- Subjects: Ionosphere
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5500 , http://hdl.handle.net/10962/d1006869 , Ionosphere
- Full Text:
- Date Issued: 1980
Forecasting solar cycle 24 using neural networks
- Authors: Uwamahoro, Jean
- Date: 2009
- Subjects: Solar cycle , Neural networks (Computer science) , Ionosphere , Ionospheric electron density , Ionospheric forecasting , Solar thermal energy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5468 , http://hdl.handle.net/10962/d1005253 , Solar cycle , Neural networks (Computer science) , Ionosphere , Ionospheric electron density , Ionospheric forecasting , Solar thermal energy
- Description: The ability to predict the future behaviour of solar activity has become extremely important due to its effect on the near-Earth environment. Predictions of both the amplitude and timing of the next solar cycle will assist in estimating the various consequences of Space Weather. Several prediction techniques have been applied and have achieved varying degrees of success in the domain of solar activity prediction. These techniques include, for example, neural networks and geomagnetic precursor methods. In this thesis, various neural network based models were developed and the model considered to be optimum was used to estimate the shape and timing of solar cycle 24. Given the recent success of the geomagnetic precursor methods, geomagnetic activity as measured by the aa index is considered among the main inputs to the neural network model. The neural network model developed is also provided with the time input parameters defining the year and the month of a particular solar cycle, in order to characterise the temporal behaviour of sunspot number as observed during the last 10 solar cycles. The structure of input-output patterns to the neural network is constructed in such a way that the network learns the relationship between the aa index values of a particular cycle and the sunspot number values of the following cycle. Assuming January 2008 as the minimum preceding solar cycle 24, the shape and amplitude of solar cycle 24 are estimated in terms of monthly mean and smoothed monthly sunspot number. This new prediction model estimates an average solar cycle 24, with the maximum occurring around June 2012 [± 11 months], with a smoothed monthly maximum sunspot number of 121 ± 9.
- Full Text:
- Date Issued: 2009
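The "smoothed monthly sunspot number" quoted in the abstract above is conventionally the 13-month running mean with half weight on the two end months (the standard SIDC smoothing; the abstract itself does not spell out the exact definition it uses). A sketch of that convention:

```python
import numpy as np

# 13-month smoothing conventionally applied to monthly sunspot numbers:
# a running mean over 13 months with the first and last month given
# half weight, so the weights sum over an effective 12 months.
def smooth_13_month(monthly):
    monthly = np.asarray(monthly, dtype=float)
    w = np.ones(13)
    w[0] = w[-1] = 0.5
    w /= w.sum()  # normalise the weights to sum to 1
    return np.convolve(monthly, w, mode="valid")

# Sanity check: a constant series is unchanged by the smoothing.
out = smooth_13_month([100.0] * 24)
print(np.allclose(out, 100.0))  # True
```

Because the window spans 13 months, the smoothed value for a given month is only available six months after the fact, which is one reason cycle maxima are quoted with wide timing uncertainties such as the ± 11 months above.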