Conductivity profiles for a horizontally uniform earth

**Authors:** Murrell, Hugh Crozier
**Date:** 1983
**Subjects:** Coen, Shimon -- Criticism and interpretation; Wang-Ho Yu, Michael -- Criticism and interpretation; Algorithms
**Language:** English
**Type:** Thesis; Masters; MSc
**Identifier:** vital:5395; http://hdl.handle.net/10962/d1001984
**Description:** An investigation is made into the mathematics behind the noniterative inversion algorithm of Shimon Coen and Michael Wang-Ho Yu [1981]. The algorithm determines the conductivity profile of a horizontally uniform earth from surface measurements of apparent resistivity with a Schlumberger array. The algorithm is checked by performing the inversion on both artificial and raw field data.
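The surface measurement the abstract refers to can be made concrete with the standard Schlumberger geometric-factor formula; the sketch below is a generic illustration (function name and the example reading are hypothetical, not taken from the thesis):

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array:
    rho_a = (pi / MN) * ((AB/2)^2 - (MN/2)^2) * dV / I,
    where AB/2 is the half current-electrode spacing and MN the
    potential-electrode spacing (same length units)."""
    geometric_factor = math.pi / mn * (ab_half ** 2 - (mn / 2.0) ** 2)
    return geometric_factor * delta_v / current

# Hypothetical sounding reading: AB/2 = 10 m, MN = 2 m, dV = 50 mV, I = 1 A
rho_a = schlumberger_apparent_resistivity(10.0, 2.0, 0.05, 1.0)
```

Inversion algorithms such as Coen and Yu's start from a curve of such ρ_a values measured at increasing electrode spacings.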


A real time Fast Fourier Transform analyser

**Authors:** Fisher, John Stanley
**Date:** 1980
**Subjects:** Fourier transformations; Ionosondes; Algorithms; Computer simulation
**Language:** English
**Type:** Thesis; Masters; MSc
**Identifier:** vital:5439; http://hdl.handle.net/10962/d1001992
**Description:** From the requirements of the Ionosonde digitisation project, undertaken by the Rhodes University Antarctic Research Group, it was decided to use the Fast Fourier Transform to compute the spectrum analysis. Several FFT algorithms are reviewed and their properties discussed, and the Cooley-Tukey algorithm is chosen for implementation. The hardware implementation of this algorithm and the microprogram control of the whole system are discussed in detail, and those design aspects that required computer simulation are also treated in detail. The final testing of the analyser is described, and includes a test using data from an ionosonde sounding. The conclusions contain details of extensions to the analyser's present operation, required by plans to place the whole Chirpsounder under microprocessor control.
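The Cooley-Tukey decomposition the analyser realises in hardware can be sketched in a few lines of software; a minimal recursive radix-2 version (an illustration only, not the thesis's microprogrammed implementation) looks like:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Butterfly: combine half-size DFTs with a twiddle factor
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

The butterfly in the inner loop is the unit that a hardware implementation pipelines; the recursion reduces the O(n²) direct DFT to O(n log n).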


Advanced radio interferometric simulation and data reduction techniques

**Date:** 2018
**Subjects:** Interferometry; Algorithms; Radio telescopes; Square Kilometer Array (Spacecraft); Radio astronomy
**Type:** text; Thesis; Doctoral; PhD
**Identifier:** http://hdl.handle.net/10962/57348; http://vital.seals.ac.za:8080/10962/90428; vital:26875
**Description:** This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking At the Distant Universe with the MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
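The quoted dynamic range follows directly from the two figures in the abstract; a quick check:

```python
# Figures quoted in the abstract
peak_flux_jy = 22.58      # flux density of 3C147, Jy/beam
noise_jy = 2.87e-6        # 1-sigma image noise, Jy/beam

# Dynamic range: brightest source over image noise
dynamic_range = peak_flux_jy / noise_jy   # ~7.9e6, i.e. roughly 8x10^6:1
```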


Algorithms for the solution of the quadratic programming problem

**Authors:** Vankova, Martina
**Date:** 2004
**Subjects:** Quadratic programming; Nonlinear programming; Algorithms
**Language:** English
**Type:** Thesis; Masters; MSc
**Identifier:** vital:11086; http://hdl.handle.net/10948/348
**Description:** The purpose of this dissertation was to provide a review of the theory of optimization, in particular quadratic programming, and the algorithms suitable for solving both convex and non-convex quadratic programming problems. Optimization problems arise in a wide variety of fields and many can be effectively modeled with linear equations. However, there are problems for which linear models are not sufficient, thus creating a need for non-linear systems. This dissertation includes a literature study of the formal theory necessary for understanding optimization and an investigation of the algorithms available for solving a special class of the non-linear programming problem, namely the quadratic programming problem. It was not the intention of this dissertation to discuss all possible algorithms for solving the quadratic programming problem; therefore, certain algorithms for convex and non-convex quadratic programming problems were selected for a detailed discussion in the dissertation. Some of the algorithms were selected arbitrarily, because limited information was available comparing the efficiency of the various algorithms. Algorithms available for solving general non-linear programming problems were also included and briefly discussed, as they can be used to solve quadratic programming problems. A number of algorithms were then selected for evaluation, depending on the frequency of use in practice and the availability of software implementing these algorithms. The evaluation included a theoretical and quantitative comparison of the algorithms. The quantitative results were analyzed and discussed, and it was shown that the results supported the theoretical comparison. It was also shown that it is difficult to conclude that one algorithm is better than another, as the efficiency of an algorithm greatly depends on the size of the problem, the complexity of the algorithm and many other implementation issues. Optimization problems arise continuously in a wide range of fields and thus create the need for effective methods of solving them. This dissertation provides the fundamental theory necessary for the understanding of optimization problems, with particular reference to quadratic programming problems and the algorithms that solve such problems.
**Keywords:** Quadratic Programming; Quadratic Programming Algorithms; Optimization; Non-linear Programming; Convex; Non-convex
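As a concrete instance of the problem class discussed, a minimal sketch of solving an equality-constrained convex QP through its KKT system follows. This is a standard textbook method; the helper names are my own, and it is not one of the specific algorithms evaluated in the dissertation:

```python
def solve_linear(M, v):
    """Gaussian elimination with partial pivoting for a square system M x = v."""
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_eq_qp(Q, c, A, b):
    """Minimise 0.5 x^T Q x + c^T x subject to A x = b by solving the
    KKT system  [[Q, A^T], [A, 0]] [x; lam] = [-c; b]."""
    n, m = len(Q), len(A)
    K = [Q[i] + [A[j][i] for j in range(m)] for i in range(n)]
    K += [A[j] + [0.0] * m for j in range(m)]
    rhs = [-ci for ci in c] + list(b)
    sol = solve_linear(K, rhs)
    return sol[:n], sol[n:]   # primal solution and Lagrange multipliers
```

For example, minimising x₁² + x₂² subject to x₁ + x₂ = 1 gives x = (0.5, 0.5); inequality constraints, the harder case treated in the dissertation, require active-set or interior-point machinery on top of this building block.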


The impact of an in-depth code comprehension tool in an introductory programming module

**Authors:** Leppan, Ronald George
**Date:** 2008
**Subjects:** Algorithms; Computer programming; Algorithms -- Study and teaching -- South Africa; Visual perception
**Language:** English
**Type:** Thesis; Masters; MSc
**Identifier:** vital:10471; http://hdl.handle.net/10948/847
**Description:** Reading and understanding algorithms is not an easy task and is often neglected by educators in an introductory programming course. One proposed solution to this problem is the incorporation of a technological support tool to aid program comprehension in introductory programming. Many researchers advocate the identification of beacons and the use of chunking as support for code comprehension. Beacon recognition and chunking can also be used as support in the teaching model of introductory programming. Educators use a variety of different support tools to facilitate program comprehension in introductory programming. A review of a variety of support tools failed to identify an existing tool that supports a teaching model incorporating chunking and the identification of beacons. The experimental support tool in this dissertation (BeReT) is primarily designed to encourage a student to correctly identify beacons within provided program extracts. BeReT can also be used to allow students to group together related statements and to learn about plans implemented in any semantically and syntactically correct algorithm uploaded by an instructor. While these requirements are evident in the design and implementation of BeReT, data is required to measure the effect BeReT has on the in-depth comprehension of introductory programming algorithms. A between-groups experiment is described which compares the program comprehension of students who used BeReT to study various introductory algorithms with students who relied solely on traditional lecturing materials. The use of an eye tracker was incorporated into the empirical study to visualise the results of controlled experiments. The results indicate that a technological support tool like BeReT can have a significantly positive effect on student comprehension of algorithms traditionally taught in introductory programming. This research provides educators with an alternative way to incorporate in-depth code comprehension skills in introductory programming.
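As a hypothetical illustration of the kind of feature such a tool asks students to spot (this example is mine, not taken from the dissertation), the swap idiom inside a sorting loop is the textbook example of a beacon, a stereotyped fragment that signals the surrounding plan to the reader:

```python
def selection_sort(a):
    """Selection sort, with the classic beacon marked in a comment."""
    for i in range(len(a)):
        m = i
        for j in range(i + 1, len(a)):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]   # beacon: the swap idiom signals "sorting"
    return a
```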


Algorithmic skeletons as a method of parallel programming

**Authors:** Watkins, Rees Collyer
**Date:** 1993
**Subjects:** Parallel programming (Computer science); Algorithms
**Language:** English
**Type:** Thesis; Masters; MSc
**Identifier:** vital:4609; http://hdl.handle.net/10962/d1004889
**Description:** A new style of abstraction for program development, based on the concept of algorithmic skeletons, has been proposed in the literature. The programmer is offered a variety of independent algorithmic skeletons, each of which describes the structure of a particular style of algorithm. The appropriate skeleton is used by the system to mould the solution. Parallel programs are particularly appropriate for this technique because of their complexity. This thesis investigates algorithmic skeletons as a method of hiding the complexities of parallel programming from the user, and of guiding the user towards efficient solutions. To explore this approach, this thesis describes the implementation and benchmarking of the divide-and-conquer and task-queue paradigms as skeletons. All but one category of problem, as implemented in this thesis, scale well over eight processors. The rate of speed-up tails off when there are significant communication requirements. The results show that, with some user knowledge, efficient parallel programs can be developed using this method. The evaluation explores methods for fine-tuning some skeleton programs to achieve increased efficiency.
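The skeleton idea can be sketched as a higher-order function: the skeleton fixes the control structure, and the user supplies only the problem-specific operations. The sequential divide-and-conquer sketch below is my own illustration, not the thesis's parallel implementation:

```python
def divide_and_conquer(problem, is_base, solve_base, divide, combine):
    """Generic divide-and-conquer skeleton: the caller supplies the
    problem-specific parts; the skeleton owns the recursion."""
    if is_base(problem):
        return solve_base(problem)
    subresults = [divide_and_conquer(sub, is_base, solve_base, divide, combine)
                  for sub in divide(problem)]
    return combine(subresults)

def merge(parts):
    """Combine step for mergesort: merge two sorted lists."""
    a, b = list(parts[0]), list(parts[1])
    out = []
    while a and b:
        out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
    return out + a + b

def mergesort(xs):
    """Mergesort expressed as an instance of the skeleton."""
    return divide_and_conquer(
        xs,
        is_base=lambda p: len(p) <= 1,
        solve_base=list,
        divide=lambda p: (p[:len(p) // 2], p[len(p) // 2:]),
        combine=merge)
```

In a parallel setting the recursive calls in the list comprehension become independent tasks, which is why the skeleton can hide the parallelism from the user entirely.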


Preimages for SHA-1

**Date:** 2018
**Subjects:** Cryptography; Algorithms; Hashing (Computer science); Computer security
**Type:** text; Thesis; Doctoral; PhD
**Identifier:** http://hdl.handle.net/10962/57885; http://vital.seals.ac.za:8080/10962/84948; vital:27004
**Description:** This research explores the problem of finding a preimage — an input that, when passed through a particular function, will result in a pre-specified output — for the compression function of the SHA-1 cryptographic hash. This problem is much more difficult than the problem of finding a collision for a hash function, and preimage attacks are known for very few popular hash functions. The research begins by introducing the field and giving an overview of the existing work in the area. A thorough analysis of the compression function is made, resulting in alternative formulations for both parts of the function, and in both statistical and theoretical tools to determine the difficulty of the SHA-1 preimage problem. Different representations (And-Inverter Graph, Binary Decision Diagram, Conjunctive Normal Form, Constraint Satisfaction form, and Disjunctive Normal Form) and associated tools to manipulate and/or analyse these representations are then applied and explored, and results are collected and interpreted. In conclusion, the SHA-1 preimage problem remains unsolved and insoluble for the foreseeable future. The primary issue is one of efficient representation; despite a promising theoretical difficulty, both the diffusion characteristics and the depth of the tree stand in the way of efficient search. Despite this, the research served to confirm and quantify the difficulty of the problem both theoretically, using Schaefer's Theorem, and practically, in the context of different representations.
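For context, the compression function under study sits inside the standard SHA-1 construction (FIPS 180). The compact pure-Python reference implementation below is included only to make the round structure concrete; it is not one of the thesis's alternative formulations:

```python
import struct

def _rol(x, n):
    """32-bit left rotation."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> bytes:
    """SHA-1 per FIPS 180: pad, then run each 512-bit block through the
    compression function (80 rounds plus the final feed-forward)."""
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    bit_len = len(message) * 8
    message += b"\x80"                       # padding: a 1 bit, zeros, length
    while len(message) % 64 != 56:
        message += b"\x00"
    message += struct.pack(">Q", bit_len)
    for off in range(0, len(message), 64):   # one compression per block
        w = list(struct.unpack(">16I", message[off:off + 64]))
        for t in range(16, 80):              # message schedule expansion
            w.append(_rol(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
        a, b, c, d, e = h
        for t in range(80):                  # four 20-round stages
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = ((_rol(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF,
                             a, _rol(b, 30), c, d)
        # Feed-forward: add the round output to the chaining value; this is
        # the step that makes inverting the compression function hard
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]
    return struct.pack(">5I", *h)
```

A preimage attack on the compression function would mean, given a target chaining value, recovering a message block and input chaining value mapping to it; the feed-forward and the 80-round diffusion are what the representations discussed above must capture.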


Clustering algorithms and their effect on edge preservation in image compression

**Authors:** Ndebele, Nothando Elizabeth
**Date:** 2009
**Subjects:** Image compression; Vector analysis; Cluster analysis; Cluster analysis -- Data processing; Algorithms
**Language:** English
**Type:** Thesis; Masters; MSc
**Identifier:** vital:5576; http://hdl.handle.net/10962/d1008210
**Description:** Image compression aims to reduce the amount of data that is stored or transmitted for images. One technique that may be used to this end is vector quantization. Vectors may be used to represent images. Vector quantization reduces the number of vectors required for an image by representing a cluster of similar vectors by one typical vector that is part of a set of vectors referred to as the codebook. For compression, for each image vector, only the closest codebook vector is stored or transmitted. For reconstruction, the image vectors are again replaced by the closest codebook vectors. Hence vector quantization is a lossy compression technique, and the quality of the reconstructed image depends strongly on the quality of the codebook. The design of the codebook is therefore an important part of the process. In this thesis we examine three clustering algorithms which can be used for codebook design in image compression: c-means (CM), fuzzy c-means (FCM) and learning vector quantization (LVQ). We give a description of these algorithms and their application to codebook design. Edges are an important part of the visual information contained in an image. It is essential therefore to use codebooks which allow an accurate representation of the edges. One of the shortcomings of using vector quantization is poor edge representation. We therefore carry out experiments using these algorithms to compare their edge-preserving qualities. We also investigate the combination of these algorithms with classified vector quantization (CVQ) and the replication method (RM). Both these methods have been suggested as methods for improving edge representation. We use a cross-validation approach to estimate the mean squared error to measure the performance of each of the algorithms and the edge-preserving methods. The results reflect that the edges are less accurately represented than the non-edge areas when using CM, FCM and LVQ. The advantage of using CVQ is that the time taken for codebook design is reduced, particularly for CM and FCM. RM is found to be effective where the codebook is trained using a set that has larger proportions of edges than the test set.
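The codebook-design step can be sketched with plain c-means (Lloyd-style iteration over assignment and centroid update); the minimal pure-Python version below is an illustration of the algorithm family, not the thesis's experimental code:

```python
import random

def c_means(vectors, k, iters=20, seed=0):
    """Plain c-means codebook design: alternate nearest-codeword
    assignment and centroid update."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)          # initialise from the training set
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:                      # assign each vector to its nearest codeword
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(v, codebook[j])))
            clusters[i].append(v)
        for i, cl in enumerate(clusters):      # move codewords to cluster centroids
            if cl:                             # an empty cluster keeps its codeword
                codebook[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return codebook

def quantize(v, codebook):
    """Compression step: replace an image vector by its nearest codeword."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, c)))
```

The lossy nature of the scheme is visible here: every vector in a cluster is reconstructed as the same codeword, which is exactly why edge vectors, being rare, tend to be represented poorly.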


Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration

**Authors:** Sob, Ulrich Armel Mbou
**Date:** 2020
**Subjects:** CubiCal (Software); Radio -- Interference; Imaging systems in astronomy; Algorithms; Astronomical instruments -- Calibration; Astronomy -- Data processing
**Language:** English
**Type:** Thesis; Doctoral; PhD
**Identifier:** http://hdl.handle.net/10962/147714; vital:38663
**Description:** The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable with MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this because of their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student's t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios where it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of the RFI, and the employed solution interval during radio interferometric calibration. Hence, we investigate the effects of solution intervals and the different factors to consider in order to select adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
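The robustness mechanism can be illustrated in miniature: a Student's t likelihood leads to iteratively reweighted estimation in which large residuals (outliers such as RFI spikes) receive small weights. The toy scalar sketch below is my own illustration of that principle, not CubiCal's solver:

```python
def robust_mean(data, nu=2.0, iters=50):
    """Location estimate under a Student's t likelihood via iteratively
    reweighted least squares: the weight w_i = (nu+1)/(nu + r_i^2/sigma^2)
    shrinks toward zero for large residuals, so outliers barely contribute."""
    mu = sum(data) / len(data)                 # start from the plain mean
    sigma2 = sum((x - mu) ** 2 for x in data) / len(data)
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + (x - mu) ** 2 / sigma2) for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
        sigma2 = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, data)) / sum(w)
    return mu
```

A Gaussian likelihood corresponds to uniform weights, which is why an ordinary least-squares solver lets an RFI-corrupted visibility drag the gain solutions, while the t-distributed weights suppress it.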

Qualitative and quantitative properties of solutions of ordinary differential equations

**Authors:**Ogundare, Babatunde Sunday**Date:**2009**Subjects:**Differential equations , Lyapunov functions , Chebyshev polynomials , Algorithms**Language:**English**Type:**Thesis , Doctoral , PhD (Applied Mathematics)**Identifier:**vital:11588 , http://hdl.handle.net/10353/244 , Differential equations , Lyapunov functions , Chebyshev polynomials , Algorithms**Description:**This thesis is concerned with the qualitative and quantitative properties of solutions of certain classes of ordinary differential equations (ODEs); in particular, linear boundary value problems of second-order ODEs and non-linear ODEs of order at most four. Lyapunov's second method, based on special functions called Lyapunov functions, is employed extensively in this thesis. We construct suitable complete Lyapunov functions to discuss the qualitative properties of solutions to the classes of non-linear ordinary differential equations considered. Though there is no unique way of constructing Lyapunov functions, we adopt Cartwright's method to construct the complete Lyapunov functions required in this thesis. Sufficient conditions are established to discuss qualitative properties such as boundedness, convergence, periodicity and stability of the classes of equations of our focus. Another aspect of this thesis concerns the quantitative properties of solutions. A new scheme based on interpolation and collocation is derived for solving initial value problems for ODEs. This scheme is derived from the general method of deriving spline functions. Also, by exploiting the trigonometric identity property of the Chebyshev polynomials, we develop a new scheme for approximating the solutions of two-point boundary value problems. These schemes are user-friendly, easy to program and execute, and compare favourably with known standard methods for the classes of problems they were derived for**Full Text:**
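The abstract mentions Chebyshev-based approximation of two-point boundary value problems. As a point of reference only, and not the scheme derived in the thesis, the following sketch solves a simple two-point boundary value problem by standard Chebyshev collocation on Gauss-Lobatto points, using the differentiation-matrix construction commonly attributed to Trefethen; the test problem y'' = e^x with homogeneous Dirichlet conditions is an illustrative choice.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x.

    x runs from 1 down to -1; D maps nodal values of a function to
    nodal values of its derivative, exact for polynomials of degree <= N.
    """
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # diagonal fixed by row sums (D of constants = 0)
    return D, x

# Two-point BVP: y'' = exp(x) on [-1, 1], y(-1) = y(1) = 0.
N = 16
D, x = cheb(N)
D2 = D @ D
# Impose the Dirichlet conditions by restricting to interior nodes.
A = D2[1:-1, 1:-1]
y = np.zeros(N + 1)
y[1:-1] = np.linalg.solve(A, np.exp(x[1:-1]))
```

For this smooth right-hand side the collocation solution converges spectrally; the exact solution is y(x) = e^x - x sinh(1) - cosh(1), which the computed values match to near machine precision already at N = 16.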
