Qualitative and quantitative properties of solutions of ordinary differential equations

**Authors:**Ogundare, Babatunde Sunday**Date:**2009**Subjects:**Differential equations , Lyapunov functions , Chebyshev polynomials , Algorithms**Language:**English**Type:**Thesis , Doctoral , PhD (Applied Mathematics)**Identifier:**vital:11588 , http://hdl.handle.net/10353/244 , Differential equations , Lyapunov functions , Chebyshev polynomials , Algorithms**Description:**This thesis is concerned with the qualitative and quantitative properties of solutions of certain classes of ordinary differential equations (ODEs), in particular linear boundary value problems for second-order ODEs and non-linear ODEs of order at most four. Lyapunov's second method, which employs special functions called Lyapunov functions, is used extensively in this thesis. We construct suitable complete Lyapunov functions to discuss the qualitative properties of solutions to the classes of non-linear ordinary differential equations considered. Although there is no unique way of constructing Lyapunov functions, we adopt Cartwright's method to construct the complete Lyapunov functions required in this thesis. Sufficient conditions are established for qualitative properties such as boundedness, convergence, periodicity and stability of the solutions of the equations of interest. Another aspect of this thesis concerns the quantitative properties of solutions. A new scheme based on interpolation and collocation is derived for solving initial value problems for ODEs; it is derived from the general method of deriving spline functions. Also, by exploiting the trigonometric identity property of the Chebyshev polynomials, we develop a new scheme for approximating the solutions of two-point boundary value problems. These schemes are user-friendly, easy to implement as algorithms (computer programs) and easy to execute, and they compare favourably with known standard methods for the classes of problems they were derived for.**Full Text:**
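
The abstract's two-point boundary value scheme is built on Chebyshev polynomials. As a purely illustrative sketch (a standard Chebyshev collocation solve of a model problem, not the author's actual scheme), the approach of representing derivatives on Chebyshev points and solving the resulting linear system looks like this:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and the Chebyshev-Gauss-Lobatto
    points x on [-1, 1] (Trefethen's standard construction)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal fixed by row sums
    return D, x

# Model two-point BVP: u'' = -pi^2 sin(pi x), u(-1) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
D, x = cheb(24)
D2 = (D @ D)[1:-1, 1:-1]                 # second derivative, boundary rows removed
u = np.linalg.solve(D2, -np.pi**2 * np.sin(np.pi * x[1:-1]))
err = np.max(np.abs(u - np.sin(np.pi * x[1:-1])))
```

With smooth data such schemes converge spectrally, so even 24 points drive the error far below typical finite-difference accuracy.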

Conductivity profiles for a horizontally uniform earth

**Authors:**Murrell, Hugh Crozier**Date:**1983**Subjects:**Coen, Shimon -- Criticism and interpretation , Wang-Ho Yu, Michael -- Criticism and interpretation , Algorithms**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5395 , http://hdl.handle.net/10962/d1001984**Description:**An investigation is made into the mathematics behind the non-iterative inversion algorithm of Shimon Coen and Michael Wang-Ho Yu [1981]. The algorithm determines the conductivity profile of a horizontally uniform earth from surface measurements of apparent resistivity with a Schlumberger array. The algorithm is checked by performing the inversion on both artificial and raw field data.**Full Text:**

Algorithmic skeletons as a method of parallel programming

**Authors:**Watkins, Rees Collyer**Date:**1993**Subjects:**Parallel programming (Computer science) , Algorithms**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:4609 , http://hdl.handle.net/10962/d1004889 , Parallel programming (Computer science) , Algorithms**Description:**A new style of abstraction for program development, based on the concept of algorithmic skeletons, has been proposed in the literature. The programmer is offered a variety of independent algorithmic skeletons, each of which describes the structure of a particular style of algorithm. The appropriate skeleton is used by the system to mould the solution. Parallel programs are particularly appropriate for this technique because of their complexity. This thesis investigates algorithmic skeletons as a method of hiding the complexities of parallel programming from users and of guiding them towards efficient solutions. To explore this approach, this thesis describes the implementation and benchmarking of the divide and conquer and task queue paradigms as skeletons. All but one category of problem, as implemented in this thesis, scale well over eight processors. The speed-up tails off when there are significant communication requirements. The results show that, with some user knowledge, efficient parallel programs can be developed using this method. The evaluation explores methods for fine-tuning some skeleton programs to achieve increased efficiency.**Full Text:**
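
In a sequential setting, a divide-and-conquer skeleton is just a higher-order function that fixes the control structure and takes the problem-specific parts as parameters; a parallel skeleton system such as the one this thesis describes would map the recursive calls onto processors. A minimal illustrative sketch (not the thesis's implementation):

```python
def divide_and_conquer(trivial, solve, divide, combine):
    """Divide-and-conquer skeleton: the control structure is fixed here,
    and the caller supplies only the four problem-specific functions."""
    def solver(problem):
        if trivial(problem):
            return solve(problem)
        return combine([solver(p) for p in divide(problem)])
    return solver

def merge(parts):
    """Combine step for mergesort: merge two sorted lists."""
    left, right = parts
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# Instantiating the skeleton yields mergesort without writing any
# recursion or control flow ourselves.
mergesort = divide_and_conquer(
    trivial=lambda xs: len(xs) <= 1,
    solve=lambda xs: xs,
    divide=lambda xs: (xs[:len(xs) // 2], xs[len(xs) // 2:]),
    combine=merge,
)
```

The point of the abstraction is that a parallel runtime could evaluate the recursive calls in `combine([...])` concurrently without the programmer changing any of the four supplied functions.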

Algorithms for the solution of the quadratic programming problem

**Authors:**Vankova, Martina**Date:**2004**Subjects:**Quadratic programming , Nonlinear programming , Algorithms**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:11086 , http://hdl.handle.net/10948/348 , Quadratic programming , Nonlinear programming , Algorithms**Description:**The purpose of this dissertation was to provide a review of the theory of optimization, in particular quadratic programming, and of the algorithms suitable for solving both convex and non-convex quadratic programming problems. Optimization problems arise in a wide variety of fields, and many can be effectively modeled with linear equations. However, there are problems for which linear models are not sufficient, thus creating a need for non-linear systems. This dissertation includes a literature study of the formal theory necessary for understanding optimization, and an investigation of the algorithms available for solving a special class of the non-linear programming problem, namely the quadratic programming problem. It was not the intention of this dissertation to discuss all possible algorithms for solving the quadratic programming problem; therefore, certain algorithms for convex and non-convex quadratic programming problems were selected for detailed discussion. Some of the algorithms were selected arbitrarily, because limited information was available comparing the efficiency of the various algorithms. Algorithms for solving general non-linear programming problems were also included and briefly discussed, as they can be used to solve quadratic programming problems. A number of algorithms were then selected for evaluation, depending on their frequency of use in practice and the availability of software implementing them. The evaluation included a theoretical and a quantitative comparison of the algorithms. The quantitative results were analyzed and discussed, and it was shown that they supported the theoretical comparison.
It was also shown that it is difficult to conclude that one algorithm is better than another, as the efficiency of an algorithm depends greatly on the size of the problem, the complexity of the algorithm and many other implementation issues. Optimization problems arise continuously in a wide range of fields and thus create the need for effective methods of solving them. This dissertation provides the fundamental theory necessary for the understanding of optimization problems, with particular reference to quadratic programming problems and the algorithms that solve such problems. Keywords: Quadratic Programming, Quadratic Programming Algorithms, Optimization, Non-linear Programming, Convex, Non-convex.**Full Text:**
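
For the convex, equality-constrained special case, a quadratic program can be solved in closed form from its KKT conditions. A minimal NumPy sketch (an illustration of the problem class, not one of the dissertation's surveyed algorithms):

```python
import numpy as np

def solve_eq_qp(Q, c, A, b):
    """Minimise 0.5 x^T Q x + c^T x subject to A x = b by solving the
    KKT system  [[Q, A^T], [A, 0]] [x; lam] = [-c; b]  directly."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    return sol[:n], sol[n:]          # primal solution, Lagrange multipliers

# Example: minimise x1^2 + x2^2 subject to x1 + x2 = 1.
x, lam = solve_eq_qp(2 * np.eye(2), np.zeros(2),
                     np.array([[1.0, 1.0]]), np.array([1.0]))
# By symmetry the minimiser is x = (0.5, 0.5).
```

Inequality-constrained and non-convex problems are harder precisely because no single linear solve like this suffices; that is where the active-set and other methods the dissertation compares come in.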

Grabcuts for image segmentation: A comparative study of clustering techniques

**Authors:**Manzi, Nozuko Zuleika**Date:**2019**Subjects:**Algorithms , Computer graphics**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**http://hdl.handle.net/10353/14494 , vital:39995**Description:**Image segmentation is the partitioning of a digital image into small segments such as pixels or sets of pixels. It is significant as it allows for the visualisation of structures of interest, removing unnecessary information. In addition, image segmentation is used in many fields, for instance healthcare (image surgery) and construction, as it enables structure analysis. Segmentation of images can be computationally expensive, especially when a large dataset is used, hence the importance of fast and effective segmentation algorithms. Segmentation is used to locate objects and boundaries (i.e. foreground and background) in images. The aim of this study is to provide a comparison of clustering techniques that would allow the Grabcuts for image segmentation algorithm to be effective and inexpensive. The Grabcuts based method, an extension of the graph cut based method, has been instrumental in solving many problems in computer vision, e.g. image restoration, image segmentation, object recognition, tracking and analysis. According to Ramirez et al. [47], the Grabcuts approach is an iterative, minimal-user-interaction algorithm, as it chooses a segmentation by iteratively revising the foreground and background pixel assignments. The method uses the min-cut/max-flow algorithm proposed by Boykov and Jolly [9] to segment digital images. The input of this approach is a digital image with a selected region of interest (ROI). The ROI is selected using a rectangular bounding box. The pixels inside the bounding box are assigned to the foreground, while the others are assigned to the background.
In this study, the Grabcuts for image segmentation algorithm designed by [48], with a Gaussian Mixture Model (GMM) based on the Kmeans and Kmedoids clustering techniques, is developed and compared. The algorithms developed are run on the Central Processing Unit (CPU) under two scenarios. In scenario 1, the Kmeans and Kmedoids clustering techniques use the squared Euclidean distance measure to calculate the similarities and dissimilarities between pixels in an image. In scenario 2, the Kmeans and Kmedoids clustering techniques use the City Block distance measure for the same purpose. The same images from the Berkeley Segmentation Dataset and Benchmark 500 were used as input to the algorithms, and the number of clusters, K, was varied from 2 to 5. It was observed that, in terms of runtime, the Kmeans clustering technique outperformed the Kmedoids clustering technique under both scenarios for all the test images with K varied from 2 to 5. In addition, the Kmeans clustering technique obtained more compact and separate clusters than its counterpart under scenario 1. On the other hand, the Kmedoids technique obtained more compact and separate clusters than the Kmeans clustering technique under scenario 2. The silhouette validity index favoured the smallest number of clusters for both clustering techniques, suggesting that the optimal number of clusters for the Kmeans and Kmedoids clustering techniques under both scenarios was 2. Although Kmeans required less computation time than its counterpart, the generation of foreground and background took longer for the GMM based on Kmeans than it did for the GMM based on the Kmedoids clustering technique.
Furthermore, the Grabcuts for image segmentation algorithm with a GMM based on the Kmedoids clustering technique was computationally less expensive than its Kmeans-based counterpart. This was observed under both scenario 1 and scenario 2. The Grabcuts for image segmentation algorithm with the GMM based on the Kmeans clustering technique obtained slightly better segmentation results, as far as visual quality is concerned, than its counterpart under the two scenarios considered. On the other hand, the BF-scores showed that the Grabcuts for image segmentation algorithm with the GMM based on Kmedoids produced images with higher BF-scores than its counterpart when K was varied from 2 to 5 for most of the test images. In addition, most of the images obtained the majority of their best segmentation results when K=2. This was observed under scenario 1 as well as scenario 2. Therefore, the Kmedoids clustering technique under scenario 2 with K=2 would be the best option for the segmentation of difficult images in BSDS500. This is due to its ability to generate GMMs and segment difficult images more efficiently (lower time complexity, higher BF-scores, more under-segmented rather than over-segmented images, inter alia) while producing comparable visual segmentation results to those obtained by the Grabcuts for image segmentation: GMM-Kmeans.**Full Text:**
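
The two scenarios amount to running the same clustering loop with two different distance functions. A simplified sketch of that idea (the function names are illustrative; note that with the city-block distance the proper centre is a component-wise median, or a medoid as in the study's Kmedoids variant, whereas this sketch keeps the mean update for brevity):

```python
import numpy as np

def kmeans(points, k, distance, iters=50, init=None, seed=0):
    """Lloyd-style clustering with a pluggable per-point distance function.
    `init` optionally gives the indices of the starting centres."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), k, replace=False) if init is None else np.asarray(init)
    centres = points[idx].astype(float)
    for _ in range(iters):
        d = np.stack([distance(points, c) for c in centres])   # shape (k, n)
        labels = d.argmin(axis=0)                              # nearest centre
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):                          # converged
            break
        centres = new
    return labels, centres

sq_euclidean = lambda pts, c: ((pts - c) ** 2).sum(axis=1)   # scenario 1
city_block = lambda pts, c: np.abs(pts - c).sum(axis=1)      # scenario 2
```

Running the same loop with `sq_euclidean` or `city_block` is exactly the scenario-1 versus scenario-2 comparison in miniature; the study applies this to pixel feature vectors from BSDS500 images.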

The impact of an in-depth code comprehension tool in an introductory programming module

**Authors:**Leppan, Ronald George**Date:**2008**Subjects:**Algorithms , Computer programming , Algorithms -- Study and teaching -- South Africa , Visual perception**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:10471 , http://hdl.handle.net/10948/847 , Algorithms , Computer programming , Algorithms -- Study and teaching -- South Africa , Visual perception**Description:**Reading and understanding algorithms is not an easy task and is often neglected by educators in introductory programming courses. One proposed solution to this problem is the incorporation of a technological support tool to aid program comprehension in introductory programming. Many researchers advocate the identification of beacons and the use of chunking as support for code comprehension. Beacon recognition and chunking can also be used as support in the teaching model of introductory programming. Educators use a variety of support tools to facilitate program comprehension in introductory programming, but a review of these tools fails to identify an existing tool that supports a teaching model incorporating chunking and the identification of beacons. The experimental support tool in this dissertation (BeReT) is primarily designed to encourage a student to correctly identify beacons within provided program extracts. BeReT can also be used to allow students to group together related statements and to learn about plans implemented in any semantically and syntactically correct algorithm uploaded by an instructor. While these requirements are evident in the design and implementation of BeReT, data is required to measure the effect BeReT has on the in-depth comprehension of introductory programming algorithms. A between-groups experiment is described which compares the program comprehension of students who used BeReT to study various introductory algorithms with that of students who relied solely on traditional lecturing materials.
The use of an eye tracker was incorporated into the empirical study to visualise the results of the controlled experiments. The results indicate that a technological support tool like BeReT can have a significantly positive effect on student comprehension of algorithms traditionally taught in introductory programming. This research provides educators with an alternative way to incorporate in-depth code comprehension skills in introductory programming.**Full Text:**

A comparison of exact string search algorithms for deep packet inspection

**Authors:**Hunt, Kieran**Date:**2018**Subjects:**Algorithms , Firewalls (Computer security) , Computer networks -- Security measures , Intrusion detection systems (Computer security) , Deep Packet Inspection**Language:**English**Type:**text , Thesis , Masters , MSc**Identifier:**http://hdl.handle.net/10962/60629 , vital:27807**Description:**Every day, computer networks throughout the world face a constant onslaught of attacks. To combat these, network administrators are forced to employ a multitude of mitigating measures. Devices such as firewalls and Intrusion Detection Systems are prevalent today and employ extensive Deep Packet Inspection to scrutinise each piece of network traffic. Systems such as these usually require specialised hardware to meet the demand imposed by high-throughput networks. Such hardware is extremely expensive and singular in its function. It is with this in mind that string search algorithms are introduced. These algorithms have been proven to perform well when searching through large volumes of text and may be able to perform equally well in the context of Deep Packet Inspection. String search algorithms are designed to match a single pattern to a substring of a given piece of text. This is not unlike the heuristics employed by traditional Deep Packet Inspection systems. This research compares the performance of a large number of string search algorithms during packet processing. Deep Packet Inspection places stringent restrictions on the reliability and speed of the algorithms due to increased performance pressures. A test system had to be designed in order to properly test the string search algorithms in the context of Deep Packet Inspection. The system allowed for precise and repeatable tests of each algorithm, and then for their comparison. Of the algorithms tested, the Horspool and Quick Search algorithms posted the best results for both speed and reliability, while the Not So Naive and Rabin-Karp algorithms were slowest overall.**Full Text:**
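
Horspool, the strongest performer in this comparison, is a simplification of Boyer-Moore: it precomputes a bad-character shift table so that on a mismatch the search window can jump several positions at once. A textbook Python sketch (illustrative only, not the thesis's test-system implementation):

```python
def horspool(pattern, text):
    """Boyer-Moore-Horspool search: return the index of the first
    occurrence of `pattern` in `text`, or -1 if it is absent."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift table: how far the window may jump when the text character
    # aligned with the pattern's last position is `ch`. Characters not
    # in pattern[:-1] allow a full jump of m.
    shift = {ch: m - 1 - i for i, ch in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

The large average shifts are what make Horspool attractive under the throughput pressure of Deep Packet Inspection, where the "text" is a packet payload and the "pattern" is a rule signature.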

A real time Fast Fourier Transform analyser

**Authors:**Fisher, John Stanley**Date:**1980**Subjects:**Fourier transformations , Ionosondes , Algorithms , Computer simulation**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5439 , http://hdl.handle.net/10962/d1001992**Description:**From the requirements of the Ionosonde digitisation project, undertaken by the Rhodes University Antarctic Research Group, it was decided to use the Fast Fourier Transform to compute the spectral analysis. Several FFT algorithms are reviewed and their properties discussed, and the Cooley-Tukey algorithm chosen for utilization. The hardware implementation of this algorithm and the microprogram control of the whole system are discussed in detail, as are the design aspects that required computer simulation. The final testing of the analyser is shown, and includes a test using data from an ionosonde sounding. The conclusions contain details of extensions to the analyser's present operation, required by plans to place the whole Chirpsounder under microprocessor control.**Full Text:**
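For illustration, the radix-2 Cooley-Tukey decomposition that this analyser realises in hardware can be expressed in software as follows (a standard textbook recursion, not the thesis's microprogram):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Split into even- and odd-indexed subsequences and transform each half.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-length transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The divide-and-conquer structure reduces the naive O(n²) discrete Fourier transform to O(n log n), which is what made real-time spectral analysis feasible on 1980s hardware.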


Clustering algorithms and their effect on edge preservation in image compression

**Authors:**Ndebele, Nothando Elizabeth**Date:**2009**Subjects:**Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5576 , http://hdl.handle.net/10962/d1008210 , Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms**Description:**Image compression aims to reduce the amount of data that is stored or transmitted for images. One technique that may be used to this end is vector quantization. Vectors may be used to represent images. Vector quantization reduces the number of vectors required for an image by representing a cluster of similar vectors by one typical vector that is part of a set of vectors referred to as the codebook. For compression, for each image vector, only the closest codebook vector is stored or transmitted. For reconstruction, the image vectors are again replaced by the closest codebook vectors. Hence vector quantization is a lossy compression technique, and the quality of the reconstructed image depends strongly on the quality of the codebook. The design of the codebook is therefore an important part of the process. In this thesis we examine three clustering algorithms which can be used for codebook design in image compression: c-means (CM), fuzzy c-means (FCM) and learning vector quantization (LVQ). We give a description of these algorithms and their application to codebook design. Edges are an important part of the visual information contained in an image. It is essential therefore to use codebooks which allow an accurate representation of the edges. One of the shortcomings of using vector quantization is poor edge representation. We therefore carry out experiments using these algorithms to compare their edge-preserving qualities. We also investigate the combination of these algorithms with classified vector quantization (CVQ) and the replication method (RM).
Both these methods have been suggested as methods for improving edge representation. We use a cross-validation approach to estimate the mean squared error and so measure the performance of each of the algorithms and the edge-preserving methods. The results reflect that the edges are less accurately represented than the non-edge areas when using CM, FCM and LVQ. The advantage of using CVQ is that the time taken for codebook design is reduced, particularly for CM and FCM. RM is found to be effective where the codebook is trained using a set that has larger proportions of edges than the test set.**Full Text:**
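As a minimal illustration of codebook design by clustering (plain c-means in its standard form, assuming nothing about the thesis's own experimental code):

```python
import random

def cmeans_codebook(vectors, k, iters=20, seed=0):
    """Plain c-means (k-means): cluster the training vectors and return
    k centroids, which serve as the vector-quantization codebook."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)
    for _ in range(iters):
        # Assignment step: map each training vector to its nearest codeword.
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            clusters[j].append(v)
        # Update step: move each codeword to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                codebook[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return codebook

def quantize(v, codebook):
    """Encode a vector as the index of its closest codeword (the lossy step)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
```

Transmitting only the codeword index per image block, rather than the block itself, is where the compression comes from; the edge-preservation question studied here is about how faithfully those centroids capture blocks that straddle edges.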


Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration

**Authors:**Sob, Ulrich Armel Mbou**Date:**2020**Subjects:**CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing**Language:**English**Type:**Thesis , Doctoral , PhD**Identifier:**http://hdl.handle.net/10962/147714 , vital:38663**Description:**The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this due to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform statistical analysis on the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios where it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration.
Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of RFI, and the solution interval employed during radio interferometric calibration. Hence, we investigate the effects of solution intervals and the different factors to consider in order to select adequate ones. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.**Full Text:**
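The intuition behind a heavy-tailed Student's t likelihood — that outliers such as RFI spikes receive small weights and so barely bias the fit — can be illustrated with a one-parameter iteratively reweighted estimate. This toy (with an assumed known noise scale) is a sketch of the principle only, not CubiCal's solver:

```python
def robust_mean(data, sigma=1.0, nu=2.0, iters=30):
    """IRLS estimate of a mean under a Student's t likelihood with known
    noise scale sigma: each point gets weight (nu + 1) / (nu + r^2/sigma^2),
    so large residuals (outliers) are strongly downweighted."""
    mu = sum(data) / len(data)          # start from the plain (non-robust) mean
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + ((x - mu) / sigma) ** 2) for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu
```

With an RFI-like spike in the data, the plain mean is dragged far from the true value while the reweighted estimate stays close to it; the robust solver applies the same idea per visibility during gain calibration.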


Preimages for SHA-1

**Authors:**Motara, Yusuf Moosa**Date:**2018**Subjects:**Cryptography , Algorithms , Hashing (Computer science) , Computer security**Language:**English**Type:**text , Thesis , Doctoral , PhD**Identifier:**http://hdl.handle.net/10962/57885 , vital:27004**Description:**This research explores the problem of finding a preimage — an input that, when passed through a particular function, will result in a pre-specified output — for the compression function of the SHA-1 cryptographic hash. This problem is much more difficult than the problem of finding a collision for a hash function, and preimage attacks for very few popular hash functions are known. The research begins by introducing the field and giving an overview of the existing work in the area. A thorough analysis of the compression function is made, resulting in alternative formulations for both parts of the function, and both statistical and theoretical tools to determine the difficulty of the SHA-1 preimage problem. Different representations (And-Inverter Graph, Binary Decision Diagram, Conjunctive Normal Form, Constraint Satisfaction form, and Disjunctive Normal Form) and associated tools to manipulate and/or analyse these representations are then applied and explored, and results are collected and interpreted. In conclusion, the SHA-1 preimage problem remains unsolved and insoluble for the foreseeable future. The primary issue is one of efficient representation; despite a promising theoretical difficulty, both the diffusion characteristics and the depth of the tree stand in the way of efficient search. Despite this, the research served to confirm and quantify the difficulty of the problem both theoretically, using Schaefer's Theorem, and practically, in the context of different representations.**Full Text:**
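To make the problem statement concrete: a brute-force search can find a preimage for a short prefix of a SHA-1 digest, but each additional hex digit of target multiplies the expected work by 16, which is why a full 160-bit preimage is hopeless by search alone. An illustrative toy using Python's standard hashlib, unrelated to the representations studied in the thesis:

```python
import hashlib
from itertools import count

def truncated_preimage(target_hex_prefix: str, nbytes: int = 8) -> bytes:
    """Brute-force an input whose SHA-1 digest starts with the given hex
    prefix. Expected work is ~16**len(prefix) trials, so this is only
    feasible for very short prefixes."""
    for i in count():
        msg = i.to_bytes(nbytes, "big")
        if hashlib.sha1(msg).hexdigest().startswith(target_hex_prefix):
            return msg
```

A two-digit prefix falls in a few hundred trials on average; the full 40-digit digest would take on the order of 2¹⁶⁰ trials, far beyond any conceivable computation.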


Advanced radio interferometric simulation and data reduction techniques

**Authors:**Makhathini, Sphesihle**Date:**2018**Subjects:**Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy**Language:**English**Type:**text , Thesis , Doctoral , PhD**Identifier:**http://hdl.handle.net/10962/57348 , vital:26875**Description:**This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.**Full Text:**
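The quoted dynamic range follows directly from the stated peak flux density and noise level; as a quick sanity check:

```python
# Figures quoted in the abstract for the 3C147 field reduction.
peak_jy = 22.58        # flux density of 3C147 (Jy/beam)
noise_jy = 2.87e-6     # typical 1-sigma image noise (Jy/beam)

# Dynamic range is the ratio of the brightest emission to the noise floor.
dynamic_range = peak_jy / noise_jy
print(f"{dynamic_range:.2e}")  # about 7.87e+06, i.e. ~8x10^6:1 as stated
```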


Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry

**Authors:**Atemkeng, Marcellin T**Date:**2017**Subjects:**Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)**Language:**English**Type:**Thesis , Doctoral , PhD**Identifier:**http://hdl.handle.net/10962/6324 , vital:21089**Description:**In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more favourable interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. 
sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.**Full Text:**
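The decorrelation described here can be demonstrated numerically: boxcar-averaging unit-amplitude visibilities whose phase rotates linearly (an off-centre source on a given baseline) attenuates the averaged amplitude by a sinc-like factor. A minimal sketch of the effect, not the thesis's framework:

```python
import cmath
import math

def averaged_amplitude(phase_rate: float, n: int = 64) -> float:
    """Boxcar-average n unit-amplitude visibility samples whose phase
    advances by phase_rate radians per sample; the drop in the averaged
    amplitude below 1.0 is the smearing (decorrelation) loss."""
    avg = sum(cmath.exp(1j * phase_rate * k) for k in range(n)) / n
    return abs(avg)
```

At the field centre the phase is constant and nothing is lost (`averaged_amplitude(0.0)` is 1.0); a source far enough off-centre that the phase completes a full 2π turn within the averaging interval cancels to zero, and the loss grows monotonically in between — exactly the baseline- and position-dependent attenuation that baseline-dependent window functions aim to reshape.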

