Accelerated implementations of the RIME for DDE calibration and source modelling
- Authors: Van Staden, Joshua
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration , Radio astronomy -- Data processing , Radio interferometers -- Data processing , Radio interferometers -- Calibration -- Data processing
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172422 , vital:42199
- Description: Second- and third-generation calibration methods filter out subtle effects in interferometer data, and therefore yield significantly higher dynamic ranges. These calibration techniques rely on building a model of the sky and corrupting it with models of the effects acting on the sources. The sensitivities of modern instruments call for more elaborate models to capture the level of detail required for accurate calibration. This thesis implements two types of models to be used in second- and third-generation calibration. The first is shapelets, which can be used to model radio source morphologies directly in uv space. The second is Zernike polynomials, which can be used to represent the primary beam of the antenna. We implement these models in the CODEX-AFRICANUS package and provide a set of unit tests for each model. Additionally, we compare our implementations against other methods of representing these objects and instrumental effects, namely the NIFTY-GRIDDER against shapelets and a FITS-interpolation method against the Zernike polynomials. We find that, to achieve sufficient accuracy, our implementation of the shapelet model has a higher runtime than the NIFTY-GRIDDER. However, the NIFTY-GRIDDER cannot simulate a component-based sky model, while the shapelet model can. Additionally, the shapelet model is fully parametric, which allows for integration into a parameterised solver. We find that, while having a smaller memory footprint, our Zernike model has a greater computational complexity than the FITS-interpolated method. However, the Zernike implementation achieves floating-point accuracy in its modelling, whereas the FITS-interpolated model loses some accuracy through the discretisation of the beam. (An illustrative Zernike evaluation sketch follows this record.)
- Full Text:
- Date Issued: 2021
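To make the Zernike-based beam representation described above concrete, the sketch below evaluates a few low-order Zernike terms on the unit disc and sums them into a toy beam pattern. It is a minimal NumPy illustration under assumed conventions, not the CODEX-AFRICANUS implementation; the grid size and the `coeffs` weights are hypothetical.

```python
# Minimal sketch: evaluating Zernike terms on the unit disc and summing
# them into a toy "primary beam". NOT the CODEX-AFRICANUS implementation;
# the coefficient values below are invented purely for illustration.
from math import factorial
import numpy as np

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^|m|(rho) of the Zernike term (n, m)."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        coef = ((-1) ** k * factorial(n - k)
                / (factorial(k)
                   * factorial((n + m) // 2 - k)
                   * factorial((n - m) // 2 - k)))
        out = out + coef * rho ** (n - 2 * k)
    return out

def zernike(n, m, rho, theta):
    """Full Zernike term Z_n^m(rho, theta), set to zero outside the unit disc."""
    angular = np.cos(m * theta) if m >= 0 else np.sin(-m * theta)
    return np.where(rho <= 1.0, zernike_radial(n, m, rho) * angular, 0.0)

# Evaluate a toy beam from a few low-order terms on an (l, m) direction grid.
l, m_dir = np.meshgrid(np.linspace(-1, 1, 129), np.linspace(-1, 1, 129))
rho, theta = np.hypot(l, m_dir), np.arctan2(m_dir, l)
coeffs = {(0, 0): 1.0, (2, 0): -0.3, (4, 0): 0.05}  # hypothetical weights
beam = sum(c * zernike(n, m, rho, theta) for (n, m), c in coeffs.items())
print(beam.shape, beam.min(), beam.max())
```

In practice the coefficients would be obtained by fitting such terms to a measured or simulated beam; here they are chosen only to show how a beam is assembled from the basis.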
An Evaluation of Machine Learning Methods for Classifying Bot Traffic in Software Defined Networks
- Authors: Van Staden, Joshua , Brown, Dane L
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465645 , vital:76628 , xlink:href="https://link.springer.com/chapter/10.1007/978-981-19-7874-6_72"
- Description: Internet security is an ever-expanding field. Cyber-attacks occur very frequently, so detecting them is an important aspect of preserving services. Machine learning offers a helpful tool with which to detect cyber-attacks. However, it is impossible to deploy a machine-learning algorithm to detect attacks in a non-centralized network. Software Defined Networks (SDNs) offer a centralized view of a network, allowing machine learning algorithms to detect malicious activity within it. The InSDN dataset is a recently released dataset containing packets sniffed within a virtual SDN. These packets correspond to various attacks, including DDoS attacks, Probing and Password-Guessing, among others. This study evaluates various machine learning models against this new dataset. Specifically, we evaluate their classification ability and runtimes when trained on fewer features. The machine learning models tested include a Neural Network, Support Vector Machine, Random Forest, Multilayer Perceptron, Logistic Regression, and K-Nearest Neighbours. Cluster-based algorithms such as K-Nearest Neighbours and Random Forest proved to be the best performers. Linear-based algorithms such as the Multilayer Perceptron performed the worst. This suggests a good level of clustering in the top few features, with little room for linear separability. Reducing the number of features significantly reduced training time, particularly for the better-performing models. (A minimal scikit-learn comparison sketch follows this record.)
- Full Text:
- Date Issued: 2021
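As a rough illustration of the comparison described above, the sketch below trains a handful of scikit-learn classifiers on a reduced feature set and times each fit. It is an assumed workflow, not the paper's code: the `insdn.csv` path, the `Label` column name, and the choice of ten top features are placeholders.

```python
# Minimal sketch of comparing several classifiers on a reduced feature set,
# timing the fit and scoring accuracy on a held-out split. File path and
# column names are placeholders, not the paper's actual pipeline.
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("insdn.csv")                            # placeholder path
X, y = df.drop(columns=["Label"]), df["Label"]           # placeholder column
X = SelectKBest(f_classif, k=10).fit_transform(X, y)     # keep top features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)

models = {
    "RandomForest": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=500),
    "LogReg": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, fit={elapsed:.1f}s")
```

Timing only the fit call, as here, mirrors the observation that trimming the feature set mainly pays off in training time.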
An Evaluation of YOLO-Based Algorithms for Hand Detection in the Kitchen
- Authors: Van Staden, Joshua , Brown, Dane L
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465134 , vital:76576 , xlink:href="https://ieeexplore.ieee.org/abstract/document/9519307"
- Description: Convolutional Neural Networks have offered an accurate method with which to run object detection on images. Specifically, the YOLO family of object detection algorithms has proven to be relatively fast and accurate. Since its inception, the different variants of this algorithm have been tested on different datasets. In this paper, we evaluate the performance of these algorithms on the recent Epic Kitchens-100 dataset. This dataset provides egocentric footage of people interacting with various objects in the kitchen. Most prominently shown in the footage is an egocentric view of the participants' hands. We aim to use the YOLOv3 algorithm to detect these hands within the footage provided in this dataset. In particular, we examine the YOLOv3 algorithm using two different backbones: MobileNet-lite and VGG16. We trained them on a mixture of samples from the Egohands and Epic Kitchens-100 datasets. In a separate experiment, average precision was measured on an unseen Epic Kitchens-100 subset. We found that the models are relatively simple and achieve lower scores on the Epic Kitchens-100 dataset, which we attribute to its high background noise. Nonetheless, the VGG16 backbone achieved a higher Average Precision (AP) and is therefore more suited to retrospective analysis. Neither model was suitable for real-time analysis due to the complexity of the egocentric data. (A minimal IoU-matching sketch for this kind of evaluation follows this record.)
- Full Text:
- Date Issued: 2021
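For context on how a predicted hand box is scored against ground truth, the sketch below implements the IoU-based greedy matching that underlies precision/recall and hence Average Precision. The box format, score field, and toy values are assumptions for illustration; this is not the paper's evaluation code.

```python
# Minimal sketch of IoU-based matching behind an Average Precision style
# evaluation. Box format (x1, y1, x2, y2) and the toy detections below are
# assumptions for illustration only.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_detections(preds, gts, thresh=0.5):
    """Greedily match score-sorted predictions to ground-truth boxes."""
    preds = sorted(preds, key=lambda p: -p["score"])
    used, tp, fp = set(), 0, 0
    for p in preds:
        ious = [iou(p["box"], g) if i not in used else 0.0
                for i, g in enumerate(gts)]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= thresh:
            used.add(best)
            tp += 1
        else:
            fp += 1
    return tp, fp, len(gts) - len(used)   # true pos, false pos, false neg

# Toy example: one predicted hand box against one ground-truth box.
preds = [{"box": (40, 60, 120, 150), "score": 0.9}]
gts = [(45, 65, 125, 155)]
print(match_detections(preds, gts))       # -> (1, 0, 0)
```

Accumulating such counts over score thresholds gives the precision-recall curve from which AP is computed.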