Investigating unimodal isolated signer-independent sign language recognition
- Authors: Marais, Marc Jason
- Date: 2024-04-04
- Subjects: Convolutional neural network , Sign language recognition , Human activity recognition , Pattern recognition systems , Neural networks (Computer science)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/435343 , vital:73149
- Description: Sign language serves as the mode of communication for the Deaf and Hard of Hearing community, embodying a rich linguistic and cultural heritage. Recent Sign Language Recognition (SLR) system developments aim to facilitate seamless communication between the Deaf community and the broader society. However, most existing systems are limited by signer-dependent models, hindering their adaptability to diverse signing styles and signers, thus impeding their practical implementation in real-world scenarios. This research explores various unimodal approaches, both pose-based and vision-based, for isolated signer-independent SLR using RGB video input on the LSA64 and AUTSL datasets. The unimodal RGB-only input strategy provides a realistic SLR setting where alternative data sources are either unavailable or necessitate specialised equipment. Through systematic testing scenarios, isolated signer-independent SLR experiments are conducted on both datasets, primarily focusing on AUTSL – a signer-independent dataset. The vision-based R(2+1)D-18 model emerged as the top performer, achieving 90.64% accuracy on the unseen AUTSL dataset test split, closely followed by the pose-based Spatio-Temporal Graph Convolutional Network (ST-GCN) model with an accuracy of 89.95%. Furthermore, these models achieved comparable accuracies at a significantly lower computational demand. Notably, the pose-based approach demonstrates robust generalisation to substantial background and signer variation. Moreover, the pose-based approach demands significantly less computational power and training time than vision-based approaches. Both the proposed unimodal pose-based and vision-based systems were concluded to be effective at classifying sign classes in the LSA64 and AUTSL datasets. , Thesis (MSc) -- Faculty of Science, Ichthyology and Fisheries Science, 2024
- Full Text:
- Date Issued: 2024-04-04
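The vision-based pipeline described in the record above can be sketched, under stated assumptions, with torchvision's off-the-shelf R(2+1)D-18 video model. This is an illustrative setup only, not the thesis implementation; the clip dimensions and the number of sign classes (NUM_CLASSES) are placeholders.

```python
# Minimal sketch: an R(2+1)D-18 video classifier for isolated sign recognition.
# Clip shape and NUM_CLASSES are illustrative placeholders, not dataset facts.
import torch
from torchvision.models.video import r2plus1d_18

NUM_CLASSES = 226  # placeholder: set to the number of sign classes in the dataset

model = r2plus1d_18(num_classes=NUM_CLASSES)

# A batch of RGB clips: (batch, channels, frames, height, width)
clips = torch.randn(2, 3, 16, 112, 112)
logits = model(clips)          # (2, NUM_CLASSES)
pred = logits.argmax(dim=1)    # predicted sign class per clip
print(pred)
```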
A model for measuring and predicting stress for software developers using vital signs and activities
- Authors: Hibbers, Ilze
- Date: 2024-04
- Subjects: Machine learning , Neural networks (Computer science) , Computer software developers
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63799 , vital:73614
- Description: Occupational stress is a well-recognised issue that affects individuals in various professions and industries. Reducing occupational stress has multiple benefits, such as improving employees' health and performance. This study proposes a model to measure and predict occupational stress using data collected in a real IT office environment. Different data sources, such as questionnaires, application software (RescueTime) and Fitbit smartwatches, were used for collecting heart rate (HR), facial emotions, computer interactions, and application usage. The results of the Demand-Control-Support and Effort-Reward questionnaires indicated that the participants experienced high social support and an average level of workload. Participants also reported their daily perceived stress and workload level using a 5-point score. The perceived stress of the participants was overall neutral. No correlation was found between HR, interactions, fear, and meetings. K-means and Bernoulli algorithms were applied to the dataset and two well-separated clusters were formed. The centroids indicated that higher heart rates were grouped either with meetings or with a higher difference in the centre point values for interactions. Silhouette scores and 5-fold validation were used to measure the accuracy of the clusters. However, these clusters were unable to predict the daily reported stress levels. Calculations were done on the computer usage data to measure interaction speeds and time spent working, in meetings, or away from the computer. These calculations were used as input into a decision tree together with the reported daily stress levels. The results of the tree helped to identify which patterns led to stressful days, indicating that days with high time pressure led to more reported stress. A new, more general tree was developed, which was able to predict 82 per cent of the daily stress reported. The main discovery of the research was that stress does not have a straightforward connection with computer interactions, facial emotions, or meetings: high interactions sometimes lead to stress and other times do not. Predicting stress therefore involves finding patterns in how data from different sources interact with each other. Future work will revolve around validating the model in more office environments around South Africa. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
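A decision tree over daily computer-usage summaries, as described in the record above, can be sketched as follows. The feature names and values are hypothetical placeholders, not the study's dataset.

```python
# Illustrative sketch only: a shallow decision tree predicting a reported
# daily stress label from computer-usage features. Data is invented.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-day features:
# [interactions_per_min, hours_in_meetings, hours_at_computer, hours_away]
X = np.array([
    [35.0, 3.5, 6.0, 1.0],
    [12.0, 0.5, 7.5, 0.5],
    [40.0, 2.0, 5.0, 2.0],
    [10.0, 1.0, 6.5, 1.5],
    [38.0, 4.0, 6.5, 0.5],
    [15.0, 0.0, 7.0, 1.0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = stressful day reported, 0 = not

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(cross_val_score(clf, X, y, cv=3).mean())  # rough accuracy estimate
```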
Augmenting the Moore-Penrose generalised Inverse to train neural networks
- Authors: Fang, Bobby
- Date: 2024-04
- Subjects: Neural networks (Computer science) , Machine learning , Mathematical optimization -- Computer programs
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63755 , vital:73595
- Description: An Extreme Learning Machine (ELM) is a non-iterative and fast feedforward neural network training algorithm which uses the Moore-Penrose (MP) generalised inverse of a matrix to compute the weights of the output layer of the neural network, using a random initialisation for the hidden layer. While ELM has been used to train feedforward neural networks, the effectiveness of the MP generalised inverse in training recurrent neural networks is yet to be investigated. The primary aim of this research was to investigate how biases in the output layer and the MP generalised inverse can be used to train recurrent neural networks. To accomplish this, the Bias Augmented ELM (BA-ELM), which concatenated the hidden layer output matrix with a ones-column vector to simulate the biases in the output layer, was proposed. A variety of datasets generated from optimisation test functions, as well as real-world regression and classification datasets, were used to validate BA-ELM. The results showed that, in specific circumstances, BA-ELM was able to perform better than ELM. Following this, Recurrent ELM (R-ELM) was proposed, which uses a recurrent hidden layer instead of a feedforward hidden layer. Recurrent neural networks also rely on having functional feedback connections in the recurrent layer. A hybrid training algorithm, Recurrent Hybrid ELM (R-HELM), was proposed, which uses a gradient-based algorithm to optimise the recurrent layer and the MP generalised inverse to compute the output weights. The evaluation of the R-ELM and R-HELM algorithms was carried out using three different recurrent architectures on two recurrent tasks derived from the Susceptible-Exposed-Infected-Removed (SEIR) epidemiology model. Various training hyperparameters were investigated to determine their effect on the hybrid training algorithm. With optimal hyperparameters, the hybrid training algorithm was able to achieve better performance than the conventional gradient-based algorithm. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
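The output-layer solution described above rests on a closed-form least-squares step. A minimal NumPy sketch of standard ELM and the bias-augmented variant (BA-ELM), assuming illustrative sizes and synthetic data rather than the thesis experiments, is:

```python
# Sketch of ELM output-weight training via the Moore-Penrose pseudoinverse,
# plus BA-ELM, which appends a ones column to the hidden-layer output matrix
# so an output-layer bias is solved for as well. Data and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # inputs
T = np.sin(X.sum(axis=1, keepdims=True))   # regression targets

n_hidden = 50
W = rng.normal(size=(5, n_hidden))         # random input weights (kept fixed)
b = rng.normal(size=(1, n_hidden))         # random hidden biases (kept fixed)
H = np.tanh(X @ W + b)                     # hidden-layer output matrix

beta = np.linalg.pinv(H) @ T               # standard ELM output weights

H_aug = np.hstack([H, np.ones((H.shape[0], 1))])   # BA-ELM: ones column added
beta_aug = np.linalg.pinv(H_aug) @ T

print(np.mean((H @ beta - T) ** 2), np.mean((H_aug @ beta_aug - T) ** 2))
```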
Self-attentive vision in evolutionary robotics
- Authors: Botha, Bouwer
- Date: 2024-04
- Subjects: Evolutionary robotics , Robotics , Neural networks (Computer science)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63628 , vital:73566
- Description: The autonomy of a robot refers to its ability to achieve a task in an environment with minimal human supervision. This may require autonomous solutions to be able to perceive their environment to inform their decisions. An inexpensive and highly informative way that robots can perceive the environment is through vision. The autonomy of a robot is reliant on the quality of the robotic controller. These controllers are the software interface between the robot and environment that determine the actions of the robot based on the perceived environment. Controllers are typically created using manual programming techniques, which become progressively more challenging with increasing complexity of both the robot and task. An alternative to manual programming is the use of machine learning techniques such as those used by Evolutionary Robotics (ER). ER is an area of research that investigates the automatic creation of controllers. Instead of manually programming a controller, an Evolutionary Algorithm can be used to evolve the controller through repeated interactions with the task environment. Employing the ER approach on camera-based controllers, however, has presented problems for conventional ER methods. Firstly, existing architectures that are capable of automatically processing images have a large number of trained parameters. These architectures over-encumber the evolutionary process due to the large search space of possible configurations. Secondly, the evolution of complex controllers needs to be done in simulation, which requires either: (a) the construction of a photo-realistic virtual environment with accurate lighting, texturing and models or (b) potential reduction of the controller capability by simplifying the problem via image preprocessing. Any controller trained in simulation also raises the inherent concern of not being able to transfer to the real world. This study proposes a new technique for the evolution of camera-based controllers in ER that aims to address the highlighted problems. The use of self-attention is proposed to facilitate the evolution of compact controllers that are able to evolve specialized sets of task-relevant features in unprocessed images by focussing on important image regions. Furthermore, a new neural network-based simulation approach, Generative Neuro-Augmented Vision (GNAV), is proposed to simplify simulation construction. GNAV makes use of random data collected in a simple virtual environment and the real world. A neural network is trained to overcome the visual discrepancies between these two environments. GNAV enables a controller to be trained in a simple simulated environment that appears similar to the real environment, while requiring minimal human supervision. The capabilities of the new technique were demonstrated using a series of real-world navigation tasks based on camera vision. Controllers utilizing the proposed self-attention mechanism were trained using GNAV and transferred to a real camera-equipped robot. The controllers were shown to be able to perform the same tasks in the real world. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
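As a point of reference for the attention mechanism mentioned above, the following is a generic scaled dot-product self-attention over patch features, sketched in NumPy. It is not the thesis controller architecture, only an illustration of how attention weights let a compact model focus on task-relevant image regions.

```python
# Generic self-attention over image-patch features (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_patches, d_feat, d_k = 16, 32, 8        # e.g. a 4x4 grid of patch features

patches = rng.normal(size=(n_patches, d_feat))
Wq = rng.normal(size=(d_feat, d_k))
Wk = rng.normal(size=(d_feat, d_k))
Wv = rng.normal(size=(d_feat, d_k))

Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
scores = Q @ K.T / np.sqrt(d_k)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)    # softmax over patches
attended = weights @ V                           # (n_patches, d_k)

# A controller can act on a small pooled summary instead of raw pixels.
summary = attended.mean(axis=0)
print(summary.shape)
```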
A multispectral and machine learning approach to early stress classification in plants
- Authors: Poole, Louise Carmen
- Date: 2022-04-06
- Subjects: Machine learning , Neural networks (Computer science) , Multispectral imaging , Image processing , Plant stress detection
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/232410 , vital:49989
- Description: Crop loss and failure can impact both a country’s economy and food security, often to devastating effect. As such, successfully detecting plant stresses early in their development is essential to minimize spread and damage to crop production. Identification of the stress and the stress-causing agent is the most critical and challenging step in plant and crop protection. With the development of and increase in ease of access to new equipment and technology in recent years, the use of spectroscopy in the early detection of plant diseases has become notably popular. This thesis narrows down the most suitable multispectral imaging techniques and machine learning algorithms for early stress detection. Datasets of visible images and multispectral images were collected. Dehydration was selected as the plant stress type for the main experiments, and data was collected from six plant species typically used in agriculture. Key contributions of this thesis include multispectral and visible datasets showing plant dehydration, as well as a separate preliminary dataset on plant disease. Promising results on dehydration showed statistically significant accuracy improvements in multispectral imaging compared to visible imaging for early stress detection, with multispectral input obtaining 92.50% accuracy over visible input’s 77.50% on general plant species. The system was effective at stress detection on known plant species, with multispectral imaging introducing greater improvement to early stress detection than to advanced stress detection. Furthermore, strong species discrimination was achieved when exclusively testing either early or advanced dehydration against healthy species. , Thesis (MSc) -- Faculty of Science, Ichthyology & Fisheries Sciences, 2022
- Full Text:
- Date Issued: 2022-04-06
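Since multispectral input simply adds image channels, one way such a classifier can be set up is by widening the first convolution of a small CNN. This sketch is illustrative only; the band count, image size and healthy/stressed labels are assumptions, not the thesis configuration.

```python
# Sketch: a small CNN whose first convolution accepts a hypothetical 5-band
# multispectral tile, classifying it as healthy vs. stressed. Illustrative only.
import torch
import torch.nn as nn

n_bands = 5  # hypothetical number of multispectral bands

model = nn.Sequential(
    nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # healthy vs. stressed logits
)

x = torch.randn(4, n_bands, 64, 64)  # a batch of multispectral tiles
print(model(x).shape)                # (4, 2)
```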
Statistical and Mathematical Learning: an application to fraud detection and prevention
- Authors: Hamlomo, Sisipho
- Date: 2022-04-06
- Subjects: Credit card fraud , Bootstrap (Statistics) , Support vector machines , Neural networks (Computer science) , Decision trees , Machine learning , Cross-validation , Imbalanced data
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/233795 , vital:50128
- Description: Credit card fraud is an ever-growing problem. There has been a rapid increase in the rate of fraudulent activities in recent years, resulting in considerable losses to several organizations, companies, and government agencies. Many researchers have focused on detecting fraudulent behaviours early using advanced machine learning techniques. However, credit card fraud detection is not a straightforward task since fraudulent behaviours usually differ for each attempt and the dataset is highly imbalanced, that is, the frequency of non-fraudulent cases outnumbers the frequency of fraudulent cases. In the case of the European credit card dataset, we have a ratio of approximately one fraudulent case to five hundred and seventy-eight non-fraudulent cases. Different methods were implemented to overcome this problem, namely random undersampling, one-sided sampling, SMOTE combined with Tomek links, and parameter tuning. Predictive classifiers, namely logistic regression, decision trees, k-nearest neighbour, support vector machines and multilayer perceptrons, were applied to predict if a transaction is fraudulent or non-fraudulent. Each model's performance was evaluated based on recall, precision, F1-score, the area under the receiver operating characteristic curve, geometric mean and the Matthews correlation coefficient. The results showed that the logistic regression classifier performed better than the other classifiers except when the dataset was oversampled. , Thesis (MSc) -- Faculty of Science, Statistics, 2022
- Full Text:
- Date Issued: 2022-04-06
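The resampling-plus-classification pipeline described above can be sketched with imbalanced-learn and scikit-learn. This is a hedged illustration on synthetic data, not the European credit card dataset or the thesis code.

```python
# Sketch: SMOTE combined with Tomek links, then logistic regression, scored
# with several of the metrics listed above. Synthetic imbalanced data only.
from imblearn.combine import SMOTETomek
from imblearn.metrics import geometric_mean_score
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, matthews_corrcoef, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Resample only the training split so evaluation stays on the true distribution
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
pred = clf.predict(X_te)
print(recall_score(y_te, pred), f1_score(y_te, pred),
      geometric_mean_score(y_te, pred), matthews_corrcoef(y_te, pred))
```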
Deep neural networks for robot vision in evolutionary robotics
- Authors: Watt, Nathan
- Date: 2021-04
- Subjects: Gqeberha (South Africa) , Eastern Cape (South Africa) , Neural networks (Computer science)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/52100 , vital:43448
- Description: Advances in electronics manufacturing have made robots and their sensors cheaper and more accessible. Robots can have a variety of sensors, such as touch sensors, distance sensors and cameras. A robot’s controller is the software which interprets its sensors and determines how the robot will behave. The difficulty in programming robot controllers increases with complex robots and complicated tasks, forming a barrier to deploying robots for real-world applications. Robot controllers can be automatically created with Evolutionary Robotics (ER). ER makes use of an Evolutionary Algorithm (EA) to evolve controllers to complete a particular task. Instead of manually programming controllers, an EA can evolve controllers when provided with the robot’s task. ER has been used to evolve controllers for many different kinds of robots with a variety of sensors; however, the use of robots with on-board camera sensors has been limited. The nature of EAs makes evolving a controller for a camera-equipped robot particularly difficult. There are two main challenges which complicate the evolution of vision-based controllers. First, every image from a camera contains a large amount of information, and a controller needs many parameters to receive that information; however, it is difficult to evolve controllers with such a large number of parameters using EAs. Second, during the process of evolution, it is necessary to evaluate the fitness of many candidate controllers. This is typically done in simulation; however, creating a simulator for a camera sensor is a tedious and time-consuming task, as building a photo-realistic simulated environment requires handcrafted 3-dimensional models, textures and lighting. Two techniques have been used in previous experiments to overcome the challenges associated with evolving vision-based controllers. Either the controller was provided with extremely low-resolution images, or a task-specific algorithm was used to preprocess the images, only providing the necessary information to the controller. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2021
- Full Text: false
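A rough illustration of the first challenge above, why raw camera images inflate the number of evolvable parameters, is the size of a single fully connected layer over a flattened frame. Numbers below are illustrative only.

```python
# Back-of-the-envelope: input weights (plus biases) of one fully connected
# layer grow with pixel count, which is why extreme low-resolution input helps.
def first_layer_params(width: int, height: int, channels: int, hidden_units: int) -> int:
    return (width * height * channels + 1) * hidden_units

print(first_layer_params(64, 64, 3, 16))  # modest RGB frame -> 196,624 weights
print(first_layer_params(8, 8, 1, 16))    # extreme low-res grayscale -> 1,040 weights
```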
The development of an ionospheric storm-time index for the South African region
- Authors: Tshisaphungo, Mpho
- Date: 2021-04
- Subjects: Ionospheric storms -- South Africa , Global Positioning System , Neural networks (Computer science) , Regression analysis , Ionosondes , Auroral electrojet , Geomagnetic indexes , Magnetic storms -- South Africa
- Language: English
- Type: thesis , text , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/178409 , vital:42937 , 10.21504/10962/178409
- Description: This thesis presents the development of a regional ionospheric storm-time model which forms the foundation of an index to provide a quick view of ionospheric storm effects over the South African mid-latitude region. The model is based on foF2 measurements from four South African ionosonde stations. The data coverage for the model development over Grahamstown (33.3°S, 26.5°E), Hermanus (34.42°S, 19.22°E), Louisvale (28.50°S, 21.20°E), and Madimbo (22.39°S, 30.88°E) is 1996-2016, 2009-2016, 2000-2016, and 2000-2016 respectively. Data from the Global Positioning System (GPS) and the radio occultation (RO) technique were used during validation. As the measure of either positive or negative storm effect, the variation of the critical frequency of the F2 layer (foF2) from the monthly median values (denoted as ΔfoF2) is modeled. The modeling of ΔfoF2 is based only on storm-time data with the criteria of Dst ≤ -50 nT and Kp > 4. The modeling methods used in the study were artificial neural network (ANN), linear regression (LR) and polynomial functions. The approach taken was to first test the modeling techniques on a single station before expanding the study to cover the regional aspect. The single-station modeling was developed based on ionosonde data over Grahamstown. The inputs for the model, which related to seasonal variation, diurnal variation, geomagnetic activity and solar activity, were considered. For the geomagnetic activity, three indices, namely the symmetric disturbance in the horizontal component of the Earth’s magnetic field (SYM-H), the Auroral Electrojet (AE) index and the local geomagnetic index A, were included as inputs. The performance of the single-station model revealed that, of the three geomagnetic indices, the SYM-H index has the largest contribution of 41% and 54% based on the ANN and LR techniques respectively. The average correlation coefficient (R) for both the ANN and LR models was 0.8 when validated during the selected storms falling within the period of model development. When validated using storms that fall outside the period of model development, the model gave R values of 0.6 and 0.5 for ANN and LR respectively. In addition, the GPS total electron content (TEC) derived measurements were used to estimate foF2 data. This is because there are more GPS receivers than ionosonde locations and the utilisation of this data increases the spatial coverage of the regional model. The estimation of foF2 from GPS TEC was done at GPS-ionosonde co-locations using polynomial functions. The average R values of 0.69 and 0.65 were obtained between actual and derived ΔfoF2 over the co-locations and other GPS stations respectively. Validation of GPS TEC derived foF2 with RO data over regions outside the ionospheric pierce point coverage of the ionosonde locations gave R greater than 0.9 for the selected storm period of 4-8 August 2011. The regional storm-time model was then developed based on the ANN technique using the four South African ionosonde stations. The maximum and minimum R values of 0.6 and 0.5 were obtained over ionosonde and GPS locations respectively. This model forms the basis for the regional ionospheric storm-time index. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-04
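A minimal sketch of the kind of ΔfoF2 regression described above, assuming illustrative input encodings (cyclic day-of-year and hour terms, a storm-time geomagnetic index and a solar-activity proxy) and random placeholder data rather than the thesis datasets:

```python
# Illustrative ΔfoF2 regressor: cyclic time encodings plus geomagnetic and
# solar inputs feeding a small neural network. All values are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
doy = rng.uniform(1, 365, n)       # day of year (seasonal variation)
hour = rng.uniform(0, 24, n)       # local time (diurnal variation)
sym_h = rng.uniform(-300, 20, n)   # storm-time geomagnetic index, nT
f107 = rng.uniform(70, 250, n)     # solar activity proxy

X = np.column_stack([
    np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365),
    np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
    sym_h, f107,
])
delta_fof2 = rng.normal(size=n)    # placeholder target: deviation from monthly median

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X, delta_fof2)
print(model.predict(X[:3]))
```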
Application of machine learning, molecular modelling and structural data mining against antiretroviral drug resistance in HIV-1
- Authors: Sheik Amamuddy, Olivier Serge André
- Date: 2020
- Subjects: Machine learning , Molecules -- Models , Data mining , Neural networks (Computer science) , Antiretroviral agents , Protease inhibitors , Drug resistance , Multidrug resistance , Molecular dynamics , Renin-angiotensin system , HIV (Viruses) -- South Africa , HIV (Viruses) -- Social aspects -- South Africa , South African Natural Compounds Database
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/115964 , vital:34282
- Description: Millions are affected by the Human Immunodeficiency Virus (HIV) worldwide, even though the death toll is on the decline. Antiretrovirals (ARVs), more specifically protease inhibitors, have shown tremendous success since their introduction into therapy in the mid-1990s by slowing down progression to the Acquired Immune Deficiency Syndrome (AIDS). However, Drug Resistance Mutations (DRMs) are constantly selected for due to viral adaptation, making drugs less effective over time. The current challenge is to manage the infection optimally with a limited set of drugs, with differing associated levels of toxicities, in the face of a virus that (1) exists as a quasispecies, (2) may transmit acquired DRMs to drug-naive individuals and (3) can manifest class-wide resistance due to similarities in design. The presence of latent reservoirs, unawareness of infection status, education and various socio-economic factors make the problem even more complex. Adequate timing and choice of drug prescription, together with treatment adherence, are very important, as drug toxicities, drug failure and sub-optimal treatment regimens leave room for further development of drug resistance. While CD4 cell count and the determination of viral load from patients in resource-limited settings are very helpful to track how well a patient’s immune system is able to keep the virus in check, they can be lengthy in determining whether an ARV is effective. PhenoSense assay kits answer this problem using viruses engineered to contain the patient sequences and evaluating their growth in the presence of different ARVs, but this can be expensive and too involved for routine checks. As a cheaper and faster alternative, genotypic assays provide similar information from HIV pol sequences obtained from blood samples, inferring ARV efficacy on the basis of drug resistance mutation patterns. However, these are inherently complex, and the various methods of in silico prediction, such as Geno2pheno, REGA and Stanford HIVdb, do not always agree in every case, even though this gap decreases as the list of resistance mutations is updated. A major gap in HIV treatment is that the information used for predicting drug resistance is mainly computed from data containing an overwhelming majority of subtype B HIV, when these only comprise about 12% of the worldwide HIV infections. In addition to growing evidence that drug resistance is subtype-related, it is intuitive to hypothesize that, as subtyping is a phylogenetic classification, the more divergent a subtype is from the strains used in training prediction models, the less their resistance profiles would correlate. For the aforementioned reasons, we used a multi-faceted approach to attack the virus in multiple ways. This research aimed to (1) improve resistance prediction methods by focusing solely on the available subtype, (2) mine structural information pertaining to resistance in order to find any exploitable weak points and increase knowledge of the mechanistic processes of drug resistance in HIV protease, and (3) screen for protease inhibitors amongst a database of natural compounds [the South African natural compound database (SANCDB)] to find molecules or molecular properties that could be used to achieve improved inhibition of the drug target. In this work, structural information was mined using the Anisotropic Network Model, Dynamics Cross-Correlation, Perturbation Response Scanning, residue contact network analysis and the radius of gyration. These methods failed to give any resistance-associated patterns in terms of natural movement, internal correlated motions, residue perturbation response, relational behaviour and global compaction respectively. Applications of drug docking, homology-modelling and energy minimization for generating features suitable for machine-learning were not very promising, and rather suggest that the value of binding energies by themselves from Vina may not be very reliable quantitatively. All these failures led to a refinement that resulted in a highly sensitive statistically-guided network construction and analysis, which led to key findings in the early dynamics associated with resistance across all PI drugs. The latter experiment unravelled a conserved lateral expansion motion occurring at the flap elbows, and an associated contraction that drives the base of the dimerization domain towards the catalytic site’s floor in the case of drug resistance. Interestingly, we found that despite the conserved movement, bond angles were degenerate. Alongside this, 16 Artificial Neural Network models were optimised for HIV protease and reverse transcriptase inhibitors, with performances on par with Stanford HIVdb. Finally, we prioritised 9 compounds with potential protease inhibitory activity using virtual screening and molecular dynamics (MD), and additionally suggested a promising modification to one of the compounds. This yielded another molecule inhibiting equally well both opened and closed receptor target conformations, whereby each of the compounds had been selected against an array of multi-drug-resistant receptor variants. While a main hurdle was a lack of non-B subtype data, our findings, especially from the statistically-guided network analysis, may extrapolate to them to a certain extent, as the level of conservation was very high within subtype B, despite all the present variations. This network construction method lays down a sensitive approach for analysing a pair of alternate phenotypes for which complex patterns prevail, given a sufficient number of experimental units. During the course of the research, a weighted contact mapping tool was developed to compare renin-angiotensinogen variants and was packaged as part of the MD-TASK tool suite. Finally, the functionality, compatibility and performance of the MODE-TASK tool were evaluated and confirmed for both Python 2.7.x and Python 3.x, for the analysis of normal modes from single protein structures and essential modes from MD trajectories. These techniques and tools collectively add to the conventional means of MD analysis.
- Full Text:
- Date Issued: 2020
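One of the modelling steps mentioned above, predicting resistance from genotype with small neural networks, can be sketched as follows. The mutation list, data and labels are invented for illustration and are not the thesis models or datasets.

```python
# Hypothetical sketch: protease mutations as binary presence/absence features
# feeding a small neural network that predicts resistant vs. susceptible.
import numpy as np
from sklearn.neural_network import MLPClassifier

mutations = ["L10I", "M46I", "I54V", "V82A", "L90M"]  # illustrative positions
rng = np.random.default_rng(0)

X = rng.integers(0, 2, size=(300, len(mutations)))    # presence/absence per isolate
y = (X[:, 1] & X[:, 3]) | X[:, 4]                     # toy resistance rule, not real data

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```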
Technology in conservation: towards a system for in-field drone detection of invasive vegetation
- Authors: James, Katherine Margaret Frances
- Date: 2020
- Subjects: Drone aircraft in remote sensing , Neural networks (Computer science) , Drone aircraft in remote sensing -- Case studies , Machine learning , Computer vision , Environmental monitoring -- Remote sensing , Invasive plants -- Monitoring
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143408 , vital:38244
- Description: Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, which originates in Australia and is invasive in several countries including South Africa. For this research, however, the methodology is more important than the particular target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when the conditions are ideal. False-positive detections remain an issue that could be improved. This approach shows the potential of drones as an environmental monitoring and management tool when coupled with deep machine learning techniques, and outlines potential problems that may be encountered.
- Full Text:
- Date Issued: 2020
A comparative study of artificial neural networks and physics models as simulators in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2019
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/30789 , vital:31131
- Description: The Evolutionary Robotics (ER) process is a technique that applies evolutionary optimization algorithms to the task of automatically developing, or evolving, robotic control programs. These control programs, or simply controllers, are evolved in order to allow a robot to perform a required task. During the ER process, use is often made of robotic simulators to evaluate the performance of candidate controllers that are produced in the course of the controller evolution process. Such simulators accelerate and otherwise simplify the controller evolution process, as opposed to the more arduous process of evaluating controllers in the real world without use of simulation. To date, the vast majority of simulators that have been applied in ER are physics-based models which are constructed by taking into account the underlying physics governing the operation of the robotic system in question. An alternative approach to simulator implementation in ER is the usage of Artificial Neural Networks (ANNs) as simulators in the ER process. Such simulators are referred to as Simulator Neural Networks (SNNs). Previous studies have indicated that SNNs can successfully be used as an alternative to physics-based simulators in the ER process on various robotic platforms. At the commencement of the current study it was not, however, known how this relatively new method of simulation would compare to traditional physics-based simulation approaches in ER. The study presented in this thesis thus endeavoured to quantitatively compare SNNs and physics-based models as simulators in the ER process. In order to conduct this comparative study, both SNNs and physics simulators were constructed for the modelling of three different robotic platforms: a differentially-steered robot, a wheeled inverted pendulum robot and a hexapod robot. Each of these two types of simulation was then used in simulation-based evolution processes to evolve controllers for each robotic platform. During these controller evolution processes, the SNNs and physics models were compared in terms of their accuracy in making predictions of robotic behaviour, their computational efficiency in arriving at these predictions, the human effort required to construct each simulator and, most importantly, the real-world performance of controllers evolved by making use of each simulator. The results obtained in this study illustrated experimentally that SNNs were, in the majority of cases, able to make more accurate predictions than the physics-based models and these SNNs were arguably simpler to construct than the physics simulators. Additionally, SNNs were also shown to be a computationally efficient alternative to physics-based simulators in ER and, again in the majority of cases, these SNNs were able to produce controllers which outperformed those evolved in the physics-based simulators, when these controllers were uploaded to the real-world robots. The results of this thesis thus suggest that SNNs are a viable alternative to more commonly-used physics simulators in ER and further investigation of the potential of this simulation technique appears warranted.
- Full Text:
- Date Issued: 2019
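As an illustrative aside to the record above: the core idea of an SNN is a learned forward model that replaces a physics simulator when evaluating candidate controllers. The sketch below is a minimal, self-contained example of that idea for a toy differentially-steered robot. An MLP is fitted to command-to-displacement samples (generated synthetically here, standing in for empirically logged robot data), and a simple (mu + lambda) evolutionary loop then evolves a goal-seeking controller entirely inside the learned simulator. The toy kinematics, the 4-parameter linear controller and all names are assumptions for illustration, not the thesis's actual models.

```python
# Illustrative sketch only: a learned forward model (SNN) used inside an
# evolutionary loop to evaluate candidate controllers.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
DT, WHEELBASE = 0.1, 0.2  # assumed time step [s] and wheel separation [m]

def true_step(v_left, v_right):
    """Toy kinematics standing in for real logged robot motion."""
    forward = 0.5 * (v_left + v_right) * DT
    turn = (v_right - v_left) / WHEELBASE * DT
    return forward, turn

# 1. "Log" command -> displacement samples and fit the simulator network (SNN).
cmds = rng.uniform(-1.0, 1.0, size=(2000, 2))
disp = np.array([true_step(vl, vr) for vl, vr in cmds])
snn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
snn.fit(cmds, disp)

def rollout(params, steps=60):
    """Fitness of a linear goal-seeking controller, evaluated using only the SNN."""
    x = y = heading = 0.0
    goal = np.array([1.0, 1.0])
    for _ in range(steps):
        bearing = np.arctan2(goal[1] - y, goal[0] - x) - heading
        dist = np.hypot(goal[0] - x, goal[1] - y)
        # Hypothetical 4-parameter controller: wheel speeds from (dist, bearing).
        v_left = np.tanh(params[0] * dist + params[1] * bearing)
        v_right = np.tanh(params[2] * dist + params[3] * bearing)
        forward, turn = snn.predict([[v_left, v_right]])[0]
        x += forward * np.cos(heading)
        y += forward * np.sin(heading)
        heading += turn
    return np.hypot(goal[0] - x, goal[1] - y)  # final distance to goal (lower is better)

# 2. Simple (mu + lambda) evolution of controller parameters in simulation.
pop = rng.normal(0.0, 1.0, size=(20, 4))
for gen in range(15):
    fitness = np.array([rollout(p) for p in pop])
    parents = pop[np.argsort(fitness)[:5]]                          # keep the 5 best
    children = np.repeat(parents, 3, axis=0) + rng.normal(0.0, 0.2, size=(15, 4))
    pop = np.vstack([parents, children])                            # next generation
print("best final distance to goal:", min(rollout(p) for p in pop))
```

The best evolved parameters would then be uploaded to the physical robot for real-world evaluation, which is the step the thesis uses to compare SNNs against physics-based simulators.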
Deep learning applied to the semantic segmentation of tyre stockpiles
- Authors: Barfknecht, Nicholas Christopher
- Date: 2018
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/23947 , vital:30647
- Description: The global push for manufacturing which is environmentally sustainable has disrupted standard methods of waste tyre disposal. This push is further intensified by the health and safety risks discarded tyres pose to the surrounding population. Waste tyre recycling initiatives in South Africa are on the increase; however, there is still a growing number of undocumented tyre stockpiles developing throughout the country. The plans put in place to eradicate these tyre stockpiles have been met with logistical issues relating to collection, transport and storage, caused by the remoteness of the stockpile locations. Eastwood (2016) aimed to optimise the collection logistics by estimating the number of visible tyres from images of tyre stockpiles. This research was limited by the need for manual segmentation of each tyre stockpile located within each image. This research proposes the use of semantic segmentation to automatically segment images of tyre stockpiles. An initial review of neural network, convolutional network and semantic segmentation literature resulted in the selection of Dilated Net as the semantic segmentation architecture for this research. Dilated Net builds upon the VGG-16 classification architecture to perform semantic segmentation. Classification experiments were first conducted and evaluated using precision, recall and F1-score. The results indicated that, regardless of tyre stockpile image dimension, fairly high levels of classification accuracy can be attained. This was followed by semantic segmentation experiments which made use of intersection over union (IoU) and pixel accuracy to evaluate the effectiveness of Dilated Net on images of tyre stockpiles. The results indicated that accurate tyre stockpile segmentation regions can be obtained and that the trained model generalises well to unseen images.
- Full Text:
- Date Issued: 2018
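The segmentation results in the record above are reported in terms of intersection over union (IoU) and pixel accuracy. As a small illustration of how those two metrics are computed for a binary tyre/background mask (the masks below are random placeholders, not thesis data, and the function names are illustrative):

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == truth))

def iou(pred, truth, positive=1):
    """Intersection over union for one class (here: the 'tyre' class)."""
    pred_pos, true_pos = (pred == positive), (truth == positive)
    intersection = np.logical_and(pred_pos, true_pos).sum()
    union = np.logical_or(pred_pos, true_pos).sum()
    return float(intersection) / union if union else 1.0

# Placeholder 0/1 masks standing in for model output and manual annotation.
rng = np.random.default_rng(1)
truth = (rng.random((256, 256)) > 0.7).astype(int)
pred = truth.copy()
pred[rng.random((256, 256)) > 0.9] ^= 1   # corrupt roughly 10% of the pixels

print("pixel accuracy:", pixel_accuracy(pred, truth))
print("tyre IoU:      ", iou(pred, truth))
```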
Modelling Ionospheric vertical drifts over the African low latitude region
- Authors: Dubazane, Makhosonke Berthwell
- Date: 2018
- Subjects: Ionospheric drift , Magnetometers , Functions, Orthogonal , Neural networks (Computer science) , Ionospheric electron density -- Africa , Communication and Navigation Outage Forecasting Systems (C/NOFS)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63356 , vital:28396
- Description: Low/equatorial latitude vertical plasma drifts and electric fields govern the formation and changes of ionospheric density structures which affect space-based systems such as communications, navigation and positioning. Dynamical and electrodynamical processes play important roles in plasma distribution at different altitudes. Because of the high variability of E × B drift in low latitude regions, coupled with various processes that sometimes originate from high latitudes, especially during geomagnetic storm conditions, it is challenging to develop accurate vertical drift models. This is compounded by the fact that there are very few instruments dedicated to providing electric field, and hence E × B drift, data in low/equatorial latitude regions. In particular, there exists no ground-based instrument for direct measurement of E × B drift in the African sector. This study presents the first investigation aimed at modelling the long-term variability of low latitude vertical E × B drift over the African sector, using a combination of Communication and Navigation Outage Forecasting Systems (C/NOFS) and ground-based magnetometer observations/measurements during 2008-2013. Because the approach is based on the estimation of the equatorial electrojet from ground-based magnetometer observations, the developed models are only valid for local daytime. Three modelling techniques have been considered. The application of Empirical Orthogonal Functions and partial least squares has been performed on vertical E × B drift modelling for the first time. Artificial neural networks, which have the advantage of learning underlying relationships between a set of inputs and a known output, were also used in vertical E × B drift modelling. Due to the lack of E × B drift data over the African sector, the developed models were validated using satellite data and the climatological Scherliess-Fejer model incorporated within the International Reference Ionosphere model. A maximum correlation coefficient of ∼ 0.8 was achieved when validating the developed models with C/NOFS E × B drift observations that were not used in any model development. For most of the time, the climatological model overestimates the local daytime vertical E × B drift velocities. The methods and approach presented in this study provide a background for constructing vertical E × B drift databases in longitude sectors that do not have radar instrumentation. This will in turn make it possible to study the day-to-day variability of vertical E × B drift and hopefully lead to the development of regional and global models that incorporate local time information in different longitude sectors.
- Full Text:
- Date Issued: 2018
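One of the three techniques named in the record above is partial least squares (PLS) regression from magnetometer-derived quantities to the observed vertical drift. The snippet below is a minimal sketch of that kind of mapping with scikit-learn's PLSRegression. The feature set (an equatorial-electrojet proxy plus local-time, seasonal and solar-activity terms) and the synthetic data are assumptions for illustration only, not the study's actual dataset or feature choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1500

# Hypothetical daytime feature set: EEJ strength proxy (nT), local-time and
# day-of-year harmonics, and a solar-activity index.
eej = rng.normal(60.0, 20.0, n)
lt = rng.uniform(7.0, 17.0, n)              # local time in hours (daytime only)
doy = rng.integers(1, 366, n)
f107 = rng.uniform(70.0, 180.0, n)
X = np.column_stack([
    eej,
    np.sin(2 * np.pi * lt / 24), np.cos(2 * np.pi * lt / 24),
    np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25),
    f107,
])
# Synthetic "observed" vertical drift (m/s), loosely tied to the EEJ proxy.
y = 0.3 * eej + 5.0 * np.sin(2 * np.pi * lt / 24) + rng.normal(0.0, 4.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=3)
pls.fit(X_train, y_train)
pred = pls.predict(X_test).ravel()

r = np.corrcoef(pred, y_test)[0, 1]
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"test correlation r = {r:.2f}, RMSE = {rmse:.2f} m/s")
```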
Tomographic imaging of East African equatorial ionosphere and study of equatorial plasma bubbles
- Authors: Giday, Nigussie Mezgebe
- Date: 2018
- Subjects: Ionosphere -- Africa, Central , Tomography -- Africa, Central , Global Positioning System , Neural networks (Computer science) , Space environment , Multi-Instrument Data Analysis System (MIDAS) , Equatorial plasma bubbles
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63980 , vital:28516
- Description: In spite of the fact that the African ionospheric equatorial region has the largest ground footprint along the geomagnetic equator, it has not been well studied due to the absence of adequate ground-based instruments. This thesis presents research on both tomographic imaging of the African equatorial ionosphere and the study of ionospheric irregularities/equatorial plasma bubbles (EPBs) under varying geomagnetic conditions. The Multi-Instrument Data Analysis System (MIDAS), an inversion algorithm, was investigated for its validity and ability as a tool to reconstruct multi-scaled ionospheric structures for different geomagnetic conditions. This was done for the narrow East African longitude sector with data from the available ground Global Positioning System (GPS) receivers. The MIDAS results were compared to the results of two models, namely the IRI and GIM. MIDAS results compared more favourably with the observed vertical total electron content (VTEC), with a computed maximum correlation coefficient (r) of 0.99 and minimum root-mean-square error (RMSE) of 2.91 TECU, than did the results of the IRI-2012 and GIM models, with maximum r of 0.93 and 0.99, and minimum RMSE of 13.03 TECU and 6.52 TECU, respectively, over all the test stations and validation days. The ability of MIDAS to reconstruct storm-time TEC was also compared with the results produced by the use of an Artificial Neural Network (ANN) for the African low- and mid-latitude regions. In terms of latitude, on average, MIDAS performed 13.44 % better than the ANN in the African mid-latitudes, while MIDAS underperformed in the low latitudes. This thesis also reports on the effects of moderate geomagnetic conditions on the evolution of EPBs and/or ionospheric irregularities during their season of occurrence, using data from (or measurements by) space- and ground-based instruments for the East African equatorial sector. The study showed that the strength of the daytime equatorial electrojet (EEJ), the steepness of the TEC peak-to-trough gradient and/or the meridional/transequatorial thermospheric winds sometimes have collective/interwoven effects, while at other times one mechanism dominates. In summary, this research offered tomographic results that outperform the results of the commonly used (“standard”) global models (i.e. IRI and GIM) for a longitude sector of importance to space weather, which has not been adequately studied due to a lack of sufficient instrumentation.
- Full Text:
- Date Issued: 2018
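MIDAS itself is a specific inversion algorithm, but the underlying idea of ionospheric tomography in the record above (recovering an electron-density grid from slant TEC accumulated along many receiver-to-satellite ray paths) can be illustrated with a generic Kaczmarz/ART iteration on a toy linear system. Everything below (grid size, ray geometry, densities) is synthetic and purely illustrative, not the MIDAS formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_rays = 64, 200

# Toy geometry matrix: A[i, j] = path length of ray i through voxel j
# (most entries zero, since each ray crosses only a few voxels).
A = rng.random((n_rays, n_voxels)) * (rng.random((n_rays, n_voxels)) > 0.8)
x_true = rng.random(n_voxels) * 10.0      # "true" electron densities
b = A @ x_true                            # simulated slant-TEC measurements

# Kaczmarz / ART: sweep over rays, projecting the estimate onto each
# measurement hyperplane a_i . x = b_i in turn.
x = np.zeros(n_voxels)
for sweep in range(50):
    for i in range(n_rays):
        a = A[i]
        norm2 = a @ a
        if norm2 > 0:
            x += (b[i] - a @ x) / norm2 * a

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error after 50 sweeps: {rel_err:.3f}")
```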
Wireless industrial intelligent controller for a non-linear system
- Authors: Fernandes, John Manuel
- Date: 2015
- Subjects: Neural networks (Computer science) , Linear systems
- Language: English
- Type: Thesis , Masters , MEngineering (Mechatronics)
- Identifier: http://hdl.handle.net/10948/9021 , vital:26457
- Description: Modern neural network (NN) based control schemes have surmounted many of the limitations found in traditional control approaches. Nevertheless, these modern control techniques have only recently been introduced for use on high-specification Programmable Logic Controllers (PLCs), and usually at a very high cost in terms of the required software and hardware. This 'intelligent' control in the sector of industrial automation, specifically on standard PLCs, thus remains an area of study that is open to further research and development. The research documented in this thesis examined the effectiveness of linear traditional control schemes such as Proportional Integral Derivative (PID), Lead and Lead-Lag control, in comparison to non-linear NN-based control schemes, when applied to a strongly non-linear platform. To this end, a mechatronic-type balancing system, namely the Ball-on-Wheel (BOW) system, was designed, constructed and modelled. Thereafter, various traditional and intelligent controllers were implemented in order to control the system. The BOW platform may be taken to represent any single-input, single-output (SISO) non-linear system in use in the real world. The system makes use of current industrial technology, including a standard PLC as the digital computational platform, a servo drive and wireless access for remote control. The results gathered from the research revealed that NN-based control schemes (i.e. Pure NN and NN-PID), although comparatively slower in response, have greater advantages over traditional controllers in that they are able to adapt to external system changes as well as system non-linearity through a process of learning. These controllers also reduce the guesswork that is usually involved with the traditional control approaches, where cumbersome modelling, linearization or manual tuning is required. Furthermore, the research showed that online-learning adaptive traditional controllers, such as the NN-PID controller, which combines the best of both the intelligent and traditional controllers, may be implemented easily and with minimum expense on standard PLCs.
- Full Text:
- Date Issued: 2015
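The traditional controllers compared in the record above include a classical PID loop, whose discrete form is simple enough to state directly. The sketch below implements a generic positional PID controller and runs it against a toy first-order plant; the NN and NN-PID controllers in the thesis additionally adapt such gains online through learning, which is not reproduced here. The gains, the plant model and the sample time are illustrative assumptions, not the thesis's BOW system or its PLC implementation.

```python
class PID:
    """Discrete positional PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant x' = -x + u, integrated with Euler steps.
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
x = 0.0
for step in range(500):
    u = pid.update(setpoint=1.0, measurement=x)
    x += (-x + u) * dt
print(f"plant output after 5 s: {x:.3f} (setpoint 1.0)")
```

An NN-PID scheme of the kind described above would replace the fixed kp, ki and kd with values produced (and continually adjusted) by a small neural network driven by the observed tracking error.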
A hybridisation technique for game playing using the upper confidence for trees algorithm with artificial neural networks
- Authors: Burger, Clayton
- Date: 2014
- Subjects: Neural networks (Computer science) , Computer algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/3957 , vital:20495
- Description: In the domain of strategic game playing, the use of statistical techniques such as the Upper Confidence for Trees (UCT) algorithm has become the norm, as they offer many benefits over classical algorithms. These benefits include requiring no game-specific strategic knowledge and time-scalable performance. UCT does not incorporate any strategic information specific to the game considered, but instead uses repeated sampling to effectively brute-force search through the game tree or search space. The lack of game-specific knowledge in UCT is thus both a benefit and a strategic disadvantage. Pattern recognition techniques, specifically Neural Networks (NNs), were identified as a means of addressing the lack of game-specific knowledge in UCT. Through a novel hybridisation technique which combines UCT and trained NNs for pruning, the UCT-NN algorithm was derived. The NN component of UCT-NN was trained using a UCT self-play scheme to generate game-specific knowledge without the need to construct and manage game databases for training purposes. The UCT-NN algorithm is outlined for pruning in the game of Go-Moku as a candidate case study for this research. The UCT-NN algorithm contained three major parameters which emerged from the UCT algorithm, the use of NNs and the pruning schemes considered. Suitable methods for finding candidate values for these three parameters were outlined and applied to the game of Go-Moku on a 5 by 5 board. An empirical investigation of the playing performance of UCT-NN was conducted in comparison to UCT through three benchmarks. The benchmarks comprise a common randomly moving opponent, a common UCTmax player which is given a large amount of playing time, and a pair-wise tournament between UCT-NN and UCT. The results of the performance evaluation for 5 by 5 Go-Moku were promising, which prompted an evaluation on a larger 9 by 9 Go-Moku board. The results of both evaluations indicate that the time allocated to the UCT-NN algorithm directly affects its performance when compared to UCT. The UCT-NN algorithm generally performs better than UCT in games with very limited time constraints in all benchmarks considered, except when playing against a randomly moving player in 9 by 9 Go-Moku. In real-time and near-real-time Go-Moku games, UCT-NN provides statistically significant improvements compared to UCT. The findings of this research contribute to the realisation of applying game-specific knowledge to the UCT algorithm.
- Full Text:
- Date Issued: 2014
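The record above hinges on two ideas: UCT's bandit-style child selection and an NN used to prune candidate moves. The following is a minimal sketch of both, assuming the standard UCB1 selection rule and a generic top-k pruning step; the class layout, the exploration constant and the stand-in move scorer are illustrative assumptions, and the real UCT-NN pruning scheme is described in the thesis, not reproduced here.

```python
import math
import random

class Node:
    def __init__(self, move=None):
        self.move = move
        self.wins = 0.0
        self.visits = 0
        self.children = []

def ucb1_select(parent, c=1.4):
    """UCT selection: pick the child maximising win rate + exploration bonus."""
    def score(ch):
        if ch.visits == 0:
            return float("inf")          # always expand unvisited children first
        return ch.wins / ch.visits + c * math.sqrt(math.log(parent.visits) / ch.visits)
    return max(parent.children, key=score)

def prune_children(parent, scorer, keep=3):
    """NN-style pruning: keep only the k children ranked best by an external scorer."""
    parent.children.sort(key=scorer, reverse=True)
    parent.children = parent.children[:keep]

# Tiny demonstration with made-up statistics and a stand-in move scorer
# (in UCT-NN this role would be played by the trained network).
root = Node()
root.visits = 50
for m in range(8):
    ch = Node(move=m)
    ch.visits = random.randint(1, 10)
    ch.wins = random.uniform(0, ch.visits)
    root.children.append(ch)

prune_children(root, scorer=lambda ch: ch.wins / ch.visits, keep=3)
best = ucb1_select(root)
print("selected move:", best.move)
```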
Updating the ionospheric propagation factor, M(3000)F2, global model using the neural network technique and relevant geophysical input parameters
- Authors: Oronsaye, Samuel Iyen Jeffrey
- Date: 2013
- Subjects: Neural networks (Computer science) , Ionospheric radio wave propagation , Ionosphere , Geophysics , Ionosondes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5434 , http://hdl.handle.net/10962/d1001609 , Neural networks (Computer science) , Ionospheric radio wave propagation , Ionosphere , Geophysics , Ionosondes
- Description: This thesis presents an update to the ionospheric propagation factor, M(3000)F2, global empirical model developed by Oyeyemi et al. (2007) (NNO). An additional aim of this research was to produce the updated model in a form that could be used within the International Reference Ionosphere (IRI) global model without adding to the complexity of the IRI. M(3000)F2 is the highest frequency at which a radio signal can be received over a distance of 3000 km after reflection in the ionosphere. The study employed the artificial neural network (ANN) technique using relevant geophysical input parameters which are known to influence the M(3000)F2 parameter. Ionosonde data from 135 ionospheric stations globally, including a number of equatorial stations, were available for this work. M(3000)F2 hourly values from 1976 to 2008, spanning all periods of low and high solar activity, were used for model development and verification. A preliminary investigation was first carried out using a relatively small dataset to determine the appropriate input parameters for global M(3000)F2 parameter modelling. Inputs representing diurnal variation, seasonal variation, solar variation, modified dip latitude, longitude and latitude were found to be the optimum parameters for modelling the diurnal and seasonal variations of the M(3000)F2 parameter on both a temporal and a spatial basis. The outcome of the preliminary study was applied to the overall dataset to develop a comprehensive ANN M(3000)F2 model which displays a remarkable improvement over the NNO model as well as the IRI version. The model shows 7.11% and 3.85% improvement over the NNO model, and 13.04% and 10.05% over the IRI M(3000)F2 model, around high and low solar activity periods respectively. A comparison of the diurnal structure of the ANN and the IRI predicted values reveals that the ANN model is more effective in representing the diurnal structure of the M(3000)F2 values than the IRI M(3000)F2 model. The capability of the ANN model in reproducing the seasonal variation pattern of the M(3000)F2 values at 00h00UT, 06h00UT, 12h00UT and 18h00UT more appropriately than the IRI version is illustrated in this work. A significant result obtained in this study is the ability of the ANN model to improve the post-sunset predicted values of the M(3000)F2 parameter, which are known to be problematic for the IRI M(3000)F2 model in the low-latitude and equatorial regions. The final M(3000)F2 model provides for an improved equatorial prediction and a simplified input space that allows for easy incorporation into the IRI model.
- Full Text:
- Date Issued: 2013
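The model in the record above feeds diurnal, seasonal, solar-activity and location terms into an ANN. The sketch below shows one common way to encode such inputs (sine/cosine pairs for hour of day and day of year, plus a solar index and modified dip latitude) and fit a small MLP regressor. The target values here are synthetic placeholders, not ionosonde data, and the exact input set and network size used in the thesis are only approximated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 3000

hour = rng.uniform(0, 24, n)
doy = rng.uniform(1, 366, n)
f107 = rng.uniform(70, 200, n)          # solar-activity proxy
modip = rng.uniform(-60, 60, n)         # modified dip latitude (degrees)

# Cyclical encoding keeps hour 23 close to hour 0 and day 365 close to day 1.
X = np.column_stack([
    np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
    np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25),
    f107 / 200.0,
    modip / 60.0,
])
# Synthetic stand-in for M(3000)F2 (typical values lie roughly between 2 and 4).
y = 3.0 - 0.4 * np.cos(2 * np.pi * hour / 24) + 0.002 * (f107 - 120) + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(X_train, y_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"test RMSE on synthetic M(3000)F2: {rmse:.3f}")
```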
Universal approximation properties of feedforward artificial neural networks.
- Authors: Redpath, Stuart Frederick
- Date: 2011
- Subjects: Neural networks (Computer science) , Artificial intelligence -- Biological applications , Functional analysis , Weierstrass-Stone Theorem , Banach-Hahn theorem
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5430 , http://hdl.handle.net/10962/d1015869
- Description: In this thesis we summarise several results in the literature which show the approximation capabilities of multilayer feedforward artificial neural networks. We show that multilayer feedforward artificial neural networks are capable of approximating continuous and measurable functions from R^n to R to any degree of accuracy under certain conditions. In particular, making use of the Stone-Weierstrass and Hahn-Banach theorems, we show that a multilayer feedforward artificial neural network can approximate any continuous function to any degree of accuracy, by using either an arbitrary squashing function or any continuous sigmoidal function for activation. Making use of the Stone-Weierstrass Theorem again, we extend these approximation capabilities of multilayer feedforward artificial neural networks to the space of measurable functions under any probability measure.
- Full Text:
- Date Issued: 2011
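For reference, the single-hidden-layer approximation result summarised in the record above can be stated in the standard Cybenko/Hornik form as follows, where σ denotes a continuous sigmoidal (or, more generally, squashing) activation function and K is a compact subset of R^n:

```latex
% Universal approximation (single hidden layer, sigmoidal activation):
% every continuous f on a compact K \subset R^n can be approximated uniformly
% on K by a finite sum of ridge functions.
\[
\forall f \in C(K),\ \forall \varepsilon > 0\ \ \exists N \in \mathbb{N},\
\alpha_j, b_j \in \mathbb{R},\ w_j \in \mathbb{R}^n :\qquad
\sup_{x \in K}\,\Bigl|\, f(x) - \sum_{j=1}^{N} \alpha_j\,
\sigma\!\left(w_j^{\top} x + b_j\right) \Bigr| < \varepsilon .
\]
```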
Artificial neural networks as simulators for behavioural evolution in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2010
- Subjects: Neural networks (Computer science) , Robotics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10462 , http://hdl.handle.net/10948/1476 , Neural networks (Computer science) , Robotics
- Description: Robotic simulators for use in Evolutionary Robotics (ER) have certain challenges associated with the complexity of their construction and the accuracy of predictions made by these simulators. Such robotic simulators are often based on physics models, which have been shown to produce accurate results. However, the construction of physics-based simulators can be complex and time-consuming. Alternative simulation schemes construct robotic simulators from empirically-collected data. Such empirical simulators, however, also have associated challenges, such as that some of these simulators do not generalize well on the data from which they are constructed, as these models employ simple interpolation on said data. As a result of the identified challenges in existing robotic simulators for use in ER, this project investigates the potential use of Artificial Neural Networks, henceforth simply referred to as Neural Networks (NNs), as alternative robotic simulators. In contrast to physics models, NN-based simulators can be constructed without needing an explicit mathematical model of the system being modeled, which can simplify simulator development. Furthermore, the generalization capabilities of NNs suggest that NNs could generalize well on data from which these simulators are constructed. These generalization abilities of NNs, along with NNs’ noise tolerance, suggest that NNs could be well-suited to application in robotics simulation. Investigating whether NNs can be effectively used as robotic simulators in ER is thus the endeavour of this work. Since not much research has been done in employing NNs as robotic simulators, many aspects of the experimental framework on which this dissertation reports needed to be carefully decided upon. Two robot morphologies were selected on which the NN simulators created in this work were based, namely a differentially steered robot and an inverted pendulum robot. Motion tracking and robotic sensor logging were used to acquire data from which the NN simulators were constructed. Furthermore, custom code was written for almost all aspects of the study, namely data acquisition for NN training, the actual NN training process, the evolution of robotic controllers using the created NN simulators, as well as the onboard robotic implementations of evolved controllers. Experimental tests performed in order to determine ideal topologies for each of the NN simulators developed in this study indicated that different NN topologies can lead to large differences in training accuracy. After performing these tests, the training accuracy of the created simulators was analyzed. This analysis showed that the NN simulators generally trained well and could generalize well on data not presented during simulator construction. In order to validate the feasibility of the created NN simulators in the ER process, these simulators were subsequently used to evolve controllers in simulation, similar to controllers developed in related studies. Encouraging results were obtained, with the newly-evolved controllers allowing real-world experimental robots to exhibit obstacle avoidance and light-approaching behaviour with a reasonable degree of success. The created NN simulators furthermore allowed for the successful evolution of a complex inverted pendulum stabilization controller in simulation. It was thus clearly established that NN-based robotic simulators can be successfully employed as alternative simulation schemes in the ER process.
- Full Text:
- Date Issued: 2010
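The record above also notes that different NN topologies produced large differences in simulator accuracy, and that generalisation was checked on data not used for training. A small sketch of that workflow follows: synthetic command-to-displacement pairs stand in for the logged motion-tracking data, and a few candidate hidden-layer topologies (the specific sizes are arbitrary assumptions) are compared on a held-out test split.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic stand-in for logged (left speed, right speed) -> (forward, turn) data.
cmds = rng.uniform(-1, 1, size=(3000, 2))
disp = np.column_stack([
    0.5 * (cmds[:, 0] + cmds[:, 1]),          # forward displacement
    (cmds[:, 1] - cmds[:, 0]) / 0.2,          # change in heading
]) + rng.normal(0, 0.02, size=(3000, 2))      # measurement-like noise

X_train, X_test, y_train, y_test = train_test_split(cmds, disp, random_state=0)

# Compare a few candidate hidden-layer topologies on the held-out data.
for topology in [(4,), (16,), (32, 16)]:
    snn = MLPRegressor(hidden_layer_sizes=topology, max_iter=3000, random_state=0)
    snn.fit(X_train, y_train)
    rmse = np.sqrt(np.mean((snn.predict(X_test) - y_test) ** 2))
    print(f"hidden layers {topology}: test RMSE = {rmse:.4f}")
```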
Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network
- Authors: Chaibva, Faith A , Burton, Michael H , Walker, Roderick B
- Date: 2010
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Article
- Identifier: vital:6352 , http://hdl.handle.net/10962/d1006034
- Description: An artificial neural network was used to optimize the release of salbutamol sulfate from hydrophilic matrix formulations. Model formulations to be used for training, testing and validating the neural network were manufactured with the aid of a central composite design, varying the levels of Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® as the input factors. In vitro dissolution time profiles at six different sampling times were used as target data in training the neural network for formulation optimization. A multilayer perceptron with one hidden layer was constructed using Matlab®, and the number of nodes in the hidden layer was optimized by trial and error to develop a model with the best predictive ability. The results revealed that a neural network with nine nodes was optimal for developing and optimizing formulations. Simulations undertaken with the training data revealed that the constructed model was usable. The optimized neural network was used for the optimization of a formulation with desirable release characteristics, and the results indicated that there was agreement between the predicted formulation and the manufactured formulation. This work illustrates the possible utility of artificial neural networks for the optimization of pharmaceutical formulations with desirable performance characteristics.
- Full Text:
- Date Issued: 2010
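The model in the record above is a single-hidden-layer perceptron with nine hidden nodes mapping four excipient levels to dissolution measured at six sampling times (the original work used Matlab®). A minimal sketch of the same architecture with scikit-learn follows; the design matrix and dissolution profiles here are random placeholders standing in for the central composite design formulations, and the candidate formulation is hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Placeholder design matrix: normalised levels of Methocel K100M, xanthan gum,
# Carbopol 974P and Surelease for a handful of model formulations.
X = rng.uniform(0.0, 1.0, size=(30, 4))
# Placeholder targets: fraction of drug released at six sampling times,
# made monotonically increasing in time for each formulation.
increments = rng.uniform(0.05, 0.15, size=(30, 6))
Y = np.clip(np.cumsum(increments, axis=1), 0.0, 1.0)

# One hidden layer with nine nodes, as selected in the study.
model = MLPRegressor(hidden_layer_sizes=(9,), max_iter=5000, random_state=0)
model.fit(X, Y)

candidate = [[0.5, 0.2, 0.1, 0.3]]   # hypothetical new formulation levels
print("predicted dissolution profile:", np.round(model.predict(candidate)[0], 3))
```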