A comparative study of artificial neural networks and physics models as simulators in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2019
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/30789 , vital:31131
- Description: The Evolutionary Robotics (ER) process is a technique that applies evolutionary optimization algorithms to the task of automatically developing, or evolving, robotic control programs. These control programs, or simply controllers, are evolved in order to allow a robot to perform a required task. During the ER process, use is often made of robotic simulators to evaluate the performance of candidate controllers that are produced in the course of the controller evolution process. Such simulators accelerate and otherwise simplify the controller evolution process, as opposed to the more arduous process of evaluating controllers in the real world without use of simulation. To date, the vast majority of simulators that have been applied in ER are physics-based models which are constructed by taking into account the underlying physics governing the operation of the robotic system in question. An alternative approach to simulator implementation in ER is the usage of Artificial Neural Networks (ANNs) as simulators in the ER process. Such simulators are referred to as Simulator Neural Networks (SNNs). Previous studies have indicated that SNNs can successfully be used as an alternative to physics-based simulators in the ER process on various robotic platforms. At the commencement of the current study it was not, however, known how this relatively new method of simulation would compare to traditional physics-based simulation approaches in ER. The study presented in this thesis thus endeavoured to quantitatively compare SNNs and physics-based models as simulators in the ER process. In order to conduct this comparative study, both SNNs and physics simulators were constructed for the modelling of three different robotic platforms: a differentially-steered robot, a wheeled inverted pendulum robot and a hexapod robot. Each of these two types of simulation was then used in simulation-based evolution processes to evolve controllers for each robotic platform.
During these controller evolution processes, the SNNs and physics models were compared in terms of their accuracy in making predictions of robotic behaviour, their computational efficiency in arriving at these predictions, the human effort required to construct each simulator and, most importantly, the real-world performance of controllers evolved by making use of each simulator. The results obtained in this study illustrated experimentally that SNNs were, in the majority of cases, able to make more accurate predictions than the physics-based models, and these SNNs were arguably simpler to construct than the physics simulators. Additionally, SNNs were shown to be a computationally efficient alternative to physics-based simulators in ER and, again in the majority of cases, these SNNs were able to produce controllers which outperformed those evolved in the physics-based simulators when these controllers were uploaded to the real-world robots. The results of this thesis thus suggest that SNNs are a viable alternative to more commonly-used physics simulators in ER and that further investigation of the potential of this simulation technique appears warranted.
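The simulation role that an SNN plays in this process can be illustrated with a hedged sketch: a trained simulator network acts as a one-step predictor of robot state, and candidate controllers are scored by chaining its predictions into a trajectory. All names below are illustrative assumptions, and the crude differential-drive kinematics merely stand in for a real trained network:

```python
import math

# Hypothetical stand-in for a trained Simulator Neural Network (SNN): given the
# robot's current pose and a motor command, predict the pose after one time step.
# A real SNN would be trained on empirically collected robot data; this simple
# differential-drive kinematic model only takes its place for illustration.
def snn_predict(pose, command):
    x, y, heading = pose
    left, right = command                 # wheel speeds of a differentially-steered robot
    v = 0.5 * (left + right)              # forward speed
    w = right - left                      # turning rate
    return (x + v * math.cos(heading), y + v * math.sin(heading), heading + w)

def rollout(simulator, pose, controller, steps):
    """Evaluate a candidate controller entirely in simulation by chaining
    one-step predictions, as is done when scoring controllers during evolution."""
    trajectory = [pose]
    for _ in range(steps):
        pose = simulator(pose, controller(pose))
        trajectory.append(pose)
    return trajectory

# A trivial candidate controller: drive straight at constant speed.
traj = rollout(snn_predict, (0.0, 0.0, 0.0), lambda pose: (1.0, 1.0), steps=10)
```

A physics-based simulator would slot into the same `rollout` call, which is what makes the two simulation types directly comparable on prediction accuracy and computational cost.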
- Full Text:
- Date Issued: 2019
A feasibility study into total electron content prediction using neural networks
- Authors: Habarulema, John Bosco
- Date: 2008
- Subjects: Electrons , Neural networks (Computer science) , Global Positioning System , Ionosphere , Ionospheric electron density
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5466 , http://hdl.handle.net/10962/d1005251
- Description: Global Positioning System (GPS) networks provide an opportunity to study the dynamics and continuous changes in the ionosphere by supplementing ionospheric measurements which are usually obtained by various techniques such as ionosondes, incoherent scatter radars and satellites. Total electron content (TEC) is one of the physical quantities that can be derived from GPS data, and provides an indication of ionospheric variability. This thesis presents a feasibility study for the development of a Neural Network (NN) based model for the prediction of South African GPS-derived TEC. The South African GPS receiver network is operated and maintained by the Chief Directorate Surveys and Mapping (CDSM) in Cape Town, South Africa. Three South African locations were identified and used in the development of an input space and NN architecture for the model. The input space includes the day number (seasonal variation), hour (diurnal variation), sunspot number (measure of the solar activity), and magnetic index (measure of the magnetic activity). An attempt to study the effects of solar wind on TEC variability was carried out using the Advanced Composition Explorer (ACE) data, and it is recommended that more study be done using low altitude satellite data. An analysis was done by comparing predicted NN TEC with TEC values from the IRI2001 version of the International Reference Ionosphere (IRI), validating GPS TEC with ionosonde TEC (ITEC) and assessing the performance of the NN model during equinoxes and solstices. Results show that NNs predict GPS TEC more accurately than the IRI at South African GPS locations, but that more good-quality GPS data is required before a truly representative empirical GPS TEC model can be released.
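One way to assemble the input space described above is sketched below. Note the sin/cos cyclic encoding of day number and hour is an assumption (a common way to present seasonal and diurnal periodicity to a network); the thesis abstract does not specify the encoding, and the function name is illustrative:

```python
import math

def tec_input_vector(day_number, hour, sunspot_number, magnetic_index):
    """Assemble one NN input for TEC prediction: seasonal variation (day number),
    diurnal variation (hour), solar activity (sunspot number) and magnetic
    activity (magnetic index).  Cyclic quantities are encoded as sin/cos pairs
    so that day 365 is close to day 1 and hour 23 is close to hour 0."""
    return [
        math.sin(2 * math.pi * day_number / 365.25),
        math.cos(2 * math.pi * day_number / 365.25),
        math.sin(2 * math.pi * hour / 24),
        math.cos(2 * math.pi * hour / 24),
        sunspot_number,
        magnetic_index,
    ]

# Local noon at the June solstice, moderately high solar activity.
x = tec_input_vector(day_number=172, hour=12, sunspot_number=110, magnetic_index=3)
```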
- Full Text:
- Date Issued: 2008
A hybridisation technique for game playing using the upper confidence for trees algorithm with artificial neural networks
- Authors: Burger, Clayton
- Date: 2014
- Subjects: Neural networks (Computer science) , Computer algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/3957 , vital:20495
- Description: In the domain of strategic game playing, the use of statistical techniques, such as the Upper Confidence for Trees (UCT) algorithm, has become the norm as they offer many benefits over classical algorithms. These benefits include requiring no game-specific strategic knowledge and time-scalable performance. UCT does not incorporate any strategic information specific to the game considered, but instead uses repeated sampling to effectively brute-force search through the game tree or search space. The lack of game-specific knowledge in UCT is thus both a benefit and a strategic disadvantage. Pattern recognition techniques, specifically Neural Networks (NN), were identified as a means of addressing the lack of game-specific knowledge in UCT. Through a novel hybridisation technique which combines UCT and trained NNs for pruning, the UCT-NN algorithm was derived. The NN component of UCT-NN was trained using a UCT self-play scheme to generate game-specific knowledge without the need to construct and manage game databases for training purposes. The UCT-NN algorithm is outlined for pruning in the game of Go-Moku as a candidate case study for this research. The UCT-NN algorithm contains three major parameters which emerge from the UCT algorithm, the use of NNs and the pruning schemes considered. Suitable methods for finding candidate values for these three parameters were outlined and applied to the game of Go-Moku on a 5 by 5 board. An empirical investigation of the playing performance of UCT-NN was conducted in comparison to UCT through three benchmarks. The benchmarks comprise a common randomly moving opponent, a common UCTmax player which is given a large amount of playing time, and a pair-wise tournament between UCT-NN and UCT. The results of the performance evaluation for 5 by 5 Go-Moku were promising, which prompted an evaluation of a larger 9 by 9 Go-Moku board.
The results of both evaluations indicate that the time allocated to the UCT-NN algorithm directly affects its performance relative to UCT. The UCT-NN algorithm generally performs better than UCT in games with very limited time constraints in all benchmarks considered, except when playing against a randomly moving player in 9 by 9 Go-Moku. In real-time and near-real-time Go-Moku games, UCT-NN provides statistically significant improvements compared to UCT. The findings of this research contribute to the realisation of applying game-specific knowledge to the UCT algorithm.
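The selection-plus-pruning step at the heart of a UCT/NN hybrid can be sketched as follows. This is a minimal illustration assuming a UCB1-style selection rule and a prior-threshold pruning scheme; the function names, threshold and toy statistics are assumptions, not the thesis's actual parameterisation:

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.414):
    # Standard UCT selection score: exploitation term plus exploration bonus.
    if child_visits == 0:
        return float("inf")
    return child_value / child_visits + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_move(children, nn_prior, parent_visits, prune_threshold=0.1):
    """UCT child selection with NN pruning: moves the (hypothetical) trained
    network scores below a threshold are removed before UCB1 is applied,
    focusing the repeated sampling on branches the NN deems promising."""
    candidates = [m for m in children if nn_prior(m) >= prune_threshold]
    if not candidates:                      # never prune every move
        candidates = list(children)
    return max(candidates,
               key=lambda m: ucb1(children[m]["value"], children[m]["visits"], parent_visits))

# Toy tree statistics for three candidate moves in some position.
stats = {"a": {"value": 3.0, "visits": 10},
         "b": {"value": 9.0, "visits": 10},
         "c": {"value": 9.5, "visits": 10}}
# The NN prunes move "c" despite its good sampled value; UCB1 then picks "b".
best = select_move(stats, nn_prior=lambda m: 0.0 if m == "c" else 0.9, parent_visits=30)
```

The pruning threshold is one of the tunable parameters of such a hybrid: set too high, it discards good moves the network underrates; set too low, it recovers plain UCT.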
- Full Text:
- Date Issued: 2014
A model for measuring and predicting stress for software developers using vital signs and activities
- Authors: Hibbers, Ilze
- Date: 2024-04
- Subjects: Machine learning , Neural networks (Computer science) , Computer software developers
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63799 , vital:73614
- Description: Occupational stress is a well-recognised issue that affects individuals in various professions and industries. Reducing occupational stress has multiple benefits, such as improving employees' health and performance. This study proposes a model to measure and predict occupational stress using data collected in a real IT office environment. Different data sources, such as questionnaires, application software (RescueTime) and Fitbit smartwatches, were used for collecting heart rate (HR), facial emotions, computer interactions, and application usage. The results of the Demand-Control-Support and Effort-Reward questionnaires indicated that the participants experienced high social support and an average level of workload. Participants also reported their daily perceived stress and workload level on a 5-point scale. The perceived stress of the participants was overall neutral. No correlation was found between HR, interactions, fear, and meetings. K-means and Bernoulli algorithms were applied to the dataset and two well-separated clusters were formed. The centroids indicated that higher heart rates were grouped either with meetings or with a larger difference in the centre-point values for interactions. Silhouette scores and 5-fold cross-validation were used to measure the accuracy of the clusters. However, these clusters were unable to predict the daily reported stress levels. Calculations were done on the computer usage data to measure interaction speeds and time spent working, in meetings, or away from the computer. These calculations were used as input to a decision tree together with the reported daily stress levels. The results of the tree helped to identify which patterns led to stressful days, indicating that days with high time pressure led to more reported stress. A new, more general tree was developed, which was able to predict 82 per cent of the daily stress reported.
The main discovery of the research was that stress does not have a straightforward connection with computer interactions, facial emotions, or meetings: high interaction levels sometimes led to stress and at other times did not. Predicting stress therefore involves finding patterns in how data from the different sources interact with each other. Future work will revolve around validating the model in more office environments around South Africa. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
A multispectral and machine learning approach to early stress classification in plants
- Authors: Poole, Louise Carmen
- Date: 2022-04-06
- Subjects: Machine learning , Neural networks (Computer science) , Multispectral imaging , Image processing , Plant stress detection
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/232410 , vital:49989
- Description: Crop loss and failure can impact both a country’s economy and food security, often to devastating effects. As such, the importance of successfully detecting plant stresses early in their development is essential to minimize spread and damage to crop production. Identification of the stress and the stress-causing agent is the most critical and challenging step in plant and crop protection. With the development of and increase in ease of access to new equipment and technology in recent years, the use of spectroscopy in the early detection of plant diseases has become notably popular. This thesis narrows down the most suitable multispectral imaging techniques and machine learning algorithms for early stress detection. Datasets were collected of visible images and multispectral images. Dehydration was selected as the plant stress type for the main experiments, and data was collected from six plant species typically used in agriculture. Key contributions of this thesis include multispectral and visible datasets showing plant dehydration as well as a separate preliminary dataset on plant disease. Promising results on dehydration showed statistically significant accuracy improvements in the multispectral imaging compared to visible imaging for early stress detection, with multispectral input obtaining a 92.50% accuracy over visible input’s 77.50% on general plant species. The system was effective at stress detection on known plant species, with multispectral imaging introducing greater improvement to early stress detection than advanced stress detection. Furthermore, strong species discrimination was achieved when exclusively testing either early or advanced dehydration against healthy species. , Thesis (MSc) -- Faculty of Science, Ichthyology & Fisheries Sciences, 2022
- Full Text:
- Date Issued: 2022-04-06
An analysis of neural networks and time series techniques for demand forecasting
- Authors: Winn, David
- Date: 2007
- Subjects: Time-series analysis , Neural networks (Computer science) , Artificial intelligence , Marketing -- Management , Marketing -- Data processing , Marketing -- Statistical methods , Consumer behaviour
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5572 , http://hdl.handle.net/10962/d1004362
- Description: This research examines the plausibility of developing demand forecasting techniques which are consistently and accurately able to predict demand. Time Series Techniques and Artificial Neural Networks are both investigated. Deodorant sales in South Africa are specifically studied in this thesis. Marketing techniques which are used to influence consumer buyer behaviour are considered, and these factors are integrated into the forecasting models wherever possible. The results of this research suggest that Artificial Neural Networks can be developed which consistently outperform industry forecasting targets as well as Time Series forecasts, suggesting that producers could reduce costs by adopting this more effective method.
- Full Text:
- Date Issued: 2007
Application of machine learning, molecular modelling and structural data mining against antiretroviral drug resistance in HIV-1
- Authors: Sheik Amamuddy, Olivier Serge André
- Date: 2020
- Subjects: Machine learning , Molecules -- Models , Data mining , Neural networks (Computer science) , Antiretroviral agents , Protease inhibitors , Drug resistance , Multidrug resistance , Molecular dynamics , Renin-angiotensin system , HIV (Viruses) -- South Africa , HIV (Viruses) -- Social aspects -- South Africa , South African Natural Compounds Database
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/115964 , vital:34282
- Description: Millions are affected with the Human Immunodeficiency Virus (HIV) worldwide, even though the death toll is on the decline. Antiretrovirals (ARVs), more specifically protease inhibitors, have shown tremendous success since their introduction into therapy in the mid-1990s by slowing down progression to the Acquired Immune Deficiency Syndrome (AIDS). However, Drug Resistance Mutations (DRMs) are constantly selected for due to viral adaptation, making drugs less effective over time. The current challenge is to manage the infection optimally with a limited set of drugs, with differing associated levels of toxicities, in the face of a virus that (1) exists as a quasispecies, (2) may transmit acquired DRMs to drug-naive individuals and (3) can manifest class-wide resistance due to similarities in design. The presence of latent reservoirs, unawareness of infection status, education and various socio-economic factors make the problem even more complex. Adequate timing and choice of drug prescription, together with treatment adherence, are very important, as drug toxicities, drug failure and sub-optimal treatment regimens leave room for further development of drug resistance. While CD4 cell counts and the determination of viral load from patients in resource-limited settings are very helpful for tracking how well a patient's immune system is able to keep the virus in check, they can be slow to indicate whether an ARV is effective. PhenoSense assay kits answer this problem using viruses engineered to contain the patient sequences and evaluating their growth in the presence of different ARVs, but this can be expensive and too involved for routine checks. As a cheaper and faster alternative, genotypic assays provide similar information from HIV pol sequences obtained from blood samples, inferring ARV efficacy on the basis of drug resistance mutation patterns.
However, these are inherently complex, and the various methods of in silico prediction, such as Geno2pheno, REGA and Stanford HIVdb, do not always agree in every case, even though this gap decreases as the list of resistance mutations is updated. A major gap in HIV treatment is that the information used for predicting drug resistance is mainly computed from data containing an overwhelming majority of subtype B HIV, when this subtype comprises only about 12% of worldwide HIV infections. In addition to growing evidence that drug resistance is subtype-related, it is intuitive to hypothesize that, as subtyping is a phylogenetic classification, the more divergent a subtype is from the strains used in training prediction models, the less their resistance profiles would correlate. For the aforementioned reasons, we used a multi-faceted approach to attack the virus in multiple ways. This research aimed to (1) improve resistance prediction methods by focusing solely on the available subtype, (2) mine structural information pertaining to resistance in order to find any exploitable weak points and increase knowledge of the mechanistic processes of drug resistance in HIV protease and, finally, (3) screen for protease inhibitors in a database of natural compounds [the South African Natural Compounds Database (SANCDB)] to find molecules or molecular properties that could be used to achieve improved inhibition of the drug target. In this work, structural information was mined using the Anisotropic Network Model, Dynamics Cross-Correlation, Perturbation Response Scanning, residue contact network analysis and the radius of gyration. These methods failed to reveal any resistance-associated patterns in terms of natural movement, internal correlated motions, residue perturbation response, relational behaviour and global compaction, respectively.
Applications of drug docking, homology modelling and energy minimization for generating features suitable for machine learning were not very promising, and rather suggest that binding energies from Vina may not, by themselves, be quantitatively reliable. These failures led to a refinement that resulted in a highly sensitive, statistically-guided network construction and analysis, which led to key findings in the early dynamics associated with resistance across all PI drugs. The latter experiment unravelled a conserved lateral expansion motion occurring at the flap elbows, and an associated contraction that drives the base of the dimerization domain towards the catalytic site's floor in the case of drug resistance. Interestingly, we found that despite the conserved movement, bond angles were degenerate. Alongside this, 16 Artificial Neural Network models were optimised for HIV protease and reverse transcriptase inhibitors, with performances on par with Stanford HIVdb. Finally, we prioritised 9 compounds with potential protease inhibitory activity using virtual screening and molecular dynamics (MD), and additionally suggested a promising modification to one of the compounds. This yielded another molecule inhibiting equally well both the opened and closed receptor target conformations, whereby each of the compounds had been selected against an array of multi-drug-resistant receptor variants. While a main hurdle was a lack of non-B subtype data, our findings, especially those from the statistically-guided network analysis, may extrapolate to non-B subtypes to a certain extent, as the level of conservation was very high within subtype B despite all the present variations. This network construction method lays down a sensitive approach for analysing a pair of alternate phenotypes for which complex patterns prevail, given a sufficient number of experimental units.
During the course of the research, a weighted contact-mapping tool was developed to compare renin-angiotensinogen variants; it was packaged as part of the MD-TASK tool suite. Finally, the functionality, compatibility and performance of the MODE-TASK tool were evaluated and confirmed for both Python 2.7.x and Python 3.x, for the analysis of normal modes from single protein structures and essential modes from MD trajectories. These techniques and tools collectively add to the conventional means of MD analysis.
- Full Text:
- Date Issued: 2020
Artificial neural networks as simulators for behavioural evolution in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2010
- Subjects: Neural networks (Computer science) , Robotics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10462 , http://hdl.handle.net/10948/1476
- Description: Robotic simulators for use in Evolutionary Robotics (ER) have certain challenges associated with the complexity of their construction and the accuracy of predictions made by these simulators. Such robotic simulators are often based on physics models, which have been shown to produce accurate results. However, the construction of physics-based simulators can be complex and time-consuming. Alternative simulation schemes construct robotic simulators from empirically-collected data. Such empirical simulators, however, also have associated challenges, such as that some of these simulators do not generalize well on the data from which they are constructed, as these models employ simple interpolation on said data. As a result of the identified challenges in existing robotic simulators for use in ER, this project investigates the potential use of Artificial Neural Networks, henceforth simply referred to as Neural Networks (NNs), as alternative robotic simulators. In contrast to physics models, NN-based simulators can be constructed without needing an explicit mathematical model of the system being modeled, which can simplify simulator development. Furthermore, the generalization capabilities of NNs suggest that NNs could generalize well on data from which these simulators are constructed. These generalization abilities of NNs, along with NNs’ noise tolerance, suggest that NNs could be well-suited to application in robotics simulation. Investigating whether NNs can be effectively used as robotic simulators in ER is thus the endeavour of this work. Since not much research has been done in employing NNs as robotic simulators, many aspects of the experimental framework on which this dissertation reports needed to be carefully decided upon. Two robot morphologies were selected on which the NN simulators created in this work were based, namely a differentially steered robot and an inverted pendulum robot. 
Motion tracking and robotic sensor logging were used to acquire data from which the NN simulators were constructed. Furthermore, custom code was written for almost all aspects of the study, namely data acquisition for NN training, the actual NN training process, the evolution of robotic controllers using the created NN simulators, as well as the onboard robotic implementations of evolved controllers. Experimental tests performed in order to determine ideal topologies for each of the NN simulators developed in this study indicated that different NN topologies can lead to large differences in training accuracy. After performing these tests, the training accuracy of the created simulators was analyzed. This analysis showed that the NN simulators generally trained well and could generalize well on data not presented during simulator construction. In order to validate the feasibility of the created NN simulators in the ER process, these simulators were subsequently used to evolve controllers in simulation, similar to controllers developed in related studies. Encouraging results were obtained, with the newly-evolved controllers allowing real-world experimental robots to exhibit obstacle avoidance and light-approaching behaviour with a reasonable degree of success. The created NN simulators furthermore allowed for the successful evolution of a complex inverted pendulum stabilization controller in simulation. It was thus clearly established that NN-based robotic simulators can be successfully employed as alternative simulation schemes in the ER process.
- Full Text:
- Date Issued: 2010
Augmenting the Moore-Penrose generalised Inverse to train neural networks
- Authors: Fang, Bobby
- Date: 2024-04
- Subjects: Neural networks (Computer science) , Machine learning , Mathematical optimization -- Computer programs
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63755 , vital:73595
- Description: An Extreme Learning Machine (ELM) is a non-iterative and fast feedforward neural network training algorithm which uses the Moore-Penrose (MP) generalised inverse of a matrix to compute the weights of the output layer of the neural network, using a random initialisation for the hidden layer. While ELM has been used to train feedforward neural networks, the effectiveness of the MP generalised inverse in training recurrent neural networks had yet to be investigated. The primary aim of this research was to investigate how biases in the output layer and the MP generalised inverse can be used to train recurrent neural networks. To accomplish this, the Bias-Augmented ELM (BA-ELM), which concatenates the hidden-layer output matrix with a column of ones to simulate biases in the output layer, was proposed. A variety of datasets generated from optimisation test functions, as well as real-world regression and classification datasets, were used to validate BA-ELM. The results showed that, in specific circumstances, BA-ELM was able to perform better than ELM. Following this, Recurrent ELM (R-ELM) was proposed, which uses a recurrent hidden layer instead of a feedforward hidden layer. Recurrent neural networks also rely on having functional feedback connections in the recurrent layer. A hybrid training algorithm, Recurrent Hybrid ELM (R-HELM), was therefore proposed, which uses a gradient-based algorithm to optimise the recurrent layer and the MP generalised inverse to compute the output weights. The evaluation of the R-ELM and R-HELM algorithms was carried out using three different recurrent architectures on two recurrent tasks derived from the Susceptible-Exposed-Infected-Removed (SEIR) epidemiology model. Various training hyperparameters were investigated to determine their effect on the hybrid training algorithm.
With optimal hyperparameters, the hybrid training algorithm was able to achieve better performance than the conventional gradient-based algorithm. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
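The bias augmentation described above amounts to appending a column of ones to the hidden-layer output matrix before taking the Moore-Penrose pseudoinverse, so the last row of the solved weight matrix acts as the output-layer bias. A minimal sketch with NumPy (function names and the toy regression task are illustrative assumptions, not the thesis's code):

```python
import numpy as np

def ba_elm_fit(X, T, hidden_units=50, seed=0):
    """Bias-Augmented ELM sketch: random fixed hidden layer, then output
    weights computed non-iteratively via the Moore-Penrose pseudoinverse of
    the hidden-layer output matrix H augmented with a ones column."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden_units))   # random input weights (never trained)
    b = rng.normal(size=hidden_units)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    H_aug = np.hstack([H, np.ones((H.shape[0], 1))])  # ones column simulates output bias
    beta = np.linalg.pinv(H_aug) @ T                  # least-squares output weights
    return W, b, beta

def ba_elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    H_aug = np.hstack([H, np.ones((H.shape[0], 1))])
    return H_aug @ beta

# Fit a smooth toy regression target to show the non-iterative solve works.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
T = X[:, :1] ** 2 + X[:, 1:]
W, b, beta = ba_elm_fit(X, T)
pred = ba_elm_predict(X, W, b, beta)
```

Because `np.linalg.pinv` yields the minimum-norm least-squares solution, the entire "training" is a single linear solve, which is what makes ELM-style methods fast compared with iterative gradient descent.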
- Full Text:
- Date Issued: 2024-04
Comparative analysis of YOLOV5 and YOLOV8 for automated fish detection and classification in underwater environments
- Authors: Kuhlane, Luxolo
- Date: 2024-10-11
- Subjects: Artificial intelligence , Deep learning (Machine learning) , Machine learning , Neural networks (Computer science) , You Only Look Once , YOLOv5 , YOLOv8
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/464333 , vital:76502
- Description: The application of traditional manual techniques for fish detection and classification faces significant challenges, primarily stemming from their labour-intensive nature and limited scalability. Automating these processes through computer vision and machine learning techniques has emerged as a potential solution in recent years. With the development of, and increase in ease of access to, new technology in recent years, the use of a deep learning object detector known as YOLO (You Only Look Once) for the detection and classification of fish has become notably popular. This thesis thus explores suitable YOLO architectures for detecting and classifying fish. The YOLOv5 and YOLOv8 models were evaluated explicitly for detecting and classifying fish in underwater environments. The selection of these models was based on a literature review highlighting their success in similar applications, although their use in underwater environments remains largely understudied. The effectiveness of these models was therefore evaluated through comprehensive experimentation on collected and publicly available underwater fish datasets. In collaboration with the South African Institute for Aquatic Biodiversity (SAIAB), five datasets were collected and manually annotated with labels for supervised machine learning. Moreover, two publicly available datasets were sourced for comparison to the literature. After determining that the smallest YOLO architectures are better suited to these imbalanced datasets, hyperparameter tuning tailored the models to the characteristics of the various underwater environments used in the research. The popular DeepFish dataset was evaluated to establish a baseline and the feasibility of these models in this understudied domain. The results demonstrated high detection accuracy for both YOLOv5 and YOLOv8; however, YOLOv8 outperformed YOLOv5, achieving 97.43% accuracy compared to 94.53%.
After experiments on seven datasets, trends revealed YOLOv8’s enhanced generalisation accuracy due to architectural improvements, particularly in detecting smaller fish. Overall, YOLOv8 demonstrated that it is the better fish detection and classification model on diverse data. , Thesis (MSc) -- Faculty of Science, Computer Science, 2024
- Full Text:
- Date Issued: 2024-10-11
Deep learning applied to the semantic segmentation of tyre stockpiles
- Authors: Barfknecht, Nicholas Christopher
- Date: 2018
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/23947 , vital:30647
- Description: The global push for manufacturing which is environmentally sustainable has disrupted standard methods of waste tyre disposal. This push is further intensified by the health and safety risks discarded tyres pose to the surrounding population. Waste tyre recycling initiatives in South Africa are on the increase; however, there is still a growing number of undocumented tyre stockpiles developing throughout the country. The plans put in place to eradicate these tyre stockpiles have been met with logistical issues in collection, transport and storage, caused by the remoteness of the stockpile locations. Eastwood (2016) aimed to optimise the logistics associated with collection by estimating the number of visible tyres from images of tyre stockpiles. That research was limited by the need for manual segmentation of each tyre stockpile located within each image. This research proposes the use of semantic segmentation to automatically segment images of tyre stockpiles. An initial review of neural network, convolutional network and semantic segmentation literature resulted in the selection of Dilated Net as the semantic segmentation architecture for this research. Dilated Net builds upon the VGG-16 classification architecture to perform semantic segmentation. This resulted in classification experiments which were evaluated using precision, recall and F1-score. The results indicated that, regardless of tyre stockpile image dimension, fairly high levels of classification accuracy can be attained. This was followed by semantic segmentation experiments which made use of intersection over union (IoU) and pixel accuracy to evaluate the effectiveness of Dilated Net on images of tyre stockpiles. The results indicated that accurate tyre stockpile segmentation regions can be obtained and that the trained model generalises well to unseen images.
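The two segmentation metrics named above are straightforward to compute from predicted and ground-truth label masks; a small sketch (the helper names and toy masks are illustrative):

```python
import numpy as np

def iou(pred, target, cls=1):
    """Intersection over union for one class, given integer label masks:
    |pred ∩ target| / |pred ∪ target| for pixels labelled `cls`."""
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return (pred == target).mean()

# Toy 2x3 masks: class 1 = tyre region, class 0 = background.
pred   = np.array([[1, 1, 0],
                   [1, 0, 0]])
target = np.array([[1, 0, 0],
                   [1, 1, 0]])
# Class-1 intersection = 2 pixels, union = 4 -> IoU = 0.5.
# 4 of the 6 pixels agree -> pixel accuracy = 2/3.
```

IoU is generally the stricter of the two: a model that labels everything background can still score high pixel accuracy on sparse masks, while its IoU for the object class collapses to zero.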
- Full Text:
- Date Issued: 2018
Deep neural networks for robot vision in evolutionary robotics
- Authors: Watt, Nathan
- Date: 2021-04
- Subjects: Gqeberha (South Africa) , Eastern Cape (South Africa) , Neural networks (Computer science)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/52100 , vital:43448
- Description: Advances in electronics manufacturing have made robots and their sensors cheaper and more accessible. Robots can have a variety of sensors, such as touch sensors, distance sensors and cameras. A robot’s controller is the software which interprets its sensors and determines how the robot will behave. The difficulty in programming robot controllers increases with complex robots and complicated tasks, forming a barrier to deploying robots for real-world applications. Robot controllers can be automatically created with Evolutionary Robotics (ER). ER makes use of an Evolutionary Algorithm (EA) to evolve controllers to complete a particular task. Instead of manually programming controllers, an EA can evolve controllers when provided with the robot’s task. ER has been used to evolve controllers for many different kinds of robots with a variety of sensors; however, the use of robots with on-board camera sensors has been limited. The nature of EAs makes evolving a controller for a camera-equipped robot particularly difficult. There are two main challenges which complicate the evolution of vision-based controllers. First, every image from a camera contains a large amount of information, and a controller needs many parameters to receive that information; however, it is difficult to evolve controllers with such a large number of parameters using EAs. Second, during the process of evolution, it is necessary to evaluate the fitness of many candidate controllers. This is typically done in simulation; however, creating a simulator for a camera sensor is a tedious and time-consuming task, as building a photo-realistic simulated environment requires handcrafted 3-dimensional models, textures and lighting. Two techniques have been used in previous experiments to overcome the challenges associated with evolving vision-based controllers. 
Either the controller was provided with extremely low-resolution images, or a task-specific algorithm was used to preprocess the images, only providing the necessary information to the controller. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2021
- Full Text: false
Development of a neural network based model for predicting the occurrence of spread F within the Brazilian sector
- Authors: Paradza, Masimba Wellington
- Date: 2009
- Subjects: Neural networks (Computer science) , Ionosphere , F region
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5460 , http://hdl.handle.net/10962/d1005245 , Neural networks (Computer science) , Ionosphere , F region
- Description: Spread F is a phenomenon of the ionosphere in which the pulses returned from the ionosphere are of a much greater duration than the transmitted ones. The occurrence of spread F can be predicted using the technique of Neural Networks (NNs). This thesis presents the development and evaluation of NN-based models (two single-station models and a regional model) for predicting the occurrence of spread F over selected stations within the Brazilian sector. The input space for the NNs included the day number (seasonal variation), hour (diurnal variation), sunspot number (measure of the solar activity), magnetic index (measure of the magnetic activity) and magnetic position (latitude, magnetic declination and inclination). Twelve years of spread F data measured from 1978 to 1989 inclusive at the equatorial site Fortaleza and the low-latitude site Cachoeira Paulista were used in the development of an input space and NN architecture for the NN models. Spread F data that are believed to be related to plasma bubble developments (range spread F) were used in the development of the models, while those associated with narrow-spectrum irregularities that occur near the F layer (frequency spread F) were excluded. The results of the models show the dependence of the probability of spread F occurrence on local time, season and latitude. The models also illustrate some characteristics of spread F, such as the onset and peak occurrence of spread F as a function of distance from the equator. Results from these models are presented in this thesis and compared to measured data and to modelled data obtained with an empirical model developed for the same purpose.
- Full Text:
- Date Issued: 2009
Forecasting solar cycle 24 using neural networks
- Authors: Uwamahoro, Jean
- Date: 2009
- Subjects: Solar cycle , Neural networks (Computer science) , Ionosphere , Ionospheric electron density , Ionospheric forecasting , Solar thermal energy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5468 , http://hdl.handle.net/10962/d1005253 , Solar cycle , Neural networks (Computer science) , Ionosphere , Ionospheric electron density , Ionospheric forecasting , Solar thermal energy
- Description: The ability to predict the future behaviour of solar activity has become of extreme importance due to its effect on the near-Earth environment. Predictions of both the amplitude and timing of the next solar cycle will assist in estimating the various consequences of Space Weather. Several prediction techniques have been applied and have achieved varying degrees of success in the domain of solar activity prediction. These techniques include, for example, neural networks and geomagnetic precursor methods. In this thesis, various neural network-based models were developed and the model considered to be optimum was used to estimate the shape and timing of solar cycle 24. Given the recent success of the geomagnetic precursor methods, geomagnetic activity as measured by the aa index is considered among the main inputs to the neural network model. The neural network model developed is also provided with the time input parameters defining the year and the month of a particular solar cycle, in order to characterise the temporal behaviour of sunspot number as observed during the last 10 solar cycles. The structure of input-output patterns to the neural network is constructed in such a way that the network learns the relationship between the aa index values of a particular cycle and the sunspot number values of the following cycle. Assuming January 2008 as the minimum preceding solar cycle 24, the shape and amplitude of solar cycle 24 are estimated in terms of monthly mean and smoothed monthly sunspot number. This new prediction model estimates an average solar cycle 24, with the maximum occurring around June 2012 [± 11 months], with a smoothed monthly maximum sunspot number of 121 ± 9.
- Full Text:
- Date Issued: 2009
Investigating unimodal isolated signer-independent sign language recognition
- Authors: Marais, Marc Jason
- Date: 2024-04-04
- Subjects: Convolutional neural network , Sign language recognition , Human activity recognition , Pattern recognition systems , Neural networks (Computer science)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/435343 , vital:73149
- Description: Sign language serves as the mode of communication for the Deaf and Hard of Hearing community, embodying a rich linguistic and cultural heritage. Recent Sign Language Recognition (SLR) system developments aim to facilitate seamless communication between the Deaf community and the broader society. However, most existing systems are limited by signer-dependent models, hindering their adaptability to diverse signing styles and signers, thus impeding their practical implementation in real-world scenarios. This research explores various unimodal approaches, both pose-based and vision-based, for isolated signer-independent SLR using RGB video input on the LSA64 and AUTSL datasets. The unimodal RGB-only input strategy provides a realistic SLR setting where alternative data sources are either unavailable or necessitate specialised equipment. Through systematic testing scenarios, isolated signer-independent SLR experiments are conducted on both datasets, primarily focusing on AUTSL – a signer-independent dataset. The vision-based R(2+1)D-18 model emerged as the top performer, achieving 90.64% accuracy on the unseen AUTSL dataset test split, closely followed by the pose-based Spatio-Temporal Graph Convolutional Network (ST-GCN) model with an accuracy of 89.95%. Furthermore, these models achieved comparable accuracies at a significantly lower computational demand. Notably, the pose-based approach demonstrates robust generalisation to substantial background and signer variation. Moreover, the pose-based approach demands significantly less computational power and training time than vision-based approaches. Both the proposed unimodal pose-based and vision-based systems were concluded to be effective at classifying sign classes in the LSA64 and AUTSL datasets. , Thesis (MSc) -- Faculty of Science, Ichthyology and Fisheries Science, 2024
- Full Text:
- Date Issued: 2024-04-04
Modelling Ionospheric vertical drifts over the African low latitude region
- Authors: Dubazane, Makhosonke Berthwell
- Date: 2018
- Subjects: Ionospheric drift , Magnetometers , Functions, Orthogonal , Neural networks (Computer science) , Ionospheric electron density -- Africa , Communication and Navigation Outage Forecasting Systems (C/NOFS)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63356 , vital:28396
- Description: Low/equatorial-latitude vertical plasma drifts and electric fields govern the formation and changes of ionospheric density structures which affect space-based systems such as communications, navigation and positioning. Dynamical and electrodynamical processes play important roles in plasma distribution at different altitudes. Because of the high variability of E × B drift in low-latitude regions, coupled with various processes that sometimes originate from high latitudes, especially during geomagnetic storm conditions, it is challenging to develop accurate vertical drift models. The challenge is compounded by the fact that very few instruments are dedicated to providing electric field, and hence E × B drift, data in low/equatorial-latitude regions. To this effect, there exists no ground-based instrument for direct measurements of E × B drift data in the African sector. This study presents the first investigation aimed at modelling the long-term variability of low-latitude vertical E × B drift over the African sector using a combination of Communication and Navigation Outage Forecasting Systems (C/NOFS) and ground-based magnetometer observations/measurements during 2008-2013. Because the approach is based on the estimation of the equatorial electrojet from ground-based magnetometer observations, the developed models are only valid for local daytime. Three modelling techniques have been considered. The application of Empirical Orthogonal Functions and partial least squares has been performed on vertical E × B drift modelling for the first time. Artificial neural networks, which have the advantage of learning underlying changes between a set of inputs and a known output, were also used in vertical E × B drift modelling. Due to the lack of E × B drift data over the African sector, the developed models were validated using satellite data and the climatological Scherliess-Fejer model incorporated within the International Reference Ionosphere model. 
A maximum correlation coefficient of ∼0.8 was achieved when validating the developed models with C/NOFS E × B drift observations that were not used in any model development. For most of the time, the climatological model overestimates the local daytime vertical E × B drift velocities. The methods and approach presented in this study provide a background for constructing vertical E × B drift databases in longitude sectors that do not have radar instrumentation. This will in turn make it possible to study the day-to-day variability of vertical E × B drift and hopefully lead to the development of regional and global models that will incorporate local time information in different longitude sectors.
- Full Text:
- Date Issued: 2018
NeGPAIM : a model for the proactive detection of information security intrusions, utilizing fuzzy logic and neural network techniques
- Authors: Botha, Martin
- Date: 2003
- Subjects: Computer security , Fuzzy logic , Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , DTech (Computer Studies)
- Identifier: vital:10792 , http://hdl.handle.net/10948/142 , Computer security , Fuzzy logic , Neural networks (Computer science)
- Description: “Information is the lifeblood of any organisation and everything an organisation does involves using information in some way” (Peppard, 1993, p.5). Therefore, it can be argued that information is an organisation’s most precious asset and as with all other assets, like equipment, money, personnel, and so on, this asset needs to be protected properly at all times (Whitman & Mattord, 2003, pp.1-14). The introduction of modern technologies, such as e-commerce, will not only increase the value of information, but will also increase security requirements of those organizations that are intending to utilize such technologies. Evidence of these requirements can be observed in the 2001 CSI/FBI Computer Crime and Security Survey (Power, 2001). According to this source, the annual financial losses caused through security breaches in 2001 have increased by 277% when compared to the results from 1997. The 2002 and 2003 Computer Crime and Security Survey confirms this by stating that the threat of computer crime and other related information security breaches continues unabated and that the financial toll is mounting (Richardson, 2003). Information is normally protected by means of a process of identifying, implementing, managing and maintaining a set of information security controls, countermeasures or safeguards (GMITS, 1998). In the rest of this thesis, the term security controls will be utilized when referring to information protection mechanisms or procedures. These security controls can be of a physical (for example, door locks), a technical (for example, passwords) and/or a procedural nature (for example, to make back-up copies of critical files)(Pfleeger, 2003, pp.22-23; Stallings, 1995, p.1). 
The effective identification, implementation, management and maintenance of this set of security controls are usually integrated into an Information Security Management Program, the objective of which is to ensure an acceptable level of information confidentiality, integrity and availability within the organisation at all times (Pfleeger, 2003, pp.10-12; Whitman & Mattord, 2003, pp.1-14; Von Solms, 1993). Once the most effective security controls have been identified and implemented, it is important that this level of security be maintained through a process of continued control. For this reason, it is important that proper change management, measurement, audit, monitoring and detection be implemented (Bruce & Dempsey, 1997). Monitoring and detection are important functions and refer to the ability to identify and detect situations where information security policies have been compromised and/or breached or security violations have taken place (BS 7799, 1999; GMITS, 1998; Von Solms, 1993). The Information Security Officer is usually the person responsible for most of the operational tasks in the control process within an Information Security Management Program (Von Solms, 1993). In practice, these tasks could also be performed by a system administrator, network administrator, etc. In the rest of the thesis the person responsible for these tasks will be referred to as the system administrator. These tasks have proved to be very challenging and demanding. The main reason for this is the rapid advancement of technology in the discipline of Information Technology, for example, the modern distributed computing environment, the Internet, the “freedom” of end-users, the introduction of e-commerce, etc. (Whitman & Mattord, 2003, p.9; Sundaram, 2000, p.1; Moses, 2001, p.6; Allen, 2001, p.1). 
As a result of the importance of this control process, and especially the monitoring and detection tasks, it is vital that the system administrator has proper tools at his/her disposal to perform this task effectively. Many of the tools that are currently available to the system administrator utilize technical controls, such as audit logs and user profiles. Audit logs are normally used to record all events executed on a system. These logs are simply files that record security and non-security related events that take place on a computer system within an organisation. For this reason, these logs can be used by these tools to gain valuable information on security violations, such as intrusions, and, therefore, are able to monitor the current actions of each user (Microsoft, 2002; Smith, 1989, pp. 116-117). User profiles are files that contain information about users’ desktop operating environments and are used by the operating system to structure each user environment so that it is the same each time a user logs onto the system (Microsoft, 2002; Block, 1994, p.54). Thus, a user profile is used to indicate which actions the user is allowed to perform on the system. Both technical controls (audit logs and user profiles) are frequently available in most computer environments (such as UNIX, firewalls, Windows, etc.) (Cooper et al, 1995, p.129). Therefore, seeing that the audit logs record most events taking place on an information system and the user profile indicates the authorized actions of each user, the system administrator could most probably utilise these controls in a more proactive manner.
- Full Text:
- Date Issued: 2003
Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network
- Authors: Chaibva, Faith A , Burton, Michael H , Walker, Roderick B
- Date: 2010
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Article
- Identifier: vital:6352 , http://hdl.handle.net/10962/d1006034
- Description: An artificial neural network was used to optimize the release of salbutamol sulfate from hydrophilic matrix formulations. Model formulations to be used for training, testing and validating the neural network were manufactured with the aid of a central composite design, varying the levels of Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® as the input factors. In vitro dissolution time profiles at six different sampling times were used as target data in training the neural network for formulation optimization. A multi-layer perceptron with one hidden layer was constructed using Matlab®, and the number of nodes in the hidden layer was optimized by trial and error to develop a model with the best predictive ability. The results revealed that a neural network with nine nodes was optimal for developing and optimizing formulations. Simulations undertaken with the training data revealed that the constructed model was usable. The optimized neural network was used to optimize a formulation with desirable release characteristics, and the results indicated that there was agreement between the predicted formulation and the manufactured formulation. This work illustrates the possible utility of artificial neural networks for the optimization of pharmaceutical formulations with desirable performance characteristics.
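The trial-and-error search over hidden-layer sizes described above can be sketched as follows. This is an illustrative Python/scikit-learn analogue, not the original Matlab® code; the four formulation factors and six dissolution targets are randomly generated stand-ins for the central-composite-design data.

```python
# Hypothetical sketch of the hidden-node search: train MLPs with
# increasing hidden-layer sizes on synthetic dissolution-style data and
# keep the size with the lowest held-out error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(80, 4))        # 4 input factors (stand-in polymer levels)
Y = X @ rng.uniform(size=(4, 6))     # 6 dissolution sampling times (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

best_nodes, best_err = None, float("inf")
for nodes in range(1, 13):           # trial-and-error over hidden-layer sizes
    net = MLPRegressor(hidden_layer_sizes=(nodes,), max_iter=2000,
                       random_state=0).fit(X_tr, Y_tr)
    err = np.mean((net.predict(X_te) - Y_te) ** 2)
    if err < best_err:
        best_nodes, best_err = nodes, err

print(best_nodes, round(best_err, 4))
```

On the real data this selection procedure settled on nine hidden nodes; with the synthetic stand-ins here the chosen size is of course arbitrary.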
- Full Text:
- Date Issued: 2010
Predictability of Geomagnetically Induced Currents using neural networks
- Authors: Lotz, Stefanus Ignatius
- Date: 2009
- Subjects: Advanced Composition Explorer (Artificial satellite) , Geomagnetism , Electromagnetic induction , Neural networks (Computer science) , Artificial intelligence
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5483 , http://hdl.handle.net/10962/d1005269 , Advanced Composition Explorer (Artificial satellite) , Geomagnetism , Electromagnetic induction , Neural networks (Computer science) , Artificial intelligence
- Description: It is a well documented fact that Geomagnetically Induced Currents (GICs) pose a significant threat to ground-based electric conductor networks like oil pipelines, railways and powerline networks. A study is undertaken to determine the feasibility of using artificial neural network models to predict GIC occurrence in the Southern African power grid. The magnitude of an induced current at a specific location on the Earth’s surface is directly related to the temporal derivative of the geomagnetic field (specifically its horizontal components) at that point. Hence, the focus of the problem is on the prediction of the temporal variations in the horizontal geomagnetic field (∂Bx/∂t and ∂By/∂t). Artificial neural networks are used to predict ∂Bx/∂t and ∂By/∂t measured at Hermanus, South Africa (34.27° S, 19.12° E) with a 30 minute prediction lead time. As input parameters to the neural networks, in-situ solar wind measurements made by the Advanced Composition Explorer (ACE) satellite are used. The results presented here compare well with similar models developed at high-latitude locations (e.g. Sweden, Finland, Canada) where extensive GIC research has been undertaken. It is concluded that it would indeed be feasible to use a neural network model to predict GIC occurrence in the Southern African power grid, provided that GIC measurements, powerline configuration and network parameters are made available.
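The prediction setup described above, a neural network mapping lagged solar-wind inputs to the geomagnetic field derivative some lead time ahead, can be sketched in outline. This is a hedged illustration only: the synthetic series below merely stands in for ACE solar-wind measurements and Hermanus ∂B/∂t data, and the lag/lead choices are invented for the example.

```python
# Illustrative lagged-input regression (not the thesis model): predict a
# stand-in dB/dt series `lead` steps ahead from a window of past
# solar-wind samples, using a small feed-forward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
wind = rng.normal(size=500)                            # stand-in solar-wind driver
dbdt = np.convolve(wind, np.ones(5) / 5, mode="same")  # stand-in dB/dt response

lags, lead = 6, 30                                     # 6 lagged inputs, 30-step lead
X = np.array([wind[i - lags:i] for i in range(lags, len(wind) - lead)])
y = dbdt[lags + lead:len(wind)]

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                   random_state=0).fit(X[:400], y[:400])
score = net.score(X[400:], y[400:])                    # R^2 on held-out samples
print(round(score, 3))
```

In the actual study the inputs were physical solar-wind parameters and the lead time was 30 minutes; the windowing pattern above is the transferable idea.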
- Full Text:
- Date Issued: 2009
Protein secondary structure prediction using neural networks and support vector machines
- Authors: Tsilo, Lipontseng Cecilia
- Date: 2009
- Subjects: Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5569 , http://hdl.handle.net/10962/d1002809 , Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Description: Predicting the secondary structure of proteins is important in biochemistry because the 3D structure can be determined from the local folds that are found in secondary structures. Moreover, knowing the tertiary structure of proteins can assist in determining their functions. The objective of this thesis is to compare the performance of Neural Networks (NN) and Support Vector Machines (SVM) in predicting the secondary structure of 62 globular proteins from their primary sequence. For each of NN and SVM, we created six binary classifiers to distinguish between the classes helix (H), strand (E) and coil (C). For NN we use Resilient Backpropagation training with and without early stopping. We use NN with either no hidden layer or with one hidden layer with 1, 2, ..., 40 hidden neurons. For SVM we use a Gaussian kernel with its parameter fixed at 0.1 and varying cost parameter C in the range [0.1, 5]. 10-fold cross-validation is used to obtain overall estimates for the probability of making a correct prediction. Our experiments indicate for NN and SVM that the different binary classifiers have varying accuracies: from 69% correct predictions for coil vs. non-coil up to 80% correct predictions for strand vs. non-strand. It is further demonstrated that NN with no hidden layer or with not more than 2 hidden neurons in the hidden layer are sufficient for better predictions. For SVM we show that the estimated accuracies do not depend on the value of the cost parameter. As a major result, we demonstrate that the accuracy estimates of NN and SVM binary classifiers cannot be distinguished. This contradicts a modern belief in bioinformatics that SVM outperforms other predictors.
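One binary classifier of the kind described can be sketched as follows: an RBF-kernel SVM with the kernel parameter fixed at 0.1 and the cost parameter C swept over [0.1, 5], scored by 10-fold cross-validation. This is an illustrative scikit-learn analogue, not the thesis code; the toy features and the helix vs. non-helix labelling rule are invented stand-ins for encoded amino-acid sequence windows.

```python
# Hypothetical sketch of one of the six binary classifiers: Gaussian
# (RBF) kernel SVM, gamma fixed, C varied, 10-fold cross-validated.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))                   # stand-in sequence features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in helix vs. non-helix label

accuracies = {}
for C in (0.1, 0.5, 1.0, 2.5, 5.0):              # cost-parameter sweep over [0.1, 5]
    clf = SVC(kernel="rbf", gamma=0.1, C=C)
    accuracies[C] = cross_val_score(clf, X, y, cv=10).mean()

print({C: round(a, 3) for C, a in accuracies.items()})
```

Comparing the cross-validated accuracies across the C values mirrors the thesis finding that the estimates are insensitive to the cost parameter.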
- Full Text:
- Date Issued: 2009