A comparison of web-based technologies to serve images from an Oracle9i database
- Authors: Swales, Dylan
- Date: 2004 , 2013-06-18
- Subjects: Active server pages , Microsoft .NET , JavaServer pages , Oracle (Computer file) , Internet searching , Web site development--Computer programs , World Wide Web , Online information services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4583 , http://hdl.handle.net/10962/d1004380 , Active server pages , Microsoft .NET , JavaServer pages , Oracle (Computer file) , Internet searching , Web site development--Computer programs , World Wide Web , Online information services
- Description: The nature of Internet and Intranet Web applications has changed from a static content-distribution medium into an interactive, dynamic medium, often used to serve multimedia from back-end object-relational databases to Web-enabled clients. Consequently, developers need to make an informed technological choice for developing software that supports a Web-based application for distributing multimedia over networks. This decision is based on several factors, among them ease of programming, richness of features, scalability, and performance. The research focuses on these key factors when distributing images from an Oracle9i database using Java Servlets, JSP, ASP, and ASP.NET as the server-side development technologies. Prototype applications are developed and tested within each technology: one for single image serving and the other for multiple image serving. A matrix of recommendations is provided to distinguish which technology, or combination of technologies, provides the best performance and development platform for image serving within the studied environment.
- Full Text:
- Date Issued: 2004
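Whatever the server-side technology, the single-image-serving prototype described in this record follows one pattern: fetch the image BLOB by key, set the MIME type, and stream the bytes to the client. Below is a minimal sketch of that pattern, written in Python with the standard library's http.server and sqlite3 standing in for the thesis's servlet/JSP/ASP front ends and Oracle9i back end; the images table and its columns are hypothetical.

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "images.db"  # stand-in for the Oracle9i database used in the thesis

class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect URLs of the form /image/<id>; the schema is hypothetical.
        image_id = self.path.rsplit("/", 1)[-1]
        conn = sqlite3.connect(DB_PATH)
        row = conn.execute(
            "SELECT mime_type, data FROM images WHERE id = ?", (image_id,)
        ).fetchone()
        conn.close()
        if row is None:
            self.send_error(404, "No such image")
            return
        mime_type, data = row
        self.send_response(200)
        self.send_header("Content-Type", mime_type)  # e.g. image/jpeg
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)  # stream the BLOB to the client

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ImageHandler).serve_forever()
```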
Technology in conservation: towards a system for in-field drone detection of invasive vegetation
- Authors: James, Katherine Margaret Frances
- Date: 2020
- Subjects: Drone aircraft in remote sensing , Neural networks (Computer science) , Drone aircraft in remote sensing -- Case studies , Machine learning , Computer vision , Environmental monitoring -- Remote sensing , Invasive plants -- Monitoring
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143408 , vital:38244
- Description: Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, originating in Australia and invasive in several countries including South Africa. However, for this research, the methodology is important, rather than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely, classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when the conditions are ideal. False positive detection remains an issue that could be improved. This approach shows the potential for drones as an environmental monitoring and management tool when coupled with deep machine learning techniques and outlines potential problems that may be encountered.
- Full Text:
- Date Issued: 2020
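The field results above are stated in terms of recall and false positives. As a reference for how pixel-wise segmentation output is commonly scored, here is a small numpy sketch; it illustrates the standard metrics rather than the thesis's own evaluation code, and the binary masks are assumed inputs.

```python
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise recall and false-positive rate for binary masks,
    where True marks pixels labelled as the target (invasive) shrub."""
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    recall = float(tp) / (tp + fn) if tp + fn else 0.0
    fpr = float(fp) / (fp + tn) if fp + tn else 0.0
    return recall, fpr

# Toy 2x2 masks: one target pixel found, one missed, one false alarm.
pred = np.array([[True, False], [True, False]])
truth = np.array([[True, True], [False, False]])
print(segmentation_scores(pred, truth))  # (0.5, 0.5)
```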
NetwIOC: a framework for the automated generation of network-based IOCS for malware information sharing and defence
- Authors: Rudman, Lauren Lynne
- Date: 2018
- Subjects: Malware (Computer software) , Computer networks Security measures , Computer security , Python (Computer program language)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60639 , vital:27809
- Description: With the substantial number of new malware variants found each day, it is useful to have an efficient way to retrieve Indicators of Compromise (IOCs) from the malware in a format suitable for sharing and detection. In the past, these indicators were manually created after inspection of binary samples and network traffic. The Cuckoo Sandbox is an existing dynamic malware analysis system which meets the requirements for the proposed framework and was extended by adding a few custom modules. This research explored a way to automate the generation of detailed network-based IOCs in a popular format which can be used for sharing. This was done through careful filtering and analysis of the PCAP file generated by the sandbox, and placing these values into the correct type of STIX objects using Python. Through several evaluations, analysis of what type of network traffic can be expected for the creation of IOCs was conducted, including a brief case study that examined the effect of analysis time on the number of IOCs created. Using the automatically generated IOCs to create defence and detection mechanisms for the network was evaluated and proved successful. A proof-of-concept sharing platform developed for the STIX IOCs is showcased at the end of the research.
- Full Text:
- Date Issued: 2018
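The framework's core transformation, taking a value filtered from the sandbox PCAP and wrapping it in a shareable STIX object, can be illustrated with the stix2 Python library. This is a sketch under assumptions: the address, name, and description are placeholders, and the exact object types the thesis emits are not reproduced here.

```python
from datetime import datetime, timezone
from stix2 import Indicator, Bundle  # pip install stix2

# Placeholder standing in for an address filtered out of the sandbox
# PCAP; the real framework derives these values from Cuckoo output.
c2_address = "198.51.100.7"

indicator = Indicator(
    name="Malware C2 traffic",
    description="Contacted during dynamic analysis (hypothetical sample)",
    pattern=f"[ipv4-addr:value = '{c2_address}']",
    pattern_type="stix",
    valid_from=datetime.now(timezone.utc),
)

# Bundling makes the IOC ready for exchange on a sharing platform.
print(Bundle(indicator).serialize(pretty=True))
```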
Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Authors: Tristram, Waide Barrington
- Date: 2012
- Subjects: Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4655 , http://hdl.handle.net/10962/d1006651 , Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Description: The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library depends on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide similar performance compared to explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
- Full Text:
- Date Issued: 2012
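One of the key concepts any discussion of multiprocessor performance leans on is Amdahl's law, which bounds the speedup available when only part of a program parallelises. A quick worked sketch follows; the 90% parallel fraction is an illustrative assumption, not a figure from the thesis.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of a program parallelises."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 90% of the runtime parallelisable, extra cores quickly stop helping:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 16 -> 6.4
```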
Towards a collection of cost-effective technologies in support of the NIST cybersecurity framework
- Authors: Shackleton, Bruce Michael Stuart
- Date: 2018
- Subjects: National Institute of Standards and Technology (U.S.) , Computer security , Computer networks Security measures , Small business Information technology Cost effectiveness , Open source software
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/62494 , vital:28199
- Description: The NIST Cybersecurity Framework (CSF) is a specific risk and cybersecurity framework. It provides guidance on controls that can be implemented to help improve an organisation’s cybersecurity risk posture. The CSF Functions consist of Identify, Protect, Detect, Respond, and Recover. Like most Information Technology (IT) frameworks, there are elements of people, processes, and technology. The same elements are required to successfully implement the NIST CSF. This research specifically focuses on the technology element. While there are many commercial technologies available for a small to medium sized business, the costs can be prohibitively expensive. Therefore, this research investigates cost-effective technologies and assesses their alignment to the NIST CSF. The assessment was made against the NIST CSF subcategories. Each subcategory was analysed to identify where a technology would likely be required. The framework provides a list of Informative References. These Informative References were used to create high-level technology categories, as well as identify the technical controls against which the technologies were measured. The technologies tested were either open source or proprietary. All open source technologies tested were free to use, or have a free community edition. Proprietary technologies would be free to use, or considered generally available to most organisations, such as components contained within Microsoft platforms. The results from the experimentation demonstrated that there are multiple cost-effective technologies that can support the NIST CSF. Once all technologies were tested, the NIST CSF was extended. Two new columns were added, namely high-level technology category, and tested technology. The columns were populated with output from the research. This extended framework begins an initial collection of cost-effective technologies in support of the NIST CSF.
- Full Text:
- Date Issued: 2018
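The extension described above is, mechanically, a simple data operation: every row of the framework gains two new columns. A sketch of how such an augmentation might be scripted; the file name is hypothetical and the placeholder values stand in for the thesis's test results.

```python
import csv

# Hypothetical input: the NIST CSF subcategories exported as CSV.
with open("nist_csf.csv", newline="") as src, \
     open("nist_csf_extended.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fields = reader.fieldnames + ["High-Level Technology Category",
                                  "Tested Technology"]
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        # In the thesis these values came from the experimentation
        # phase; here they are placeholders.
        row["High-Level Technology Category"] = "TBD"
        row["Tested Technology"] = "TBD"
        writer.writerow(row)
```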
Buffering strategies and bandwidth renegotiation for MPEG video streams
- Authors: Schonken, Nico
- Date: 1999
- Subjects: Video compression , Computer algorithms , Digital video
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4651 , http://hdl.handle.net/10962/d1006620 , Video compression , Computer algorithms , Digital video
- Description: This paper confirms the existence of short-term and long-term variation in the required bandwidth for MPEG video streams. We show how the use of a small amount of buffering and GOP grouping can significantly reduce the effect of the short-term variation. By introducing a number of bandwidth renegotiation techniques, which can be applied to MPEG video streams in general, we are able to reduce the effect of long-term variation. These techniques include those that need a priori knowledge of frame sizes as well as one that can renegotiate dynamically. A costing algorithm has also been introduced in order to compare the various proposals against each other.
- Full Text:
- Date Issued: 1999
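The buffering result rests on a simple observation: averaging frame sizes over a Group of Pictures (GOP) flattens the large I/P/B frame-size differences that drive short-term variation. A sketch of that smoothing step; the frame sizes, 12-frame GOP, and 25 fps rate are illustrative assumptions, not figures from the thesis.

```python
def gop_bandwidth(frame_sizes, gop_length=12, fps=25):
    """Per-GOP average bandwidth (bytes/s) instead of per-frame peaks."""
    rates = []
    for i in range(0, len(frame_sizes), gop_length):
        gop = frame_sizes[i:i + gop_length]
        rates.append(sum(gop) / len(gop) * fps)
    return rates

# Illustrative MPEG pattern: a large I-frame followed by smaller P/B frames.
gop = [24000] + [8000, 3000, 3000] * 3 + [8000, 3000]
print(max(gop) * 25)           # peak per-frame rate: 600000 bytes/s
print(gop_bandwidth(gop * 4))  # smoothed per-GOP rate is far lower
```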
Towards large scale software based network routing simulation
- Authors: Herbert, Alan
- Date: 2015
- Subjects: Routers (Computer networks) , Computer software , Linux
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4709 , http://hdl.handle.net/10962/d1017931
- Description: Software based routing simulators suffer from large simulation host requirements and are prone to slowdowns because of resource limitations, as well as context switching due to user space to kernel space requests. Furthermore, hardware based simulations do not scale with the passing of time, as their available resources are set at the time of manufacture. This research aims to provide a software based, scalable solution to network simulation. It aims to achieve this by a Linux kernel-based solution, through insertion of a custom kernel module. This reduces the number of context switches by eliminating the user space context requirement, and makes the simulator highly compatible with any host that can run the Linux kernel. Through careful consideration in data structure choice and software component design, this routing simulator achieved results of over 7 Gbps of throughput over multiple simulated node hops on consumer hardware. Alongside this throughput, this routing simulator also demonstrates scalability and the ability to instantiate and simulate networks in excess of 1 million routing nodes within 1 GB of system memory.
- Full Text:
- Date Issued: 2015
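The closing scalability claim, over a million simulated nodes within 1 GB, implies a tight per-node memory budget that is worth making explicit. This is a back-of-envelope check, not the simulator's actual data layout.

```python
# Rough per-node budget implied by the reported figures.
nodes = 1_000_000
memory_bytes = 1 * 1024**3  # 1 GiB
print(memory_bytes / nodes)  # ~1073 bytes per simulated routing node
```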
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or seek appropriate optimisation guidance. Through what was learned in the acceleration of a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that can assist in the evaluation of the difficulty in accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in GPU acceleration difficulty evaluation. This framework further facilitates directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove to be a useful resource for new or novice GPGPU developers in the evaluation of potential problems for GPU acceleration.
- Full Text:
- Date Issued: 2014
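A classification framework built on the seven attributes listed above can be pictured as a weighted checklist. The sketch below shows one hypothetical way to turn attribute ratings into a difficulty class; the weights and thresholds are invented for illustration, and the thesis's framework is richer than this.

```python
# The seven attributes identified in the thesis, rated 0 (favourable
# for GPU acceleration) to 2 (unfavourable). Weights are hypothetical.
WEIGHTS = {
    "inherent_parallelism": 3,
    "branch_divergence": 2,
    "problem_size": 1,
    "required_computational_parallelism": 2,
    "memory_access_regularity": 2,
    "data_transfer_overhead": 2,
    "thread_cooperation": 1,
}

def difficulty(ratings: dict) -> str:
    score = sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)
    if score <= 6:
        return "straightforward"
    if score <= 14:
        return "moderate"
    return "hard"

# A data-parallel problem with regular memory access rates low everywhere:
print(difficulty({k: 0 for k in WEIGHTS}))  # straightforward
```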
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activities that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection from various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also distribute malicious payloads using the DNS queries and responses. Cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by security controls put in place by embedding certain strings in DNS packets. This research undertaking broadens this research field and fills in the existing research gap by extending the analysis of DNS being used as a payload distribution channel to detection of domains that are used to distribute different malicious payloads. This research undertaking analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates DNS queries seen on name servers, was evaluated and analysed in order to detect anomalies in DNS queries and identify malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records.
- Full Text:
- Date Issued: 2019
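A common first-pass heuristic for the kind of TXT-record analysis described here is to flag payloads that are unusually long or high-entropy, since encoded binaries look nothing like ordinary SPF or verification strings. A standard-library sketch; the thresholds are illustrative, not values from the research.

```python
import base64
import math
import os
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_txt(record: str, max_len=255, min_entropy=4.5) -> bool:
    """Flag TXT payloads that look like encoded data rather than text."""
    return len(record) >= max_len or shannon_entropy(record) >= min_entropy

print(suspicious_txt("v=spf1 include:_spf.example.com ~all"))  # False
payload = base64.b64encode(os.urandom(96)).decode()  # looks like smuggled binary
print(suspicious_txt(payload))  # True (high entropy)
```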
Developing high-fidelity mental models of programming concepts using manipulatives and interactive metaphors
- Authors: Funcke, Matthew
- Date: 2015
- Subjects: Computer programming -- Study and teaching (Higher) , Computer programmers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4707 , http://hdl.handle.net/10962/d1017929
- Description: It is well established that both learning and teaching programming are difficult tasks. Difficulties often occur due to weak mental models and common misconceptions. This study proposes a method of teaching programming that both encourages high-fidelity mental models and attempts to minimise misconceptions in novice programmers, through the use of metaphors and manipulatives. The elements in ActionWorld with which the students interact are realizations of metaphors. By simple example, a variable has a metaphorical representation as a labelled box that can hold a value. The dissertation develops a set of metaphors which have several core requirements: metaphors should avoid causing misconceptions, they need to be high-fidelity so as to avoid failing when used with a new concept, students must be able to relate to them, and finally, they should be usable across multiple educational media. The learning style that ActionWorld supports is one which requires active participation from the student - the system acts as a foundation upon which students are encouraged to build their mental models. This teaching style is achieved by placing the student in the role of code interpreter, the code they need to interpret will not advance until they have demonstrated its meaning via use of the aforementioned metaphors. ActionWorld was developed using an iterative developmental process that consistently improved upon various aspects of the project through a continual evaluation-enhancement cycle. The primary outputs of this project include a unified set of high-fidelity metaphors, a virtual-machine API for use in similar future projects, and two metaphor-testing games. All of the aforementioned deliverables were tested using multiple quality-evaluation criteria, the results of which were consistently positive. ActionWorld and its constituent components contribute to the wide assortment of methods one might use to teach novice programmers.
- Full Text:
- Date Issued: 2015
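Part of the appeal of the labelled-box metaphor is that it translates almost directly into code. A toy rendering of the idea follows; the class and method names are hypothetical, and ActionWorld itself realises the metaphor as an interactive game rather than as this snippet.

```python
class LabelledBox:
    """A variable rendered as the metaphor describes it: a labelled
    box that holds exactly one value at a time."""

    def __init__(self, label):
        self.label = label
        self.contents = None  # an empty box: the variable is unassigned

    def put(self, value):
        # Assignment replaces whatever the box held before.
        self.contents = value

    def look_inside(self):
        # Reading a variable inspects the box without emptying it.
        return self.contents

x = LabelledBox("x")
x.put(42)                # x = 42
print(x.look_inside())   # 42
x.put(7)                 # reassignment discards the old value
print(x.look_inside())   # 7
```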
DRUBIS : a distributed face-identification experimentation framework - design, implementation and performance issues
- Authors: Ndlangisa, Mboneli
- Date: 2004
- Subjects: Principal components analysis , Human face recognition (Computer science) , Image processing , Biometric identification
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4704 , http://hdl.handle.net/10962/d1015768
- Description: We report on the design, implementation and performance issues of the DRUBIS (Distributed Rhodes University Biometric Identification System) experimentation framework. The Principal Component Analysis (PCA) face-recognition approach is used as a case study. DRUBIS is a flexible experimentation framework, distributed over a number of modules that are easily pluggable and swappable, allowing for the easy construction of prototype systems. Web services are the logical means of distributing DRUBIS components and a number of prototype applications have been implemented from this framework. Different popular PCA face-recognition related experiments were used to evaluate our experimentation framework. We extract recognition performance measures from these experiments. In particular, we use the framework for a more in-depth study of the suitability of the DFFS (Difference From Face Space) metric as a means for image classification in the area of race and gender determination.
- Full Text:
- Date Issued: 2004
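The DFFS metric examined above has a compact definition: project an image into the eigenface subspace, reconstruct it, and measure the residual. A numpy sketch of that computation; the training matrix, sizes, and random data are illustrative assumptions.

```python
import numpy as np

def fit_face_space(train: np.ndarray, k: int):
    """PCA 'face space': mean face plus top-k eigenfaces (rows of vt)."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def dffs(image: np.ndarray, mean: np.ndarray, eigenfaces: np.ndarray) -> float:
    """Difference From Face Space: reconstruction error after projection."""
    centred = image - mean
    weights = eigenfaces @ centred           # project into face space
    reconstruction = eigenfaces.T @ weights  # back-project
    return float(np.linalg.norm(centred - reconstruction))

# Toy data: 20 random 'images' of 64 pixels, 5 eigenfaces retained.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 64))
mean, faces = fit_face_space(train, k=5)
print(dffs(train[0], mean, faces))  # lower DFFS = closer to the face space
```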
An automatic programming system to generate payroll programs
- Authors: Fielding, Elizabeth Vera Catherine
- Date: 1979
- Subjects: Computer software -- Development , Programming (computers) , Software architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4695 , http://hdl.handle.net/10962/d1011829 , Computer software -- Development , Programming (computers) , Software architecture
- Description: The purpose of this project was to try to investigate one approach to the problem of automatically generating programs from some specification. Rather than following the approach which requires the user to define his problem using some formulation, it was decided to look at a class of problems that have similar solutions, but have many variations, and to try to design a system capable of obtaining user requirements and generating solutions tailored to these requirements. The aim was to design the system in such a way that it could be extended to cater for other classes of problems, so that eventually a system which could automatically generate program solutions for a range of problems might be developed. Intro. p. 1.
- Full Text:
- Date Issued: 1979
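The generate-from-requirements idea this project investigated survives today as template-based code generation. A minimal modern sketch of the approach; the questionnaire option, tax rate, and emitted program are invented placeholders, not the 1979 system's actual dialogue or output.

```python
# A tiny questionnaire-driven generator: user answers select which
# fragments appear in the emitted payroll program.
TEMPLATE = """\
def net_pay(hours, rate):
    gross = hours * rate
{overtime}    tax = gross * {tax_rate}
    return gross - tax
"""

OVERTIME_FRAGMENT = (
    "    if hours > 40:\n"
    "        gross += (hours - 40) * rate * 0.5  # time-and-a-half\n"
)

def generate(wants_overtime: bool, tax_rate: float) -> str:
    return TEMPLATE.format(
        overtime=OVERTIME_FRAGMENT if wants_overtime else "",
        tax_rate=tax_rate,
    )

program = generate(wants_overtime=True, tax_rate=0.25)
print(program)            # inspect the generated source
exec(program)             # define the generated function in this session
print(net_pay(45, 10.0))  # 356.25
```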
Detecting derivative malware samples using deobfuscation-assisted similarity analysis
- Authors: Wrench, Peter Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/383 , vital:19954
- Description: The overwhelming popularity of PHP as a hosting platform has made it the language of choice for developers of Remote Access Trojans (RATs or web shells) and other malicious software. These shells are typically used to compromise and monetise web platforms by providing the attacker with basic remote access to the system, including file transfer, command execution, network reconnaissance, and database connectivity. Once infected, compromised systems can be used to defraud users by hosting phishing sites, performing Distributed Denial of Service attacks, or serving as anonymous platforms for sending spam or other malfeasance. The vast majority of these threats are largely derivative, incorporating core capabilities found in more established RATs such as c99 and r57. Authors of malicious software routinely produce new shell variants by modifying the behaviours of these ubiquitous RATs, either to add desired functionality or to avoid detection by signature-based detection systems. Once these modified shells are eventually identified (or additional functionality is required), the process of shell adaptation begins again. The end result of this iterative process is a web of separate but related shell variants, many of which are at least partially derived from one of the more popular and influential RATs. In response to the problem outlined above, the author set out to design and implement a system capable of circumventing common obfuscation techniques and identifying derivative malware samples in a given collection. To begin with, a decoder component was developed to syntactically deobfuscate and normalise PHP code by detecting and reversing idiomatic obfuscation constructs, and to apply uniform formatting conventions to all system inputs. A unified malware analysis framework, called Viper, was then extended to create a modular similarity analysis system comprised of individual feature extraction modules, modules responsible for batch processing, a matrix module for comparing sample features, and two visualisation modules capable of generating visual representations of shell similarity. The principal conclusion of the research was that the deobfuscation performed by the decoder component prior to analysis dramatically improved the observed levels of similarity between test samples. This in turn allowed the modular similarity analysis system to identify derivative clusters (or families) within a large collection of shells more accurately. Techniques for isolating and re-rendering these clusters were also developed and demonstrated to be effective at increasing the amount of detail available for evaluating the relative magnitudes of the relationships within each cluster.
- Full Text:
- Date Issued: 2016
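The decoder's job, detecting and reversing idiomatic obfuscation constructs, can be illustrated with one of the most common PHP shell idioms, eval(base64_decode('...')). A simplified sketch of one syntactic deobfuscation pass; real shells nest and vary these constructs, which is what the thesis's decoder handles, and the payload here is a harmless stand-in.

```python
import base64
import re

# Matches the idiomatic construct eval(base64_decode('...'));
PATTERN = re.compile(r"eval\(\s*base64_decode\(\s*'([^']+)'\s*\)\s*\)\s*;")

def deobfuscate_pass(php_source: str) -> str:
    """Replace each eval(base64_decode('...')); with its decoded body."""
    def _decode(match: re.Match) -> str:
        return base64.b64decode(match.group(1)).decode("utf-8", "replace")
    return PATTERN.sub(_decode, php_source)

shell = "<?php eval(base64_decode('ZWNobyAnaGVsbG8nOw==')); ?>"
print(deobfuscate_pass(shell))  # <?php echo 'hello'; ?>
```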
Static analysis of functional languages
- Authors: Mountjoy, Jon-Dean
- Date: 1994 , 2012-10-10
- Subjects: Functional programming languages
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4669 , http://hdl.handle.net/10962/d1006690 , Functional programming languages
- Description: Static analysis is the name given to a number of compile-time analysis techniques used to automatically generate information which can lead to improvements in the execution performance of functional languages. This thesis provides an introduction to these techniques and their implementation. The abstract interpretation framework is an example of a technique used to extract information from a program by providing the program with an alternate semantics and evaluating this program over a non-standard domain. The elements of this domain represent certain properties of interest. This framework is examined in detail, as well as various extensions and variants of it. The use of binary logical relations and program logics as alternative formulations of the framework, and of partial equivalence relations as an extension to it, is also looked at. The projection analysis framework determines how much of a sub-expression can be evaluated by examining the context in which the expression is to be evaluated, and provides an elegant method for finding particular types of information from data structures. This is also examined. The most costly operation in implementing an analysis is the computation of fixed points. Methods developed to make this process more efficient are looked at. This leads to the final chapter which highlights the dependencies and relationships between the different frameworks and their mathematical disciplines.
- Full Text:
- Date Issued: 1994
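Abstract interpretation, as characterised above, re-runs a program over a domain of properties instead of concrete values. The classic toy instance is sign analysis, sketched here for arithmetic; this is a minimal illustration of the framework, not the strictness-style analyses the thesis studies.

```python
# Non-standard domain: the sign of a number rather than its value.
# 'top' means 'could be any sign', the least informative element.
NEG, ZERO, POS, TOP = "-", "0", "+", "top"

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP  # mixed signs: result unknown

def square_plus_one(x_sign):
    # Abstractly evaluate x*x + 1 without knowing x.
    return abs_add(abs_mul(x_sign, x_sign), POS)

for s in (NEG, ZERO, POS, TOP):
    print(s, "->", square_plus_one(s))
# Any definite sign for x proves the result positive; 'top' stays 'top'
# because this simple abstraction cannot see that both factors are x.
```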
An investigation into some critical computer networking parameters : Internet addressing and routing
- Authors: Isted, Edwin David
- Date: 1996
- Subjects: Computer networks , Internet , Electronic mail systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4608 , http://hdl.handle.net/10962/d1004874 , Computer networks , Internet , Electronic mail systems
- Description: This thesis describes the evaluation of several proposals suggested as replacements for the current Internet's TCP/IP protocol suite. The emphasis of this thesis is on how the proposals solve the current routing and addressing problems associated with the Internet. The addressing problem is found to be related to address space depletion, and the routing problem related to excessive routing costs. The evaluation is performed based on criteria selected for their applicability as future Internet design criteria. All the protocols are evaluated using the above-mentioned criteria. It is concluded that the most suitable addressing mechanism is an expandable multi-level format, with a logical separation of location and host identification information. Similarly, the most suitable network representation technique is found to be an unrestricted hierarchical structure which uses a suitable abstraction mechanism. It is further found that these two solutions could adequately solve the existing addressing and routing problems and allow substantial growth of the Internet.
- Full Text:
- Date Issued: 1996
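The two recommendations in this abstract, an expandable multi-level address with location kept separate from host identification, and hierarchical routing that aggregates on the location part, can be illustrated with a short sketch. The structures and names below are hypothetical and are not drawn from the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Address:
    locator: tuple      # variable-length hierarchy, e.g. (provider, region, site)
    host_id: str        # identifies the host; never used for routing

def longest_prefix_match(table, address):
    """Pick the table entry whose locator prefix matches the most levels."""
    best, best_len = None, -1
    for prefix, next_hop in table.items():
        n = len(prefix)
        if address.locator[:n] == prefix and n > best_len:
            best, best_len = next_hop, n
    return best

# Aggregated table: one entry covers every site under provider 7.
table = {
    (7,): "link-to-provider-7",
    (7, 2): "direct-link-region-2",
}
addr = Address(locator=(7, 2, 9), host_id="host-42")
print(longest_prefix_match(table, addr))   # -> "direct-link-region-2"
```

Because routing consults only the locator prefix, the host identifier can stay stable while the hierarchy above it grows, which is what lets the address space and the routing tables scale independently.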
The analysis of a computer music network and the implementation of essential subsystems
- Authors: Wilks, Antony John
- Date: 1995
- Subjects: Computer networks , Computer music , MIDI (Standard)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4666 , http://hdl.handle.net/10962/d1006682 , Computer networks , Computer music , MIDI (Standard)
- Description: The inability to share resources in commercial and institutional computer music studios results in non-optimal resource utilisation. The use of computers to process, store and communicate data can be extended within these studios, to provide the capability of sharing resources amongst their users. This thesis describes a computer music network which was designed for this purpose. Certain devices had to be custom built for the implementation of the network. The thesis discusses the design and construction of these devices.
- Full Text:
- Date Issued: 1995
Design, evaluation and comparison of evolution and reinforcement learning models
- Authors: Mclean, Clinton Brett
- Date: 2002
- Subjects: Evolutionary computation , Neural networks (Computer science) , Reinforcement learning
- Language: English
- Type: Thesis , Masters , MEcon
- Identifier: vital:4625 , http://hdl.handle.net/10962/d1006493
- Description: This work presents the design, evaluation and comparison of evolution and reinforcement learning models, in isolation and combined in Darwinian and Lamarckian frameworks, with particular emphasis placed on their adaptive nature in response to environments that become increasingly unstable. Our ultimate objective is to determine whether hybrid models of evolution and learning can demonstrate adaptive qualities beyond those of such models when applied in isolation. This work demonstrates the limitations of evolution, reinforcement learning and Lamarckian models in dealing with increasingly unstable environments, while noting the effective adaptive nature of a Darwinian model in assimilating increasing levels of instability. This is shown to be a result of the Darwinian evolution model's ability to separate learning at two levels: the population's experience of the environment over the course of many generations, and the individual's experience of the environment over the course of its lifetime. Thus, knowledge relating to the general characteristics of the environment over many generations can be maintained in the population's genotypes, with phenotype (reinforcement) learning being utilized to adapt a particular agent to the particular characteristics of its environment. Lamarckian evolution, though, is shown to demonstrate adaptive characteristics that are highly effective in response to stable environments. Selection and reproduction combined with reinforcement learning create a model that has the ability to utilize useful knowledge produced by reinforcements, as opposed to random mutations, to accelerate the search process. As a result, the influence of individual learning on the population's evolution is shown to be more successful when applied in the more direct Lamarckian form. Based on our results demonstrating the success of Lamarckian strategies in stable environments and Darwinian strategies in unstable environments, hybrid Darwinian/Lamarckian models are created with a view towards combining the advantages of both forms of evolution to produce a superior adaptive capability. Our investigation demonstrates that such hybrid models can effectively combine the adaptive advantages of both Darwinian and Lamarckian evolution to provide a more effective capability of adapting to a range of conditions, from stable to unstable, appropriately adjusting the required degree of inheritance in response to the requirements of the environment.
- Full Text:
- Date Issued: 2002
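The Darwinian/Lamarckian distinction discussed in this abstract comes down to one design choice: whether what an individual learns in its lifetime is written back into the genotype it passes on. The toy sketch below, with invented parameters and a deliberately trivial one-dimensional environment, exposes that choice as a single `write_back` probability; it illustrates the general idea, not the models built in the thesis.

```python
import random

def fitness(x):                 # toy environment: maximise -(x - 3)^2
    return -(x - 3.0) ** 2

def learn(x, steps=5, lr=0.3):
    """Lifetime reinforcement learning: keep random nudges that improve fitness."""
    for _ in range(steps):
        candidate = x + random.gauss(0, lr)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def evolve(write_back, generations=50, pop_size=20):
    """write_back is the degree of Lamarckian inheritance:
    0.0 = pure Darwinian, 1.0 = pure Lamarckian."""
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        learned = [learn(g) for g in population]        # phenotypes
        # Select on learned fitness, then choose what is actually inherited:
        scored = sorted(zip(learned, population),
                        key=lambda p: fitness(p[0]), reverse=True)
        parents = [(ph if random.random() < write_back else g)
                   for ph, g in scored[:pop_size // 2]]
        population = [p + random.gauss(0, 0.5)          # mutate offspring
                      for p in parents for _ in (0, 1)]
    return max(fitness(learn(g)) for g in population)

print("Darwinian :", evolve(write_back=0.0))
print("Lamarckian:", evolve(write_back=1.0))
```

Setting `write_back` between 0.0 and 1.0 gives the hybrid strategies the abstract refers to, where the degree of inheritance can be tuned to the stability of the environment.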
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a boundless list of services and opportunities, generating boundless sources of information, which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in both static and mobile environments; this fact alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats to corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of the cyber threat. Penetration tests and vulnerability assessments are undervalued: they are seen as a formality and are not used to increase system security. If used regularly, computer systems would be more secure and attacks minimized. With the growth in technology, industries all over the globe have become fully dependent on information systems in doing their day-to-day business. As technology evolves and new technology becomes available, the risks that must be protected against grow with it. For industry to protect itself against this growth in technology, personnel with a certain skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is on the rise as well. It is estimated that there is a shortage of one million cyber security professionals globally. What is the reason for this skills shortage? Will it be possible to close this gap? This study is about identifying the skills gap and possible ways to close it. Research was conducted on international cyber security standards, cyber security training at universities, and international certification, focusing specifically on penetration testing; the needs of industry when recruiting new penetration testers were evaluated; and the study concludes with suggestions on how to fill possible gaps in the skills market.
- Full Text:
- Date Issued: 2021
Investigating the viability of a framework for small scale, easily deployable and extensible hotspot management systems
- Authors: Thinyane, Mamello P
- Date: 2006
- Subjects: Local area networks (Computer networks) , Computer networks -- Management , Computer network architectures , Computer network protocols , Wireless communication systems , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4638 , http://hdl.handle.net/10962/d1006553
- Description: The proliferation of PALs (Public Access Locations) is fuelling the development of new standards, protocols, services, and applications for WLANs (Wireless Local Area Networks). PALs are set up at public locations to meet continually changing, multi-service, multi-protocol user requirements. This research investigates the essential infrastructural requirements that will enable further proliferation of PALs, and consequently facilitate ubiquitous computing. Based on these requirements, an extensible architectural framework for PAL management systems that inherently facilitates the provisioning of multiple services and multiple protocols on PALs is derived. The ensuing framework, which is called Xobogel, is based on the microkernel architectural pattern and the IPDR (Internet Protocol Data Record) specification. Xobogel takes into consideration and supports the implementation of diverse business models for PALs, in respect of distinct environmental factors. It also facilitates next-generation network service usage accounting through a simple, flexible, and extensible XML-based usage record. The framework is subsequently validated for service element extensibility and simplicity through the design, implementation, and experimental deployment of SEHS (Small Extensible Hotspot System), a system based on the framework. The robustness and scalability of the framework is observed to be sufficient for SMME deployment, withstanding the stress testing experiments performed on SEHS. The range of service element and charging modules implemented confirms an acceptable level of flexibility and extensibility within the framework.
- Full Text:
- Date Issued: 2006
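The "simple, flexible, and extensible XML-based usage record" is the load-bearing idea here: a new service element only has to contribute its own usage attributes. Below is a minimal sketch of such a record builder; the element names are illustrative and do not reproduce the actual IPDR or Xobogel schema.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def usage_record(user, service, attributes):
    """Build one usage record; service-specific fields are free-form
    child elements, which is what keeps the format extensible."""
    rec = ET.Element("usageRecord")                 # illustrative element names
    ET.SubElement(rec, "user").text = user
    ET.SubElement(rec, "service").text = service
    ET.SubElement(rec, "timestamp").text = \
        datetime.now(timezone.utc).isoformat()
    usage = ET.SubElement(rec, "usage")
    for name, value in attributes.items():          # service-specific attributes
        ET.SubElement(usage, name).text = str(value)
    return ET.tostring(rec, encoding="unicode")

# A new service element only has to supply its own attribute dict:
print(usage_record("alice", "web", {"bytesIn": 10240, "bytesOut": 2048}))
```

A charging module can then consume whichever child elements it understands and ignore the rest, so adding a service does not force changes to existing accounting code.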
Designing and implementing a virtual reality interaction framework
- Authors: Rorke, Michael
- Date: 2000
- Subjects: Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4623 , http://hdl.handle.net/10962/d1006491 , Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Description: Virtual Reality offers the possibility for humans to interact in a more natural way with the computer and its applications. Currently, Virtual Reality is used mainly in the field of visualisation, where 3D graphics allow users to more easily view complex sets of data or structures. The field of interaction in Virtual Reality has been largely neglected, due mainly to problems with input devices and equipment costs. Recent research has aimed to overcome these interaction problems, thereby creating a usable interaction platform for Virtual Reality. This thesis presents a background to the field of interaction in Virtual Reality. It goes on to propose a generic framework for the implementation of common interaction techniques into a homogeneous application development environment. This framework adds a new layer to the standard Virtual Reality toolkit: the interaction abstraction layer, or interactor layer. This separation is in line with current HCI practices. The interactor layer is further divided into specific sections: input component, interaction component, system component, intermediaries, entities and widgets. Each of these performs a specific function, with clearly defined interfaces between the different components to promote easy object-oriented implementation of the framework. The validity of the framework is shown in comparison with accepted taxonomies in the area of Virtual Reality interaction, thus demonstrating that the framework covers all the relevant factors involved in the field. Furthermore, the thesis describes an implementation of this framework. The implementation was completed using the Rhodes University CoRgi Virtual Reality toolkit. Several postgraduate students in the Rhodes University Computer Science Department utilised the framework implementation to develop a set of case studies. These case studies demonstrate the practical use of the framework to create useful Virtual Reality applications, as well as demonstrating the generic nature of the framework and its extensibility to handle new interaction techniques. Finally, the generic nature of the framework is further demonstrated by moving it from the standard CoRgi Virtual Reality toolkit to a distributed version of this toolkit. The distributed implementation of the framework utilises the Common Object Request Broker Architecture (CORBA) to implement the distribution of the objects in the system. Using this distributed implementation, we are able to ascertain that CORBA is useful in the field of distributed real-time Virtual Reality, even taking into account the extra overhead introduced by the additional abstraction layer. We conclude from this thesis that it is important to abstract the interaction layer from the other layers of a Virtual Reality toolkit in order to provide a consistent interface to developers. We have shown that our framework is implementable and useful in the field, making it easier for developers to include interaction in their Virtual Reality applications. Our framework is able to handle all the current aspects of interaction in Virtual Reality, as well as being general enough to implement future interaction techniques. The framework is also applicable to different Virtual Reality toolkits and development platforms, making it ideal for developing general, cross-platform interactive Virtual Reality applications.
- Full Text:
- Date Issued: 2000
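The structural claim in this abstract is the separation of an interactor layer: input components emit device-neutral events, and interaction components translate those events into actions on entities and widgets. A minimal sketch of that layering, using invented class names rather than the actual CoRgi API, might look as follows.

```python
class InputComponent:
    """Wraps a concrete device; emits device-neutral events."""
    def poll(self):
        # A real implementation would read a tracker, glove, mouse, etc.
        return {"kind": "select", "position": (0.0, 1.2, -3.0)}

class Entity:
    def __init__(self, name):
        self.name, self.selected = name, False

class InteractionComponent:
    """An interaction technique: maps events to actions on entities.
    Swapping techniques (ray-casting, go-go, etc.) replaces only this class."""
    def __init__(self, entities):
        self.entities = entities
    def handle(self, event):
        if event["kind"] == "select":
            target = self.pick(event["position"])
            if target:
                target.selected = True
    def pick(self, position):
        return self.entities[0] if self.entities else None  # stub picker

scene = [Entity("cube")]
interactor = InteractionComponent(scene)
interactor.handle(InputComponent().poll())
print(scene[0].name, "selected:", scene[0].selected)
```

Swapping the interaction technique, or the physical device behind the input component, touches only one class, which is the consistency the thesis argues the abstraction buys developers.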