An investigation into the current state of web based cryptominers and cryptojacking
- Authors: Len, Robert
- Date: 2021-04
- Subjects: Cryptocurrencies , Malware (Computer software) , Computer networks -- Security measures , Computer networks -- Monitoring , Cryptomining , Coinhive , Cryptojacking
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178248 , vital:42924
- Description: The aim of this research was to review the current state and extent of surreptitious cryptomining software and its prevalence as a means of income generation. Income is generated by using a viewer's browser to execute custom JavaScript code that mines cryptocurrencies such as Monero and Bitcoin. The research aimed to measure the prevalence of illicit mining scripts used for "in-browser" cryptojacking, while further analysing the ecosystems that support the cryptomining environment. The research covers aspects such as the content (or type) of the sites hosting malicious "in-browser" cryptomining software, the occurrence of the currencies used in the mining, and the analysis of cryptomining code samples. The research also compares the results of previous work with the current state of affairs since the closure of Coinhive in March 2019; Coinhive was at the time the market leader in such web-based mining services. Beyond analysing the prevalence of cryptomining on the web today, the research also examines the methodologies and techniques used to detect and counteract cryptomining. This includes the most recent developments in malicious JavaScript de-obfuscation as well as cryptomining signature creation and detection. Methodologies for heuristic identification of JavaScript behaviour, and the subsequent identification of potentially malicious outliers, are also included in the countermeasure analysis. The research revealed that, although no longer functional, Coinhive remained the most prevalent script used for "in-browser" cryptomining services. While remaining the most prevalent, there was nevertheless a significant decline in overall occurrences compared to when coinhive.com was operational. The ecosystem hosting "in-browser" mining websites was found to be distributed both geographically and in terms of domain categorisations. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-04
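The signature-based detection referred to in the abstract above can be illustrated with a minimal sketch: fetch a page and match its source against a short list of known miner indicators. The signature list and the scan_page helper below are illustrative assumptions added for this catalogue entry, not the detection tooling used in the thesis.

```python
import re
import urllib.request

# Abbreviated, illustrative signature list; real scanners use far larger,
# regularly updated blocklists of mining-script indicators.
MINER_SIGNATURES = [
    r"coinhive\.min\.js",    # the (now defunct) Coinhive library
    r"CoinHive\.Anonymous",  # Coinhive's in-page mining API
    r"crypto-?loot",         # CryptoLoot-style services
    r"deepMiner",            # a common open source web miner fork
]

def scan_page(url):
    """Fetch a page and report which known cryptomining signatures it contains."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return [sig for sig in MINER_SIGNATURES if re.search(sig, html, re.IGNORECASE)]

if __name__ == "__main__":
    hits = scan_page("https://example.com")
    print("possible cryptojacking" if hits else "no known miner signatures", hits)
```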
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a boundless list of services and opportunities, generating boundless sources of information which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in both static and mobile environments; this fact alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats to corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate these threats by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of cyber threats. Penetration tests and vulnerability assessments are undervalued: they are seen as a formality and are not used to increase system security. If used regularly, computer systems would be more secure and attacks minimised. With the growth in technology, industries all over the globe have become fully dependent on information systems in their day-to-day business. As technology evolves and new technology becomes available, the risk of failing to protect against the dangers that come with it grows. For industry to protect itself against this growth in technology, personnel with a particular skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is also on the rise. It is estimated that there is a shortage of one million cyber security professionals globally. What is the reason for this skills shortage? Will it be possible to close this skills gap? This study is about identifying the skills gap and identifying possible ways to close it. Research was conducted on international cyber security standards, cyber security training at universities, and international certification, focusing specifically on penetration testing, together with an evaluation of what industry needs when recruiting new penetration testers, and concludes with suggestions on how to fill possible gaps in the skills market.
- Full Text:
- Date Issued: 2021
An Analysis of Internet Background Radiation within an African IPv4 netblock
- Authors: Hendricks, Wadeegh
- Date: 2020
- Subjects: Computer networks -- Monitoring -- South Africa , Dark Web , Computer networks -- Security measures -- South Africa , Universities and Colleges -- Computer networks -- Security measures , Malware (Computer software) , TCP/IP (Computer network protocol)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103791 , vital:32298
- Description: The use of passive network sensors has in the past proven to be quite effective in monitoring and analysing the current state of traffic on a network. Internet traffic destined for a routable yet unused address block is often referred to as Internet Background Radiation (IBR) and characterised as unsolicited. This unsolicited traffic is nevertheless quite valuable to researchers, in that it allows them to study traffic patterns in a covert manner. IBR is largely composed of network and port scanning traffic, backscatter packets from virus and malware activity and, to a lesser extent, misconfiguration of network devices. This research answers the following two questions: (1) what is the current state of IBR within the context of a South African IP address space, and (2) can any anomalies be detected in the traffic, with specific reference to current global malware attacks such as Mirai and similar? Rhodes University operates five IPv4 passive network sensors, commonly known as network telescopes, each monitoring its own /24 IP address block. The oldest of these network telescopes has been collecting traffic for over a decade, with the newest established in 2011. This research focuses on an in-depth analysis of the traffic captured by one telescope in the 155/8 range over a 12-month period, from January to December 2017. The traffic was analysed and classified according to protocol, TCP flags, source IP address, destination port, packet count and payload size. Apart from the usual network traffic graphs and tables, a geographic heatmap of source traffic was also created, based on the source IP address. Spikes and noticeable variances in traffic patterns were further investigated, and evidence of Mirai-like malware activity was observed. Network and port scanning were found to comprise the largest amount of traffic, accounting for over 90% of the total IBR. Various scanning techniques were identified, including low-level passive scanning and much higher-level active scanning.
- Full Text:
- Date Issued: 2020
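The per-packet classification described in the abstract above (protocol, TCP flags, destination port) can be sketched in a few lines of Python using scapy. The capture file name is hypothetical and the snippet is only an outline of the idea, not the thesis's analysis pipeline.

```python
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP, ICMP

packets = rdpcap("telescope-155-2017.pcap")   # hypothetical capture file
protocols, dst_ports, tcp_flags = Counter(), Counter(), Counter()

for pkt in packets:
    if IP not in pkt:
        continue
    if TCP in pkt:
        protocols["tcp"] += 1
        dst_ports[pkt[TCP].dport] += 1
        tcp_flags[str(pkt[TCP].flags)] += 1   # e.g. "S" for a SYN scan probe
    elif UDP in pkt:
        protocols["udp"] += 1
        dst_ports[pkt[UDP].dport] += 1
    elif ICMP in pkt:
        protocols["icmp"] += 1

print(protocols.most_common())
print(dst_ports.most_common(10))              # most-targeted destination ports
print(tcp_flags.most_common())
```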
An exploration of the overlap between open source threat intelligence and active internet background radiation
- Authors: Pearson, Deon Turner
- Date: 2020
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Malware (Computer software) , TCP/IP (Computer network protocol) , Open source intelligence
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103802 , vital:32299
- Description: Organisations and individuals are facing increasingly persistent threats on the Internet from worms, port scanners, and malicious software (malware). These threats are constantly evolving as attack techniques are discovered. To aid in the detection and prevention of such threats, and to stay ahead of the adversaries conducting the attacks, security specialists are utilising Threat Intelligence (TI) data in their defence strategies. TI data can be obtained from a variety of sources, such as private routers, firewall logs, public archives, and public or private network telescopes. However, at the rate and ease with which TI is produced and published, specifically Open Source Threat Intelligence (OSINT), the quality is dropping, resulting in fragmented, context-less and variable data. This research utilised two sets of TI data: a collection of OSINT and active Internet Background Radiation (IBR). The data was collected over a period of 12 months, from 37 publicly available OSINT datasets and five IBR datasets. Through the identification and analysis of data common to the OSINT and IBR datasets, this research was able to gain insight into how effective OSINT is at detecting and potentially reducing ongoing malicious Internet traffic. As part of this research, a minimal framework for the collection, processing/analysis, and distribution of OSINT was developed and tested. The research focused on exploring areas common to the two datasets, with the intention of creating an enriched, contextualised, and reduced set of malicious source IP addresses that could be published for consumers to use in their own environments. The findings pointed towards a persistent group of IP addresses observed in both datasets over the period under research. Using these persistent IP addresses, the research was able to identify the specific services being targeted. Amongst the traffic from these persistent IP addresses were significant numbers of packets from Mirai-like IoT malware on ports 23/tcp and 2323/tcp, as well as general scanning activity on port 445/tcp.
- Full Text:
- Date Issued: 2020
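The central analysis step described above, finding source addresses that appear in both the OSINT feeds and the telescope (IBR) traffic and persist across the study period, reduces to set operations over periodic snapshots. The sketch below is an illustrative outline with invented file names; it is not the framework developed in the thesis.

```python
from functools import reduce

def load_ips(path):
    """Load one source IP address per line from a (hypothetical) snapshot file."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

months = [f"{m:02d}" for m in range(1, 13)]  # twelve monthly snapshots

# Addresses seen in both the OSINT feeds and the telescope traffic each month.
overlap_by_month = [
    load_ips(f"osint-{m}.txt") & load_ips(f"ibr-{m}.txt") for m in months
]

# "Persistent" addresses: present in the overlap for every month of the study.
persistent = reduce(set.intersection, overlap_by_month)
print(f"{len(persistent)} persistent source IPs observed in both datasets")
```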
An investigation into the application of Distributed Endpoint Processing to 3D Immersive Audio Rendering
- Authors: Devonport, Robin Sean
- Date: 2020
- Subjects: Uncatalogued
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163258 , vital:41022
- Description: Thesis (MSc)--Rhodes University, Faculty of Science, Computer Science, 2020.
- Full Text:
- Date Issued: 2020
An investigation into the readiness of open source software to build a Telco Cloud for virtualising network functions
- Authors: Chindeka, Tapiwa C
- Date: 2020
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/124320 , vital:35593
- Description: Cloud computing offers new mechanisms that change the way networks can be created and managed. The increased demand for multimedia and Internet of Things (IoT) services over the Internet Protocol is also fuelling the need to look at a networking approach that is less reliant on physical hardware components and allows new networks and network components to be created on demand. Network Function Virtualisation (NFV) is a networking paradigm that decouples network functions from the hardware on which they run. This offers new approaches to telecommunication providers looking for new ways of improving Quality of Service (QoS) cost-effectively. Cloud technologies have given way to more specialised cloud environments such as the telco cloud, a cloud environment in which telecommunication services are hosted utilising NFV techniques. As telecommunication standards move towards 5G, network services will be provided in a virtualised manner in order to keep up with demand. Open source software is a driver of innovation, as it has a collaborative culture to support it. This research investigates the readiness of open source tools to build a telco cloud that supports functions such as autoscaling and fault tolerance. Currently available open source software was explored for the different aspects involved in building a cloud from the ground up. The ETSI NFV MANO framework is also discussed, as it is a widely used guiding standard for implementing NFV. Guided by the ETSI NFV MANO framework, open source software was used in an experiment to build a resilient cloud environment in which a virtualised IP Multimedia Subsystem (vIMS) network was deployed. Through this experimentation, it is evident that open source tools are mature enough to build the cloud environment and its ETSI NFV MANO-compliant orchestration. However, features such as autoscaling and fault tolerance are still fairly immature and experimental.
- Full Text:
- Date Issued: 2020
Building a flexible and inexpensive multi-layer switch for software-defined networks
- Authors: Magwenzi, Tinashe
- Date: 2020
- Subjects: Software-defined networking (Computer network technology) , Telecommunication -- Switching systems , OpenFlow (Computer network protocol) , Local area networks (Computer networks)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/142841 , vital:38122
- Description: Software-Defined Networking (SDN) is a paradigm which enables the realisation of programmable networks through the separation of the control logic from the forwarding functions. This separation is a departure from the traditional architecture. Much of the work done on SDN-enabled devices has concentrated on higher-end, high-speed networks (tens to hundreds of Gbit/s), rather than the relatively low-bandwidth links (tens of Mbit/s to a few Gbit/s) which are seen, for example, in South Africa. As SDN becomes more widely accepted, owing to its advantages over traditional networks, it has been adopted for industrial purposes such as networking in data centres and by network providers. The demand for programmable networks is increasing, but is limited by the ability of providers to upgrade their infrastructure. In addition, as access to the Internet has become less expensive, its use is increasing in academic institutions, NGOs, and small to medium enterprises. This thesis details a means of building and managing a small-scale Software-Defined Network using commodity hardware and open source tools. Core to this SDN network is a prototype multi-layer SDN switch. The proposed device is targeted at lower-bandwidth communication (relative to commercially produced high-speed SDN-enabled devices). The prototype multi-layer switch was shown to achieve data rates of up to 99.998%, average latencies under 40 µs during forwarding/switching and under 100 µs during routing with packet sizes between 64 bytes and 1518 bytes, and jitter of less than 15 µs across all tests. This research explores in detail the design, development, and management of the multi-layer switch and its placement and integration in a small-scale SDN network. This includes testing of Layer 2 forwarding and Layer 3 routing, OpenFlow compliance testing, management of the switch using purpose-built SDN applications, and real-life network functionality such as forwarding, routing and VLAN networking to demonstrate its real-world applicability.
- Full Text:
- Date Issued: 2020
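Layer 2 forwarding in a switch such as the one described above hinges on MAC learning: record the port on which each source MAC address is seen, then forward to the learned port or flood when the destination is unknown. The sketch below is a controller-agnostic Python illustration of that logic only, not the thesis's OpenFlow implementation.

```python
FLOOD = -1  # sentinel meaning "send out of all ports except the ingress port"

class L2LearningSwitch:
    """Minimal MAC-learning logic behind Layer 2 forwarding in an SDN switch."""

    def __init__(self):
        self.mac_table = {}  # maps source MAC address -> port it was learned on

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn (or refresh) the mapping for the sender.
        self.mac_table[src_mac] = in_port
        # Forward to the known port, otherwise flood.
        return self.mac_table.get(dst_mac, FLOOD)

switch = L2LearningSwitch()
print(switch.handle_frame(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # FLOOD
print(switch.handle_frame(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # 1
```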
Determination of speaker configuration for an immersive audio content creation system
- Authors: Lebusa, Motebang
- Date: 2020
- Subjects: Loudspeakers , Surround-sound systems , Algorithms , Coordinates
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/163375 , vital:41034
- Description: Various spatialisation algorithms require knowledge of loudspeaker locations to accurately localise sound in 3D environments. The rendering process feeds speaker coordinates into these algorithms so that they can render immersive audio content as intended by the artist. Measuring the loudspeaker coordinates becomes necessary especially in environments where speaker layouts change frequently; measuring the coordinates manually, however, tends to be a laborious task that is prone to errors. This research provides an automated solution to the problem of speaker coordinate measurement. The solution system, SDIAS, is a client-server system that uses the capabilities provided by the Ethernet Audio Video Bridging standard to measure the 3D loudspeaker coordinates of immersive sound systems. SDIAS deploys commodity hardware and readily available software to implement the solution. A server sends a short tone to each speaker in the speaker configuration, at equal intervals. A microphone attached to a mobile device picks up these transmitted tones on the client side, from different locations. The transmission and reception times from both components of the system are used to measure the time of flight of each tone sent to a loudspeaker, which is then used to determine the 3D coordinates of each loudspeaker in the layout. Tests were performed to determine the accuracy of SDIAS's coordinate determination algorithm, and the results were compared to manually measured coordinates. , Thesis (MSc) -- Faculty of Science, Computer Science, 2020
- Full Text:
- Date Issued: 2020
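Once each tone's time of flight is known, the distance from a measurement point to the loudspeaker is the speed of sound multiplied by that time, and the speaker's 3D position can then be recovered from several such distances, for example by linearised least-squares multilateration. The sketch below assumes the microphone's measurement positions are known and uses invented sample values; it illustrates the idea rather than the algorithm implemented in SDIAS.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees Celsius

def locate_speaker(mic_positions, times_of_flight):
    """Estimate a loudspeaker's 3D position from time-of-flight measurements
    taken at several known microphone positions (linearised least squares)."""
    p = np.asarray(mic_positions, dtype=float)          # shape (n, 3)
    d = SPEED_OF_SOUND * np.asarray(times_of_flight)    # distances, shape (n,)
    p0, d0 = p[0], d[0]
    # Subtracting the first range equation from the others linearises the problem.
    A = 2.0 * (p[1:] - p0)
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p0 ** 2) + d0 ** 2 - d[1:] ** 2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # estimated (x, y, z) of the loudspeaker in metres

# Invented example: four measurement positions (metres) and flight times (seconds).
mics = [(0, 0, 0), (3, 0, 0), (0, 3, 0), (0, 0, 2)]
tofs = [0.0081, 0.0063, 0.0081, 0.0077]
print(locate_speaker(mics, tofs))
```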
NFComms: A synchronous communication framework for the CPU-NFP heterogeneous system
- Authors: Pennefather, Sean
- Date: 2020
- Subjects: Network processors , Computer programming , Parallel processing (Electronic computers) , Netronome
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/144181 , vital:38318
- Description: This work explores the viability of using a Network Flow Processor (NFP), developed by Netronome, as a coprocessor for the construction of a CPU-NFP heterogeneous platform in the domain of general processing. When considering heterogeneous platforms involving architectures like the NFP, the communication framework provided is typically represented as virtual network interfaces and is thus not suitable for generic communication. To enable a CPU-NFP heterogeneous platform for use in the domain of general computing, a suitable generic communication framework is required. A feasibility study of suitable communication media between the two candidate architectures showed that a generic framework conforming to the mechanisms dictated by Communicating Sequential Processes is achievable. The resulting NFComms framework, which facilitates inter- and intra-architecture communication through synchronous message passing, supports up to 16 unidirectional channels and includes queuing mechanisms for transparently supporting concurrent streams exceeding the channel count. The framework has a minimum latency of between 15.5 μs and 18 μs per synchronous transaction and can sustain a peak throughput of up to 30 Gbit/s. The framework also provides a runtime for the Go programming language, allowing user-space processes to subscribe channels to the framework in order to interact with processes executing on the NFP. The viability of utilising a heterogeneous CPU-NFP system for general and network computing was explored by introducing a set of problems or applications spanning general computing and network processing. These were implemented on the heterogeneous architecture and benchmarked against equivalent CPU-only and CPU/GPU solutions. The results recorded were used to form an opinion on the viability of using an NFP for general processing. In the author's opinion, beyond very specific use cases, the NFP-400 is not currently a viable solution as a coprocessor in the field of general computing. This does not mean that the proposed framework or the concept of a heterogeneous CPU-NFP system should be discarded, as such a system does have acceptable use in the fields of network and stream processing. Additionally, when comparing the recorded limitations to those seen during the early stages of general-purpose GPU development, it is clear that general processing on the NFP is currently in a similar state.
- Full Text:
- Date Issued: 2020
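The synchronous message passing at the heart of NFComms means a send blocks until a receiver has taken the message, as in Communicating Sequential Processes. The Python sketch below only illustrates that rendezvous semantics with threads; the real NFComms channels, their Go runtime and the NFP side are not shown.

```python
import threading

class SyncChannel:
    """Minimal CSP-style rendezvous channel: send() blocks until a receiver
    has taken the message, mirroring the synchronous message passing that
    NFComms is described as providing (illustration only, not its API)."""

    def __init__(self):
        self._send_lock = threading.Lock()    # one in-flight sender at a time
        self._item = None
        self._ready = threading.Semaphore(0)  # message offered to a receiver
        self._taken = threading.Semaphore(0)  # message consumed by a receiver

    def send(self, item):
        with self._send_lock:
            self._item = item
            self._ready.release()   # offer the message
            self._taken.acquire()   # block until a receiver has taken it

    def recv(self):
        self._ready.acquire()       # wait for a sender's offer
        item = self._item
        self._taken.release()       # complete the rendezvous
        return item

ch = SyncChannel()
threading.Thread(target=lambda: print("received:", ch.recv())).start()
ch.send("hello")  # returns only once the receiver thread has the message
```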
Securing software development using developer access control
- Authors: Ongers, Grant
- Date: 2020
- Subjects: Computer software -- Development , Computers -- Access control , Computer security -- Software , Computer networks -- Security measures , Source code (Computer science) , Plug-ins (Computer programs) , Data encryption (Computer science) , Network Access Control , Data Loss Prevention , Google’s BeyondCorp , Confidentiality, Integrity and Availability (CIA) triad
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/149022 , vital:38796
- Description: This research is aimed at software development companies and highlights the unique information security concerns in the context of a non-malicious software developer's work environment; it furthermore explores an application-driven solution which focuses specifically on providing developer environments with access control for source code repositories. In order to achieve this, five goals were defined, as discussed in section 1.3. The application designed to provide the developer environment with access control to source code repositories was modelled on lessons taken from the principles of Network Access Control (NAC), Data Loss Prevention (DLP), and Google's BeyondCorp (GBC) for zero-trust end-user computing. The intention of this research is to provide software developers with maximum access to source code without compromising Confidentiality, as per the Confidentiality, Integrity and Availability (CIA) triad. Employing data gleaned from examining the characteristics of DLP, NAC, and BeyondCorp, proof-of-concept code was developed to regulate access to the developer's environment and source code. The system required sufficient flexibility to support the diversity of software development environments, and a modular design was therefore selected. The system comprises a client-side agent and a plug-in-ready server component. The client-side agent mounts and dismounts encrypted volumes containing source code, and provides the server with information about the client that is demanded by plug-ins. The server-side service provides encryption keys to facilitate the mounting of the volumes and, through plug-ins, asks questions of the client agent to determine whether access should be granted. The solution was then tested with integration and system testing. There were plans to have it used by development teams, who were then to be surveyed as to their view of the proof of concept, but this proved impossible. The conclusion provides a basis by which organisations that develop software can better balance the two corners of the CIA triad most often in conflict: the Confidentiality of their source code against the Availability of the same to developers.
- Full Text:
- Date Issued: 2020
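The access decision described above, a server consulting a set of plug-ins, each of which may interrogate the client agent before a volume encryption key is released, can be outlined roughly as follows. Every name in this sketch (the Plugin type, the example policies, release_volume_key) is a hypothetical illustration of the described architecture, not code from the thesis.

```python
from typing import Callable, Dict, List, Optional

# A plug-in asks questions of the client agent (via the query callback) and
# returns True only if it is satisfied that access should be granted.
Plugin = Callable[[Callable[[str], str]], bool]

def on_corporate_network(query: Callable[[str], str]) -> bool:
    # Hypothetical policy: the agent reports which network it is attached to.
    return query("network").startswith("10.")

def disk_encryption_enabled(query: Callable[[str], str]) -> bool:
    # Hypothetical policy: the agent reports whether full-disk encryption is on.
    return query("full_disk_encryption") == "enabled"

def release_volume_key(plugins: List[Plugin],
                       query: Callable[[str], str],
                       keys: Dict[str, bytes],
                       repo: str) -> Optional[bytes]:
    """Release the repository volume's encryption key only if every plug-in approves."""
    if all(plugin(query) for plugin in plugins):
        return keys.get(repo)
    return None  # access denied: the client agent cannot mount the source volume
```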
Technology in conservation: towards a system for in-field drone detection of invasive vegetation
- Authors: James, Katherine Margaret Frances
- Date: 2020
- Subjects: Drone aircraft in remote sensing , Neural networks (Computer science) , Drone aircraft in remote sensing -- Case studies , Machine learning , Computer vision , Environmental monitoring -- Remote sensing , Invasive plants -- Monitoring
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143408 , vital:38244
- Description: Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, originating in Australia and invasive in several countries including South Africa. However, for this research, the methodology is important, rather than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely, classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when the conditions are ideal. False positive detection remains an issue that could be improved. This approach shows the potential for drones as an environmental monitoring and management tool when coupled with deep machine learning techniques and outlines potential problems that may be encountered.
- Full Text:
- Date Issued: 2020
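The abstract notes that the choice of loss function affected the segmentation models' performance. A common choice for class-imbalanced, pixel-wise tasks of this kind is a soft Dice loss; the PyTorch sketch below is offered only as an illustrative example of such a loss, since the abstract does not name the specific function used.

```python
import torch

def soft_dice_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-6):
    """Soft Dice loss for binary segmentation (e.g. target shrub vs. background).

    logits:  (N, 1, H, W) raw model outputs
    targets: (N, 1, H, W) ground-truth masks with values in {0, 1}
    """
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    intersection = (probs * targets).sum(dims)
    total = probs.sum(dims) + targets.sum(dims)
    dice = (2.0 * intersection + eps) / (total + eps)
    return 1.0 - dice.mean()  # 0 when prediction and mask overlap perfectly

# Example call with a random batch, just to show the expected shapes.
loss = soft_dice_loss(torch.randn(2, 1, 64, 64),
                      torch.randint(0, 2, (2, 1, 64, 64)).float())
```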
Towards a capability maturity model for a cyber range
- Authors: Aschmann, Michael Joseph
- Date: 2020
- Subjects: Computer software -- Development , Computer security
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163142 , vital:41013
- Description: This work describes research undertaken towards the development of a Capability Maturity Model (CMM) for Cyber Ranges (CRs) focused on cyber security. Global cyber security needs are on the rise, and the need for attribution within the cyber domain is of particular concern. This has prompted major efforts to enhance cyber capabilities within organisations in order to increase their total cyber resilience posture. These efforts include, but are not limited to, the testing of computational devices, networks, and applications, and cyber skills training focused on prevention, detection and cyber attack response. A cyber range allows for the testing of the computational environment. By developing cyber events within a confined virtual or sandboxed cyber environment, a cyber range can prepare the next generation of cyber security specialists to handle a variety of potential cyber attacks. Cyber ranges have different purposes, each designed to fulfil a different computational testing and cyber training goal; consequently, cyber ranges can vary greatly in their level of variety, capability, maturity and complexity. As cyber ranges proliferate and become more and more valued as tools for cyber security, a method to classify or rate them becomes essential. Yet while universal criteria for measuring cyber ranges in terms of their capability maturity levels become more critical, there are currently very limited resources for researchers aiming to perform this kind of work. For this reason, this work proposes and describes a CMM designed to give organisations the ability to benchmark the capability maturity of a given cyber range. This research adopted a synthesised approach to the development of a CMM, grounded in prior research and focused on the production of a conceptual model that provides a useful level of abstraction. In order to achieve this goal, the core capability elements of a cyber range are defined together with their relative importance, allowing for the development of a proposed classification of cyber range levels. An analysis of data gathered during the course of an expert review, together with other research, further supported the development of the conceptual model. In the context of cyber range capability, classification includes the ability of the cyber range to perform its functions optimally with different core capability elements, focusing on the Measurement of Capability (MoC) with its elements, namely effect, performance and threat ability. Cyber range maturity can evolve over time and can be defined through the Measurement of Maturity (MoM) with its elements, namely people, processes and technology. The combination of these measurements, utilising the CMM for a CR, determines the capability maturity level of a CR. The primary outcome of this research is the proposed level-based CMM framework for a cyber range, developed using adopted and synthesised CMMs, the analysis of an expert review, and the mapping of the results.
- Full Text:
- Date Issued: 2020
Transformative ICT education practices in rural secondary schools for developmental needs and realities: the Eastern Cape Province, South Africa
- Authors: Simuja, Clement
- Date: 2020
- Subjects: Education, Secondary -- South Africa -- Data processing , Information technology -- Study and teaching (Secondary) --South Africa , Educational technology -- Developing countries , Rural development -- Developing countries , Computer-assisted instruction -- South Africa -- Eastern Cape , Internet in education -- South Africa , Rural schools -- South Africa -- Eastern Cape , Community and school -- South Africa -- Eastern Cape
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/150631 , vital:38991
- Description: The perceived social development significance of Information and Communication Technology (ICT) has dramatically expanded the domains in which this cluster of ICTs is being discussed and acted upon. The action to promote community development in rural areas in South Africa has made its way into the introduction of ICT education in secondary schools. Since rural secondary schools form part of the framework for rural communities, they are being challenged to provide ICT education that makes a difference in learners’ lives. This requires engaging education practices that inspire learners to construct knowledge of ICT that responds not only to examination purposes but also to the needs and development aspirations of the community. This research examines the experience of engaging learners and communities in socially informed ICT education in rural secondary schools. Specifically, it seeks to develop a critique of current practices involved in ICT education in rural secondary schools, and explores plausible alternatives to such practices that would make ICT education more transformative and structured towards the developmental concerns of communities. The main empirical focus for the research was five rural secondary schools in the Eastern Cape Province in South Africa. The research involved 53 participants who took part in a socially informed ICT training process. The training was designed to inspire participants to share their self-defined ICT education and ICT knowledge experiences. Critical Action Learning and Philosophical Inquiry provided the methodological framework, whilst the theoretical framework draws on Foucault’s philosophical ideas on power-knowledge relations. Through this theoretical analysis, the research examines the dynamic interplay of practices in ICT education with the values, ideals, and knowledge that form the core life experiences of learners and rural communities. The findings of this study indicate that current ICT education practices in rural secondary schools are endowed with ideologies that affect learners’ identity, social experiences, power, and ownership of the reflective meaning of using ICTs in community development. The contribution of this thesis lies in demonstrating ways to reframe ICT education transformatively, and more specifically its practices, in the light of the way power, identity, ownership and social experience construct and offer learners a transformative view of self and the world. This could enable ICT education to fulfil the potential of contributing to social development in rural communities. The thesis culminates by presenting a theoretical framework that articulates the structural and authoritative components of ICT education practices – these relate to learners’ conscious understandings and represented thoughts, sensations and meanings embedded in the context, and the actions and locations of using their knowledge of ICT.
- Full Text:
- Date Issued: 2020
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won in defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it is clear that there is not yet a full understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware, each from a different family. This was decided due to the complexity and comprehensive nature of the research. The in-depth nature of the research and its associated time constraints did not allow the proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be further explored in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system (a small static-triage sketch follows this record). This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants, much like an instant cake mix, all consist of specific building blocks that remain the same, with the flavouring being the unique feature.
- Full Text:
- Date Issued: 2019
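As a companion to the hybrid analysis description above, the Go sketch below shows a typical first static-triage step: fingerprinting a sample and extracting printable strings before the dynamic and forensic phases. The file name is hypothetical and the step is generic, not the tooling used in the thesis.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Hypothetical path to a Ransomware sample under analysis.
	data, err := os.ReadFile("sample.bin")
	if err != nil {
		panic(err)
	}
	// Fingerprint the sample so findings can be compared across strains.
	fmt.Printf("SHA-256: %x\n", sha256.Sum256(data))
	// Extract printable ASCII strings of length >= 6, a common static
	// triage step before dynamic analysis and forensic examination.
	for _, s := range regexp.MustCompile(`[ -~]{6,}`).FindAllString(string(data), 20) {
		fmt.Println(s)
	}
}
```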
A development method for deriving reusable concurrent programs from verified CSP models
- Authors: Dibley, James
- Date: 2019
- Subjects: CSP (Computer program language) , Sequential processing (Computer science) , Go (Computer program language) , CSPIDER (Open source tool)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/72329 , vital:30035
- Description: This work proposes and demonstrates a novel method for software development that applies formal verification techniques to the design and implementation of concurrent programs. This method is supported by a new software tool, CSPIDER, which translates machine-readable Communicating Sequential Processes (CSP) models into encapsulated, reusable components coded in the Go programming language. In relation to existing CSP implementation techniques, this work is only the second to implement a translator, and it provides original support for some CSP language constructs and modelling approaches. The method is evaluated through three case studies: a concurrent sorting array, a trial-division prime number generator, and a component node for the Ricart-Agrawala distributed mutual exclusion algorithm. Each of these case studies presents the formal verification of safety and functional requirements through CSP model-checking, and it is shown that CSPIDER is capable of generating reusable implementations from each model. The Ricart-Agrawala case study demonstrates the application of the method to the design of a protocol component. This method maintains full compatibility with the primary CSP verification tool. Applying the CSPIDER tool requires minimal commitment to an explicitly defined modelling style and a very small set of pre-translation annotations, but all of these measures can be instated prior to verification. The Go code that CSPIDER produces requires no intervention before it may be used as a component within a larger development. The translator provides a traceable, structured implementation of the CSP model, automatically deriving formal parameters and a channel-based client interface from its interpretation of the CSP model (an illustrative channel-based component sketch follows this record). Each case study demonstrates the use of the translated component within a simple test development.
- Full Text:
- Date Issued: 2019
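The hand-written Go fragment below illustrates the kind of encapsulated, channel-based component interface the abstract describes CSPIDER deriving; it is not CSPIDER output, and the one-place buffer process it models was chosen purely for illustration.

```go
package main

import "fmt"

// buffer is a hand-written stand-in for a translated CSP process: it
// synchronises on its input channel, passes each value to its output
// channel, and terminates when the input is closed.
func buffer(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v
	}
	close(out)
}

func main() {
	in, out := make(chan int), make(chan int)
	go buffer(in, out) // the reusable component, driven over channels
	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()
	for v := range out {
		fmt.Println(v)
	}
}
```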
A framework for scoring and tagging NetFlow data
- Authors: Sweeney, Michael John
- Date: 2019
- Subjects: NetFlow , Big data , High performance computing , Event processing (Computer science)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/65022 , vital:28654
- Description: With the increase in link speeds and the growth of the Internet, the volume of NetFlow data generated has increased significantly over time, and processing these volumes has become a challenge, more specifically a Big Data challenge. With the advent of technologies and architectures designed to handle Big Data volumes, researchers have investigated their application to the processing of NetFlow data. This work builds on prior work, wherein a scoring methodology was proposed for identifying anomalies in NetFlow, by proposing and implementing a system that allows for automatic, real-time scoring through the adoption of Big Data stream processing architectures. The first part of the research looks at the means of event detection using the scoring approach, implementing it as a number of individual, standalone components, each responsible for detecting and scoring a single type of flow trait (a minimal per-trait scoring sketch follows this record). The second part is the implementation of these scoring components in a framework, named Themis, capable of handling high volumes of data with low latency processing times. This was tackled using tools, technologies and architectural elements from the world of Big Data stream processing. On the stream processing architecture, the framework demonstrated good flow throughput at low processing latencies on a single low-end host. The successful demonstration of the framework on a single host opens the way to leveraging the scaling capabilities afforded by the architectures and technologies used. This gives weight to the possibility of using this framework for real-time threat detection using NetFlow data from larger networked environments.
- Full Text:
- Date Issued: 2019
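The Go sketch below gives a minimal picture of a standalone per-trait scoring component of the kind described above. The Scorer interface, the rare-destination-port trait and its scoring are assumptions invented for the example; they are not Themis code.

```go
package main

import "fmt"

// Flow carries only the NetFlow fields this sketch needs.
type Flow struct {
	DstPort uint16
	Bytes   uint64
}

// Scorer is the assumed shape of a standalone component that scores a
// single flow trait.
type Scorer interface {
	Score(f Flow) float64
}

// rarePortScorer flags traffic to destination ports outside an allow-list.
type rarePortScorer struct{ common map[uint16]bool }

func (s rarePortScorer) Score(f Flow) float64 {
	if s.common[f.DstPort] {
		return 0
	}
	return 1 // anomalous trait observed
}

func main() {
	s := rarePortScorer{common: map[uint16]bool{53: true, 80: true, 443: true}}
	total := 0.0
	for _, f := range []Flow{{DstPort: 443, Bytes: 1200}, {DstPort: 6667, Bytes: 90}} {
		total += s.Score(f)
	}
	fmt.Println("aggregate score:", total)
}
```

In the framework described above, many such components would run in parallel over a stream of flows, with their scores aggregated per flow or per host.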
A multi-threading software countermeasure to mitigate side channel analysis in the time domain
- Authors: Frieslaar, Ibraheem
- Date: 2019
- Subjects: Computer security , Data encryption (Computer science) , Noise generators (Electronics)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71152 , vital:29790
- Description: This research is the first of its kind to investigate the utilisation of a multi-threading software-based countermeasure to mitigate Side Channel Analysis (SCA) attacks, with a particular focus on the AES-128 cryptographic algorithm. This investigation is novel, as there has not, to our knowledge, been a software-based countermeasure relying on multi-threading. The research has been tested on Atmel microcontrollers, as well as a more fully featured system in the form of the popular Raspberry Pi that utilises the ARM7 processor. The main contribution of this research is the introduction of a multi-threading software-based countermeasure used to mitigate SCA attacks on both an embedded device and a Raspberry Pi. These threads comprise various mathematical operations that are utilised to generate electromagnetic (EM) noise, obfuscating the execution of the AES-128 algorithm (a concurrency sketch of the idea follows this record). A novel EM noise generator, known as the FRIES noise generator, is implemented to obfuscate data captured in the EM field. FRIES hides the execution of the AES-128 algorithm within the EM noise generated by the SHA-512 Secure Hash Algorithm from the libcrypto++ and OpenSSL libraries. In order to evaluate the proposed countermeasure, a novel attack methodology was developed whereby the entire secret AES-128 encryption key was recovered from a Raspberry Pi, which had not been achieved before. The FRIES noise generator was pitted against this new attack vector and other known noise generators. The results showed that the FRIES noise generator withstood this attack whilst other existing techniques still leaked secret information. Both the visual location of the AES-128 encryption algorithm in the EM spectrum and key recovery were prevented. These results demonstrated that the proposed multi-threading software-based countermeasure was resistant to existing and new forms of attack, thus verifying that a multi-threading software-based countermeasure can serve to mitigate SCA attacks.
- Full Text:
- Date Issued: 2019
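The Go sketch below conveys the multi-threading idea only: goroutines stand in for the worker threads used in the research, and crypto/sha512 stands in for the SHA-512 routines taken from libcrypto++/OpenSSL, with a single AES-128 block encryption performed while the noise workers run. On the actual targets (Atmel microcontrollers and the Raspberry Pi) this is implemented as native threads, not Go.

```go
package main

import (
	"crypto/aes"
	"crypto/sha512"
	"fmt"
	"sync"
)

// noise busy-loops over SHA-512 hashing until told to stop, standing in for
// the mathematical workloads used to generate EM noise.
func noise(stop <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	buf := []byte("noise seed")
	for {
		select {
		case <-stop:
			return
		default:
			sum := sha512.Sum512(buf)
			buf = sum[:]
		}
	}
}

func main() {
	stop := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // several concurrent noise workers
		wg.Add(1)
		go noise(stop, &wg)
	}

	key := make([]byte, 16) // AES-128 key (all zeros, demo only)
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	plaintext, ciphertext := make([]byte, 16), make([]byte, 16)
	block.Encrypt(ciphertext, plaintext) // the sensitive operation runs amid the noise

	close(stop)
	wg.Wait()
	fmt.Printf("ciphertext: %x\n", ciphertext)
}
```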
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce times to detect and prevent malware, while improving the consistency of categories and taxonomies applied to modern malware. Existing work related to malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of threats faced by users of the operating system. While the available macOS-focused research consistently notes an increase in macOS users, devices and ultimately in threats, an opportunity exists to understand the real nature of threats faced by macOS users and suggest potential avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, as well as the application of prior automated malware categorisation work: AVClass to categorise and SSDeep to cluster and report on observed data (an illustrative label-categorisation sketch follows this record). With the Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition seem to suggest the threat of malware on this increasingly popular platform should be growing and evolving accordingly. Findings suggest that the direction and nature of this growth and evolution may not be entirely as clear as industry reports imply. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware. This provides a number of avenues for potential future work to compare and contrast with research on other platforms, as well as the identification of key factors that may influence its growth in the future.
- Full Text:
- Date Issued: 2019
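The Go sketch below shows, under assumptions, how raw vendor detection labels might be collapsed into the coarse categories discussed above. The keyword rules and example labels are illustrative; they are not the AVClass rule set or the vendor feed used in the research.

```go
package main

import (
	"fmt"
	"strings"
)

// categorise maps a raw detection label onto a coarse category using a
// small, assumed keyword list.
func categorise(label string) string {
	l := strings.ToLower(label)
	switch {
	case strings.Contains(l, "adware"):
		return "adware"
	case strings.Contains(l, "pua") || strings.Contains(l, "unwanted"):
		return "PUA"
	case strings.Contains(l, "ransom"):
		return "ransomware"
	case strings.Contains(l, "miner"):
		return "cryptominer"
	case strings.Contains(l, "rat") || strings.Contains(l, "backdoor"):
		return "RAT"
	default:
		return "other"
	}
}

func main() {
	counts := map[string]int{}
	// Illustrative labels only, standing in for an endpoint detection feed.
	for _, label := range []string{"OSX.Genieo.Adware", "PUA.MacKeeper", "OSX.KeRanger.Ransom"} {
		counts[categorise(label)]++
	}
	fmt.Println(counts)
}
```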
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activity that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection by the various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also distribute malicious payloads using DNS queries and responses. By embedding certain strings in DNS packets, cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls put in place. This research broadens the field and fills an existing gap by extending the analysis of the DNS as a payload distribution channel to the detection of domains that are used to distribute different malicious payloads. The research analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates the DNS queries observed on name servers, was evaluated and analysed in order to detect anomalies indicative of malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records (a small TXT-record screening sketch follows this record).
- Full Text:
- Date Issued: 2019
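A minimal Go sketch of the kind of TXT-record screening this finding points to is given below: unusually long, high-entropy TXT values are flagged as possible encoded payload chunks. The length and entropy thresholds and the sample records are assumptions for illustration, not values or data from the research.

```go
package main

import (
	"fmt"
	"math"
)

// entropy returns the Shannon entropy of s in bits per character.
func entropy(s string) float64 {
	if len(s) == 0 {
		return 0
	}
	freq := map[rune]float64{}
	runes := []rune(s)
	for _, r := range runes {
		freq[r]++
	}
	h, n := 0.0, float64(len(runes))
	for _, c := range freq {
		p := c / n
		h -= p * math.Log2(p)
	}
	return h
}

// suspicious applies illustrative thresholds: ordinary SPF or verification
// strings are short or low-entropy, while encoded payload chunks tend to be
// long and close to random.
func suspicious(txt string) bool {
	return len(txt) > 100 && entropy(txt) > 4.5
}

func main() {
	records := []string{
		"v=spf1 include:_spf.example.com ~all", // benign-looking record
		// Synthetic high-entropy blob standing in for an encoded payload chunk.
		"ZXhhbXBsZSLXBheWxvYWQtY2h1bmstMDEyMzQ1Njc4OWFiY2RlZmdoaWprbG1ub3BxcnN0dXZ3eHl6QUJDREVGR0hJSktMTU5PUFFSU1RVVldYWVo=",
	}
	for _, r := range records {
		fmt.Printf("len=%d entropy=%.2f suspicious=%v\n", len(r), entropy(r), suspicious(r))
	}
}
```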
An investigation of the security of passwords derived from African languages
- Authors: Sishi, Sibusiso Teboho
- Date: 2019
- Subjects: Computers -- Access control -- Passwords , Computer users -- Attitudes , Internet -- Access control , Internet -- Security measures , Internet -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163273 , vital:41024
- Description: Password authentication has become ubiquitous in the cyber age. To date, there have been several studies on country-based passwords by authors who studied, amongst others, English, Finnish, Italian and Chinese based passwords. However, there has been a lack of focused study on the types of passwords being created in Africa and whether there are benefits in creating passwords in an African language. For this research, password databases containing LAN Manager (LM) and NT LAN Manager (NTLM) hashes extracted from South African organisations in a variety of sectors of the economy were obtained to gain an understanding of user behaviour in creating passwords (a small NT-hash computation sketch follows this record). Analysis of the passwords obtained from these hashes (using several cracking methods) showed that many organisational passwords are based on the English language. This is understandable considering that the business language in South Africa is English, even though South Africa has 11 official languages. African-language passwords were then derived from known weak English passwords, and some of the passwords were appended with numbers and special characters. The African-language passwords, created using eight Southern African languages, were then uploaded to the Internet to test the security of passwords based on African languages. Since most of the passwords could be cracked by third-party researchers, we conclude that any password derived from known weak English words offers no improvement in security when written in an African language, especially the more widely spoken languages, namely isiZulu, isiXhosa and Setswana.
- Full Text:
- Date Issued: 2019
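The Go sketch below computes the NT (NTLM) hash, MD4 over the UTF-16LE encoding of the password, which is the hash format recovered from the organisational dumps described above; candidate African-language words can be hashed this way and compared against dumped hashes. It relies on the golang.org/x/crypto/md4 package, and the candidate words are hypothetical stand-ins for a real wordlist.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"unicode/utf16"

	"golang.org/x/crypto/md4"
)

// ntlmHash returns the NT hash: MD4 over the UTF-16LE bytes of the password.
func ntlmHash(password string) string {
	units := utf16.Encode([]rune(password))
	buf := make([]byte, 2*len(units))
	for i, u := range units {
		binary.LittleEndian.PutUint16(buf[2*i:], u)
	}
	h := md4.New()
	h.Write(buf)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	// Hypothetical candidate words; a real audit would read a wordlist file
	// and compare each hash against the dumped NTLM hashes.
	for _, w := range []string{"iphasiwedi1", "Molweni2019"} {
		fmt.Println(w, ntlmHash(w))
	}
}
```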