Pro-active visualization of cyber security on a National Level : a South African case study
- Authors: Swart, Ignatius Petrus
- Date: 2015
- Subjects: Internet -- Security measures -- South Africa , Computer security -- Government policy -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4718 , http://hdl.handle.net/10962/d1017940
- Description: The need for increased national cyber security situational awareness is evident from the growing number of published national cyber security strategies. Governments are progressively seen as responsible for cyber security, but at the same time increasingly constrained by legal, privacy and resource considerations. Infrastructure and services that form part of the national cyber domain are often not under the control of government, necessitating information sharing between governments and commercial partners. While sharing of security information is necessary, it typically requires considerable time to be implemented effectively. In an effort to decrease the time and effort required for cyber security situational awareness, this study considered commercially available data sources relating to a national cyber domain. Attackers typically use open source information to gather intelligence with great success. An understanding of the data provided by these sources can also afford decision makers the opportunity to set priorities more effectively. Through the use of an adapted Joint Directors of Laboratories (JDL) fusion model, an experimental system was implemented that visualized the potential impact that open source intelligence could have on cyber situational awareness. Datasets used in the validation of the model contained information obtained from eight different data sources over a two-year period, with a focus on the South African .co.za subdomain. Over a million infrastructure devices were examined in this study, along with information pertaining to a potential 88 million vulnerabilities on these devices. During the examination of data sources, a severe lack of information regarding the human aspect in cyber security was identified, which led to the creation of a novel Personally Identifiable Information (PII) detection sensor. The resultant two million records pertaining to PII in the South African domain were incorporated into the data fusion experiment for processing. The results of this processing are discussed in three case studies. The results offered in this study aim to highlight how data fusion and effective visualization can serve to move national cyber security from a primarily reactive undertaking to a more pro-active model.
- Full Text:
- Date Issued: 2015
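The abstract above mentions a novel PII detection sensor but does not describe its internals. As a rough, hypothetical illustration of the kind of check such a sensor might perform, the sketch below flags candidate email addresses and South African ID numbers (13 digits carrying a Luhn check digit) in free text; the patterns and names are ours, not the author's.

```python
import re

# Candidate patterns for two common PII types (illustrative, not exhaustive).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SA_ID_RE = re.compile(r"\b\d{13}\b")  # South African ID numbers are 13 digits

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used by SA ID numbers (and payment cards)."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_for_pii(text: str) -> list[tuple[str, str]]:
    """Return (kind, value) pairs for suspected PII found in `text`."""
    hits = [("email", m.group()) for m in EMAIL_RE.finditer(text)]
    hits += [("sa_id", m.group()) for m in SA_ID_RE.finditer(text)
             if luhn_valid(m.group())]
    return hits

print(scan_for_pii("Contact jane@example.co.za, ID 8001015009087."))
```

The Luhn filter keeps the false-positive rate down: an arbitrary 13-digit number (an invoice number, say) passes the check only one time in ten.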
Prototyping a peer-to-peer session initiation protocol user agent
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2008 , 2008-03-10
- Subjects: Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4646 , http://hdl.handle.net/10962/d1006603
- Description: The Session Initiation Protocol (SIP) has in recent years become a popular protocol for the exchange of text, voice and video over IP networks. This thesis proposes the use of a class of structured peer-to-peer protocols - commonly known as Distributed Hash Tables (DHTs) - to provide a SIP overlay with services such as end-point location management and message relay, in the absence of traditional, centralised resources such as SIP proxies and registrars. A peer-to-peer layer named OverCord, which allows interaction with any specific DHT protocol via the use of appropriate plug-ins, was designed, implemented and tested. This layer was then incorporated into a SIP user agent distributed by NIST (National Institute of Standards and Technology, USA). The modified user agent is capable of reliably establishing text, audio and video communication with similarly modified agents (peers) as well as with conventional, centralised SIP overlays.
- Full Text:
- Date Issued: 2008
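OverCord's internals are not given in the abstract. The sketch below illustrates, under our own assumptions, the general shape of such a plug-in layer: SIP registration and lookup are written against a narrow put/get interface, and any concrete DHT can be slotted in behind it. All class and method names are invented for illustration.

```python
from abc import ABC, abstractmethod

class DHTPlugin(ABC):
    """Narrow interface a concrete DHT backend (e.g. a Chord client) would implement."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str | None: ...

class InMemoryDHT(DHTPlugin):
    """Stand-in backend so the layer can be exercised without a real overlay."""
    def __init__(self):
        self._store: dict[str, str] = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class P2PRegistrar:
    """Registration and location written only against the plug-in interface."""
    def __init__(self, dht: DHTPlugin):
        self.dht = dht
    def register(self, aor: str, contact: str) -> None:
        self.dht.put(f"sip:{aor}", contact)    # address-of-record -> contact
    def locate(self, aor: str) -> str | None:
        return self.dht.get(f"sip:{aor}")

reg = P2PRegistrar(InMemoryDHT())
reg.register("alice@example.org", "sip:alice@192.0.2.10:5060")
print(reg.locate("alice@example.org"))
```

The point of the indirection is that the registrar code never changes when a different DHT is plugged in; only the `DHTPlugin` implementation does.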
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all of this machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry, a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties encountered by using this storage system, including a decreased compression ratio as well as increased compression and decompression times.
- Full Text:
- Date Issued: 2015
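A minimal sketch of the underlying idea, as we read it, follows: entries are grouped into independently compressed blocks, and each entry's identifier is a (block, index) pair, so a lookup decompresses one block rather than the whole archive. The thesis's on-disk format will differ; this is an in-memory toy, and the decreased compression ratio it trades for random access comes from compressing each small block on its own.

```python
import zlib

class BlockedLogArchive:
    """Block-compressed log store: each entry ID points at (block, index),
    so a read decompresses one block rather than the whole archive."""

    def __init__(self, entries_per_block: int = 1024):
        self.entries_per_block = entries_per_block
        self.blocks: list[bytes] = []    # finished, compressed blocks
        self.pending: list[str] = []     # uncompressed tail block

    def append(self, entry: str) -> tuple[int, int]:
        """Store one log entry, returning its (block_no, index) identifier."""
        self.pending.append(entry)
        ident = (len(self.blocks), len(self.pending) - 1)
        if len(self.pending) == self.entries_per_block:
            blob = "\n".join(self.pending).encode()
            self.blocks.append(zlib.compress(blob))
            self.pending = []
        return ident

    def fetch(self, ident: tuple[int, int]) -> str:
        block_no, index = ident
        if block_no == len(self.blocks):         # still in the tail
            return self.pending[index]
        lines = zlib.decompress(self.blocks[block_no]).decode().split("\n")
        return lines[index]

arc = BlockedLogArchive(entries_per_block=2)
ids = [arc.append(f"event {i}") for i in range(5)]
print(arc.fetch(ids[3]))   # -> "event 3", decompressing only one block
```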
Pursuing cost-effective secure network micro-segmentation
- Authors: Fürst, Mark Richard
- Date: 2018
- Subjects: Computer networks -- Security measures , Computer networks -- Access control , Firewalls (Computer security) , IPSec (Computer network protocol) , Network micro-segmentation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/131106 , vital:36524
- Description: Traditional network segmentation allows discrete trust levels to be defined for different network segments, using physical firewalls or routers that control north-south traffic flowing between different interfaces. This technique reduces the attack surface area should an attacker breach one of the perimeter defences. However, east-west traffic flowing between endpoints within the same network segment does not pass through a firewall, and an attacker may be able to move laterally between endpoints within that segment. Network micro-segmentation was designed to address the challenge of controlling east-west traffic, and various solutions have been released with differing levels of capabilities and feature sets. These approaches range from simple network switch Access Control List-based segmentation to complex hypervisor-based software-defined security segments scoped down to the individual workload, container or process level, and enforced via policy-based security controls for each segment. Several commercial solutions for network micro-segmentation exist, but these are primarily focused on physical and cloud data centres, and are often accompanied by significant capital outlay and resource requirements. Given these constraints, this research determines whether existing tools provided with operating systems can be re-purposed to implement micro-segmentation and restrict east-west traffic within one or more network segments for a small-to-medium sized corporate network. To this end, a proof-of-concept lab environment was built with a heterogeneous mix of Windows and Linux virtual servers and workstations deployed in an Active Directory domain. The use of Group Policy Objects to deploy IPsec Server and Domain Isolation for controlling traffic between endpoints is examined, in conjunction with IPsec Authenticated Header and Encapsulating Security Payload modes as an additional layer of security. The outcome of the research shows that revisiting existing tools can enable organisations to implement an additional, cost-effective secure layer of defence in their network.
- Full Text:
- Date Issued: 2018
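The thesis implements this with Windows IPsec deployed via Group Policy Objects; as a language-neutral illustration of the policy logic involved (default-deny for east-west traffic, with explicit per-role allows), here is a toy evaluator. The addresses, roles and rules are invented for the example.

```python
from ipaddress import ip_address, ip_network

# Toy micro-segmentation policy: default deny inside the segment,
# explicit allows per workload role. All values are illustrative.
SEGMENT = ip_network("10.0.20.0/24")       # one internal segment
ALLOW = [
    ("web", "app", 8080),   # (src role, dst role, dst port)
    ("app", "db", 5432),
]
ROLES = {
    ip_address("10.0.20.11"): "web",
    ip_address("10.0.20.21"): "app",
    ip_address("10.0.20.31"): "db",
}

def east_west_permitted(src: str, dst: str, port: int) -> bool:
    s, d = ip_address(src), ip_address(dst)
    if not (s in SEGMENT and d in SEGMENT):
        return True   # north-south traffic: left to the perimeter firewall
    return (ROLES.get(s), ROLES.get(d), port) in ALLOW

print(east_west_permitted("10.0.20.11", "10.0.20.21", 8080))  # True
print(east_west_permitted("10.0.20.11", "10.0.20.31", 5432))  # False: web -> db blocked
```

In the thesis's setting the equivalent rules are expressed as IPsec connection security policies pushed to each endpoint, so enforcement happens on the hosts themselves rather than at a choke point.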
RADGIS - an improved architecture for runtime-extensible, distributed GIS applications
- Authors: Preston, Richard Michael
- Date: 2002
- Subjects: Geographic information systems
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4626 , http://hdl.handle.net/10962/d1006497
- Description: A number of GIS architectures and technologies have emerged recently to facilitate the visualisation and processing of geospatial data over the Web. The work presented in this dissertation builds on these efforts and undertakes to overcome some of the major problems with traditional GIS client architectures, including application bloat, lack of customisability, and lack of interoperability between GIS products. In this dissertation we describe how a new client-side GIS architecture was developed and implemented as a proof-of-concept application called RADGIS, which is based on open standards and emerging distributed component-based software paradigms. RADGIS reflects the current trend in development focus from Web browser-based applications to customised clients, based on open standards, that make use of distributed Web services. While much attention has been paid to exposing data on the Web, there is growing momentum towards providing “value-added” services. A good example of this is the tremendous industry interest in the provision of location-based services, which has been discussed as a special use-case of our RADGIS architecture. Thus, in the near future client applications will not simply be used to access data transparently, but will also become facilitators for the location-transparent invocation of local and remote services. This flexible architecture will ensure that data can be stored and processed independently of the location of the client that wishes to view or interact with it. Our RADGIS application enables content developers and end-users to create and/or customise GIS applications dynamically at runtime through the incorporation of GIS services. This ensures that the client application has the flexibility to withstand changing levels of expertise or user requirements. These GIS services are implemented as components that execute locally on the client machine, or as remote CORBA Objects or EJBs. Assembly and deployment of these components is achieved using a specialised XML descriptor. This XML descriptor is written using a markup language that we developed specifically for this purpose, called DGCML, which contains deployment information, as well as a GUI specification and links to an XML-based help system that can be merged with the RADGIS client application’s existing help system. Thus, no additional requirements are imposed on object developers by the RADGIS architecture, i.e. there is no need to rewrite existing objects since DGCML acts as a runtime-customisable wrapper, allowing existing objects to be utilised by RADGIS. While the focus of this thesis has been on overcoming the above-mentioned problems with traditional GIS applications, the work described here can also be applied in a much broader context, especially in the development of highly customisable client applications that are able to integrate Web services at runtime.
- Full Text:
- Date Issued: 2002
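DGCML itself is defined in the thesis and is much richer (GUI specification, help-system links, deployment information). Purely to illustrate the runtime-assembly idea of resolving a component from an XML descriptor, here is a miniature, made-up descriptor and loader; the element and attribute names are ours, not DGCML's.

```python
import importlib
import xml.etree.ElementTree as ET

# A made-up miniature descriptor in the spirit of DGCML. `json.loads` stands
# in for a real component factory so the example stays self-contained.
DESCRIPTOR = """
<component name="buffering">
  <implementation module="json" factory="loads"/>
  <menu label="Buffer layer..."/>
</component>
"""

def load_component(descriptor_xml: str):
    """Resolve a component's factory callable from its XML descriptor."""
    root = ET.fromstring(descriptor_xml)
    impl = root.find("implementation")
    module = importlib.import_module(impl.get("module"))
    factory = getattr(module, impl.get("factory"))
    label = root.find("menu").get("label")
    print(f"registered {root.get('name')!r} under menu entry {label!r}")
    return factory

factory = load_component(DESCRIPTOR)
print(factory('{"type": "buffer", "radius_m": 100}'))
```

The wrapper idea in the abstract follows from this shape: because the descriptor, not the component's code, supplies the deployment and GUI metadata, existing objects can be integrated without modification.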
Remora : implementing adaptive parallelism on a heterogeneous cluster of networked workstations
- Authors: Rehmet, Geoffrey Michael
- Date: 1995
- Subjects: LINDA (Computer system) , Local area networks (Computer networks) , Computer networks , Remora (Computer system)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4673 , http://hdl.handle.net/10962/d1006696
- Description: Computers connected to a local area network are often only fully utilized for short periods of time. In fact, most workstations are not used at all for a significant portion of the day. The combined "idle time" of the workstations on a network constitutes a significant computing resource, which is generally wasted. If harnessed properly, such a resource could constitute a cheap alternative to expensive high-performance computers. Adaptive parallelism refers to the parallel execution of a computation on a dynamically changing set of processors. This thesis investigates the viability of this approach as a vehicle to harness the "idle cycles" available on a heterogeneous cluster of networked computers. A system, called Remora, which implements adaptive parallelism via the Linda programming paradigm, is presented. Experiments, performed using Remora, show that adaptive parallelism provides an efficient vehicle for using idle processor cycles, without having an adverse effect on the tasks which constitute the normal workload of the computers being used.
- Full Text:
- Date Issued: 1995
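As a reminder of the programming model Remora builds on, the sketch below shows a toy, single-process Linda-style tuple space with out/in primitives and workers that can be withdrawn mid-computation. Remora itself distributes this across a heterogeneous network of workstations, and real Linda also offers pattern-matched in/rd, which this toy omits.

```python
import queue
import threading

class TupleSpace:
    """Toy Linda-style tuple space (single process; Remora spreads this
    across a network)."""
    def __init__(self):
        self._tuples = queue.Queue()
    def out(self, tup):                # deposit a tuple
        self._tuples.put(tup)
    def in_(self, timeout=None):       # withdraw a tuple (blocking)
        return self._tuples.get(timeout=timeout)

def worker(space: TupleSpace, results: TupleSpace, stop: threading.Event):
    """Pulls work while the host is 'idle'; exits promptly when reclaimed."""
    while not stop.is_set():
        try:
            _tag, n = space.in_(timeout=0.1)
        except queue.Empty:
            continue
        results.out(("done", n, n * n))

work, results = TupleSpace(), TupleSpace()
stop = threading.Event()
threads = [threading.Thread(target=worker, args=(work, results, stop))
           for _ in range(3)]
for t in threads:
    t.start()
for n in range(10):
    work.out(("square", n))
answers = sorted(results.in_()[1:] for _ in range(10))
stop.set()
for t in threads:
    t.join()
print(answers[:3])   # [(0, 0), (1, 1), (2, 4)]
```

The adaptive part is the stop event: a workstation returning to interactive use simply withdraws its worker, and undrawn tuples remain in the space for the surviving workers.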
Remote fidelity of Container-Based Network Emulators
- Authors: Peach, Schalk Willem
- Date: 2021-10-29
- Subjects: Computer networks -- Security measures , Intrusion detection systems (Computer security) , Computer security , Host-based intrusion detection systems (Computer security) , Emulators (Computer programs) , Computer network protocols , Container-Based Network Emulators (CBNEs) , Network Experimentation Platforms (NEPs)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/192141 , vital:45199
- Description: This thesis examines whether Container-Based Network Emulators (CBNEs) are able to instantiate emulated nodes that provide sufficient realism to be used in information security experiments. The realism measure used is based on the information available from the point of view of a remote attacker. During the evaluation of a Container-Based Network Emulator (CBNE) as a platform to replicate production networks for information security experiments, it was observed that nmap fingerprinting returned Operating System (OS) family and version results inconsistent with those of the host Operating System (OS). CBNEs utilise Linux namespaces, the technology used for containerisation, to instantiate "emulated" hosts for experimental networks. Linux containers partition resources of the host OS to create lightweight virtual machines that share a single OS kernel. As all emulated hosts share the same kernel in a CBNE network, there is a reasonable expectation that the fingerprints of the host OS and emulated hosts should be the same. Based on how CBNEs instantiate emulated networks, and on the inconsistent fingerprinting results, it was hypothesised that the technologies used to construct CBNEs are capable of influencing fingerprints generated by utilities such as nmap. It was predicted that hosts emulated using different CBNEs would show deviations in remotely generated fingerprints when compared to fingerprints generated for the host OS. An experimental network consisting of two emulated hosts and a Layer 2 switch was instantiated on multiple CBNEs using the same host OS. Active and passive fingerprinting was conducted between the emulated hosts to generate fingerprints and OS family and version matches. Passive fingerprinting failed to produce OS family and version matches, as the fingerprint databases for these utilities are no longer maintained. For active fingerprinting, the OS family results were consistent between tested systems and the host OS, though the reported OS version results were inconsistent. A comparison of the generated fingerprints revealed that for certain CBNEs, fingerprint features related to network stack optimisations of the host OS deviated from other CBNEs and the host OS. The hypothesis that CBNEs can influence remotely generated fingerprints was partially confirmed. One CBNE system modified Linux kernel networking options, causing a deviation from fingerprints generated for other tested systems and the host OS. The hypothesis was also partially rejected, as the underlying technologies used by CBNEs do not themselves influence the remote fidelity of emulated hosts. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-10-29
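The deviations found were traced to kernel networking options changed by one CBNE. A small sketch of how one might compare such options between a host and an emulated node follows; the specific sysctl keys below are common fingerprint-relevant settings chosen by us, not necessarily the ones examined in the thesis.

```python
from pathlib import Path

# A few kernel options that shape a TCP/IP stack fingerprint. Because all
# containers share the host kernel, differences here point at deliberate
# reconfiguration by the emulator rather than at a different OS.
KEYS = [
    "net/ipv4/tcp_timestamps",
    "net/ipv4/tcp_window_scaling",
    "net/ipv4/tcp_sack",
    "net/core/default_qdisc",
]

def stack_profile(proc_root: str = "/proc/sys") -> dict[str, str]:
    """Read selected sysctls; run once on the host and once inside a node."""
    profile = {}
    for key in KEYS:
        path = Path(proc_root, key)
        profile[key] = path.read_text().strip() if path.exists() else "<absent>"
    return profile

host = stack_profile()
# emulated = stack_profile()   # same call, executed inside the emulated node
for key, value in host.items():
    print(f"{key:35} {value}")
```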
Routing MIDI messages in a shared music studio environment
- Authors: Mosala, Thabo Jerry
- Date: 1995
- Subjects: MIDI (Standard) , Computer networks
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4672 , http://hdl.handle.net/10962/d1006695
- Description: The Rhodes Computer Music Network (RHOCMN) is a network which allows main studio resources to be shared. RHOCMN is growing into a multi-workstation environment, and additional devices are being incorporated into the system. A star configuration is used for transmitting MIDI from a MIDI patch bay to the workstations and MIDI devices. This imposes several disadvantages on the use of the studio, such as wiring problems. In a quest to avoid problems related to MIDI in RHOCMN, the MIDINet system was developed. The idea was to find a viable solution to MIDI's main problems that does not involve a redefinition of the MIDI specification.
- Full Text:
- Date Issued: 1995
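MIDINet's design is given in the thesis rather than the abstract. Purely as an illustration of the many-to-many routing a MIDI patch bay performs in hardware, the toy below routes channel-voice messages by the channel nibble of the status byte; the device names and routing table are invented.

```python
# Toy router: forward MIDI channel-voice messages to per-device destinations.
# MIDINet's actual routing scheme is described in the thesis; names are ours.

ROUTES = {0: "sampler", 1: "synth", 9: "drum-machine"}   # channel -> device

def route(message: bytes) -> tuple[str, bytes] | None:
    status = message[0]
    if status < 0x80 or status >= 0xF0:
        return None                  # ignore data/system bytes in this toy
    channel = status & 0x0F          # low nibble of the status byte
    device = ROUTES.get(channel)
    return (device, message) if device else None

# Note On (0x9n), middle C, velocity 100, on channels 0 and 1:
for msg in (bytes([0x90, 60, 100]), bytes([0x91, 60, 100])):
    print(route(msg))
```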
Search engine poisoning and its prevalence in modern search engines
- Authors: Blaauw, Pieter
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Rootkits (Computer software) , Spyware (Computer software)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4572 , http://hdl.handle.net/10962/d1002037
- Description: The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, along with an examination of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with these case studies are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine, along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
- Full Text:
- Date Issued: 2013
Securing media streams in an Asterisk-based environment and evaluating the resulting performance cost
- Authors: Clayton, Bradley
- Date: 2007 , 2007-01-08
- Subjects: Asterisk (Computer file) , Computer networks -- Security measures , Internet telephony -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4647 , http://hdl.handle.net/10962/d1006606
- Description: When adding Confidentiality, Integrity and Availability (CIA) to a multi-user VoIP (Voice over IP) system, performance and quality are at risk. The aim of this study is twofold. Firstly, it describes current methods suitable to secure voice streams within a VoIP system and make them available in an Asterisk-based VoIP environment. (Asterisk is a well-established, open-source TDM/VoIP PBX.) Secondly, this study evaluates the performance cost incurred after implementing each security method within the Asterisk-based system, using a special testbed suite, named DRAPA, which was developed expressly for this study. The three security methods implemented and studied were IPSec (Internet Protocol Security), SRTP (Secure Real-time Transport Protocol), and SIAX2 (Secure Inter-Asterisk eXchange 2 protocol). From the experiments, it was found that bandwidth and CPU usage were significantly affected by the addition of CIA. In ranking the three security methods in terms of these two resources, it was found that SRTP incurs the least bandwidth overhead, followed by SIAX2 and then IPSec. Where CPU utilisation is concerned, it was found that SIAX2 incurs the least overhead, followed by IPSec, and then SRTP.
- Full Text:
- Date Issued: 2007
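A back-of-envelope calculation makes the bandwidth ranking plausible. Assuming a G.711 stream (160-byte payload every 20 ms) and textbook header sizes, SRTP's per-packet cost is a single authentication tag, while IPsec tunnelling adds an outer IP header plus ESP framing. The figures below are our rough estimates, not the thesis's measurements, and SIAX2 is omitted because we cannot state its framing overhead with confidence.

```python
# Per-packet sizes for one G.711 voice stream: 160-byte payload, 50 packets/s.
IP_HDR, UDP_HDR, RTP_HDR = 20, 8, 12
PAYLOAD, PPS = 160, 50

plain_rtp = IP_HDR + UDP_HDR + RTP_HDR + PAYLOAD   # 200 bytes/packet
srtp = plain_rtp + 10        # SRTP default: 80-bit (10-byte) auth tag
# ESP tunnel mode, rough: outer IP header + SPI/sequence + pad/trailer + ICV
ipsec_esp = plain_rtp + 20 + 8 + 2 + 12

for name, size in [("RTP", plain_rtp), ("SRTP", srtp), ("IPsec ESP", ipsec_esp)]:
    kbps = size * 8 * PPS / 1000
    print(f"{name:10} {size:4d} B/pkt  {kbps:6.1f} kbit/s")
```

With small, frequent voice packets the fixed per-packet additions dominate, which is why a 10-byte tag beats tens of bytes of tunnel framing.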
Securing softswitches from malicious attacks
- Authors: Opie, Jake Weyman
- Date: 2007
- Subjects: Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4683 , http://hdl.handle.net/10962/d1007714
- Description: Traditionally, real-time communication, such as voice calls, has run on separate, closed networks. Of all the limitations that these networks had, susceptibility to malicious attacks that could cripple communication was not a crucial one. This situation has changed radically now that real-time communication and data have merged to share the same network. The objective of this project is to investigate the securing of softswitches with functionality similar to Private Branch Exchanges (PBX) from malicious attacks. The focus of the project is a practical investigation of how to secure ILANGA, an ASTERISK-based system under development at Rhodes University. The practical investigation that focuses on ILANGA is based on performing six varied experiments on the different components of ILANGA. Before the six experiments are performed, basic preliminary security measures and the restrictions placed on access to the database are discussed. The outcomes of these experiments are discussed and the precise reasons why these attacks were either successful or unsuccessful are given. Suggestions of a theoretical nature on how to defend against the successful attacks are also presented.
- Full Text:
- Date Issued: 2007
Securing software development using developer access control
- Authors: Ongers, Grant
- Date: 2020
- Subjects: Computer software -- Development , Computers -- Access control , Computer security -- Software , Computer networks -- Security measures , Source code (Computer science) , Plug-ins (Computer programs) , Data encryption (Computer science) , Network Access Control , Data Loss Prevention , Google’s BeyondCorp , Confidentiality, Integrity and Availability (CIA) triad
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/149022 , vital:38796
- Description: This research is aimed at software development companies and highlights the unique information security concerns in the context of a non-malicious software developer’s work environment. It further explores an application-driven solution which focuses specifically on providing developer environments with access control for source code repositories. In order to achieve that, five goals were defined, as discussed in section 1.3. The application designed to provide the developer environment with access control to source code repositories was modelled on lessons taken from the principles of Network Access Control (NAC), Data Loss Prevention (DLP), and Google’s BeyondCorp (GBC) for zero-trust end-user computing. The intention of this research is to provide software developers with maximum access to source code without compromising Confidentiality, as per the Confidentiality, Integrity and Availability (CIA) triad. Employing data gleaned from examining the characteristics of DLP, NAC, and BeyondCorp, proof-of-concept code was developed to regulate access to the developer’s environment and source code. The system required sufficient flexibility to support the diversity of software development environments. In order to achieve this, a modular design was selected. The system comprised a client-side agent and a plug-in-ready server component. The client-side agent mounts and dismounts encrypted volumes containing source code. Furthermore, it provides the server with the client information demanded by plug-ins. The server-side service provided encryption keys to facilitate the mounting of the volumes and, through plug-ins, asked questions of the client agent to determine whether access should be granted. The solution was then tested with integration and system testing. There were plans to have it used by development teams, who were then to be surveyed as to their view on the proof of concept, but this proved impossible. The conclusion provides a basis by which organisations that develop software can better balance the two corners of the CIA triad most often in conflict: the Confidentiality of their source code against its Availability to developers.
- Full Text:
- Date Issued: 2020
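The abstract sketches the server side as a plug-in-driven gate in front of the volume keys. The toy below shows that gating logic under our own assumptions: a key is released only if every registered plug-in is satisfied with the client's reported state. The plug-in names, client-info fields and key store are all invented.

```python
from abc import ABC, abstractmethod

class AccessPlugin(ABC):
    """One question the server asks about a client before releasing a key."""
    @abstractmethod
    def permits(self, client_info: dict) -> bool: ...

class OnVpnPlugin(AccessPlugin):
    def permits(self, client_info):
        return client_info.get("network") == "corp-vpn"

class DiskEncryptedPlugin(AccessPlugin):
    def permits(self, client_info):
        return client_info.get("full_disk_encryption") is True

class KeyServer:
    """Releases a repository's volume key only if every plug-in agrees."""
    def __init__(self, plugins: list[AccessPlugin], keys: dict[str, bytes]):
        self.plugins, self.keys = plugins, keys
    def request_key(self, repo: str, client_info: dict) -> bytes | None:
        if all(p.permits(client_info) for p in self.plugins):
            return self.keys.get(repo)
        return None   # deny: the agent cannot mount the encrypted volume

server = KeyServer([OnVpnPlugin(), DiskEncryptedPlugin()],
                   {"payments-api": b"\x00" * 32})
print(server.request_key("payments-api",
      {"network": "corp-vpn", "full_disk_encryption": True}))
```

Keeping the policy in plug-ins matches the flexibility requirement in the abstract: a new organisational rule becomes a new plug-in, not a change to the agent or the key-release core.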
Selecting and augmenting a FOSS development and deployment environment for personalized video-oriented services in a Telco context
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one contributing factor that led telecom companies to search for solutions to deliver innovative video services, using the different access technologies managed by them and leveraging the capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services, not just TV. When IPTV was introduced, many thought that it would win back the revenue that telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, IPTV has not shown as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one reason could be the technical challenge that IPTV poses to service developers. One of the main reasons for embarking on the research reported in this thesis was to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivates how the IPTV architecture, with some modification, can be a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. As a result, various video-oriented services were first developed and implementation-related issues were analyzed. This helped us to identify problems that service developers face, including the requirement to utilize a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface to implement them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can be used as the basis for a good service development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and supports the various protocols needed in a consistent manner, or offers an easy way to add support for them. We used Mobicents to compose the environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues with regard to support from streaming servers in general, and open source streaming servers in particular, as well as with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers and implemented some of the features we proposed in it.
- Full Text:
- Date Issued: 2016
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one contributing factor that led telecom companies to search for solutions to deliver innovative video services, using the different access technologies they manage and leveraging their capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services and not just TV. When IPTV was introduced, many thought that it would recover the revenue that telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, IPTV has not shown as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one is the technical challenge that IPTV poses to service developers. One of the main motivations for embarking on the research reported in this thesis was to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivated how the IPTV architecture, with some modification, can be a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. As a result, various video-oriented services were first developed and implementation-related issues were analyzed. This helped us to identify problems that service developers face, including the requirement to utilize a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface to implement them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can serve as the basis for a good development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and either supports the various protocols needed or makes it easy to add support for them. We used Mobicents to compose an environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues regarding support from streaming servers in general, and open source streaming servers in particular, as well as with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers and implemented in it some of the features we proposed.
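For readers unfamiliar with RTSP: it is a plain-text, HTTP-like control protocol, which is what makes a proxy of the kind described above feasible. The sketch below shows a single RTSP DESCRIBE exchange in Python; the server address and stream URL are hypothetical, and this is not the thesis's proxy code, merely an illustration of the traffic such a proxy would intercept and rewrite.

```python
# Minimal sketch of an RTSP client exchange (illustrative, not the thesis's
# proxy). RTSP (RFC 2326) is text-based, so a proxy can parse and rewrite
# these requests and responses. Host and URL below are hypothetical.
import socket

def rtsp_describe(host: str, port: int, url: str) -> str:
    """Send one RTSP DESCRIBE request and return the raw text response."""
    request = (
        f"DESCRIBE {url} RTSP/1.0\r\n"
        "CSeq: 1\r\n"                   # sequence number, echoed by the server
        "Accept: application/sdp\r\n"   # ask for an SDP session description
        "\r\n"
    )
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request.encode("ascii"))
        return sock.recv(4096).decode("ascii", errors="replace")

if __name__ == "__main__":
    # 554 is the conventional RTSP port; a proxy would sit between the
    # client and the streaming server at exactly this point.
    print(rtsp_describe("media.example.com", 554,
                        "rtsp://media.example.com/stream"))
```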
- Full Text:
- Date Issued: 2016
Service provisioning in two open-source SIP implementations, CINEMA and VOCAL
- Authors: Hsieh, Ming Chih
- Date: 2013-06-18
- Subjects: Real-time data processing , Computer network protocols , Internet telephony , Digital telephone systems , Communication -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4687 , http://hdl.handle.net/10962/d1008195 , Real-time data processing , Computer network protocols , Internet telephony , Digital telephone systems , Communication -- Technological innovations
- Description: The distribution of real-time multimedia streams is seen nowadays as the next step forward for the Internet. One of the most obvious uses of such streams is to support telephony over the Internet, replacing and improving traditional telephony. This thesis investigates the development and deployment of services in two Internet telephony environments, namely CINEMA (Columbia InterNet Extensible Multimedia Architecture) and VOCAL (Vovida Open Communication Application Library), both based on the Session Initiation Protocol (SIP) and open-sourced. A classification of services is proposed, which divides services into two large groups: basic and advanced services. Basic services are services such as making point-to-point calls, registering with the server and making calls via the server. Any other service is considered an advanced service. Advanced services are defined by four categories: Call Related, Interactive, Internetworking and Hybrid. New services were implemented for the Call Related, Interactive and Internetworking categories. First, features involving call blocking, call screening and missed calls were implemented in the two environments in order to investigate Call Related services. Next, a notification feature was implemented in both environments in order to investigate Interactive services. Finally, a translator between MGCP and SIP was developed to investigate an Internetworking service in the VOCAL environment. The practical implementation of the new features just described was used to answer questions about the location of the services, as well as the level of required expertise and the ease or difficulty experienced in creating services in each of the two environments.
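To make the "basic service" of registering with the server concrete: both CINEMA and VOCAL speak SIP, whose requests are plain text. The Python sketch below, assuming hypothetical addresses, user name and tags, constructs and sends a minimal SIP REGISTER request; a real deployment would additionally handle digest authentication challenges.

```python
# Illustrative sketch of the SIP REGISTER request underlying registration
# with a server (RFC 3261). All addresses and identifiers are hypothetical;
# real SIP stacks add authentication and retransmission handling.
import socket
import uuid

def build_register(user: str, domain: str, local_ip: str, local_port: int) -> bytes:
    msg = (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
        "Max-Forwards: 70\r\n"
        f"From: <sip:{user}@{domain}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}\r\n"
        "CSeq: 1 REGISTER\r\n"
        f"Contact: <sip:{user}@{local_ip}:{local_port}>\r\n"
        "Expires: 3600\r\n"
        "Content-Length: 0\r\n\r\n"
    )
    return msg.encode("ascii")

# A registrar such as those in CINEMA or VOCAL would reply with 200 OK,
# or 401 to challenge for credentials.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(build_register("alice", "example.org", "192.0.2.10", 5060),
            ("sip.example.org", 5060))
```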
- Full Text:
Simplified menu-driven data analysis tool with macro-like automation
- Authors: Kazembe, Luntha
- Date: 2022-10-14
- Subjects: Data analysis , Macro instructions (Electronic computers) , Quantitative research Software , Python (Computer program language) , Scripting languages (Computer science)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/362905 , vital:65373
- Description: This study seeks to improve the data analysis process for individuals and small businesses with limited resources by developing a simplified data analysis software tool that allows users to carry out data analysis effectively and efficiently. Design considerations were identified to address limitations common in such environments; these included making the tool easy to use, requiring only a basic understanding of the data analysis process, designing the tool in a manner that minimises computing resource requirements and user interaction, and implementing it using Python, which is open-source, effective and efficient in processing data. We develop a prototype simplified data analysis tool as a proof of concept. The tool has two components: core elements, which provide functionality for the data analysis process including data collection, transformations, analysis and visualizations, and automation and performance enhancements, which improve the data analysis process. The automation enhancements consist of the record and playback macro feature, while the performance enhancements include multiprocessing and multi-threading abilities. The data analysis software was developed to analyse various alpha-numeric data formats using a variety of statistical and mathematical techniques. The record and playback macro feature enhances the data analysis process by saving users time and computing resources when analysing large volumes of data or carrying out repetitive data analysis tasks. The feature has two components: the record component, used to record data analysis steps, and the playback component, used to execute recorded steps. The tool also has parallelization designed and implemented, which allows users to carry out two or more analysis tasks at a time; this improves productivity, as users can do other tasks while the tool processes data using recorded steps in the background. The tool was created and subsequently tested using common analysis scenarios applied to network data, log data and stock data. Results show that decision-making requirements, such as accurate information, can be satisfied using this analysis tool. Based on the functionality implemented, similar analysis functionality to that provided by Microsoft Excel is available, but in a simplified manner. Moreover, a more sophisticated macro functionality is provided for the execution of repetitive tasks using the recording feature. Overall, the study found that the simplified data analysis tool is functional, usable, scalable and efficient, and can carry out multiple analysis tasks simultaneously. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
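A minimal Python sketch of a record-and-playback macro mechanism of the kind the abstract describes, assuming illustrative operation names and a JSON file format (the thesis's actual implementation may well differ):

```python
# Sketch of a record/playback macro: steps are recorded as (operation,
# arguments) pairs, saved to disk, and later replayed against a registry
# of analysis functions. Operation names here are hypothetical.
import json

class MacroRecorder:
    def __init__(self):
        self.steps = []                  # each step: (operation name, kwargs)

    def record(self, op: str, **kwargs):
        self.steps.append((op, kwargs))

    def save(self, path: str):
        with open(path, "w") as f:
            json.dump(self.steps, f)

def play(path: str, registry: dict):
    """Replay recorded steps against a registry of analysis functions."""
    with open(path) as f:
        for op, kwargs in json.load(f):
            registry[op](**kwargs)

# Record two analysis steps, then replay them unattended; playback like
# this could also run in a background process to allow parallel work.
recorder = MacroRecorder()
recorder.record("load_csv", path="network.csv")
recorder.record("describe", columns=["bytes", "packets"])
recorder.save("session.macro")

registry = {
    "load_csv": lambda path: print(f"loading {path}"),
    "describe": lambda columns: print(f"summarising {columns}"),
}
play("session.macro", registry)
```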
- Full Text:
- Date Issued: 2022-10-14
Software quality assurance in a remote client/contractor context
- Authors: Black, Angus Hugh
- Date: 2006
- Subjects: Computer software -- Quality control , Software engineering , Information technology
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4648 , http://hdl.handle.net/10962/d1006615 , Computer software -- Quality control , Software engineering , Information technology
- Description: With the reliance on information technology, and on the software that this technology utilizes, increasing every day, it is of paramount importance that the software developed be of an acceptable quality. This quality can be achieved through the utilization of various software engineering standards and guidelines. The question is: to what extent do these standards and guidelines need to be utilized, and how are they implemented? This research focuses on how guidelines developed by standardization bodies and the Unified Process developed by Rational can be integrated to achieve a suitable process and version control system within the context of a remote client/contractor small team environment.
- Full Text:
- Date Issued: 2006
Static analysis of functional languages
- Authors: Mountjoy, Jon-Dean
- Date: 1994 , 2012-10-10
- Subjects: Functional programming languages
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4669 , http://hdl.handle.net/10962/d1006690 , Functional programming languages
- Description: Static analysis is the name given to a number of compile-time analysis techniques used to automatically generate information which can lead to improvements in the execution performance of functional languages. This thesis provides an introduction to these techniques and their implementation. The abstract interpretation framework is an example of a technique used to extract information from a program by providing the program with an alternate semantics and evaluating this program over a non-standard domain. The elements of this domain represent certain properties of interest. This framework is examined in detail, as well as various extensions and variants of it. The use of binary logical relations and program logics as alternative formulations of the framework, and partial equivalence relations as an extension to it, are also looked at. The projection analysis framework determines how much of a sub-expression can be evaluated by examining the context in which the expression is to be evaluated, and provides an elegant method for finding particular types of information from data structures. This is also examined. The most costly operation in implementing an analysis is the computation of fixed points. Methods developed to make this process more efficient are looked at. This leads to the final chapter, which highlights the dependencies and relationships between the different frameworks and their mathematical disciplines.
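To illustrate the fixed-point computation referred to above, the sketch below runs a Kleene iteration for the textbook strictness analysis of f x y = if x = 0 then y else f (x - 1) (y + 1) over the two-point domain (0 = definitely undefined, 1 = possibly defined), using the abstract equation f#(x, y) = x & (y | f#(x, y)). This is a standard worked example, not code from the thesis.

```python
# Kleene fixed-point iteration for strictness analysis over the two-point
# domain. We tabulate f# on all abstract inputs, starting from the bottom
# approximation (constantly 0) and iterating until nothing changes.

def analyse():
    table = {(x, y): 0 for x in (0, 1) for y in (0, 1)}  # bottom: f# == 0
    changed = True
    while changed:                     # iterate up to the least fixed point
        changed = False
        for (x, y), old in table.items():
            new = x & (y | table[(x, y)])   # f#(x, y) = x & (y | f#(x, y))
            if new != old:
                table[(x, y)] = new
                changed = True
    return table

fixed_point = analyse()
# The least fixed point is f#(x, y) = x & y:
#   f#(0, 1) == 0  =>  f is strict in its first argument,
#   f#(1, 0) == 0  =>  f is strict in its second argument.
print(fixed_point)
```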
- Full Text:
- Date Issued: 1994
Studies related to the process of program development
- Authors: Williams, Morgan Howard
- Date: 1994
- Subjects: Computer programming
- Language: English
- Type: Thesis , Doctoral , DSc
- Identifier: vital:4680 , http://hdl.handle.net/10962/d1007235
- Description: The submitted work consists of a collection of publications arising from research carried out at Rhodes University (1970-1980) and at Heriot-Watt University (1980-1992). The theme of this research is the process of program development, i.e. the process of creating a computer program to solve some particular problem. The papers presented cover a number of different topics which relate to this process, viz. (a) programming methodology, including aspects of structured programming; (b) properties of programming languages; (c) formal specification of programming languages; (d) compiler techniques; (e) declarative programming languages; (f) program development aids; (g) automatic program generation; (h) databases; (i) algorithms and applications.
- Full Text:
- Date Issued: 1994
Targeted attack detection by means of free and open source solutions
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements in various areas, such as retail and medical services. As part of compliance, it may be required to have infrastructure in place to monitor activities in the environment, to ensure that the relevant data and environment are sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM's QRadar, LogRhythm, or Splunk. However, these come at significant cost and aren't suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation for detection of activities that place the environment in danger. This usually requires vendor implementation assistance, which also comes at a cost. Alternatively, there are open source solutions that provide the required functionality. This research demonstrates building an open source solution, with minimal to no cost for hardware or software, while still maintaining the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which is a combination of OSSEC and the ELK stack, integrated with a Network Intrusion Detection System (NIDS). The success of the integration is determined by measuring positive attack detection under each of the different configuration options. To perform the testing, a deliberately vulnerable platform named Metasploitable was used as a victim host. The victim host's vulnerabilities were created specifically to serve as targets for Metasploit. The attacks were generated by utilising the Metasploit Framework on a prebuilt Kali Linux host.
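For readers reproducing such a setup: Wazuh stores its alerts in Elasticsearch, so detection results can be pulled with a plain search query. The sketch below assumes Wazuh's usual wazuh-alerts-* index pattern and rule.level / rule.description fields, which should be verified against the deployed schema; it retrieves recent high-severity alerts from a local stack.

```python
# Hedged sketch: querying the Elasticsearch backend of a Wazuh/ELK stack
# for high-severity alerts. Index pattern and field names follow Wazuh's
# common schema but may differ per deployment; host/port are the defaults.
import json
import urllib.request

query = {
    "query": {"range": {"rule.level": {"gte": 10}}},   # high-severity rules
    "size": 5,
    "sort": [{"timestamp": {"order": "desc"}}],        # newest first
}

req = urllib.request.Request(
    "http://localhost:9200/wazuh-alerts-*/_search",    # default ES endpoint
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    hits = json.load(resp)["hits"]["hits"]

for hit in hits:
    src = hit["_source"]
    print(src.get("timestamp"), src["rule"]["level"], src["rule"]["description"])
```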
- Full Text:
- Date Issued: 2019
Technology in conservation: towards a system for in-field drone detection of invasive vegetation
- Authors: James, Katherine Margaret Frances
- Date: 2020
- Subjects: Drone aircraft in remote sensing , Neural networks (Computer science) , Drone aircraft in remote sensing -- Case studies , Machine learning , Computer vision , Environmental monitoring -- Remote sensing , Invasive plants -- Monitoring
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143408 , vital:38244
- Description: Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, originating in Australia and invasive in several countries including South Africa. For this research, however, the methodology is more important than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made, and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when the conditions are ideal. False positive detection remains an issue that could be improved. This approach shows the potential for drones as an environmental monitoring and management tool when coupled with deep machine learning techniques, and outlines potential problems that may be encountered.
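Since the abstract highlights that the choice of loss function improved segmentation performance, the PyTorch sketch below shows one common option for sparse vegetation masks, the soft Dice loss. The thesis does not name its exact loss, so this is purely illustrative of why such choices matter for imbalanced pixel-wise classification.

```python
# Soft Dice loss for binary segmentation: unlike plain cross-entropy, it
# directly optimises region overlap, which helps when positive pixels
# (target vegetation) are a small fraction of each image. Illustrative
# only; not the thesis's implementation.
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """logits, target: (batch, 1, H, W); target is a binary mask."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(1, 2, 3))
    denominator = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1 - dice.mean()          # perfect overlap gives loss 0

# Quick check on random data with sparse positives:
logits = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(soft_dice_loss(logits, mask).item())
```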
- Full Text:
- Date Issued: 2020