Remote fidelity of Container-Based Network Emulators
- Authors: Peach, Schalk Willem
- Date: 2021-10-29
- Subjects: Computer networks Security measures , Intrusion detection systems (Computer security) , Computer security , Host-based intrusion detection systems (Computer security) , Emulators (Computer programs) , Computer network protocols , Container-Based Network Emulators (CBNEs) , Network Experimentation Platforms (NEPs)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/192141 , vital:45199
- Description: This thesis examines whether Container-Based Network Emulators (CBNEs) are able to instantiate emulated nodes that provide sufficient realism to be used in information security experiments. The realism measure used is based on the information available from the point of view of a remote attacker. During the evaluation of a Container-Based Network Emulator (CBNE) as a platform to replicate production networks for information security experiments, it was observed that nmap fingerprinting returned Operating System (OS) family and version results inconsistent with those of the host OS. CBNEs utilise Linux namespaces, the technology used for containerisation, to instantiate "emulated" hosts for experimental networks. Linux containers partition resources of the host OS to create lightweight virtual machines that share a single OS kernel. As all emulated hosts share the same kernel in a CBNE network, there is a reasonable expectation that the fingerprints of the host OS and emulated hosts should be the same. Based on how CBNEs instantiate emulated networks, and given that fingerprinting returned inconsistent results, it was hypothesised that the technologies used to construct CBNEs are capable of influencing fingerprints generated by utilities such as nmap. It was predicted that hosts emulated using different CBNEs would show deviations in remotely generated fingerprints when compared to fingerprints generated for the host OS. An experimental network consisting of two emulated hosts and a Layer 2 switch was instantiated on multiple CBNEs using the same host OS. Active and passive fingerprinting was conducted between the emulated hosts to generate fingerprints and OS family and version matches. Passive fingerprinting failed to produce OS family and version matches, as the fingerprint databases for these utilities are no longer maintained. For active fingerprinting, the OS family results were consistent between the tested systems and the host OS, though the reported OS version results were inconsistent. A comparison of the generated fingerprints revealed that for certain CBNEs, fingerprint features related to network stack optimisations of the host OS deviated from those of other CBNEs and the host OS. The hypothesis that CBNEs can influence remotely generated fingerprints was partially confirmed: one CBNE system modified Linux kernel networking options, causing a deviation from the fingerprints generated for the other tested systems and the host OS. The hypothesis was also partially rejected, as the underlying technologies used by CBNEs do not themselves influence the remote fidelity of emulated hosts. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021 (A sketch of the fingerprint-comparison step follows this record.)
- Full Text:
- Date Issued: 2021-10-29
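The core measurement in this thesis is the comparison of remotely generated OS fingerprints for the host OS and an emulated host. The sketch below is a minimal illustration of that comparison step, not the thesis's actual tooling: it assumes the nmap binary is installed and run with sufficient privileges, and the two target addresses are hypothetical placeholders.

```python
# Sketch: compare nmap OS-detection output for a host OS and a CBNE-emulated
# host. Assumes the nmap binary is installed and run with root privileges;
# the target addresses below are hypothetical placeholders.
import subprocess

def os_matches(target: str) -> list[str]:
    """Run nmap OS detection (-O) and keep only the OS-match summary lines."""
    out = subprocess.run(
        ["nmap", "-O", "--osscan-guess", target],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines()
            if line.startswith(("Running", "OS details", "Aggressive OS guesses"))]

host_fp = os_matches("192.0.2.1")     # host OS (placeholder address)
emulated_fp = os_matches("10.0.0.2")  # emulated host (placeholder address)

# Any deviation suggests the emulator influenced the remotely visible fingerprint.
for a, b in zip(host_fp, emulated_fp):
    print("MATCH " if a == b else "DIFFER", a, "|", b)
```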
An exploratory investigation into an Integrated Vulnerability and Patch Management Framework
- Authors: Carstens, Duane
- Date: 2021-04
- Subjects: Computer security , Computer security -- Management , Computer networks -- Security measures , Patch Management , Integrated Vulnerability
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/177940 , vital:42892
- Description: In the rapidly changing world of cybersecurity, the constant increase in vulnerabilities continues to be a prevalent issue for many organisations. Malicious actors are aware that most organisations cannot timeously patch known vulnerabilities and are ill-prepared to protect against newly created vulnerabilities for which a signature or an available patch has not yet been created. Consequently, information security personnel face ongoing challenges to mitigate these risks. In this research, the problem of remediation in a world of increasing vulnerabilities is considered. The current paradigm of vulnerability and patch management is reviewed using a pragmatic approach to all variables associated with these services/practices and, as a result, an understanding is formed of what is and is not working in terms of remediation. In addition to the analysis, a taxonomy is created to provide a graphical representation of all variables associated with vulnerability and patch management, based on existing literature. Frameworks currently utilised in the industry to create an effective engagement model between vulnerability and patch management services are considered. The link between quantifying a threat, vulnerability and consequence; what Microsoft has available for patching; and the action plan for resulting vulnerabilities is explored. Furthermore, the processes and means of communication between each of these services are investigated to ensure there is effective remediation of vulnerabilities, ultimately improving the security risk posture of an organisation. In order to effectively measure the security risk posture, progress is measured across these services through a single averaged measurement metric (a hypothetical sketch of such a metric follows this record). The outcome of the research highlights influencing factors that impact successful vulnerability management, in line with the themes identified in the research taxonomy. These influencing factors are, however, significantly undermined because personnel within the same organisations do not have a clear and consistent understanding of their roles, organisational capabilities, and objectives for effective vulnerability and patch management. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-04
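The abstract mentions a single averaged measurement metric across vulnerability and patch management services but gives no formula. The sketch below is a hypothetical illustration only: the service names, scores, and equal weighting are assumptions, not the thesis's actual metric.

```python
# Hypothetical illustration of a single averaged posture metric across
# vulnerability- and patch-management services; the service names, the
# scores in [0, 1], and the equal weighting are assumptions.
scores = {
    "vulnerability_scanning": 0.80,     # fraction of assets scanned on schedule
    "vulnerability_remediation": 0.55,  # fraction of findings remediated within SLA
    "patch_deployment": 0.70,           # fraction of patches deployed within SLA
}

posture = sum(scores.values()) / len(scores)
print(f"Averaged security posture metric: {posture:.2f}")  # -> 0.68
```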
Towards a capability maturity model for a cyber range
- Authors: Aschmann, Michael Joseph
- Date: 2020
- Subjects: Computer software -- Development , Computer security
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163142 , vital:41013
- Description: This work describes research undertaken towards the development of a Capability Maturity Model (CMM) for Cyber Ranges (CRs) focused on cyber security. Global cyber security needs are on the rise, and the need for attribution within the cyber domain is of particular concern. This has prompted major efforts to enhance cyber capabilities within organisations to increase their total cyber resilience posture. These efforts include, but are not limited to, the testing of computational devices, networks, and applications, and cyber skills training focused on prevention, detection and cyber attack response. A cyber range allows for the testing of the computational environment. By developing cyber events within a confined virtual or sand-boxed cyber environment, a cyber range can prepare the next generation of cyber security specialists to handle a variety of potential cyber attacks. Cyber ranges have different purposes, each designed to fulfil a different computational testing and cyber training goal; consequently, cyber ranges can vary greatly in variety, capability, maturity and complexity. As cyber ranges proliferate and become more and more valued as tools for cyber security, a method to classify or rate them becomes essential. Yet while universal criteria for measuring cyber ranges in terms of their capability maturity levels become more critical, there are currently very limited resources for researchers aiming to perform this kind of work. For this reason, this work proposes and describes a CMM, designed to give organisations the ability to benchmark the capability maturity of a given cyber range. This research adopted a synthesised approach to the development of a CMM, grounded in prior research and focused on the production of a conceptual model that provides a useful level of abstraction. In order to achieve this goal, the core capability elements of a cyber range are defined with their relative importance, allowing for the development of a proposed classification of cyber range levels. An analysis of data gathered during the course of an expert review, together with other research, further supported the development of the conceptual model. In the context of cyber range capability, classification includes the ability of the cyber range to perform its functions optimally with different core capability elements, focusing on the Measurement of Capability (MoC) with its elements, namely effect, performance and threat ability. Cyber range maturity can evolve over time and can be defined through the Measurement of Maturity (MoM) with its elements, namely people, processes, and technology. The combination of these measurements, utilising the CMM for a CR, determines the capability maturity level of a CR (a hypothetical sketch of this combination follows this record). The primary outcome of this research is the proposed level-based CMM framework for a cyber range, developed using adopted and synthesised CMMs, the analysis of an expert review, and the mapping of the results.
- Full Text:
- Date Issued: 2020
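The abstract names the elements of the two measurements (MoC: effect, performance, threat ability; MoM: people, processes, technology) but not how they are combined. The sketch below is a hypothetical illustration of one way to combine them into a level; the 1-5 scales, the scores, and the averaging scheme are all assumptions.

```python
# Hypothetical sketch of combining the thesis's Measurement of Capability
# (MoC: effect, performance, threat ability) and Measurement of Maturity
# (MoM: people, processes, technology) into a cyber range maturity level.
# The 1-5 scales, the example scores, and the averaging are assumptions.
moc = {"effect": 3, "performance": 4, "threat_ability": 2}
mom = {"people": 3, "processes": 2, "technology": 4}

combined = (sum(moc.values()) + sum(mom.values())) / (len(moc) + len(mom))
level = round(combined)  # map the combined score onto a level from 1 to 5

print(f"Combined score {combined:.2f} -> CMM level {level}")  # -> 3.00 -> level 3
```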
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won in defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it is clear that there is not yet a sound understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided due to the complexity and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow for proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be further explored in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system (a sketch of the static triage step follows this record). This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are much like instant cake mixes: they consist of specific building blocks which remain the same, with the flavouring being the unique feature.
- Full Text:
- Date Issued: 2019
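The hybridised workflow combines static, dynamic, and forensic phases. The sketch below illustrates only the first of these, a common static triage step (sample hashes plus printable-string extraction); the sample path is a placeholder, and the sandbox and disk-forensic phases are outside its scope.

```python
# Sketch of the static-analysis phase of a hybridised-malware workflow:
# cryptographic hashes for sample identification plus printable-string
# extraction. The sample path is a placeholder; the dynamic and forensic
# phases (sandbox run, disk examination) are not shown.
import hashlib
import re

def static_triage(path: str, min_len: int = 6):
    data = open(path, "rb").read()
    hashes = {name: hashlib.new(name, data).hexdigest()
              for name in ("md5", "sha1", "sha256")}
    strings = re.findall(rb"[ -~]{%d,}" % min_len, data)  # printable ASCII runs
    return hashes, [s.decode("ascii") for s in strings[:20]]

hashes, strings = static_triage("sample.bin")  # placeholder sample path
print(hashes["sha256"], strings[:5])
```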
A multi-threading software countermeasure to mitigate side channel analysis in the time domain
- Authors: Frieslaar, Ibraheem
- Date: 2019
- Subjects: Computer security , Data encryption (Computer science) , Noise generators (Electronics)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71152 , vital:29790
- Description: This research is the first of its kind to investigate the utilisation of a multi-threading software-based countermeasure to mitigate Side Channel Analysis (SCA) attacks, with a particular focus on the AES-128 cryptographic algorithm. This investigation is novel, as to our knowledge there has not previously been a software-based countermeasure relying on multi-threading. The research has been tested on Atmel microcontrollers, as well as on a more fully featured system in the form of the popular Raspberry Pi, which utilises the ARM7 processor. The main contribution of this research is the introduction of a multi-threading software-based countermeasure used to mitigate SCA attacks on both an embedded device and a Raspberry Pi. These threads perform various mathematical operations which are utilised to generate electromagnetic (EM) noise, resulting in the obfuscation of the execution of the AES-128 algorithm (a sketch of this threading pattern follows this record). A novel EM noise generator known as the FRIES noise generator is implemented to obfuscate data captured in the EM field. FRIES hides the execution of the AES-128 algorithm within the EM noise generated by the SHA-512 Secure Hash Algorithm implementations from the libcrypto++ and OpenSSL libraries. In order to evaluate the proposed countermeasure, a novel attack methodology was developed whereby the entire secret AES-128 encryption key was recovered from a Raspberry Pi, which had not been achieved before. The FRIES noise generator was pitted against this new attack vector and other known noise generators. The results exhibited that the FRIES noise generator withstood this attack whilst other existing techniques still leaked secret information. Visual location of the AES-128 encryption algorithm in the EM spectrum, as well as key recovery, was prevented. These results demonstrated that the proposed multi-threading software-based countermeasure was resistant to existing and new forms of attack, thus verifying that a multi-threading software-based countermeasure can serve to mitigate SCA attacks.
- Full Text:
- Date Issued: 2019
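The sketch below illustrates the shape of the multi-threading idea: worker threads repeatedly compute SHA-512 digests (as FRIES does via the libcrypto++ and OpenSSL implementations) while a sensitive operation executes. Here `sensitive_operation()` is only a placeholder standing in for the AES-128 encryption protected in the thesis, and the thread count is an assumption.

```python
# Sketch of the multi-threading countermeasure: noise threads compute
# SHA-512 digests while a sensitive operation runs, obfuscating its EM
# emissions. sensitive_operation() is a placeholder for AES-128
# encryption; the worker count of 4 is an illustrative assumption.
import hashlib
import os
import threading

stop = threading.Event()

def sha512_noise():
    # Hash random blocks until told to stop, keeping the cores busy so the
    # EM signature of the sensitive operation is buried in hash activity.
    while not stop.is_set():
        hashlib.sha512(os.urandom(4096)).digest()

def sensitive_operation() -> bytes:
    # Placeholder for AES-128 encryption of a 16-byte block.
    return bytes(b ^ 0x42 for b in b"sixteen byte blk")

workers = [threading.Thread(target=sha512_noise) for _ in range(4)]
for w in workers:
    w.start()
try:
    ciphertext = sensitive_operation()
finally:
    stop.set()
    for w in workers:
        w.join()
```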
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce times to detect and prevent malware, while improving the consistency of the categories and taxonomies applied to modern malware. Existing work related to malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of the threats faced by users of the operating system. While the available macOS-focused research consistently notes an increase in macOS users, devices and, ultimately, threats, an opportunity exists to understand the real nature of the threats faced by macOS users and to suggest potential avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, as well as with prior automated malware categorisation work: AVClass to categorise, and SSDeep to cluster and report on, the observed data (a clustering sketch follows this record). With the Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition seem to suggest the threat of malware on this increasingly popular platform should be growing and evolving accordingly. Findings suggest that the direction and nature of this growth and evolution may not be entirely as clear as industry reports suggest. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware. This provides a number of avenues for potential future work to compare and contrast with research on other platforms, as well as the identification of key factors that may influence its growth in the future.
- Full Text:
- Date Issued: 2019
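The sketch below illustrates SSDeep-style fuzzy-hash grouping of samples, as used to cluster the detection dataset. It assumes the third-party `ssdeep` Python bindings are installed; the file list and the similarity threshold of 70 are illustrative assumptions, not the thesis's parameters.

```python
# Sketch of SSDeep-based grouping: samples whose fuzzy hashes compare above
# a threshold against a cluster's first member join that cluster. Assumes
# the third-party 'ssdeep' Python bindings; paths and the threshold of 70
# are illustrative assumptions.
import ssdeep

samples = ["a.bin", "b.bin", "c.bin"]  # placeholder sample paths
hashes = {path: ssdeep.hash_from_file(path) for path in samples}

clusters: list[list[str]] = []
for path, h in hashes.items():
    for cluster in clusters:
        if ssdeep.compare(h, hashes[cluster[0]]) >= 70:
            cluster.append(path)
            break
    else:
        clusters.append([path])  # no match: start a new cluster

print(clusters)
```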
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activities that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection by the various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also to distribute malicious payloads using DNS queries and responses. By embedding certain strings in DNS packets, cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls put in place. This research undertaking broadens the field and fills an existing research gap by extending the analysis of the DNS as a payload distribution channel to the detection of the domains that are used to distribute different malicious payloads. Passive DNS data, which replicates the DNS queries observed on name servers, was evaluated and analysed in order to detect malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records (a heuristic sketch follows this record).
- Full Text:
- Date Issued: 2019
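The sketch below shows one common heuristic in the spirit of the thesis's finding: flagging passive-DNS TXT answers that are unusually long or high-entropy, as encoded payloads tend to be. The thresholds and example records are illustrative assumptions, not the thesis's model.

```python
# Sketch of a TXT-record anomaly heuristic: flag passive-DNS TXT answers
# that are unusually long or high-entropy. The length and entropy
# thresholds, and the sample records, are illustrative assumptions.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def suspicious_txt(record: str, max_len: int = 255, min_entropy: float = 4.5) -> bool:
    return len(record) > max_len or shannon_entropy(record) > min_entropy

# Placeholder passive-DNS TXT answers:
records = [
    "v=spf1 include:_spf.example.com ~all",      # ordinary SPF record
    "TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQA" * 12,   # long base64-like blob
]
for r in records:
    print(suspicious_txt(r), r[:40])
```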
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore not addressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method/process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds to circumvent security defences, for example, compromising syslog servers, or going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, and to misdirect attribution by planting false artefacts for forensic analysis and attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). These defences all have the potential to result in the attacker being discovered. Attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers (a honeytoken sketch follows this record). By employing various tactics, defenders are able to increase their chance of detecting attacks and reduce their time to react, even against attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion into the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days. This attack has a striking parallel in the approach used in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources may be able to do in the future.
- Full Text:
- Date Issued: 2019
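The sketch below is a minimal honeytoken in the spirit of the defences the thesis discusses: a bait URL that no legitimate user or process should ever request, so any hit is an alert-worthy event. The port and token path are illustrative assumptions.

```python
# Minimal web-honeytoken sketch: a bait URL that is never linked anywhere,
# so any request for it signals an intruder. The port and token path are
# illustrative assumptions; alerts would normally go to an external
# syslog/alerting channel the attacker cannot disable after compromise.
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN_PATH = "/backup-credentials.txt"  # bait path; never referenced legitimately

class Canary(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == TOKEN_PATH:
            print(f"ALERT: honeytoken fetched by {self.client_address[0]}")
        self.send_response(404)  # look unremarkable either way
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), Canary).serve_forever()
```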
NetwIOC: a framework for the automated generation of network-based IOCs for malware information sharing and defence
- Authors: Rudman, Lauren Lynne
- Date: 2018
- Subjects: Malware (Computer software) , Computer networks Security measures , Computer security , Python (Computer program language)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60639 , vital:27809
- Description: With the substantial number of new malware variants found each day, it is useful to have an efficient way to retrieve Indicators of Compromise (IOCs) from the malware in a format suitable for sharing and detection. In the past, these indicators were manually created after inspection of binary samples and network traffic. The Cuckoo Sandbox is an existing dynamic malware analysis system which meets the requirements for the proposed framework, and it was extended by adding a few custom modules. This research explored a way to automate the generation of detailed network-based IOCs in a popular format which can be used for sharing. This was done through careful filtering and analysis of the PCAP file generated by the sandbox, and placing these values into the correct type of STIX objects using Python (a sketch of this final step follows this record). Through several evaluations, an analysis was conducted of what type of network traffic can be expected for the creation of IOCs, including a brief case study that examined the effect of analysis time on the number of IOCs created. Using the automatically generated IOCs to create defence and detection mechanisms for the network was evaluated and proved successful. A proof-of-concept sharing platform developed for the STIX IOCs is showcased at the end of the research.
- Full Text:
- Date Issued: 2018
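The sketch below illustrates the last step the abstract describes: wrapping network values extracted from a sandbox PCAP into shareable STIX objects. It assumes the `stix2` Python library rather than the thesis's own modules, and the extracted IP addresses are placeholders.

```python
# Sketch: wrap IPs extracted from a sandbox PCAP into STIX Indicators for
# sharing. Assumes the 'stix2' Python library; the IPs are placeholders
# standing in for values filtered out of the Cuckoo-generated PCAP.
from stix2 import Bundle, Indicator

extracted_ips = ["198.51.100.7", "203.0.113.9"]  # e.g. C2 hosts seen in the PCAP

indicators = [
    Indicator(
        name=f"Malware network traffic to {ip}",
        pattern=f"[ipv4-addr:value = '{ip}']",
        pattern_type="stix",
    )
    for ip in extracted_ips
]

bundle = Bundle(objects=indicators)
print(bundle.serialize(pretty=True))  # shareable STIX 2.1 JSON
```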
Towards a collection of cost-effective technologies in support of the NIST cybersecurity framework
- Authors: Shackleton, Bruce Michael Stuart
- Date: 2018
- Subjects: National Institute of Standards and Technology (U.S.) , Computer security , Computer networks Security measures , Small business Information technology Cost effectiveness , Open source software
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/62494 , vital:28199
- Description: The NIST Cybersecurity Framework (CSF) is a specific risk and cybersecurity framework. It provides guidance on controls that can be implemented to help improve an organisation's cybersecurity risk posture. The CSF Functions consist of Identify, Protect, Detect, Respond, and Recover. Like most Information Technology (IT) frameworks, there are elements of people, processes, and technology. The same elements are required to successfully implement the NIST CSF. This research specifically focuses on the technology element. While there are many commercial technologies available to a small to medium sized business, the costs can be prohibitively expensive. Therefore, this research investigates cost-effective technologies and assesses their alignment to the NIST CSF. The assessment was made against the NIST CSF subcategories. Each subcategory was analysed to identify where a technology would likely be required. The framework provides a list of Informative References. These Informative References were used to create high-level technology categories, as well as to identify the technical controls against which the technologies were measured. The technologies tested were either open source or proprietary. All open source technologies tested were free to use, or have a free community edition. Proprietary technologies had to be free to use, or considered generally available to most organisations, such as components contained within Microsoft platforms. The results from the experimentation demonstrated that there are multiple cost-effective technologies that can support the NIST CSF. Once all technologies were tested, the NIST CSF was extended: two new columns were added, namely high-level technology category and tested technology, and these were populated with output from the research (a sketch of this extended mapping follows this record). This extended framework begins an initial collection of cost-effective technologies in support of the NIST CSF.
- Full Text:
- Date Issued: 2018
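The sketch below illustrates the extended-framework idea as data: CSF subcategories mapped to the two added columns. The example rows are illustrative assumptions, not the thesis's actual test results.

```python
# Sketch of the extended framework: NIST CSF subcategories mapped to a
# high-level technology category and a tested technology. The example
# rows are illustrative assumptions, not the thesis's findings.
import csv

rows = [
    ("ID.AM-1", "Asset inventory", "Spiceworks Inventory (free edition)"),
    ("PR.DS-1", "Disk encryption", "Microsoft BitLocker"),
    ("DE.CM-1", "Network IDS", "Snort (open source)"),
]

with open("csf_extended.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Subcategory", "High-level technology category", "Tested technology"])
    writer.writerows(rows)
```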
Amber: a zero-interaction honeypot with distributed intelligence
- Authors: Schoeman, Adam
- Date: 2015
- Subjects: Security systems -- Security measures , Computer viruses , Intrusion detection systems (Computer security) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4716 , http://hdl.handle.net/10962/d1017938
- Description: For the greater part, security controls are based on the principle of Decision through Detection (DtD). The exception to this is a honeypot, which analyses interactions between a third party and itself while occupying a piece of unused information space. As honeypots are not located on productive information resources, any interaction with them can be assumed to be non-productive. This allows the honeypot to make decisions based simply on the presence of data, rather than on the behaviour of the data. But due to limited resources in human capital, honeypots' uptake in the South African market has been underwhelming. Amber attempts to change this by offering a zero-interaction security system, which uses the honeypot approach of Decision through Presence (DtP) to generate a blacklist of third parties that can be passed on to a network enforcer (a sketch of this decision rule follows this record). Empirical testing has proved the usefulness of this alternative, low-cost approach to defending networks. The functionality of the system was also extended by installing nodes in different geographical locations and streaming their detections into the central Amber hive.
- Full Text:
- Date Issued: 2015
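The sketch below illustrates the DtP decision rule in its simplest form: any connection to an unused address/port is by definition non-productive, so the source goes straight onto a blacklist for a network enforcer. The port is a placeholder, and streaming to a central hive is reduced to a print statement.

```python
# Sketch of Decision through Presence (DtP): listen on unused information
# space, exchange nothing with the client, and blacklist any source that
# connects. The port is a placeholder; a real node would stream hits to
# the central hive and on to a network enforcer.
import socket

blacklist: set[str] = set()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # unused space: nothing legitimate lives here
    srv.listen()
    while True:
        conn, (ip, _port) = srv.accept()
        conn.close()  # zero interaction: no banner, no response
        if ip not in blacklist:
            blacklist.add(ip)
            print(f"DtP hit: blacklisting {ip}")
```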
An investigation of ISO/IEC 27001 adoption in South Africa
- Authors: Coetzer, Christo
- Date: 2015
- Subjects: ISO 27001 Standard , Information technology -- Security measures , Computer security , Data protection
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4720 , http://hdl.handle.net/10962/d1018669
- Description: The research objective of this study is to investigate the low adoption of the ISO/IEC 27001 standard in South African organisations. This study does not differentiate between the ISO/IEC 27001:2005 and ISO/IEC 27001:2013 versions, as the focus is on adoption of the ISO/IEC 27001 standard. A survey-based research design was selected as the data collection method. The research instruments used in this study include a web-based questionnaire and in-person interviews with the participants. Based on the findings of this research, the organisations that participated in this study have an understanding of the ISO/IEC 27001 standard; however, fewer than a quarter of these have fully adopted it. Furthermore, the main business objectives of organisations that have adopted the ISO/IEC 27001 standard were to ensure legal and regulatory compliance, and to fulfil client requirements. An Information Security Management System (ISMS) management guide based on the ISO/IEC 27001 Plan-Do-Check-Act model is developed to help organisations interested in the standard move towards ISO/IEC 27001 compliance.
- Full Text:
- Date Issued: 2015
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer usable by indexing methods. The research also evaluates the compression performance penalties encountered by using this storage system, including a decreased compression ratio as well as increased compression and decompression times. (A sketch of the chunked storage idea follows this record.)
- Full Text:
- Date Issued: 2015
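The abstract describes converting logs into events, compressing them, and keeping per-entry pointers for an indexer. The following is a hedged sketch of one way such pseudo-random access can work, not the thesis's actual file format: events are grouped into fixed-size chunks, each chunk is compressed independently, and an index maps an entry identifier to (chunk offset, slot in chunk), so retrieving one entry inflates only one chunk.

    # Sketch of a pseudo-random access compressed log store. Chunk size,
    # index layout and file format are illustrative assumptions.
    import zlib

    CHUNK = 64  # events per compressed block; smaller = faster random access,
                # worse compression ratio (the trade-off the abstract evaluates)

    def write_archive(lines, path):
        index = []                                   # entry_id -> (offset, slot)
        with open(path, "wb") as out:
            for start in range(0, len(lines), CHUNK):
                batch = lines[start:start + CHUNK]
                comp = zlib.compress("\n".join(batch).encode())
                offset = out.tell()
                out.write(len(comp).to_bytes(4, "big") + comp)
                index.extend((offset, slot) for slot in range(len(batch)))
        return index

    def read_entry(path, index, entry_id):
        offset, slot = index[entry_id]               # the per-entry pointer
        with open(path, "rb") as fh:
            fh.seek(offset)
            size = int.from_bytes(fh.read(4), "big")
            block = zlib.decompress(fh.read(size))   # inflate one chunk only
        return block.decode().split("\n")[slot]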
Towards a framework for building security operation centers
- Authors: Jacobs, Pierre Conrad
- Date: 2015
- Subjects: Security systems industry , Systems engineering , Expert systems (Computer science) , COBIT (Information technology management standard) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4710 , http://hdl.handle.net/10962/d1017932
- Description: In this thesis a framework for Security Operation Centers (SOCs) is proposed. It was developed by utilising Systems Engineering best practices combined with industry-accepted standards and frameworks such as the TM Forum’s eTOM framework, COBIT, ITIL and ISO/IEC 27002:2005. The framework encompasses the design considerations, the operational considerations and the means to measure the effectiveness and efficiency of SOCs. The intent is to provide guidance to consumers on how to compare and measure the capabilities of SOCs provided by disparate service providers, and to give service providers (internal and external) a framework to use when building and improving their offerings. Providing a consistent, measurable and guaranteed service to customers is becoming more important as the focus on holistic management of security increases, which has in turn resulted in a growing number of both internal and managed service provider solutions. While some frameworks exist for designing, building and operating specific security technologies used within SOCs, we did not find any comprehensive framework for designing, building and managing SOCs. Consequently, consumers of SOCs do not enjoy a consistent experience from vendors, and may experience inconsistent services from geographically dispersed offerings provided by the same vendor.
- Full Text:
- Date Issued: 2015
An exploration into the use of webinjects by financial malware
- Authors: Forrester, Jock Ingram
- Date: 2014
- Subjects: Malware (Computer software) -- Analysis , Internet fraud , Computer crimes , Computer security , Electronic commerce
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4697 , http://hdl.handle.net/10962/d1012079 , Malware (Computer software) -- Analysis , Internet fraud , Computer crimes , Computer security , Electronic commerce
- Description: As the number of computing devices connected to the Internet increases and the Internet itself becomes more pervasive, so does the opportunity for criminals to use these devices in cybercrimes. Supporting the increase in cybercrime is the growth and maturity of the digital underground economy, with strong links to its more visible physical counterpart. The digital underground economy provides software and related services to equip the entrepreneurial cybercriminal with the appropriate skills and required tools. Financial malware, particularly the capability for injection of code into web browsers, has become one of the more profitable cybercrime tool sets due to its versatility and adaptability when targeting clients of institutions with an online presence, both inside and outside of the financial industry. Numerous families of financial malware are available for use, perhaps the most prevalent being Zeus and SpyEye. Criminals create (or purchase) and grow botnets of computing devices infected with financial malware that has been configured to attack clients of certain websites. The research data set contains 483 configuration files, holding approximately 40 000 webinjects, that were captured from various financial malware botnets between October 2010 and June 2012. These were processed and analysed to determine the methods used by criminals to defraud either the user of the computing device or the institution of which the user is a client. The configuration files contain the injection code that is executed in the web browser to create a surrogate interface, which is then used by the criminal to interact with the user and the institution in order to commit fraud. Demographics of the captured data set are presented, and case studies are documented based on the various methods used to defraud and bypass financial security controls across multiple industries. The case studies cover techniques used in social engineering, bypassing security controls and automated transfers. (A sketch of the webinject configuration layout follows this record.)
- Full Text:
- Date Issued: 2014
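The abstract analyses webinject configuration files without reproducing them. As background, the Zeus webinjects.txt layout is publicly documented: each set_url line names a target URL pattern, followed by data_before / data_inject / data_after sections, each terminated by data_end; the data_inject body is spliced into the page between the before/after anchors. Below is a simplified parser sketch for that layout (ignoring URL flags and edge cases), not the tooling used in the thesis.

    # Hedged parser sketch for the publicly documented Zeus-style
    # webinjects.txt format; simplified, with no flag handling.
    def parse_webinjects(text):
        injects, current, section = [], None, None
        for raw in text.splitlines():
            line = raw.strip()
            if line.startswith("set_url"):
                current = {"url": line.split()[1], "data_before": "",
                           "data_inject": "", "data_after": ""}
                injects.append(current)
            elif line in ("data_before", "data_inject", "data_after"):
                section = line                  # open a new section
            elif line == "data_end":
                section = None                  # close the active section
            elif current is not None and section is not None:
                current[section] += raw + "\n"  # body of the active section
        return injects

Applied to a captured configuration, the resulting records support exactly the kind of demographic counting and case-study extraction the abstract describes.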
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094 , Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and the increasingly extensive use of mobile devices do not come without a degree of risk, hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and the deficiencies of common theories is proposed. Two qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two qualitative studies, uncovered inconsistencies between the technology implementations and the business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative studies, which bridges the gap between technology implementations and business requirements. Future work that could be conducted in light of the findings and comments from this research is also considered.
- Full Text:
- Date Issued: 2014
Deploying DNSSEC in islands of security
- Authors: Murisa, Wesley Vengayi
- Date: 2013 , 2013-03-31
- Subjects: Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4577 , http://hdl.handle.net/10962/d1003053 , Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Description: The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks, such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. When DNS was designed, security was not incorporated into its design. The DNS Security Extensions (DNSSEC) provide security to the name resolution process by using public key cryptosystems. Although DNSSEC is backward compatible with unsecured zones, it only offers security to clients when communicating with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC, which inherently makes it difficult for their sub-domains to implement the security extensions. This study analyses mechanisms that can be used by domains in islands of security to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar is not able to support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers; however, they do not provide a way of showing the user whether a domain name has been correctly validated. Based on the results of the tests conducted, it is recommended that local validators be used together with browser validators for visibility and improved security. On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organisations in islands of security, such as most countries in Africa, where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web-based services. (A sketch of a client-side validation check follows this record.)
- Full Text:
- Date Issued: 2013
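The abstract recommends local validating resolvers on the client side. As a hedged illustration (not the thesis's test harness), the sketch below uses the dnspython library to send a query with the DNSSEC-OK bit set and checks whether the resolver reports authenticated data via the AD flag; the resolver address and queried name are illustrative.

    # Sketch: ask a validating resolver for DNSSEC-authenticated data and
    # check the AD (authenticated data) flag in the reply. Requires dnspython.
    import dns.flags
    import dns.message
    import dns.query

    def is_validated(name: str, resolver: str = "127.0.0.1") -> bool:
        query = dns.message.make_query(name, "A", want_dnssec=True)  # sets DO
        reply = dns.query.udp(query, resolver, timeout=3)
        return bool(reply.flags & dns.flags.AD)  # True only if validated

    if __name__ == "__main__":
        print(is_validated("signed-zone.example"))  # hypothetical signed zone

A local validator answering on 127.0.0.1 protects the stub-to-resolver channel the abstract identifies as untrustworthy, while a browser validator would surface the result to the user.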
Log analysis aided by latent semantic mapping
- Authors: Buys, Stephanus
- Date: 2013 , 2013-04-14
- Subjects: Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4575 , http://hdl.handle.net/10962/d1002963 , Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Description: In an age of zero-day exploits and increased online attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response and forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the increased volumes at which events are generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in their formats or contents. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events in order for security analysis and automation to proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be an effective method of, among other things, enabling word clustering, document clustering, topic clustering and semantic inference. This research is an investigation into the practical application of LSM in the discipline of Security Event Analysis, showing the value of using LSM to assist practitioners in identifying types of events, classifying events as belonging to certain sources or technologies, and disambiguating different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that improved the efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption and use of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language processing or machine-learning techniques. (A generic latent-semantic clustering sketch follows this record.)
- Full Text:
- Date Issued: 2013
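Latent Semantic Mapping is closely related to classic Latent Semantic Analysis. The following is a generic analogue of the pipeline the abstract describes, not the thesis's specific adaptations: vectorise raw log lines, project them into a low-rank latent space via truncated SVD, then cluster so that events of the same type group together. It uses scikit-learn, and the sample log lines are invented.

    # Generic latent-semantic clustering sketch for log events (LSA analogue).
    from sklearn.cluster import KMeans
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    logs = [
        "sshd[811]: Failed password for root from 10.0.0.5 port 4410 ssh2",
        "sshd[812]: Failed password for admin from 10.0.0.7 port 4413 ssh2",
        "kernel: eth0 link up, 1000 Mbps full duplex",
    ]

    vec = TfidfVectorizer(token_pattern=r"[A-Za-z]+")  # crude tokeniser: words only
    X = vec.fit_transform(logs)
    latent = TruncatedSVD(n_components=2).fit_transform(X)  # the latent mapping
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(latent)
    print(labels)  # the two sshd events should share a cluster

The abstract's adaptations concern exactly the weak point above: log lines are not natural language, so the naive word tokeniser is where a practitioner would intervene.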
An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment
- Authors: Miles, Shaun Graeme
- Date: 2012-06-20
- Subjects: Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4656 , http://hdl.handle.net/10962/d1006653 , Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Description: This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research, the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling its subsequent usage. To address this issue, a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through observation of the environment, is a concern for users and systems. By observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, an attacker can infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims for use in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the number of factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. The framework is developed from a utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. It incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model. (A generic second-factor verification sketch follows this record.)
- Full Text:
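The framework in the abstract uses mobile phones over GSM as an out-of-band second path; as a self-contained stand-in for a phone-based second factor, the sketch below verifies a time-based one-time password per RFC 6238 using only the Python standard library. This is a generic illustration, not the thesis's mechanism.

    # Minimal RFC 6238 TOTP verifier: a generic phone-based second factor.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret_b32: str, submitted: str) -> bool:
        return hmac.compare_digest(totp(secret_b32), submitted)

Requiring such a code in addition to a password means an attacker must compromise both the password channel and the token, which is the non-linear increase in work the abstract refers to.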
Limiting vulnerability exposure through effective patch management: threat mitigation through vulnerability remediation
- Authors: White, Dominic Stjohn Dolin
- Date: 2007 , 2007-02-08
- Subjects: Computer networks -- Security measures , Computer viruses , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4629 , http://hdl.handle.net/10962/d1006510 , Computer networks -- Security measures , Computer viruses , Computer security
- Description: This document aims to provide a complete discussion of vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement its own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime, and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to most effectively and timeously mitigate vulnerabilities. The final chapters discuss the technical aspect of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document concludes that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis. (A sketch of one progress metric follows this record.)
- Full Text:
- Date Issued: 2007
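Among the policy steps, the abstract lists generating metrics to measure progress. One such metric, sketched here under hypothetical record fields (not taken from the thesis), is the exposure window: the time between a patch's release and its deployment on each host.

    # Sketch of a patch-management progress metric: mean exposure window
    # between patch release and deployment. Field names are hypothetical.
    from datetime import date
    from statistics import mean

    patch_log = [
        {"host": "web01", "released": date(2007, 1, 9), "applied": date(2007, 1, 16)},
        {"host": "db01",  "released": date(2007, 1, 9), "applied": date(2007, 2, 2)},
    ]

    windows = [(rec["applied"] - rec["released"]).days for rec in patch_log]
    print(f"mean exposure window: {mean(windows):.1f} days")  # lower is better

Tracked over time, a shrinking mean window is direct evidence that the patch management policy is reducing the period during which hosts remain vulnerable.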