Towards a Sandbox for the Deobfuscation and Dissection of PHP Malware
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429700 , vital:72633 , 10.1109/ISSA.2014.6950504
- Description: The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally targeted. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.
- Full Text:
- Date Issued: 2014
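The abstract above describes semi-automatic syntactic deobfuscation of PHP web shells. As a rough illustration of one common syntactic layer only (this is not the paper's sandbox; the layering behaviour and example payload are assumptions), the following Python sketch repeatedly unwraps `eval(base64_decode('...'))` constructs often found in obfuscated shells.

```python
import base64
import re

# Matches one common web-shell obfuscation layer: eval(base64_decode('...'));
LAYER = re.compile(r"eval\s*\(\s*base64_decode\s*\(\s*['\"]([A-Za-z0-9+/=]+)['\"]\s*\)\s*\)\s*;?")

def strip_layers(php_source: str, max_layers: int = 10) -> str:
    """Repeatedly decode eval(base64_decode(...)) layers until none remain."""
    for _ in range(max_layers):
        match = LAYER.search(php_source)
        if not match:
            break
        decoded = base64.b64decode(match.group(1)).decode("utf-8", errors="replace")
        # Replace the obfuscated call with the decoded payload for further inspection.
        php_source = php_source[:match.start()] + decoded + php_source[match.end():]
    return php_source

if __name__ == "__main__":
    inner = "echo 'shell';"
    outer = "<?php eval(base64_decode('%s')); ?>" % base64.b64encode(inner.encode()).decode()
    print(strip_layers(outer))   # -> <?php echo 'shell'; ?>
```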
A baseline study of potentially malicious activity across five network telescopes
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429714 , vital:72634 , https://ieeexplore.ieee.org/abstract/document/6568378
- Description: This paper explores the Internet Background Radiation (IBR) observed across five distinct network telescopes over a 15-month period. These network telescopes, each consisting of a /24 netblock, are deployed in IP space administered by TENET, the tertiary education network in South Africa, covering three numerically distant /8 network blocks. The differences and similarities in the observed network traffic are explored. Two anecdotal case studies are presented relating to the MS08-067 and MS12-020 vulnerabilities in Microsoft Windows platforms. The first of these is related to the Conficker worm outbreak in 2008, and traffic targeting 445/tcp remains one of the top constituents of IBR as observed on the telescopes. The case of MS12-020 is of interest, as a long period of scanning activity targeting 3389/tcp, used by the Microsoft RDP service, was observed, with a significant drop in activity following the release of the security advisory and patch. Other areas of interest are highlighted, particularly where correlation in scanning activity was observed across the sensors. The paper concludes with some discussion on the application of network telescopes as part of a cyber-defence solution.
- Full Text:
- Date Issued: 2013
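The study above aggregates Internet Background Radiation per sensor and per targeted service (for example 445/tcp and 3389/tcp). A minimal sketch of that style of aggregation, assuming packet metadata has already been extracted from the telescope captures into (sensor, protocol, destination port) tuples, might look as follows; the sensor names and sample values are invented for illustration.

```python
from collections import Counter

def top_targets(observations, n=5):
    """Count (protocol, port) observations per sensor and report the busiest targets."""
    per_sensor = {}
    for sensor, proto, port in observations:
        per_sensor.setdefault(sensor, Counter())[(proto, port)] += 1
    return {sensor: counts.most_common(n) for sensor, counts in per_sensor.items()}

# Hypothetical sample: two /24 telescopes seeing Conficker-era 445/tcp and RDP 3389/tcp scans.
sample = ([("telescope-A", "tcp", 445)] * 7 + [("telescope-A", "tcp", 3389)] * 2 +
          [("telescope-B", "tcp", 445)] * 4 + [("telescope-B", "udp", 1434)])
print(top_targets(sample))
```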
A high-level architecture for efficient packet trace analysis on gpu co-processors
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429572 , vital:72623 , 10.1109/ISSA.2013.6641052
- Description: This paper proposes a high-level architecture to support efficient, massively parallel packet classification, filtering and analysis using commodity Graphics Processing Unit (GPU) hardware. The proposed architecture aims to provide a flexible and efficient parallel packet processing and analysis framework, supporting complex programmable filtering, data mining operations, statistical analysis functions and traffic visualisation, with minimal CPU overhead. In particular, this framework aims to provide a robust set of high-speed analysis functions, in order to dramatically reduce the time required to process and analyse extremely large network traces. This architecture derives from initial research, which has shown GPU co-processors to be effective in accelerating packet classification up to terabit speeds with minimal CPU overhead, far exceeding the bandwidth capacity between standard long-term storage and the GPU device. This paper provides a high-level overview of the proposed architecture and its primary components, motivated by the results of prior research in the field.
- Full Text:
- Date Issued: 2013
A kernel-driven framework for high performance internet routing simulation
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429585 , vital:72624 , 10.1109/ISSA.2013.6641048
- Description: The ability to simulate packets traversing an Internet path is an integral part of providing realistic simulations for network training and cyber defence exercises. This paper builds on previous work and considers an in-kernel approach to solving the routing simulation problem. The in-kernel approach is anticipated to allow the framework to achieve throughput rates of 1 GB/s or higher using commodity hardware. Processes that run outside the context of the kernel of most operating systems require context switching to access hardware and kernel modules. This leads to considerable delays in processes, such as network simulators, that frequently access hardware, for example for hard disk access and network packet handling. To mitigate this problem, as experienced with earlier implementations, this research looks towards implementing a kernel module to handle network routing and simulation within a UNIX-based system. This would remove delays incurred from context switching and allow for direct access to the hardware components of the host.
- Full Text:
- Date Issued: 2013
A source analysis of the conficker outbreak from a network telescope.
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429742 , vital:72636 , 10.23919/SAIEE.2013.8531865
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2013
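The source analysis above groups roughly 4 million source hosts by /8, /16 and /24 address blocks. A minimal sketch of such grouping, assuming source addresses are already available as strings (this is illustrative only, not the paper's tooling, and the sample addresses are invented), is shown below.

```python
from collections import Counter
import ipaddress

def aggregate_sources(src_addresses, prefix_len=24, top_n=5):
    """Count observed packets per aggregate netblock of the given prefix length."""
    counts = Counter()
    for addr in src_addresses:
        network = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
        counts[str(network)] += 1
    return counts.most_common(top_n)

# Hypothetical observed sources; real input would be millions of packet source addresses.
sources = ["198.51.100.7", "198.51.100.19", "198.51.100.200", "203.0.113.5", "192.0.2.66"]
for prefix in (8, 16, 24):
    print(prefix, aggregate_sources(sources, prefix))
```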
Automated classification of computer network attacks
- Authors: van Heerden, Renier , Leenen, Louise , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429622 , vital:72627 , 10.1109/ICASTech.2013.6707510
- Description: In this paper we demonstrate how an automated reasoner, HermiT, is used to classify instances of computer network based attacks in conjunction with a network attack ontology. The ontology describes different types of network attacks through classes and inter-class relationships and has previously been implemented in the Protege ontology editor. Two significant recent instances of network based attacks are presented as individuals in the ontology and correctly classified by the automated reasoner according to the relevant types of attack scenarios depicted in the ontology. The two network attack instances are the Distributed Denial of Service attack on SpamHaus in 2013 and the theft of 42 million Rand ($6.7 million) from South African Postbank in 2012.
- Full Text:
- Date Issued: 2013
Classification of security operation centers
- Authors: Jacobs, Pierre , Arnab, Alapan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429635 , vital:72628 , 10.1109/ISSA.2013.6641054
- Description: Security Operation Centers (SOCs) are a necessary service for organisations that want to address compliance and threat management. While frameworks exist that address the technology aspects of these services, a holistic framework addressing processes, staffing and technology does not currently exist. Additionally, it would be useful for organisations and constituents considering building, buying or selling these services to measure the effectiveness and maturity of the provided services. In this paper, we propose a classification and rating scheme for SOC services, evaluating both the capabilities and the maturity of the services offered.
- Full Text:
- Date Issued: 2013
Classification of security operation centers
- Authors: Jacobs, Pierre , Arnab, Alapan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429785 , vital:72639 , 10.1109/ISSA.2013.6641054
- Description: Security Operation Centers (SOCs) are a necessary service for organisations that want to address compliance and threat management. While frameworks exist that address the technology aspects of these services, a holistic framework addressing processes, staffing and technology does not currently exist. Additionally, it would be useful for organisations and constituents considering building, buying or selling these services to measure the effectiveness and maturity of the provided services. In this paper, we propose a classification and rating scheme for SOC services, evaluating both the capabilities and the maturity of the services offered.
- Full Text:
- Date Issued: 2013
Deep Routing Simulation
- Authors: Irwin, Barry V W , Herbert, Alan
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430353 , vital:72685 , https://www.academic-bookshop.com/ourshop/prod_2546879-ICIW-2013-8th-International-Conference-on-Information-Warfare-and-Security.html
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2013
Developing a virtualised testbed environment in preparation for testing of network based attacks
- Authors: Van Heerden, Renier , Pieterse, Heloise , Burke, Ivan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429648 , vital:72629 , 10.1109/ICASTech.2013.6707509
- Description: Computer network attacks are difficult to simulate due to the damage they may cause to live networks and the complexity required to simulate a useful network. We constructed a virtualised network within a vSphere ESXi environment which is able to simulate thirty workstations, ten servers, three distinct network segments and the accompanying network traffic. The vSphere environment provided added benefits, such as the ability to pause, restart and snapshot virtual computers. These abilities enabled the authors to reset the simulation environment before each test and mitigated against the damage that an attack potentially inflicts on the test network. Without simulated network traffic, the virtualised network was too sterile. This resulted in any network event being a simple task to detect, making network traffic simulation a requirement for an event detection test bed. Five main kinds of traffic were simulated: web browsing, file transfer, e-mail, version control and intranet file traffic. The simulated traffic volumes were pseudo-randomised to represent differing temporal patterns. By building a virtualised network with simulated traffic we were able to test IDSs and other network attack detection sensors in a much more realistic environment before moving them to a live network. The goal of this paper is to present a virtualised testbed environment in which network attacks can safely be tested.
- Full Text:
- Date Issued: 2013
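The testbed described above pseudo-randomises the volumes of five simulated traffic classes over time. A minimal sketch of that idea follows; the class names mirror the abstract, but the diurnal weighting, volumes and seeding scheme are assumptions chosen purely for illustration.

```python
import random

# Five traffic classes from the testbed description; weighting and volumes are assumptions.
TRAFFIC_CLASSES = ["web_browsing", "file_transfer", "email", "version_control", "intranet_file"]

def hourly_volumes(hour: int, seed: int = 0) -> dict:
    """Return pseudo-random per-class message counts for one simulated hour."""
    rng = random.Random(seed * 100 + hour)        # deterministic per (seed, hour)
    diurnal = 1.0 if 8 <= hour < 18 else 0.2      # assumed office-hours weighting
    return {cls: max(0, int(rng.gauss(100, 25) * diurnal)) for cls in TRAFFIC_CLASSES}

# Example: a quiet night-time hour versus two busy office hours.
for h in (3, 10, 14):
    print(h, hourly_volumes(h))
```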
Real-time distributed malicious traffic monitoring for honeypots and network telescopes
- Authors: Hunter, Samuel O , Irwin, Barry V W , Stalmans, Etienne
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429660 , vital:72630 , 10.1109/ISSA.2013.6641050
- Description: Network telescopes and honeypots have been used with great success to record malicious network traffic for analysis; however, this is often done off-line, well after the traffic was observed. This has left us with only a cursory understanding of malicious hosts and no knowledge of the software they run, their uptime or other malicious activity they may have participated in. This work covers a messaging framework (rDSN) that was developed to allow for the real-time analysis of malicious traffic. This data was captured from multiple, distributed honeypots and network telescopes. Data was collected over a period of two months from these sensors. Using this data, new techniques for malicious host analysis and re-identification in dynamic IP address space were explored. An Automated Reconnaissance (AR) Framework was developed to aid the process of data collection; this framework was responsible for gathering information from malicious hosts through both passive and active fingerprinting techniques. From the analysis of this data, correlations between malicious hosts were identified based on characteristics such as operating system, targeted service, location and services running on the malicious hosts. An initial investigation into Latency Based Multilateration (LBM), a novel technique to assist in host re-identification, was conducted and proved successful as a supporting metric for host re-identification.
- Full Text:
- Date Issued: 2013
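The abstract above mentions Latency Based Multilateration (LBM) as a supporting metric for re-identifying hosts in dynamic IP address space. A minimal sketch of the underlying idea, comparing round-trip-time vectors measured from several distributed sensors, is given below; the distance metric, threshold and sample RTTs are assumptions for illustration, not values from the paper.

```python
import math

def latency_distance(profile_a, profile_b):
    """Euclidean distance between RTT vectors (ms) measured from the same set of sensors."""
    return math.sqrt(sum((profile_a[s] - profile_b[s]) ** 2 for s in profile_a))

def same_host(profile_a, profile_b, threshold_ms=15.0):
    """Treat two observations as the same host if their latency profiles are close enough."""
    return latency_distance(profile_a, profile_b) <= threshold_ms

# Hypothetical RTTs (ms) from three sensors to a host seen under two different IP addresses.
seen_monday = {"sensor-1": 182.0, "sensor-2": 95.0, "sensor-3": 240.0}
seen_friday = {"sensor-1": 178.5, "sensor-2": 99.0, "sensor-3": 236.0}
print(same_host(seen_monday, seen_friday))   # True: likely the same host re-identified
```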
Towards a GPU accelerated virtual machine for massively parallel packet classification and filtering
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430295 , vital:72681 , https://doi.org/10.1145/2513456.2513504
- Description: This paper considers the application of GPU co-processors to accelerate the analysis of packet data, particularly within extremely large packet traces spanning months or years of traffic. Discussion focuses on the construction, performance and limitations of the experimental GPF (GPU Packet Filter), which employs a prototype massively-parallel protocol-independent multi-match algorithm to rapidly compare packets against multiple arbitrary filters. The paper concludes with a consideration of mechanisms to expand the flexibility and power of the GPF algorithm to construct a fully programmable GPU packet classification virtual machine, which can perform massively parallel classification, data-mining and data-transformation to explore and analyse packet traces. This virtual machine is a component of a larger framework of capture analysis tools which together provide capture indexing, manipulation, filtering and visualisation functions.
- Full Text:
- Date Issued: 2013
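The GPF described above applies a protocol-independent multi-match algorithm: every packet is compared against many byte-level filter predicates at once, with the real implementation running massively in parallel on the GPU. The serial Python sketch below illustrates only the multi-match idea; the filter definitions and fabricated packet are assumptions, not the GPF's actual filter language.

```python
# Each filter is a set of (byte offset, expected value) predicates over the raw packet bytes.
FILTERS = {
    "ipv4": [(12, 0x08), (13, 0x00)],           # EtherType 0x0800
    "ipv6": [(12, 0x86), (13, 0xDD)],           # EtherType 0x86DD
    "tcp":  [(12, 0x08), (13, 0x00), (23, 6)],  # IPv4 protocol field (offset 14 + 9) = TCP
}

def classify(packet: bytes) -> list:
    """Return the names of all filters whose predicates the packet satisfies (multi-match)."""
    matches = []
    for name, predicates in FILTERS.items():
        if all(len(packet) > off and packet[off] == value for off, value in predicates):
            matches.append(name)
    return matches

# Minimal fabricated Ethernet + IPv4 + TCP header bytes, just enough for the offsets above.
pkt = bytearray(40)
pkt[12], pkt[13] = 0x08, 0x00   # EtherType IPv4
pkt[23] = 6                     # IPv4 protocol = TCP
print(classify(bytes(pkt)))     # ['ipv4', 'tcp']
```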
Visualization of a data leak
- Authors: Swart, Ignus , Grobler, Marthie , Irwin, Barry V W
- Date: 2013
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428584 , vital:72522 , 10.1109/ISSA.2013.6641046
- Description: The potential impact that data leakage can have on a country, both on a national level as well as on an individual level, can be wide reaching and potentially catastrophic. In January 2013, several South African companies became the target of a hack attack, resulting in the breach of security measures and the leaking of a claimed 700000 records. The affected companies are spread across a number of domains, giving the leak a very wide impact area. The aim of this paper is to analyze the data released from the South African breach and to visualize the extent of the loss by the companies affected. The value of this work lies in its connection to and interpretation of related South African legislation. The data extracted during the analysis is primarily personally identifiable information, such as defined by the Electronic Communications and Transactions Act of 2002 and the Protection of Personal Information Bill of 2009.
- Full Text:
- Date Issued: 2013
A computer network attack taxonomy and ontology
- Authors: Van Heerden, Renier P , Irwin, Barry V W , Burke, Ivan D , Leenen, Louise
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430064 , vital:72663 , DOI: 10.4018/ijcwt.2012070102
- Description: Computer network attacks differ in the motivation of the entity behind the attack, the execution and the end result. The diversity of attacks has the consequence that no standard classification exists. The benefit of automated classification of attacks is that an attack could be mitigated accordingly. The authors extend a previous, initial taxonomy of computer network attacks, which forms the basis of a proposed network attack ontology in this paper. The objective of this ontology is to automate the classification of a network attack during its early stages. Most published taxonomies present an attack from either the attacker's or defender's point of view. The authors' taxonomy presents both these points of view. The framework for an ontology was developed using a core class, the "Attack Scenario", which can be used to characterize and classify computer network attacks.
- Full Text:
- Date Issued: 2012
A Framework for the Static Analysis of Malware focusing on Signal Processing Techniques
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427914 , vital:72473 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622833_A_Framework_for_the_Static_Analysis_of_Mal-ware_focusing_on_Signal_Processing_Techniques/links/5b9a1396a6fdcc59bf8dfc87/A-Framework-for-the-Static-Analysis-of-Malware-focusing-on-Signal-Processing-Techniques.pdf
- Description: The information gathered through conventional static analysis of malicious binaries has become increasingly limited. This is due to the rate at which new malware is being created as well as the increasingly complex methods employed to obfuscate these binaries. This paper discusses the development of a framework to analyse malware using signal processing techniques, the initial iteration of which focuses on common audio processing techniques such as Fourier transforms. The aim of this research is to identify characteristics of malware and the encryption methods used to obfuscate malware. This is achieved through the analysis of their binary structure, potentially providing an additional metric for autonomously fingerprinting malware.
- Full Text:
- Date Issued: 2012
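The framework above applies audio-style signal processing, such as Fourier transforms, to the raw bytes of a binary. A minimal sketch of computing such a spectral fingerprint with NumPy follows; the binning scheme, fingerprint length and sample inputs are assumptions, not the framework's actual design.

```python
import numpy as np

def byte_spectrum(data: bytes, bins: int = 32) -> np.ndarray:
    """Treat the binary's bytes as a signal and summarise its magnitude spectrum into bins."""
    signal = np.frombuffer(data, dtype=np.uint8).astype(np.float64)
    signal -= signal.mean()                      # remove the DC component
    magnitude = np.abs(np.fft.rfft(signal))
    # Average the spectrum into a fixed number of bins to get a compact fingerprint.
    chunks = np.array_split(magnitude, bins)
    return np.array([chunk.mean() for chunk in chunks])

# Hypothetical comparison: a repetitive (padded) sample versus a high-entropy (packed) one.
plain = byte_spectrum(b"\x90" * 4096 + b"ABCD" * 256)
packed = byte_spectrum(np.random.default_rng(1).integers(0, 256, 8192, dtype=np.uint8).tobytes())
print(plain[:4], packed[:4])
```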
A network telescope perspective of the Conficker outbreak
- Authors: Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429728 , vital:72635 , 10.1109/ISSA.2012.6320455
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2012
An Analysis and Implementation of Methods for High Speed Lexical Classification of Malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429757 , vital:72637 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/58_ResearchInProgress.pdf
- Description: Several authors have put forward methods of using Artificial Neural Networks (ANN) to classify URLs as malicious or benign by using lexical features of those URLs. These methods have been compared to other methods of classification, such as blacklisting and spam filtering, and have been found to be comparably accurate; even early attempts proved to be highly accurate. Fully featured classifiers use lexical features as well as lookups to classify URLs, including (but not limited to) blacklists, spam filters and reputation services. These classifiers are based on the Online Perceptron Model, using a single neuron as a linear combiner, and use lexical features that rely on the presence (or lack thereof) of words belonging to a bag-of-words. Several obfuscation-resistant features are also used to increase the positive classification rate of these perceptrons. Examples of these include URL length, number of directory traversals and length of arguments passed to the file within the URL. In this paper we describe how we implement the online perceptron model and the methods that we used to try to increase the accuracy of this model through the use of hidden layers and training cost validation. We discuss our results in relation to those of other papers, as well as other analysis performed on the training data and the neural networks themselves, to best understand why they are so effective. Also described is the proposed model for developing these neural networks, how to implement them in the real world through the use of browser extensions, proxy plugins and spam filters for mail servers, and our current implementation. Finally, work that is still in progress is described. This work includes other methods of increasing accuracy through the use of modern training techniques and testing in a real-world environment.
- Full Text:
- Date Issued: 2012
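The work above classifies URLs with an online perceptron over lexical features (URL length, directory traversals, argument length, bag-of-words tokens). The sketch below illustrates that general model; the feature set, learning rate, bag-of-words and training data are assumptions for illustration, not those used in the paper.

```python
from urllib.parse import urlparse

SUSPICIOUS_WORDS = ["login", "verify", "update", "secure", "account"]   # toy bag-of-words

def features(url: str):
    parsed = urlparse(url)
    return [
        1.0,                               # bias term
        len(url) / 100.0,                  # URL length
        parsed.path.count("/") / 10.0,     # number of directory traversals
        len(parsed.query) / 100.0,         # length of arguments
        *[1.0 if w in url.lower() else 0.0 for w in SUSPICIOUS_WORDS],
    ]

class OnlinePerceptron:
    """Single neuron as a linear combiner, updated one labelled URL at a time."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def update(self, x, label):            # label: 1 = malicious, 0 = benign
        error = label - self.predict(x)
        self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]

# Toy training stream (URLs and labels are fabricated for demonstration only).
stream = [("http://example.com/news/today", 0),
          ("http://198.51.100.9/secure/login/verify/account/update.php?id=1234567890", 1)]
model = OnlinePerceptron(n_features=len(features(stream[0][0])))
for url, label in stream * 20:
    model.update(features(url), label)
print(model.predict(features("http://203.0.113.4/verify/account/login.php")))  # likely 1
```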
An Exploratory Framework for Extrusion Detection
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428027 , vital:72481 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622736_An_Exploratory_Framework_for_Extrusion_Detection/links/5b9a12ba299bf14ad4d6a3d7/An-Exploratory-Framework-for-Extrusion-Detection.pdf
- Description: Modern network architecture allows multiple connectivity options, increasing the number of possible attack vectors. With the number of internet enabled devices constantly increasing, along with employees using these devices to access internal corporate networks, the attack surface has become too large to monitor from a single end-point. Traditional security measures have focused on securing a small number of network endpoints, by monitoring inbound connections, and are thus blind to attack vectors such as mobile internet connections and removable devices. Once an attacker has gained access to a network they are able to operate undetected on the internal network and exfiltrate data without hindrance. This paper proposes a framework for extrusion detection, where internal network traffic and outbound connections are monitored to detect malicious activity. The proposed framework has a tiered architecture consisting of prevention, detection, reaction and reporting. Each tier of the framework feeds into the subsequent tier, with reporting providing a feedback mechanism to improve each tier based on the outcome of previous incidents.
- Full Text:
- Date Issued: 2012
Building a Graphical Fuzzing Framework
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429772 , vital:72638 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/59_ResearchInProgress.pdf
- Description: Fuzz testing is a robustness testing technique that sends malformed data to an application’s input. This is to test an application’s behaviour when presented with input beyond its specification. The main difference between traditional testing techniques and fuzz testing is that in most traditional techniques an application is tested according to a specification and rated on how well the application conforms to that specification. Fuzz testing tests beyond the scope of a specification by intelligently generating values that may be interpreted by an application in an unintended manner. The use of fuzz testing has been more prevalent in academic and security communities despite showing success in production environments. To measure the effectiveness of fuzz testing, an experiment was conducted where several publicly available applications were fuzzed. In some instances, fuzz testing was able to force an application into an invalid state, and it was concluded that fuzz testing is a relevant testing technique that could assist in developing more robust applications. This success prompted a further investigation into fuzz testing in order to compile a list of requirements that make an effective fuzzer. The aforementioned investigation assisted in the design of a fuzz testing framework, the goal of which is to make the process more accessible to users outside of academic and security environments. Design methodologies and justifications of said framework are discussed, focusing on the graphical user interface components, as this aspect of the framework is used to increase the usability of the framework.
- Full Text:
- Date Issued: 2012
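Fuzz testing, as described above, sends malformed variants of valid input to an application to probe behaviour beyond its specification. A minimal mutation-based sketch follows; the target parser is a toy stand-in and the seed input is invented, whereas a real fuzzer (such as the framework described) would drive an external application and monitor it for crashes.

```python
import random

def mutate(seed: bytes, rng: random.Random, max_flips: int = 4) -> bytes:
    """Produce a malformed variant of a valid input by overwriting a few random bytes."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, max_flips)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fragile_parser(data: bytes) -> int:
    """Toy stand-in for the application under test; raises on unexpected input."""
    header, length = data[:4], data[4]
    if header != b"FUZZ":
        raise ValueError("bad magic")
    return int.from_bytes(data[5:5 + length], "big")

def fuzz(iterations: int = 1000, seed_input: bytes = b"FUZZ\x04\x00\x00\x00\x01"):
    rng = random.Random(0)
    crashes = 0
    for _ in range(iterations):
        case = mutate(seed_input, rng)
        try:
            fragile_parser(case)
        except Exception:                  # record any unexpected failure state
            crashes += 1
    print(f"{crashes}/{iterations} inputs drove the parser into an error state")

if __name__ == "__main__":
    fuzz()
```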
Capturefoundry: a gpu accelerated packet capture analysis tool
- Authors: Nottingham, Alastair , Richter, John , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430112 , vital:72666 , https://doi.org/10.1145/2389836.2389877
- Description: Packet captures are used to support a variety of tasks, including network administration, fault diagnosis and security and network related research. Despite their usefulness, processing packet capture files is a slow and tedious process that impedes the analysis of large, long-term captures. This paper discusses the primary components and observed performance of CaptureFoundry, a stand-alone capture analysis support tool designed to quickly map, filter and extract packets from large capture files using a combination of indexing techniques and GPU accelerated packet classification. All results are persistent, and may be used to rapidly extract small pre-filtered captures on demand that may be analysed quickly in existing capture analysis applications. Performance results show that CaptureFoundry is capable of generating multiple indexes and classification results for large captures at hundreds of megabytes per second, with minimal CPU and memory overhead and only minor additional storage space requirements.
- Full Text:
- Date Issued: 2012
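CaptureFoundry, as described above, builds persistent indexes over large capture files so that packets can later be extracted without rescanning the whole trace. A minimal sketch of one such index, recording the file offset of every record in a classic libpcap file, is shown below; it follows the standard pcap record header layout, omits error handling, and is an illustration rather than CaptureFoundry's actual indexing scheme.

```python
import struct

PCAP_GLOBAL_HEADER = 24   # bytes
RECORD_HEADER = 16        # ts_sec, ts_usec, incl_len, orig_len (4 bytes each)

def index_pcap(path: str):
    """Return a list of (packet number, file offset, captured length) for a .pcap file."""
    index = []
    with open(path, "rb") as f:
        magic = f.read(4)
        # Magic bytes d4 c3 b2 a1 on disk indicate little-endian record headers.
        endian = "<" if magic == b"\xd4\xc3\xb2\xa1" else ">"
        f.seek(PCAP_GLOBAL_HEADER)
        number = 0
        while True:
            offset = f.tell()
            header = f.read(RECORD_HEADER)
            if len(header) < RECORD_HEADER:
                break
            _, _, incl_len, _ = struct.unpack(endian + "IIII", header)
            index.append((number, offset, incl_len))
            f.seek(incl_len, 1)           # skip over the packet bytes
            number += 1
    return index

# Usage (hypothetical file): seek straight to packet 1000 without re-reading packets 0..999.
# idx = index_pcap("capture.pcap"); num, off, length = idx[1000]
```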