A computer network attack taxonomy and ontology
- Authors: Van Heerden, Renier P , Irwin, Barry V W , Burke, Ivan D , Leenen, Louise
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430064 , vital:72663 , DOI: 10.4018/ijcwt.2012070102
- Description: Computer network attacks differ in the motivation of the entity behind the attack, the execution and the end result. The diversity of attacks has the consequence that no standard classification exists. The benefit of automated classification of attacks is that an attack could be mitigated accordingly. The authors extend a previous, initial taxonomy of computer network attacks, which forms the basis of a proposed network attack ontology in this paper. The objective of this ontology is to automate the classification of a network attack during its early stages. Most published taxonomies present an attack from either the attacker's or defender's point of view. The authors' taxonomy presents both these points of view. The framework for an ontology was developed using a core class, the "Attack Scenario", which can be used to characterize and classify computer network attacks.
- Full Text:
- Date Issued: 2012
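As a hedged sketch of the core-class idea in the abstract above, the Python fragment below models an "Attack Scenario" that is classified from early-stage observations; all field and value names beyond "Attack Scenario" itself are illustrative assumptions, not the authors' ontology schema.

```python
# Minimal sketch of an "Attack Scenario" core class; fields beyond the
# class name itself are assumptions, not the authors' actual ontology.
from dataclasses import dataclass, field

@dataclass
class AttackScenario:
    """Core class used to characterise and classify a network attack."""
    attacker_view: dict = field(default_factory=dict)   # e.g. goal, mechanism
    defender_view: dict = field(default_factory=dict)   # e.g. observed effects
    classification: str = "unclassified"

    def classify(self) -> str:
        # Placeholder rule: classify early once a mechanism is known.
        if "mechanism" in self.attacker_view:
            self.classification = self.attacker_view["mechanism"]
        return self.classification

scenario = AttackScenario(attacker_view={"mechanism": "denial-of-service"})
print(scenario.classify())  # -> denial-of-service
```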
A Framework for the Static Analysis of Malware focusing on Signal Processing Techniques
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427914 , vital:72473 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622833_A_Framework_for_the_Static_Analysis_of_Malware_focusing_on_Signal_Processing_Techniques/links/5b9a1396a6fdcc59bf8dfc87/A-Framework-for-the-Static-Analysis-of-Malware-focusing-on-Signal-Processing-Techniques.pdf
- Description: The information gathered through conventional static analysis of malicious binaries has become increasingly limited. This is due to the rate at which new malware is being created as well as the increasingly complex methods employed to obfuscate these binaries. This paper discusses the development of a framework to analyse malware using signal processing techniques, the initial iteration of which focuses on common audio processing techniques such as Fourier transforms. The aim of this research is to identify characteristics of malware and the encryption methods used to obfuscate malware. This is achieved through the analysis of their binary structure, potentially providing an additional metric for autonomously fingerprinting malware.
- Full Text:
- Date Issued: 2012
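As a rough illustration of the signal-processing idea above, the hedged sketch below treats a binary as a one-dimensional byte signal and computes an averaged Fourier magnitude spectrum; the file path, window size and the interpretation comment are assumptions, not the authors' implementation.

```python
# Sketch: treat a binary as a 1-D signal and apply a Fourier transform,
# in the spirit of the audio-processing techniques described above.
import numpy as np

def byte_spectrum(path: str, window: int = 4096) -> np.ndarray:
    """Return the mean FFT magnitude spectrum over fixed-size byte windows."""
    data = np.fromfile(path, dtype=np.uint8).astype(np.float64)
    data -= data.mean()                       # remove DC offset
    n = len(data) // window
    if n == 0:
        raise ValueError("file smaller than one window")
    frames = data[: n * window].reshape(n, window)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

# Assumption: packed/encrypted samples tend toward flat, noise-like
# spectra, which could serve as a fingerprinting metric.
spectrum = byte_spectrum("sample.bin")        # hypothetical input file
print(spectrum[:8])
```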
A network telescope perspective of the Conficker outbreak
- Authors: Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429728 , vital:72635 , 10.1109/ISSA.2012.6320455
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2012
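The grouping of source hosts into /8, /16 and /24 netblocks described above can be sketched with the standard-library ipaddress module; the input format and sample data are assumptions.

```python
# Sketch: group observed source addresses by netblock and rank blocks
# by packet count, as in the analysis described above.
import ipaddress
from collections import Counter

def rank_netblocks(sources, prefix: int, top: int = 5):
    """Count packets per netblock of the given prefix length."""
    counts = Counter()
    for ip, packets in sources:                      # (address, packet count)
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        counts[str(net)] += packets
    return counts.most_common(top)

observed = [("196.21.1.5", 120), ("196.21.200.9", 80), ("41.0.3.7", 15)]
for prefix in (8, 16, 24):
    print(f"/{prefix}:", rank_netblocks(observed, prefix))
```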
An Analysis and Implementation of Methods for High Speed Lexical Classification of Malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429757 , vital:72637 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/58_ResearchInProgress.pdf
- Description: Several authors have put forward methods of using Artificial Neural Networks (ANN) to classify URLs as malicious or benign by using lexical features of those URLs. These methods have been compared to other methods of classification, such as blacklisting and spam filtering, and have been found to be comparably accurate, with even early attempts proving highly accurate. Fully featured classifiers use lexical features as well as lookups, including (but not limited to) blacklists, spam filters and reputation services, to classify URLs. These classifiers are based on the Online Perceptron Model, using a single neuron as a linear combiner over lexical features that rely on the presence (or lack thereof) of words belonging to a bag-of-words. Several obfuscation-resistant features are also used to increase the positive classification rate of these perceptrons. Examples of these include URL length, number of directory traversals and length of arguments passed to the file within the URL. In this paper we describe how we implement the online perceptron model and the methods that we used to try to increase the accuracy of this model through the use of hidden layers and training cost validation. We discuss our results in relation to those of other papers, as well as other analysis performed on the training data and the neural networks themselves, to best understand why they are so effective. Also described are the proposed model for developing these neural networks, how to implement them in the real world through the use of browser extensions, proxy plugins and spam filters for mail servers, and our current implementation. Finally, work that is still in progress is described, including other methods of increasing accuracy through the use of modern training techniques and testing in a real-world environment.
- Full Text:
- Date Issued: 2012
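A minimal sketch of the online perceptron model over lexical URL features described above; the bag-of-words vocabulary, feature set and training data are small illustrative assumptions.

```python
# Sketch: a single neuron as a linear combiner over lexical URL features,
# updated online with the classic perceptron rule.
from urllib.parse import urlparse

BAG_OF_WORDS = ["login", "secure", "update", "free"]   # assumed vocabulary

def features(url: str):
    parsed = urlparse(url)
    return [
        len(url),                                      # URL length
        parsed.path.count("/"),                        # directory traversals
        len(parsed.query),                             # length of arguments
        *[1.0 if w in url else 0.0 for w in BAG_OF_WORDS],
    ]

def train(samples, epochs=10, lr=0.1):
    w = [0.0] * len(features("http://a/"))
    b = 0.0
    for _ in range(epochs):
        for url, label in samples:                     # 1 malicious, 0 benign
            x = features(url)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred                         # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train([("http://example.com/about", 0),
              ("http://badhost.example/free/login/update.php?id=123456", 1)])
```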
An Exploratory Framework for Extrusion Detection
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428027 , vital:72481 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622736_An_Exploratory_Framework_for_Extrusion_Detection/links/5b9a12ba299bf14ad4d6a3d7/An-Exploratory-Framework-for-Extrusion-Detection.pdf
- Description: Modern network architecture allows multiple connectivity options, increasing the number of possible attack vectors. With the number of internet-enabled devices constantly increasing, along with employees using these devices to access internal corporate networks, the attack surface has become too large to monitor from a single end-point. Traditional security measures have focused on securing a small number of network endpoints by monitoring inbound connections, and are thus blind to attack vectors such as mobile internet connections and removable devices. Once an attacker has gained access to a network they are able to operate undetected on the internal network and exfiltrate data without hindrance. This paper proposes a framework for extrusion detection, where internal network traffic and outbound connections are monitored to detect malicious activity. The proposed framework has a tiered architecture consisting of prevention, detection, reaction and reporting. Each tier of the framework feeds into the subsequent tier, with reporting providing a feedback mechanism to improve each tier based on the outcome of previous incidents.
- Full Text:
- Date Issued: 2012
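The tiered prevention/detection/reaction/reporting architecture could be sketched as a simple pipeline with a feedback channel; tier internals here are assumptions, and only the tier ordering comes from the abstract.

```python
# Sketch: each tier feeds the next, and reporting outcomes are retained
# as feedback for improving earlier tiers.
class Tier:
    def __init__(self, name: str):
        self.name = name

    def process(self, event):
        print(f"[{self.name}] handling {event}")
        return event

class ExtrusionPipeline:
    def __init__(self):
        self.tiers = [Tier("prevention"), Tier("detection"),
                      Tier("reaction"), Tier("reporting")]
        self.feedback = []              # outcomes used to tune earlier tiers

    def handle(self, event):
        for tier in self.tiers:         # each tier feeds the subsequent tier
            event = tier.process(event)
        self.feedback.append(event)     # feedback loop from reporting

ExtrusionPipeline().handle("outbound connection to unknown host")
```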
Building a Graphical Fuzzing Framework
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429772 , vital:72638 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/59_ResearchInProgress.pdf
- Description: Fuzz testing is a robustness testing technique that sends malformed data to an application’s input. This is to test an application’s behaviour when presented with input beyond its specification. The main difference between traditional testing techniques and fuzz testing is that in most traditional techniques an application is tested according to a specification and rated on how well the application conforms to that specification. Fuzz testing tests beyond the scope of a specification by intelligently generating values that may be interpreted by an application in an unintended manner. The use of fuzz testing has been more prevalent in academic and security communities despite showing success in production environments. To measure the effectiveness of fuzz testing, an experiment was conducted where several publicly available applications were fuzzed. In some instances, fuzz testing was able to force an application into an invalid state and it was concluded that fuzz testing is a relevant testing technique that could assist in developing more robust applications. This success prompted a further investigation into fuzz testing in order to compile a list of requirements that make an effective fuzzer. The aforementioned investigation assisted in the design of a fuzz testing framework, the goal of which is to make the process more accessible to users outside of an academic and security environment. Design methodologies and justifications of said framework are discussed, focusing on the graphical user interface components as this aspect of the framework is used to increase the usability of the framework.
- Full Text:
- Date Issued: 2012
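A minimal mutation-fuzzing sketch in the spirit of the abstract above: malformed variants of a valid input are fed to a target and forced invalid states are recorded. The toy target function is an assumption standing in for an application under test.

```python
# Sketch: mutate a valid seed input and record inputs that force the
# target into an unhandled (invalid) state.
import random

def target(data: bytes):
    """Toy parser standing in for an application under test."""
    if len(data) > 4 and data[4] == 0xFF:
        raise ValueError("unhandled input state")

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):       # flip a few random bytes
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = b"HELLO WORLD"
for i in range(1000):
    case = mutate(seed)
    try:
        target(case)
    except Exception as exc:                    # invalid state forced
        print(f"case {i}: {case!r} -> {exc}")
```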
Capturefoundry: a gpu accelerated packet capture analysis tool
- Authors: Nottingham, Alastair , Richter, John , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430112 , vital:72666 , https://doi.org/10.1145/2389836.2389877
- Description: Packet captures are used to support a variety of tasks, including network administration, fault diagnosis and security and network related research. Despite their usefulness, processing packet capture files is a slow and tedious process that impedes the analysis of large, long-term captures. This paper discusses the primary components and observed performance of CaptureFoundry, a stand-alone capture analysis support tool designed to quickly map, filter and extract packets from large capture files using a combination of indexing techniques and GPU accelerated packet classification. All results are persistent, and may be used to rapidly extract small pre-filtered captures on demand that may be analysed quickly in existing capture analysis applications. Performance results show that CaptureFoundry is capable of generating multiple indexes and classification results for large captures at hundreds of megabytes per second, with minimal CPU and memory overhead and only minor additional storage space requirements.
- Full Text:
- Date Issued: 2012
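The index-then-extract approach described above might be sketched as follows: one pass over a capture records the file offset of every packet, keyed by TCP destination port, so pre-filtered packets can later be extracted without rescanning. This stdlib-only sketch assumes a little-endian pcap with Ethernet framing and ignores the GPU classification stage.

```python
# Sketch: build a persistent-style index of packet offsets per TCP
# destination port from a pcap file, for later on-demand extraction.
import struct
from collections import defaultdict

def index_by_dst_port(path: str):
    index = defaultdict(list)                 # port -> [file offsets]
    with open(path, "rb") as f:
        f.read(24)                            # skip pcap global header
        while True:
            offset = f.tell()                 # offset of this record header
            hdr = f.read(16)                  # per-packet record header
            if len(hdr) < 16:
                break
            _, _, incl_len, _ = struct.unpack("<IIII", hdr)  # assumes LE pcap
            pkt = f.read(incl_len)
            # Ethernet(14) + IPv4, protocol 6 = TCP; assumes no VLAN tags.
            if len(pkt) >= 34 and pkt[12:14] == b"\x08\x00" and pkt[23] == 6:
                ihl = (pkt[14] & 0x0F) * 4
                off = 14 + ihl + 2            # TCP destination port field
                if len(pkt) >= off + 2:
                    dport = struct.unpack("!H", pkt[off:off + 2])[0]
                    index[dport].append(offset)
    return index

# index = index_by_dst_port("capture.pcap")  # hypothetical capture file
# index[445] -> offsets of all 445/tcp packets, ready for extraction
```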
Classifying network attack scenarios using an ontology
- Authors: Van Heerden, Renier , Irwin, Barry V W , Burke, I D
- Date: 2012
- Language: English
- Type: Conference paper
- Identifier: vital:6606 , http://hdl.handle.net/10962/d1009326
- Description: This paper presents a methodology using network attack ontology to classify computer-based attacks. Computer network attacks differ in motivation, execution and end result. Because attacks are diverse, no standard classification exists. If an attack could be classified, it could be mitigated accordingly. A taxonomy of computer network attacks forms the basis of the ontology. Most published taxonomies present an attack from either the attacker's or defender's point of view. This taxonomy presents both views. The main taxonomy classes are: Actor, Actor Location, Aggressor, Attack Goal, Attack Mechanism, Attack Scenario, Automation Level, Effects, Motivation, Phase, Scope and Target. The "Actor" class is the entity executing the attack. The "Actor Location" class is the Actor's country of origin. The "Aggressor" class is the group instigating an attack. The "Attack Goal" class specifies the attacker's goal. The "Attack Mechanism" class defines the attack methodology. The "Automation Level" class indicates the level of human interaction. The "Effects" class describes the consequences of an attack. The "Motivation" class specifies incentives for an attack. The "Scope" class describes the size and utility of the target. The "Target" class is the physical device or entity targeted by an attack. The "Vulnerability" class describes a target vulnerability used by the attacker. The "Phase" class represents an attack model that subdivides an attack into different phases. The ontology was developed using an "Attack Scenario" class, which draws from the other classes and can be used to characterize and classify computer network attacks. An "Attack Scenario" consists of phases, has a scope, and is attributed to an actor and aggressor which have a goal. The "Attack Scenario" thus represents different classes of attacks. High-profile computer network attacks such as Stuxnet and the Estonia attacks can now be classified through the "Attack Scenario" class.
- Full Text:
- Date Issued: 2012
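The taxonomy classes listed above lend themselves to a small enum-based sketch; the member values and the Stuxnet-like example are illustrative assumptions, and only the class names come from the abstract.

```python
# Sketch: a few of the taxonomy classes as enums, drawn on by an
# "Attack Scenario" core class.
from dataclasses import dataclass
from enum import Enum

class Aggressor(Enum):
    INDIVIDUAL = "individual"
    GROUP = "group"
    STATE = "state"

class AttackGoal(Enum):
    DISRUPT = "disrupt"
    EXFILTRATE = "exfiltrate"
    DESTROY = "destroy"

class Phase(Enum):
    RECONNAISSANCE = 1
    ATTACK = 2
    POST_ATTACK = 3

@dataclass
class AttackScenario:
    actor: str
    actor_location: str
    aggressor: Aggressor
    goal: AttackGoal
    phases: list

# A Stuxnet-like scenario expressed through the core class (illustrative):
stuxnet = AttackScenario(actor="worm", actor_location="unknown",
                         aggressor=Aggressor.STATE, goal=AttackGoal.DESTROY,
                         phases=[Phase.RECONNAISSANCE, Phase.ATTACK])
print(stuxnet)
```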
Cost-effective realisation of the Internet of Things
- Authors: Andersen, Michael , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427930 , vital:72474 , https://www.researchgate.net/profile/Barry-Irwin/publication/326225063_Cost-effective_realisation_of_the_Internet_of_Things/links/5b3f2262a6fdcc8506ffe75e/Cost-effective-realisation-of-the-Internet-of-Things.pdf
- Description: A hardware and software platform, created to facilitate power usage and power quality measurements along with direct power line actuation, is under development. Additional general purpose control and sensing interfaces have been integrated. Measurements are persistently stored on each node to allow asynchronous retrieval of data without the need for a central server. The device communicates using an IEEE 802.15.4 radio transceiver to create a self-configuring mesh network. Users can interface with the mesh network by connecting to any node via USB and utilising the developed high-level API and interactive environment.
- Full Text:
- Date Issued: 2012
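Node-local persistent storage with asynchronous retrieval, as described above, could look roughly like this; the schema, table name and metric names are assumptions.

```python
# Sketch: persist measurements on the node itself so they can be pulled
# later (e.g. over USB/API) without a central server.
import sqlite3
import time

db = sqlite3.connect("node_measurements.db")   # hypothetical on-node store
db.execute("""CREATE TABLE IF NOT EXISTS measurements (
                  ts REAL, metric TEXT, value REAL)""")

def record(metric: str, value: float):
    """Store one sample; called by the node's sensing loop."""
    db.execute("INSERT INTO measurements VALUES (?, ?, ?)",
               (time.time(), metric, value))
    db.commit()

def retrieve(metric: str, since: float):
    """Asynchronous retrieval of buffered samples."""
    return db.execute("SELECT ts, value FROM measurements "
                      "WHERE metric = ? AND ts >= ?",
                      (metric, since)).fetchall()

record("power_watts", 61.4)
print(retrieve("power_watts", since=0))
```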
Geo-spatial autocorrelation as a metric for the detection of fast-flux botnet domains
- Authors: Stalmans, Etienne , Hunter, Samuel O , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429799 , vital:72640 , 10.1109/ISSA.2012.6320433
- Description: Botnets consist of thousands of hosts infected with malware. Botnet owners communicate with these hosts using Command and Control (C2) servers. These C2 servers are usually infected hosts to which the botnet owners do not have physical access. For this reason botnets can be shut down by taking over or blocking the C2 servers. Botnet owners have employed numerous shutdown avoidance techniques. One of these techniques, DNS Fast-Flux, relies on rapidly changing address records. The addresses returned by the Fast-Flux DNS servers consist of geographically widely distributed hosts. The distributed nature of Fast-Flux botnets differs from legitimate domains, which tend to have geographically clustered server locations. This paper examines the use of spatial autocorrelation techniques, based on the geographic distribution of domain servers, to detect Fast-Flux domains. Moran's I and Geary's C are used to produce classifiers over multiple geographic co-ordinate systems, yielding efficient and accurate results. It is shown how Fast-Flux domains can be detected reliably while producing only a small percentage of false positives.
- Full Text:
- Date Issued: 2012
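Moran's I, one of the two statistics named above, can be sketched over server coordinates with inverse-distance weights; treating degree coordinates as planar and using latitude as the attribute are simplifying assumptions, not the paper's exact classifier.

```python
# Sketch: Moran's I spatial autocorrelation, I = (n/W) * sum_ij w_ij
# (x_i - mean)(x_j - mean) / sum_i (x_i - mean)^2, with w_ij = 1/distance.
import math

def morans_i(coords):
    """coords: list of (lat, lon). Uses latitude as the attribute x."""
    n = len(coords)
    xs = [lat for lat, _ in coords]
    mean = sum(xs) / n
    num = den_w = 0.0
    for i, (lat_i, lon_i) in enumerate(coords):
        for j, (lat_j, lon_j) in enumerate(coords):
            if i == j:
                continue
            d = math.hypot(lat_i - lat_j, lon_i - lon_j) or 1e-9
            w = 1.0 / d                       # inverse-distance weight
            num += w * (xs[i] - mean) * (xs[j] - mean)
            den_w += w
    var = sum((x - mean) ** 2 for x in xs)
    return (n / den_w) * (num / var)

clustered = [(52.1, 4.3), (52.2, 4.4), (52.3, 4.2)]      # CDN-like hosting
dispersed = [(52.1, 4.3), (-33.9, 18.4), (35.7, 139.7)]  # fast-flux-like
print(morans_i(clustered), morans_i(dispersed))
```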
Mapping the most significant computer hacking events to a temporal computer attack model
- Authors: Van Heerden, Renier , Pieterse, Heloise , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429950 , vital:72654 , https://doi.org/10.1007/978-3-642-33332-3_21
- Description: This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The Attack stage is separated into: Ramp-up, Damage and Residue. This paper demonstrates how our eight significant hacking events are mapped to the temporal computer attack model. The temporal computer attack model becomes a valuable asset in the protection of critical infrastructure by being able to detect similar attacks earlier.
- Full Text:
- Date Issued: 2012
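The model's stages map naturally onto an ordered enum; the keyword-to-stage mapping below is an illustrative assumption.

```python
# Sketch: the temporal attack model's stages, with an event timeline
# mapped onto them.
from enum import Enum

class Stage(Enum):
    TARGET_IDENTIFICATION = 1
    RECONNAISSANCE = 2
    RAMP_UP = 3            # Attack stage: Ramp-up
    DAMAGE = 4             # Attack stage: Damage
    RESIDUE = 5            # Attack stage: Residue
    POST_ATTACK_RECON = 6

# Assumed mapping from observed event labels to model stages.
KEYWORDS = {"scan": Stage.RECONNAISSANCE, "exploit": Stage.RAMP_UP,
            "wipe": Stage.DAMAGE, "backdoor": Stage.RESIDUE}

def map_timeline(events):
    """Map observed event labels onto model stages, preserving order."""
    return [(e, KEYWORDS.get(e, Stage.TARGET_IDENTIFICATION)) for e in events]

for event, stage in map_timeline(["scan", "exploit", "wipe"]):
    print(f"{event:10s} -> {stage.name}")
```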
Network telescope metrics
- Authors: Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427944 , vital:72475 , https://www.researchgate.net/profile/Barry-Irwin/publication/265121268_Network_Telescope_Metrics/links/58e23f70a6fdcc41bf973e69/Network-Telescope-Metrics.pdf
- Description: Network telescopes are a means of passive network monitoring, increasingly being used as part of a holistic network security program. One problem encountered by researchers is the sharing of the data collected from these systems, either due to the size of the data or to a need to maintain the privacy of the network address space being used for monitoring. This paper proposes a selection of metrics which can be used to communicate the most salient information contained in the dataset to other researchers, without the need to exchange or disclose the datasets. Descriptive metrics for the sensor system are discussed along with numerical analysis data. The case for the use of graphical summary data is also presented.
- Full Text:
- Date Issued: 2012
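A hedged sketch of the kind of descriptive metrics that could be shared in place of a raw telescope dataset; the record format is an assumption.

```python
# Sketch: summarise a telescope dataset as shareable descriptive metrics
# (totals, unique sources, observation window, top target ports).
from collections import Counter

def summarise(records):
    """records: list of (timestamp, src_ip, dst_port)."""
    times = [t for t, _, _ in records]
    return {
        "packets": len(records),
        "unique_sources": len({src for _, src, _ in records}),
        "duration_days": (max(times) - min(times)) / 86400,
        "top_ports": Counter(p for _, _, p in records).most_common(3),
    }

sample = [(0, "196.1.1.1", 445), (86400, "41.2.2.2", 445),
          (172800, "41.2.2.2", 22)]
print(summarise(sample))
```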
Normandy: A Framework for Implementing High Speed Lexical Classification of Malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427958 , vital:72476 , https://www.researchgate.net/profile/Barry-Irwin/publication/326224974_Normandy_A_Framework_for_Implementing_High_Speed_Lexical_Classification_of_Malicious_URLs/links/5b3f21074585150d2309dd50/Normandy-A-Framework-for-Implementing-High-Speed-Lexical-Classification-of-Malicious-URLs.pdf
- Description: Research has shown that it is possible to classify malicious URLs using state-of-the-art techniques to train Artificial Neural Networks (ANN) using only lexical features of a URL. This has the advantage of being high speed and adds no overhead to classifications, as it does not require look-ups from external services. This paper discusses our method for implementing and testing a framework which automates the generation of these neural networks, as well as the testing involved in trying to optimize the performance of these ANNs.
- Full Text:
- Date Issued: 2012
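Extending the single-neuron approach with a hidden layer, which the framework described above automates, might look like the numpy sketch below; the architecture, lexical features, training data and hyperparameters are all assumptions.

```python
# Sketch: a tiny feed-forward network (one hidden layer) trained on
# lexical URL features with full-batch gradient descent.
import numpy as np

def lexical_features(url: str) -> np.ndarray:
    """Simple lexical features; the choice of features is an assumption."""
    return np.array([len(url), url.count("/"), url.count("?"),
                     url.count("."), sum(c.isdigit() for c in url)], float)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

urls = ["http://example.com/about", "http://evil.example/free/login?id=9999",
        "https://news.example/story", "http://203.0.113.9/update.php?x=1&y=2"]
labels = np.array([0, 1, 0, 1], float).reshape(-1, 1)

X = np.array([lexical_features(u) for u in urls])
mu, sd = X.mean(0), X.std(0) + 1e-9
X = (X - mu) / sd                                  # normalise features

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (X.shape[1], 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)            # output neuron

for _ in range(2000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - labels) * out * (1 - out)       # backpropagated deltas
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.05 * h.T @ d_out; b2 -= 0.05 * d_out.sum(0)
    W1 -= 0.05 * X.T @ d_h;   b1 -= 0.05 * d_h.sum(0)

score = sig(sig(((lexical_features(urls[1]) - mu) / sd) @ W1 + b1) @ W2 + b2)
print(score.item())                                # probability-like output
```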
Remote fingerprinting and multisensor data fusion
- Authors: Hunter, Samuel O , Stalmans, Etienne , Irwin, Barry V W , Richter, John
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429813 , vital:72641 , 10.1109/ISSA.2012.6320449
- Description: Network fingerprinting is the technique by which a device or service is enumerated in order to determine the hardware, software or application characteristics of a targeted attribute. Although fingerprinting can be achieved by a variety of means, the most common technique is the extraction of characteristics from an entity and the correlation thereof against known signatures for verification. In this paper we identify multiple host-defining metrics and propose a process of unique host tracking through the use of two novel fingerprinting techniques. We then illustrate the application of host fingerprinting and tracking for increasing situational awareness of potentially malicious hosts. In order to achieve this we provide an outline of an adapted multisensor data fusion model with the goal of increasing situational awareness through observation of unsolicited network traffic.
- Full Text:
- Date Issued: 2012
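A simple weighted fusion of per-sensor maliciousness scores gives the flavour of combining observations about a tracked host; the sensor names and weights are assumptions, standing in for the adapted multisensor data fusion model described above.

```python
# Sketch: fuse per-sensor maliciousness scores for one tracked host
# into a single situational-awareness score.
def fuse(scores: dict, weights: dict) -> float:
    """scores/weights: sensor name -> value; returns fused score in [0, 1]."""
    total = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total

sensors = {"telescope": 0.9, "passive_dns": 0.6, "fingerprint_match": 0.7}
weights = {"telescope": 0.5, "passive_dns": 0.2, "fingerprint_match": 0.3}
print(f"fused maliciousness: {fuse(sensors, weights):.2f}")
```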
Social recruiting: a next generation social engineering attack
- Authors: Schoeman, A H B , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428600 , vital:72523 , https://www.jstor.org/stable/26486876
- Description: Social engineering attacks initially experienced success due to the lack of understanding of the attack vector and resultant lack of remedial actions. Due to an increase in media coverage corporate bodies have begun to defend their interests from this vector. This has resulted in a new generation of social engineering attacks that have adapted to the industry response. These new forms of attack take into account the increased likelihood that they will be detected; rendering traditional defences against social engineering attacks moot. This paper highlights these attacks and will explain why traditional defences fail to address them as well as suggest new methods of incident response.
- Full Text:
- Date Issued: 2012