The pattern-richness of graphical passwords
- Vorster, Johannes, Van Heerden, Renier, Irwin, Barry V W
- Authors: Vorster, Johannes , Van Heerden, Renier , Irwin, Barry V W
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/68322 , vital:29238 , https://doi.org/10.1109/ISSA.2016.7802931
- Description: Publisher version , Conventional (text-based) passwords have shown patterns such as variations on the username, or known passwords such as “password”, “admin” or “12345”. Patterns may similarly be detected in the use of graphical passwords (GPs). The most significant such pattern - reported by many researchers - is hotspot clustering. This paper qualitatively analyses more than 200 graphical passwords for patterns other than the classically reported hotspots. The qualitative analysis finds that a significant percentage of passwords fall into a small set of patterns; patterns that can be used to form attack models against GPs. Conversely, these patterns can also be used to educate users so that future password selection is more secure. It is hoped that the outcomes of this research will lead to improved user behaviour and enhanced graphical password security.
- Full Text: false
- Date Issued: 2016
Towards malicious network activity mitigation through subnet reputation analysis
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427799 , vital:72463 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622788_Towards_Malicious_Network_Activity_Mitigation_through_Subnet_Reputation_Analysis/links/5b9a1a88458515310583fda6/Towards-Malicious-Network-Activity-Mitigation-through-Subnet-Reputation-Analysis.pdf
- Description: Analysis technologies that focus on partial packet rather than full packet analysis have shown promise in the detection of malicious activity on networks. NetFlow is one such emergent protocol, used to log network flows by summarizing their key features. These logs can then be exported to external NetFlow sinks, and with proper configuration, effective bandwidth bottleneck mitigation can be achieved on networks. Furthermore, each NetFlow source node is configurable with its own unique ID number. This feature enables a system that knows where a NetFlow source node ID number resides physically to identify which network flows are occurring from which physical locations, irrespective of the IP addresses involved in those flows.
- Full Text:
- Date Issued: 2016
A review of current DNS TTL practices
- Van Zyl, Ignus, Rudman, Lauren, Irwin, Barry V W
- Authors: Van Zyl, Ignus , Rudman, Lauren , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427813 , vital:72464 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622760_A_review_of_current_DNS_TTL_practices/links/5b9a16e292851c4ba8181b7f/A-review-of-current-DNS-TTL-practices.pdf
- Description: This paper provides insight into legitimate DNS domain Time to Live (TTL) activity captured over two live caching servers from the period January to June 2014. DNS TTL practices are identified and compared between frequently queried domains, with respect to the caching servers. A breakdown of TTL practices by Resource Record type is also given, as well as an analysis on the TTL choices of the most frequent Top Level Domains. An analysis of anomalous TTL values with respect to the gathered data is also presented.
- Full Text:
- Date Issued: 2015
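The per-record-type TTL breakdown this abstract describes can be sketched as a small aggregation. This is a minimal illustration, not the authors' tooling; the input format (an iterable of `(rrtype, ttl)` pairs) is a hypothetical stand-in for the captured cache-server logs:

```python
from collections import Counter, defaultdict

def ttl_breakdown(records):
    """Group observed TTLs by DNS resource record type and report the
    three most common TTL values (with counts) for each type."""
    by_type = defaultdict(Counter)
    for rrtype, ttl in records:
        by_type[rrtype][ttl] += 1
    return {rrtype: counts.most_common(3) for rrtype, counts in by_type.items()}

# Toy log: two A records at 300 s, one at a day, plus NS and MX entries.
log = [("A", 300), ("A", 300), ("A", 86400), ("NS", 172800), ("MX", 3600)]
print(ttl_breakdown(log))
```

Anomalous TTL choices (e.g. a TTL of 0, or values far outside a record type's usual range) would then stand out in the per-type frequency tables.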
A sandbox-based approach to the deobfuscation and dissection of php-based malware
- Wrench, Peter M, Irwin, Barry V W
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429258 , vital:72571 , 10.23919/SAIEE.2015.8531886
- Description: The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally targeted. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.
- Full Text:
- Date Issued: 2015
An investigation into the signals leakage from a smartcard based on different runtime code
- Frieslaar, Ibraheem, Irwin, Barry V W
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427845 , vital:72466 , https://www.researchgate.net/profile/Ibraheem-Fries-laar/publication/307918229_An_investigation_into_the_signals_leakage_from_a_smartcard_based_on_different_runtime_code/links/57d1996008ae0c0081e04fd5/An-investigation-into-the-signals-leakage-from-a-smartcard-based-on-different-runtime-code.pdf
- Description: This paper investigates the power leakage of a smartcard. It is intended to answer two vital questions: what information is leaked when different characters are used as output; and does the length of the output affect the amount of information leaked. The investigation determines that as the length of the output increases, more bus lines are switched from a precharge state to a high state. This is related to the output array in the code increasing in length. Furthermore, this work shows that the output for different characters generates a different pattern. This is because different characters require different numbers of bytes to be executed, since they have different binary values. Additionally, the information leaked can be directly linked to the smartcard’s interpreter.
- Full Text:
- Date Issued: 2015
Characterization and analysis of NTP amplification based DDoS attacks
- Rudman, Lauren, Irwin, Barry V W
- Authors: Rudman, Lauren , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429285 , vital:72573 , 10.1109/ISSA.2015.7335069
- Description: Network Time Protocol based DDoS attacks saw considerable popularity throughout 2014. This paper presents the characterization and analysis of two large datasets containing packets from NTP based DDoS attacks captured in South Africa. Using a series of Python based tools, the datasets are analysed according to specific parts of the packet headers, including the source IP address and Time-to-Live (TTL) values. The analysis identified the top source addresses and examined the TTL values observed for each address. These TTL values can be used to infer the probable operating system or DDoS attack tool used by an attacker. We found that the set of TTL values seen for an address can indicate the number of hosts attacking the address or indicate minor routing changes. The Time-to-Live values, as a whole, are then analysed to find the total number used throughout each attack. The most frequent TTL values are then identified, and the majority of them indicate that the attackers are using an initial TTL of 255. This value can indicate the use of a certain DDoS tool that creates packets with that exact initial TTL. The TTL values are then grouped to show the number of IP addresses a group of hosts is targeting.
- Full Text:
- Date Issued: 2015
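The inference step the abstract mentions - working back from an observed TTL to a probable initial TTL and hop count - can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it assumes only the widely used initial TTL defaults (32, 64, 128, 255):

```python
# Common initial TTL values used by mainstream OS network stacks and tools.
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def infer_initial_ttl(observed_ttl):
    """Return the smallest common initial TTL that is >= the observed
    value, plus the implied hop count (initial minus observed).
    Returns (None, None) for values outside the valid TTL range."""
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial, initial - observed_ttl
    return None, None

print(infer_initial_ttl(243))  # probable initial TTL 255, i.e. 12 hops away
```

A packet arriving with TTL 243 most plausibly started at 255 and crossed 12 routers, consistent with the paper's observation that frequent values near 255 point to a tool that sets that exact initial TTL.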
Cyber Vulnerability Assessment: Case Study of Malawi and Tanzania
- Chindipha, Stones D, Irwin, Barry V W
- Authors: Chindipha, Stones D , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428558 , vital:72520 , https://accconference.mandela.ac.za/ACCConference/media/Store/images/Proceedings-2015.pdf#page=105
- Description: Much as the Internet is beneficial to our daily activities, with each passing day it also brings information security concerns for its users, be they at company or national level. Each year the number of Internet users keeps growing, particularly in Africa, and this means only one thing: more cyber-attacks. Governments have become a focal point of this data leakage problem, making it a matter of national security. Looking at the current state of affairs, cyber-based incidents are likely to increase in Africa, mainly due to the increased prevalence and affordability of broadband connectivity, coupled with a lack of online security awareness. A drop in the cost of broadband connection means more people will be able to afford Internet connectivity. Using Open Source Intelligence (OSINT), this paper aims to perform a vulnerability analysis for states in Eastern Africa, building on prior research by Swart et al., which showed that there are vulnerabilities in information systems, using South Africa as an example. States in East Africa are considered as candidates, with the final selection determined by access to suitable resources and availability of information. A comparative analysis is also made to assess the factors that affect the degree of security susceptibility in various states, and the information security measures used by various governments are assessed to ascertain the extent of their contribution to this vulnerability. This pilot study will be extended to other Southern and Eastern African states such as Botswana, Kenya, Uganda and Namibia in future work.
- Full Text:
- Date Issued: 2015
Data Centre vulnerabilities physical, logical and trusted entity security
- Swart, Ignus, Grobler, Marthie, Irwin, Barry V W
- Authors: Swart, Ignus , Grobler, Marthie , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427859 , vital:72467 , https://www.researchgate.net/profile/Ignus-Swart/publication/305442546_Data_Centre_vulnerabilities_physical_logical_trusted_entity_security/links/578f38c108aecbca4cada6bf/Data-Centre-vulnerabilities-physical-logical-trusted-entity-security.pdf
- Description: Data centres are often the hub for a significant number of disparate interconnecting systems. With rapid advances in virtualization, the use of data centres has increased significantly and is set to continue growing. The systems hosted typically serve the data needs of a growing number of organizations, ranging from private individuals to mammoth governmental departments. Due to this centralized method of operation, data centres have become a prime target for attackers. These attackers are not only after the data contained in the data centre; often the physical infrastructure the systems run on is the target of attack. Downtime resulting from such an attack can affect a wide range of entities and can have severe financial implications for the owners of the data centre. To limit liability, strict adherence to standards is prescribed. Technology, however, develops at a far faster pace than standards, and our ability to accurately measure information security has significant hidden caveats. This allows for a situation where the defender's dilemma is exacerbated by information overload, a significant increase in attack surface, and reporting tools that show only limited views. This paper investigates the logical and physical security components of a data centre and introduces the notion of third-party involvement as an increase in attack surface, due to the manner in which data centres typically operate.
- Full Text:
- Date Issued: 2015
DDoS Attack Mitigation Through Control of Inherent Charge Decay of Memory Implementations
- Herbert, Alan, Irwin, Barry V W, van Heerden, Renier P
- Authors: Herbert, Alan , Irwin, Barry V W , van Heerden, Renier P
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430339 , vital:72684 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: DDoS (Distributed Denial of Service) attacks over recent years have shown to be devastating to the target systems and services made publicly available over the Internet. Furthermore, the backscatter caused by DDoS attacks also affects the available bandwidth and responsiveness of many other hosts within the Internet. The unfortunate reality of these attacks is that the targeted party cannot fight back due to the presence of botnets and malware-driven hosts. The hosts that carry out the attack on a target are usually controlled remotely, and the owner of the device is unaware of it; for this reason one cannot attack back directly, as this will serve little more than to disable an innocent party. A proposed solution to these DDoS attacks is to identify a potential attacking address and ignore communication from that address for a set period of time through time stamping.
- Full Text:
- Date Issued: 2015
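The time-stamping mitigation described in this abstract - flag a suspected attacker and ignore its traffic for a set period - can be sketched as a small expiring blocklist. This is a minimal illustration of the idea only; the 300-second window and the class/method names are hypothetical, not from the paper:

```python
import time

class TimedBlocklist:
    """Ignore traffic from a flagged address until its block expires.
    Expired entries decay away on the next lookup."""

    def __init__(self, block_seconds=300.0, clock=time.monotonic):
        self.block_seconds = block_seconds
        self.clock = clock              # injectable for testing
        self._blocked = {}              # address -> expiry timestamp

    def flag(self, addr):
        """Time-stamp an address as a suspected attacker."""
        self._blocked[addr] = self.clock() + self.block_seconds

    def is_blocked(self, addr):
        """True while the address is still inside its block window."""
        expiry = self._blocked.get(addr)
        if expiry is None:
            return False
        if self.clock() >= expiry:      # window elapsed: forget the address
            del self._blocked[addr]
            return False
        return True
```

A packet filter would call `is_blocked()` on each source address and silently drop matches, so a compromised host is ignored for a while rather than attacked back.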
Design and Fabrication of a Low Cost Traffic Manipulation Hardware Platform
- Pennefather, Sean, Irwin, Barry V W
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427873 , vital:72468 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622941_Design_and_Fabrication_of_a_Low_Cost_Traffic_Manipulation_Hardware/links/5b9a1625458515310583fc8c/Design-and-Fabrication-of-a-Low-Cost-Traffic-Manipulation-Hardware.pdf
- Description: This paper describes the design and fabrication of a dedicated hardware platform for network traffic logging and modification at a production cost of under $300. The context of the device is briefly discussed before characteristics relating to hardware development are explored. The paper concludes with three application examples to show some of the potential functionality of the platform. Testing of the device shows an average TCP throughput of 84.44 MiB/s when using the designed Ethernet modules.
- Full Text:
- Date Issued: 2015
FPGA Based Implementation of a High Performance Scalable NetFlow Filter
- Herbert, Alan, Irwin, Barry V W, Otten, D F, Balmahoon, M R
- Authors: Herbert, Alan , Irwin, Barry V W , Otten, D F , Balmahoon, M R
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427887 , vital:72470 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622948_FPGA_Based_Implementation_of_a_High_Perfor-mance_Scalable_NetFlow_Filter/links/5b9a17a192851c4ba8181ba5/FPGA-Based-Implementation-of-a-High-Performance-Scalable-NetFlow-Filter.pdf
- Description: Full packet analysis on firewalls and intrusion detection, although effective, has been found in recent times to be detrimental to the overall performance of networks that receive large volumes of throughput. For this reason partial packet analysis algorithms such as the NetFlow protocol have emerged to better mitigate these bottlenecks. This research delves into implementing a hardware accelerated, scalable, high performance system for NetFlow analysis and attack mitigation. Furthermore, this implementation takes on attack mitigation through collection and processing of network flows produced at the source, rather than at the site of incident. This research platform manages to scale out its back-end through distributed analysis over multiple hosts using the ZeroMQ toolset. Furthermore, ZeroMQ allows for multiple NetFlow data publishers, so that plug-ins can subscribe to the publishers that contain the relevant data to further increase the overall performance of the system. The dedicated custom hardware optimizes the received network flows through cleaning, summarization and re-ordering into an easy-to-pass form when given to the sequential component of the system, this being the back-end.
- Full Text:
- Date Issued: 2015
Multi sensor national cyber security data fusion
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430393 , vital:72688 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: A proliferation of cyber security strategies has recently been published around the world, with as many as thirty-five strategies documented since 2009. These published strategies indicate the growing need to obtain a clear view of a country’s information security posture and to improve on it. The potential attack surface of a nation is extremely large, however, and no single source of cyber security data provides all the required information to accurately describe the cyber security readiness of a nation. There are, however, a variety of specialised data sources rich enough in relevant cyber security information to assess the state of a nation in at least key areas such as botnets, spam servers and incorrectly configured hosts present in a country. While informative from both an offensive and a defensive point of view, the data sources vary in factors such as accuracy, completeness, representation, cost and data availability. These factors add complexity when attempting to present a clear view of the combined intelligence of the data.
- Full Text:
- Date Issued: 2015
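The fusion problem described above, combining sources that differ in accuracy and completeness, can be sketched as an accuracy-weighted merge; the source names, weights and counts below are invented examples, not figures from the paper.

```python
# Illustrative fusion of per-country indicator counts from sources of
# differing stated accuracy; all names and numbers are invented.
sources = {
    "botnet_sinkhole": {"accuracy": 0.9, "counts": {"ZA": 120, "US": 900}},
    "spam_blocklist":  {"accuracy": 0.6, "counts": {"ZA": 200, "US": 1500}},
}

def fused_score(country):
    # Accuracy-weighted mean of the counts each source reports.
    num = sum(s["accuracy"] * s["counts"].get(country, 0)
              for s in sources.values())
    den = sum(s["accuracy"] for s in sources.values())
    return num / den

za = fused_score("ZA")  # weighted towards the more accurate source
```

A real system would also have to reconcile representation and availability differences (the other factors the abstract lists), not just accuracy.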
Observed correlations of unsolicited ip traffic across five distinct network telescopes
- Irwin, Barry V W, Nkhumeleni, T
- Authors: Irwin, Barry V W , Nkhumeleni, T
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428569 , vital:72521 , https://www.jstor.org/stable/26502727
- Description: Using network telescopes to monitor unused IP address space provides a favourable environment for researchers to study and detect malware, denial of service, and scanning activities on the Internet. This research focuses on comparative and correlation analysis of traffic activity across five IPv4 network telescopes, each with an aperture size of /24 over a 12-month period. Time series representations of the traffic activity observed on these sensors were constructed. Using the cross- and auto-correlation methods of time series analysis, sensor data was quantitatively analysed with the resulting correlation of network telescopes’ traffic activity found to be moderate to high, dependent on grouping.
- Full Text:
- Date Issued: 2015
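The cross-correlation of sensor time series mentioned above can be sketched in plain Python; the packet counts are invented, and the paper's actual methodology may differ in detail.

```python
# Pearson correlation of two hourly packet-count series, as a sketch of
# the cross-correlation used to compare telescope sensors.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cross_correlation(x, y, lag):
    # Shift y backwards by `lag` samples before correlating.
    if lag == 0:
        return pearson(x, y)
    return pearson(x[:-lag], y[lag:])

# Invented hourly counts for two /24 sensors seeing the same scan wave.
sensor_a = [10, 12, 9, 40, 38, 11, 10, 9]
sensor_b = [11, 13, 10, 42, 39, 12, 11, 10]
r = cross_correlation(sensor_a, sensor_b, 0)  # close to 1.0 here
```

Sweeping `lag` over a range and taking the maximum is the usual way to detect sensors observing the same events with a time offset.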
Observed correlations of unsolicited network traffic over five distinct IPv4 netblocks
- Nkhumeleni, Thiswilondi M, Irwin, Barry V W
- Authors: Nkhumeleni, Thiswilondi M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430408 , vital:72689 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: Using network telescopes to monitor unused IP address space provides a favorable environment for researchers to study and detect malware, denial of service and scanning activities within the global IPv4 address space. This research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. Analysis is done using data collected over a 12-month period on five network telescopes, each with an aperture size of /24, operated in disjoint IPv4 address space. These were considered as two distinct groupings. Time series representing time-based traffic activity observed on these sensors were constructed. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing category A and category B network telescopes’ datasets. Results were significantly improved by considering TCP traffic separately. Moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analyzed the correlation of network telescopes’ traffic activity.
- Full Text:
- Date Issued: 2015
Towards a PHP webshell taxonomy using deobfuscation-assisted similarity analysis
- Wrench, Peter M, Irwin, Barry V W
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429560 , vital:72622 , 10.1109/ISSA.2015.7335066
- Description: The abundance of PHP-based Remote Access Trojans (or web shells) found in the wild has led malware researchers to develop systems capable of tracking and analysing these shells. In the past, such shells were ably classified using signature matching, a process that is currently unable to cope with the sheer volume and variety of web-based malware in circulation. Although a large percentage of newly-created webshell software incorporates portions of code derived from seminal shells such as c99 and r57, they are able to disguise this by making extensive use of obfuscation techniques intended to frustrate any attempts to dissect or reverse engineer the code. This paper presents an approach to shell classification and analysis (based on similarity to a body of known malware) in an attempt to create a comprehensive taxonomy of PHP-based web shells. Several different measures of similarity were used in conjunction with clustering algorithms and visualisation techniques in order to achieve this. Furthermore, an auxiliary component capable of syntactically deobfuscating PHP code is described. This was employed to reverse idiomatic obfuscation constructs used by software authors. It was found that this deobfuscation dramatically increased the observed levels of similarity by exposing additional code for analysis.
- Full Text:
- Date Issued: 2015
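A minimal sketch of the deobfuscation-assisted similarity idea, assuming a toy eval(base64_decode(...)) idiom and token-set Jaccard similarity (one of several possible similarity measures; the snippets are invented toy examples, not real shell code):

```python
# Decode base64-wrapped payloads (a common PHP obfuscation idiom)
# before measuring token-set Jaccard similarity.
import base64
import re

def deobfuscate(php):
    # Replace eval(base64_decode("...")) with the decoded literal.
    def repl(m):
        return base64.b64decode(m.group(1)).decode()
    return re.sub(r'eval\(base64_decode\("([^"]+)"\)\);?', repl, php)

def tokens(php):
    return set(re.findall(r"[A-Za-z_]\w*", php))

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

shell_a = 'system($_GET["cmd"]);'
encoded = base64.b64encode(shell_a.encode()).decode()
shell_b = f'eval(base64_decode("{encoded}"));'

before = jaccard(shell_a, shell_b)               # obfuscation hides overlap
after = jaccard(shell_a, deobfuscate(shell_b))   # decoding exposes it
```

As the abstract reports, exposing the hidden code is what "dramatically increased the observed levels of similarity".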
An exploration of geolocation and traffic visualisation using network flows
- Pennefather, Sean, Irwin, Barry V W
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429597 , vital:72625 , 10.1109/ISSA.2014.6950
- Description: A network flow is a data record that represents characteristics associated with a unidirectional stream of packets transmitted between two hosts using an IP layer protocol. As a network flow only represents statistics relating to the data transferred in the stream, the effectiveness of utilizing network flows for traffic visualization to aid in cyber defense is not immediately apparent and needs further exploration. The goal of this research is to explore the use of network flows for data visualization and geolocation.
- Full Text:
- Date Issued: 2014
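The flow-record abstraction described above can be sketched as a simple data structure plus the kind of per-destination aggregation a geolocated visualisation would consume; field names and figures are illustrative, not the paper's schema.

```python
# A network flow reduced to the statistics typically exported by
# NetFlow-style collectors: endpoints, protocol, and volume counters.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int   # IP protocol number, e.g. 6 = TCP, 17 = UDP
    packets: int
    octets: int

flows = [
    FlowRecord("192.0.2.1", "198.51.100.7", 50000, 443, 6, 12, 9000),
    FlowRecord("192.0.2.1", "198.51.100.7", 50001, 443, 6, 4, 1200),
    FlowRecord("192.0.2.9", "198.51.100.8", 40000, 53, 17, 1, 80),
]

# Aggregate traffic volume per destination: the summary a geolocation
# step would then map from IP address to country for visualisation.
by_dst = {}
for f in flows:
    by_dst[f.dst_ip] = by_dst.get(f.dst_ip, 0) + f.octets
```

Note that no payload is present in such records, which is exactly why the visualisation value of flows "needs further exploration" per the abstract.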
Design of a Network Packet Processing platform
- Pennefather, Sean, Irwin, Barry V W
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2014
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427901 , vital:72472 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622772_Design_of_a_Network_Packet_Processing_platform/links/5b9a187f92851c4ba8181bd6/Design-of-a-Network-Packet-Processing-platform.pdf
- Description: This paper describes the design considerations investigated in the implementation of a prototype embedded network packet processing platform. The purpose of this system is to provide a means for researchers to process and manipulate network traffic using an embedded standalone hardware platform, with the provision that it be soft-configurable and flexible in its functionality. The performance of the Ethernet layer subsystem, implemented using XMOS MCUs, is investigated. Future applications of this prototype are discussed.
- Full Text:
- Date Issued: 2014
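The Ethernet-layer processing such a platform performs begins with extracting the frame header before dispatching on EtherType; a minimal host-side sketch (illustrative only, not the XMOS firmware):

```python
# Parse the 14-byte Ethernet II header a packet-processing platform
# must decode before switching on EtherType; the frame is hand-built.
import struct

def parse_ethernet(frame):
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        "ethertype": ethertype,   # 0x0800 = IPv4, 0x86DD = IPv6
        "payload": frame[14:],
    }

# Broadcast destination, a made-up source MAC, EtherType IPv4, then the
# first two bytes of an IPv4 header as payload.
frame = bytes.fromhex("ffffffffffff" "0a0000000001" "0800") + b"\x45\x00"
hdr = parse_ethernet(frame)
```

On the actual hardware this dispatch would be done per-port in MCU logic rather than in Python, but the field layout is the same.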
Human perception of the measurement of a network attack taxonomy in near real-time
- Van Heerden, Renier, Malan, Mercia M, Mouton, Francois, Irwin, Barry V W
- Authors: Van Heerden, Renier , Malan, Mercia M , Mouton, Francois , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429924 , vital:72652 , https://doi.org/10.1007/978-3-662-44208-1_23
- Description: This paper investigates how the measurement of a network attack taxonomy can be related to human perception. Network attacks do not have a time limitation, but the earlier an attack is detected, the more damage can be prevented and the more preventative actions can be taken. This paper evaluates how elements of network attacks can be measured in near real-time (60 seconds). The taxonomy we use was developed by van Heerden et al (2012) and has over 100 classes. These classes present the attacker's and the defender's points of view. The degree to which each class can be quantified or measured is determined by investigating the accuracy of various assessment methods. We classify each class as either defined, high, low or not quantifiable. For example, it may not be possible to determine the instigator of an attack (Aggressor), but only that the attack has been launched by a Hacker (Actor). Some classes can only be quantified with low confidence, or not at all, in a short (near real-time) window. The IP address of an attack can easily be faked, reducing the confidence in the information obtained from it, so the origin of an attack can be determined only with low confidence. This determination is itself subjective. All the evaluations of the classes in this paper are subjective, but due to the very basic grouping (High, Low or Not Quantifiable) a subjective value can be used. The complexity of the taxonomy can be significantly reduced if only classes with a high perceptive accuracy are used.
- Full Text:
- Date Issued: 2014
On the viability of pro-active automated PII breach detection: A South African case study
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430235 , vital:72676 , https://doi.org/10.1145/2664591.2664600
- Description: Various reasons exist why certain types of information are deemed personal, both by legislation and by society. While crimes such as identity theft and impersonation have always existed, the rise of the internet and social media has exacerbated the problem. South Africa has recently joined the growing ranks of countries passing legislation to ensure the privacy of certain types of data. As is the case with most implemented security enforcement systems, most appointed privacy regulators operate in a reactive way. While this is a completely acceptable method of operation, it is not the most efficient. Research has shown that most data leaks containing personal information remain available for more than a month on average before being detected and reported. Quite often the data is discovered by a third party, who may choose to notify the responsible organisation but can just as easily copy the data and make use of it. This paper demonstrates the potential benefit a privacy regulator can expect from implementing pro-active detection of electronic personally identifiable information (PII). Adopting pro-active detection of PII exposed on public networks can potentially contribute to a significant reduction in exposure time. The results discussed in this paper were obtained by means of experimentation on a custom-created PII detection system.
- Full Text:
- Date Issued: 2014
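A pro-active PII scanner of the kind described might, for example, flag candidate South African ID numbers (13 digits ending in a Luhn check digit) in harvested text. A hedged sketch, with a synthetic scanned string and number; a real system would apply many more validators:

```python
# Flag candidate 13-digit South African ID numbers in text by
# combining a digit-pattern match with a Luhn checksum validation.
import re

def luhn_valid(digits):
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def check_digit(prefix12):
    # Choose the final digit that makes the full 13-digit number pass.
    for d in "0123456789":
        if luhn_valid(prefix12 + d):
            return d

def find_candidate_ids(text):
    return [m for m in re.findall(r"\b\d{13}\b", text) if luhn_valid(m)]

# Build a synthetic, checksum-valid ID rather than using a real one.
synthetic_id = "800101500908" + check_digit("800101500908")
hits = find_candidate_ids(f"id: {synthetic_id}, order: 1234567890123")
# hits contains only the checksum-valid candidate, not the order number
```

The checksum step is what keeps such a scanner's false-positive rate manageable when crawling public data at scale.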
Testing antivirus engines to determine their effectiveness as a security layer
- Haffejee, Jameel, Irwin, Barry V W
- Authors: Haffejee, Jameel , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429673 , vital:72631 , 10.1109/ISSA.2014.6950496
- Description: This research was undertaken to empirically test the assumption that it is trivial to bypass an antivirus application, and to gauge the effectiveness of antivirus engines when faced with a number of known evasion techniques. A known malicious binary was combined with evasion techniques and deployed against several antivirus engines to test their detection ability. The research also documents the process of setting up an environment for testing antivirus engines, as well as building the evasion techniques used in the tests. This environment facilitated the empirical testing needed to determine whether the assumption that antivirus security controls can easily be bypassed holds. The results of the empirical tests are also presented and demonstrate that it is indeed within reason that an attacker can evade multiple antivirus engines without much effort. As such, while an antivirus application is useful for protecting against known threats, it does not work as effectively against unknown threats.
- Full Text:
- Date Issued: 2014
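One reason the evasions above can succeed is that exact-signature detection is brittle: changing a single byte of a binary changes its cryptographic hash entirely. A harmless sketch using a stand-in byte string rather than any real malware:

```python
# Demonstrate why hash-based signature matching is easy to evade:
# a one-byte mutation produces a completely different digest.
import hashlib

original = b"\x4d\x5a" + b"payload-bytes" * 4   # toy stand-in "binary"
mutated = bytearray(original)
mutated[10] ^= 0xFF                             # flip one byte

sig_orig = hashlib.sha256(original).hexdigest()
sig_mut = hashlib.sha256(bytes(mutated)).hexdigest()
# A signature database keyed on sig_orig no longer matches the mutant.
```

This is why modern engines supplement exact signatures with heuristics and behavioural analysis, the layers the paper's evasion techniques probe.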