A review of current DNS TTL practices
- Authors: Van Zyl, Ignus , Rudman, Lauren , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427813 , vital:72464 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622760_A_review_of_current_DNS_TTL_practices/links/5b9a16e292851c4ba8181b7f/A-review-of-current-DNS-TTL-practices.pdf
- Description: This paper provides insight into legitimate DNS domain Time to Live (TTL) activity captured over two live caching servers from the period January to June 2014. DNS TTL practices are identified and compared between frequently queried domains, with respect to the caching servers. A breakdown of TTL practices by Resource Record type is also given, as well as an analysis on the TTL choices of the most frequent Top Level Domains. An analysis of anomalous TTL values with respect to the gathered data is also presented.
- Full Text:
- Date Issued: 2015
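The per-Resource-Record-type TTL breakdown the abstract describes can be sketched as a simple grouping pass over captured answers. The `records` tuples below are hypothetical stand-ins for parsed cache-trace data, not the paper's dataset:

```python
from collections import defaultdict
from statistics import median

# Hypothetical captured DNS answers: (domain, rtype, ttl) tuples, the kind
# of data a parsed caching-server trace might yield.
records = [
    ("example.com", "A", 300),
    ("example.com", "AAAA", 300),
    ("example.org", "A", 86400),
    ("example.net", "MX", 3600),
    ("example.org", "NS", 172800),
    ("example.net", "A", 60),
]

def ttl_by_rtype(records):
    """Group observed TTLs by resource record type and summarise them."""
    buckets = defaultdict(list)
    for _domain, rtype, ttl in records:
        buckets[rtype].append(ttl)
    return {
        rtype: {"count": len(ttls), "min": min(ttls),
                "median": median(ttls), "max": max(ttls)}
        for rtype, ttls in buckets.items()
    }

summary = ttl_by_rtype(records)
print(summary["A"])  # {'count': 3, 'min': 60, 'median': 300, 'max': 86400}
```

The same grouping applied to the domain's TLD instead of the record type gives the paper's other breakdown.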
A sandbox-based approach to the deobfuscation and dissection of PHP-based malware
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429258 , vital:72571 , 10.23919/SAIEE.2015.8531886
- Description: The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally targeted. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.
- Full Text:
- Date Issued: 2015
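One obfuscation layer such web shells commonly use is `eval(base64_decode('...'))`. A minimal static "peeling" pass, assuming only that specific pattern (the paper's sandbox performs far richer semantic dissection than this), might look like:

```python
import base64
import re

# A toy obfuscated sample: eval(base64_decode('...')) wrapping a payload.
# Real shells nest many such layers; this sketch peels them one at a time.
sample = "eval(base64_decode('ZWNobyAiaGVsbG8iOw=='));"

LAYER = re.compile(r"eval\(base64_decode\('([A-Za-z0-9+/=]+)'\)\);?")

def peel(code, max_layers=10):
    """Statically strip nested eval(base64_decode(...)) layers."""
    for _ in range(max_layers):
        m = LAYER.fullmatch(code.strip())
        if not m:
            break
        code = base64.b64decode(m.group(1)).decode("utf-8", "replace")
    return code

print(peel(sample))  # echo "hello";
```

Purely syntactic matching like this fails against shells that build the decoder dynamically, which is why the paper resorts to executing the code in a mimicked vulnerable host.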
An investigation into the signals leakage from a smartcard based on different runtime code
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427845 , vital:72466 , https://www.researchgate.net/profile/Ibraheem-Fries-laar/publication/307918229_An_investigation_into_the_signals_leakage_from_a_smartcard_based_on_different_runtime_code/links/57d1996008ae0c0081e04fd5/An-investigation-into-the-signals-leakage-from-a-smartcard-based-on-different-runtime-code.pdf
- Description: This paper investigates the power leakage of a smartcard. It is intended to answer two vital questions: what information is leaked out when different characters are used as output; and does the length of the output affect the amount of information leaked. The investigation determines that as the length of the output is increased, more bus lines are switched from a precharge state to a high state. This is related to the output array in the code increasing its length. Furthermore, this work shows that the output for different characters generates a different pattern. This is because different characters need different numbers of bytes to be executed, since they have different binary values. Additionally, the information leaked out can be directly linked to the smartcard’s interpreter.
- Full Text:
- Date Issued: 2015
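Side-channel work of this kind is often reasoned about with a Hamming-weight bus model: a byte driven onto a precharged-low bus switches as many lines as it has set bits. The sketch below illustrates that standard model (an assumption here, not necessarily the paper's exact leakage model):

```python
def hamming_weight(byte):
    """Number of set bits: lines driven high from a precharged-low bus."""
    return bin(byte).count("1")

def leakage_trace(output, precharge=0x00):
    """Model per-byte leakage as the Hamming distance between the
    precharge state and each output byte on the data bus."""
    return [hamming_weight(precharge ^ b) for b in output]

# Longer outputs switch more bus lines in total, and different characters
# (different binary values) give different per-byte patterns.
short = leakage_trace(b"AB")
long_ = leakage_trace(b"ABCD")
print(short, sum(short))   # [2, 2] 4
print(long_, sum(long_))   # [2, 2, 3, 2] 9
```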
Characterization and analysis of NTP amplification based DDoS attacks
- Authors: Rudman, Lauren , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429285 , vital:72573 , 10.1109/ISSA.2015.7335069
- Description: Network Time Protocol based DDoS attacks saw a lot of popularity throughout 2014. This paper shows the characterization and analysis of two large datasets containing packets from NTP based DDoS attacks captured in South Africa. Using a series of Python based tools, the dataset is analysed according to specific parts of the packet headers. These include the source IP address and Time-to-Live (TTL) values. The analysis found the top source addresses and looked at the TTL values observed for each address. These TTL values can be used to calculate the probable operating system or DDoS attack tool used by an attacker. We found that each TTL value seen for an address can indicate the number of hosts attacking the address or indicate minor routing changes. The Time-to-Live values, as a whole, are then analysed to find the total number used throughout each attack. The most frequent TTL values are then found and show that the majority of them indicate the attackers are using an initial TTL of 255. This value can indicate the use of a certain DDoS tool that creates packets with that exact initial TTL. The TTL values are then put into groups that can show the number of IP addresses a group of hosts are targeting.
- Full Text:
- Date Issued: 2015
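The inference from an observed TTL back to a probable initial TTL relies on the small set of initial values in common use (32, 64, 128, 255): the smallest common value at or above the observed TTL is the likely starting point, and the difference is the hop count. A minimal sketch of that standard heuristic:

```python
# Common initial TTL values used by operating systems and attack tools.
INITIAL_TTLS = (32, 64, 128, 255)

def infer_initial_ttl(observed):
    """Map an observed TTL to the smallest common initial TTL >= it,
    the usual heuristic for guessing the sender's OS or tool."""
    for initial in INITIAL_TTLS:
        if observed <= initial:
            return initial
    return None

def hop_count(observed):
    initial = infer_initial_ttl(observed)
    return initial - observed if initial is not None else None

# A packet arriving with TTL 243 most likely started at 255, twelve hops
# away -- consistent with the paper's finding that most attack traffic
# used an initial TTL of 255.
print(infer_initial_ttl(243), hop_count(243))  # 255 12
print(infer_initial_ttl(52), hop_count(52))    # 64 12
```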
Cyber Vulnerability Assessment: Case Study of Malawi and Tanzania
- Authors: Chindipha, Stones D , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428558 , vital:72520 , https://accconference.mandela.ac.za/ACCConference/media/Store/images/Proceedings-2015.pdf#page=105
- Description: Much as the Internet is beneficial to our daily activities, with each passing day it also brings along with it information security concerns for the users, be they at company or national level. Each year the number of Internet users keeps growing, particularly in Africa, and this means only one thing: more cyber-attacks. Governments have become a focal point of this data leakage problem, making this a matter of national security. Looking at the current state of affairs, cyber-based incidents are more likely to increase in Africa, mainly due to the increased prevalence and affordability of broadband connectivity, coupled with a lack of online security awareness. A drop in the cost of broadband connection means more people will be able to afford Internet connectivity. With Open Source Intelligence (OSINT), this paper aims to perform a vulnerability analysis for states in Eastern Africa, building on prior research by Swart et al., which showed that there are vulnerabilities in the information systems, using the case of South Africa as an example. States in East Africa are to be considered as candidates, with the final decision being determined by access to suitable resources and availability of information. A comparative analysis to assess the factors that affect the degree of security susceptibilities in various states will also be made, and information security measures used by various governments will be assessed to ascertain the extent of their contribution to this vulnerability. This pilot study will be extended to other Southern and Eastern African states such as Botswana, Kenya, Uganda and Namibia in future work.
- Full Text:
- Date Issued: 2015
Data Centre vulnerabilities: physical, logical and trusted entity security
- Authors: Swart, Ignus , Grobler, Marthie , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427859 , vital:72467 , https://www.researchgate.net/profile/Ignus-Swart/publication/305442546_Data_Centre_vulnerabilities_physical_logical_trusted_entity_security/links/578f38c108aecbca4cada6bf/Data-Centre-vulnerabilities-physical-logical-trusted-entity-security.pdf
- Description: Data centres are often the hub for a significant number of disparate interconnecting systems. With rapid advances in virtualization, the use of data centres has increased significantly and is set to continue growing. Hosted systems typically serve the data needs of a growing number of organizations, ranging from private individuals to mammoth governmental departments. Due to this centralized method of operation, data centres have become a prime target for attackers. These attackers are not only after the data contained in the data centre; often the physical infrastructure the systems run on is the target of attack. Downtime resulting from such an attack can affect a wide range of entities and can have severe financial implications for the owners of the data centre. To limit liability, strict adherence to standards is prescribed. Technology, however, develops at a far faster pace than standards, and our ability to accurately measure information security has significant hidden caveats. This allows for a situation where the defender's dilemma is exacerbated by information overload, a significant increase in attack surface, and reporting tools that show only limited views. This paper investigates the logical and physical security components of a data centre and introduces the notion of third-party involvement as an increase in attack surface, due to the manner in which data centres typically operate.
- Full Text:
- Date Issued: 2015
DDoS Attack Mitigation Through Control of Inherent Charge Decay of Memory Implementations
- Authors: Herbert, Alan , Irwin, Barry V W , van Heerden, Renier P
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430339 , vital:72684 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: DDoS (Distributed Denial of Service) attacks over recent years have proven to be devastating to the target systems and services made publicly available over the Internet. Furthermore, the backscatter caused by DDoS attacks also affects the available bandwidth and responsiveness of many other hosts within the Internet. The unfortunate reality of these attacks is that the targeted party cannot fight back due to the presence of botnets and malware-driven hosts. These hosts that carry out the attack on a target are usually controlled remotely and the owner of the device is unaware of it; for this reason one cannot attack back directly, as this will serve little more than to disable an innocent party. A proposed solution to these DDoS attacks is to identify a potential attacking address and ignore communication from that address for a set period of time through time stamping.
- Full Text:
- Date Issued: 2015
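The proposed mitigation, ignoring a suspected address for a set period via time stamping, can be sketched as a blocklist whose entries decay by timestamp comparison rather than per-address timers. The class name and the 60-second window below are illustrative, not from the paper:

```python
import time

class DecayingBlocklist:
    """Ignore traffic from a suspected attacker for a fixed window,
    tracked by timestamp rather than by a per-address timer."""
    def __init__(self, block_seconds):
        self.block_seconds = block_seconds
        self._expiry = {}  # address -> time after which it is unblocked

    def block(self, addr, now=None):
        now = time.monotonic() if now is None else now
        self._expiry[addr] = now + self.block_seconds

    def is_blocked(self, addr, now=None):
        now = time.monotonic() if now is None else now
        expiry = self._expiry.get(addr)
        if expiry is None:
            return False
        if now >= expiry:
            del self._expiry[addr]  # entry has decayed; forget it
            return False
        return True

bl = DecayingBlocklist(block_seconds=60)
bl.block("198.51.100.7", now=0)
print(bl.is_blocked("198.51.100.7", now=30))   # True
print(bl.is_blocked("198.51.100.7", now=90))   # False
```

Storing only an expiry timestamp per address keeps the state constant-size per attacker, which matters at DDoS packet rates.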
Design and Fabrication of a Low Cost Traffic Manipulation Hardware Platform
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427873 , vital:72468 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622941_Design_and_Fabrication_of_a_Low_Cost_Traffic_Manipulation_Hardware/links/5b9a1625458515310583fc8c/Design-and-Fabrication-of-a-Low-Cost-Traffic-Manipulation-Hardware.pdf
- Description: This paper describes the design and fabrication of a dedicated hardware platform for network traffic logging and modification at a production cost of under $300. The context of the device is briefly discussed before characteristics relating to hardware development are explored. The paper concludes with three application examples to show some of the potential functionality of the platform. Testing of the device shows an average TCP throughput of 84.44 MiB/s when using the designed Ethernet modules.
- Full Text:
- Date Issued: 2015
FPGA Based Implementation of a High Performance Scalable NetFlow Filter
- Authors: Herbert, Alan , Irwin, Barry V W , Otten, D F , Balmahoon, M R
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427887 , vital:72470 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622948_FPGA_Based_Implementation_of_a_High_Perfor-mance_Scalable_NetFlow_Filter/links/5b9a17a192851c4ba8181ba5/FPGA-Based-Implementation-of-a-High-Performance-Scalable-NetFlow-Filter.pdf
- Description: Full packet analysis on firewalls and intrusion detection, although effective, has been found in recent times to be detrimental to the overall performance of networks that receive large volumes of throughput. For this reason partial packet analysis algorithms such as the NetFlow protocol have emerged to better mitigate these bottlenecks. This research delves into implementing a hardware accelerated, scalable, high performance system for NetFlow analysis and attack mitigation. Furthermore, this implementation takes on attack mitigation through collection and processing of network flows produced at the source, rather than at the site of incident. This research platform manages to scale out its back-end through distributed analysis over multiple hosts using the ZeroMQ toolset. Furthermore, ZeroMQ allows for multiple NetFlow data publishers, so that plug-ins can subscribe to the publishers that contain the relevant data to further increase the overall performance of the system. The dedicated custom hardware optimizes the received network flows through cleaning, summarization and re-ordering into an easy-to-pass form when given to the sequential component of the system; this being the back-end.
- Full Text:
- Date Issued: 2015
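ZeroMQ PUB/SUB delivers a message to a subscriber when the message's topic starts with a subscribed prefix, which is what lets plug-ins receive only the relevant NetFlow data. The plain-Python sketch below models that prefix filtering without a running ZeroMQ socket; the class and topic names are hypothetical:

```python
# Models ZeroMQ-style PUB/SUB delivery: a subscriber receives a message
# only when the topic matches one of its subscribed prefixes.
class FlowBus:
    def __init__(self):
        self._subs = []  # (prefix, handler) pairs

    def subscribe(self, prefix, handler):
        self._subs.append((prefix, handler))

    def publish(self, topic, flow):
        for prefix, handler in self._subs:
            if topic.startswith(prefix):  # ZeroMQ-style prefix match
                handler(topic, flow)

bus = FlowBus()
tcp_flows = []   # a plug-in interested only in TCP flows
all_flows = []   # a plug-in that wants everything
bus.subscribe("flow.tcp", lambda t, f: tcp_flows.append(f))
bus.subscribe("flow.", lambda t, f: all_flows.append(f))

bus.publish("flow.tcp.syn", {"src": "203.0.113.5", "bytes": 40})
bus.publish("flow.udp", {"src": "203.0.113.9", "bytes": 512})
print(len(tcp_flows), len(all_flows))  # 1 2
```

In the real system each publisher and plug-in would be a separate process bound to ZeroMQ sockets, giving the distributed scale-out the abstract describes.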
Multi sensor national cyber security data fusion
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430393 , vital:72688 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: A proliferation of cyber security strategies has recently been published around the world, with as many as thirty-five strategies documented since 2009. These published strategies indicate the growing need to obtain a clear view of a country’s information security posture and to improve on it. The potential attack surface of a nation is extremely large, however, and no single source of cyber security data provides all the required information to accurately describe the cyber security readiness of a nation. There are, however, a variety of specialised data sources that are rich enough in relevant cyber security information to assess the state of a nation in at least key areas such as botnets, spam servers and incorrectly configured hosts present in a country. While informative from both an offensive and a defensive point of view, the data sources vary in factors such as accuracy, completeness, representation, cost and data availability. These factors add complexity when attempting to present a clear view of the combined intelligence of the data.
- Full Text:
- Date Issued: 2015
Observed correlations of unsolicited IP traffic across five distinct network telescopes
- Authors: Irwin, Barry V W , Nkhumeleni, T
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428569 , vital:72521 , https://www.jstor.org/stable/26502727
- Description: Using network telescopes to monitor unused IP address space provides a favourable environment for researchers to study and detect malware, denial of service, and scanning activities on the Internet. This research focuses on comparative and correlation analysis of traffic activity across five IPv4 network telescopes, each with an aperture size of /24 over a 12-month period. Time series representations of the traffic activity observed on these sensors were constructed. Using the cross- and auto-correlation methods of time series analysis, sensor data was quantitatively analysed with the resulting correlation of network telescopes’ traffic activity found to be moderate to high, dependent on grouping.
- Full Text:
- Date Issued: 2015
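The lag-zero cross-correlation used to compare sensors is the Pearson correlation of the two packet-count time series. A sketch on synthetic data, where a shared Poisson "scanning" component stands in for Internet-wide background radiation (none of these numbers are from the paper):

```python
import numpy as np

# Hypothetical hourly packet counts from two /24 telescope sensors that
# observe the same scanning background plus sensor-local noise.
rng = np.random.default_rng(0)
scanning = rng.poisson(50, 240).astype(float)
a = scanning + rng.normal(0, 5, 240)
b = scanning + rng.normal(0, 5, 240)

def cross_corr(x, y):
    """Normalised (Pearson) cross-correlation at lag zero."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

r = cross_corr(a, b)
print(round(r, 2))  # well above zero: both sensors see the same background
```

Repeating the calculation at shifted lags (auto-correlation when x is y) reveals periodic structure such as diurnal scanning cycles.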
Observed correlations of unsolicited network traffic over five distinct IPv4 netblocks
- Authors: Nkhumeleni, Thiswilondi M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430408 , vital:72689 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: Using network telescopes to monitor unused IP address space provides a favorable environment for researchers to study and detect malware, denial of service and scanning activities within global IPv4 address space. This research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. Analysis is done using data collected over a 12-month period on five network telescopes, each with an aperture size of /24, operated in disjoint IPv4 address space. These were considered as two distinct groupings. Time series representing time-based traffic activity observed on these sensors were constructed. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing category A and category B network telescopes’ datasets. Results were significantly improved by considering TCP traffic separately: moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic similarity between telescope sensors within the same category and quantitatively characterised the correlation of network telescopes’ traffic activity.
- Full Text:
- Date Issued: 2015
Towards a PHP webshell taxonomy using deobfuscation-assisted similarity analysis
- Wrench, Peter M, Irwin, Barry V W
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429560 , vital:72622 , 10.1109/ISSA.2015.7335066
- Description: The abundance of PHP-based Remote Access Trojans (or web shells) found in the wild has led malware researchers to develop systems capable of tracking and analysing these shells. In the past, such shells were adequately classified using signature matching, a process that is currently unable to cope with the sheer volume and variety of web-based malware in circulation. Although a large percentage of newly-created web shell software incorporates portions of code derived from seminal shells such as c99 and r57, its authors are able to disguise this by making extensive use of obfuscation techniques intended to frustrate any attempts to dissect or reverse engineer the code. This paper presents an approach to shell classification and analysis (based on similarity to a body of known malware) in an attempt to create a comprehensive taxonomy of PHP-based web shells. Several different measures of similarity were used in conjunction with clustering algorithms and visualisation techniques in order to achieve this. Furthermore, an auxiliary component capable of syntactically deobfuscating PHP code is described. This was employed to reverse idiomatic obfuscation constructs used by software authors. It was found that this deobfuscation dramatically increased the observed levels of similarity by exposing additional code for analysis.
- Full Text:
- Date Issued: 2015
A high-level architecture for efficient packet trace analysis on gpu co-processors
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429572 , vital:72623 , 10.1109/ISSA.2013.6641052
- Description: This paper proposes a high-level architecture to support efficient, massively parallel packet classification, filtering and analysis using commodity Graphics Processing Unit (GPU) hardware. The proposed architecture aims to provide a flexible and efficient parallel packet processing and analysis framework, supporting complex programmable filtering, data mining operations, statistical analysis functions and traffic visualisation, with minimal CPU overhead. In particular, this framework aims to provide a robust set of high-speed analysis functionality, in order to dramatically reduce the time required to process and analyse extremely large network traces. This architecture derives from initial research, which has shown GPU co-processors to be effective in accelerating packet classification to terabit speeds with minimal CPU overhead, far exceeding the bandwidth capacity between standard long-term storage and the GPU device. This paper provides a high-level overview of the proposed architecture and its primary components, motivated by the results of prior research in the field.
- Full Text:
- Date Issued: 2013
A kernel-driven framework for high performance internet routing simulation
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429585 , vital:72624 , 10.1109/ISSA.2013.6641048
- Description: The ability to simulate packets traversing an Internet path is an integral part of providing realistic simulations for network training and cyber defence exercises. This paper builds on previous work and considers an in-kernel approach to solving the routing simulation problem. The in-kernel approach is anticipated to allow the framework to achieve throughput rates of 1GB/s or higher using commodity hardware. Processes that run outside the context of the kernel of most operating systems require context switching to access hardware and kernel modules. This leads to considerable delays in processes, such as network simulators, that frequently perform hardware operations such as hard disk accesses and network packet handling. To mitigate this problem, as experienced with earlier implementations, this research looks towards implementing a kernel module to handle network routing and simulation within a UNIX-based system. This would remove delays incurred from context switching and allow direct access to the hardware components of the host.
- Full Text:
- Date Issued: 2013
Deep Routing Simulation
- Irwin, Barry V W, Herbert, Alan
- Authors: Irwin, Barry V W , Herbert, Alan
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430353 , vital:72685 , https://www.academic-bookshop.com/ourshop/prod_2546879-ICIW-2013-8th-International-Conference-on-Information-Warfare-and-Security.html
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2013