High Speed Lexical Classification of Malicious URLs
- Egan, Shaun P, Irwin, Barry V W
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428055 , vital:72483 , https://www.researchgate.net/profile/Barry-Ir-win/publication/326225046_High_Speed_Lexical_Classification_of_Malicious_URLs/links/5b3f20acaca27207851c60f9/High-Speed-Lexical-Classification-of-Malicious-URLs.pdf
- Description: It has been shown in recent research that it is possible to identify malicious URLs through lexical analysis of their URL structures alone. Lightweight algorithms are defined as methods by which URLs are analyzed without using external sources of information such as WHOIS lookups, blacklist lookups and content analysis. These parameters include URL length, number of delimiters and the number of traversals through the directory structure, and are used throughout much of the research in the paradigm of lightweight classification. Methods which include external sources of information are often called fully featured classifications and have been shown to be only slightly more effective than a purely lexical analysis when considering both false positives and false negatives. This distinction allows these algorithms to be run client side without introducing additional latency, while still providing a high level of accuracy through the use of modern techniques in training classifiers. Both AROW and CW classifier update methods will be used as prototype implementations and their effectiveness will be compared to fully featured analysis results. These methods are selected because they are able to train on any labeled data, including instances in which their prediction is correct, allowing them to build confidence in specific lexical features.
- Full Text:
- Date Issued: 2011
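The lightweight lexical features this abstract names (URL length, number of delimiters, directory traversals) can be sketched as a small extraction function. The feature names, the delimiter set, and the depth convention below are illustrative assumptions, not taken from the paper:

```python
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """Extract the lightweight lexical features named in the abstract.
    The delimiter set and feature names are illustrative assumptions."""
    parsed = urlparse(url)
    delimiters = "./?=-_&"
    path = parsed.path.strip("/")
    return {
        "url_length": len(url),
        "delimiter_count": sum(url.count(c) for c in delimiters),
        "path_depth": path.count("/") + 1 if path else 0,
        "hostname_length": len(parsed.netloc),
    }

features = lexical_features("http://example.com/a/b/login.php?id=1")
```

A vector like this is what an online learner such as CW or AROW would consume, one URL at a time, with no external lookups involved.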
Near Real-time Aggregation and Visualisation of Hostile Network Traffic
- Hunter, Samuel O, Irwin, Barry V W
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428067 , vital:72484 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622653_Near_Real-time_Aggregation_and_Visualisation_of_Hostile_Network_Traffic/links/5b9a1474a6fdcc59bf8dfcc2/Near-Real-time-Aggregation-and-Visualisation-of-Hostile-Network-Traffic.pdf4
- Description: Efficient utilization of hostile network traffic for visualization and defensive purposes requires near real-time availability of such data. Hostile or malicious traffic was obtained through the use of network telescopes and honeypots, as they are effective at capturing mostly illegitimate and nefarious traffic. The data is then exposed in near real-time through a messaging framework and visualized with the help of a geolocation-based visualization tool. Defensive applications with regard to hostile network traffic are explored; these include the dynamic quarantine of malicious hosts internal to a network and the egress filtering of denial-of-service traffic originating from inside a network.
- Full Text:
- Date Issued: 2011
Tartarus: A honeypot based malware tracking and mitigation framework
- Hunter, Samuel O, Irwin, Barry V W
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428629 , vital:72525 , https://d1wqtxts1xzle7.cloudfront.net/96055420/Hunter-libre.pdf?1671479103=andresponse-content-disposi-tion=inline%3B+filename%3DTartarus_A_honeypot_based_malware_tracki.pdfandExpires=1714722666andSignature=JtPpR-IoAXILqsIJSlmCEvn6yyytE17YLQBeFJRKD5aBug-EbLxFpEGDf4GtQXHbxHvR4~E-b5QtMs1H6ruSYDti9fIHenRbLeepZTx9jYj92to3qZjy7UloigYbQuw0Y6sN95jI7d4HX-Xkspbz0~DsnzwFmLGopg7j9RZSHqpSpI~fBvlml3QQ2rLCm4aB9u8tSW8du5u~FiJgiLHNgJaPzEOzy4~yfKkXBh--LTFdgeAVYxQbOESGGh9k5bc-LDJhQ6dD5HpXsM3wKJvYuVyU6m83vT2scogVgKHIr-t~XuiqL35PfI3hs2c~ZO0TH4hCqwiNMHQ8GCYsLvllsA__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA
- Description: On a daily basis many of the hosts connected to the Internet experience continuous probing and attack from malicious entities. Detection of and defence from these malicious entities has primarily been the concern of Intrusion Detection Systems, Intrusion Prevention Systems and Anti-Virus software. These systems rely heavily on known signatures to detect nefarious traffic. Due to the reliance on known malicious signatures, these systems have been at a serious disadvantage when it comes to detecting new, never before seen malware. This paper introduces Tartarus, a malware tracking and mitigation framework that makes use of honeypot technology to detect malicious traffic. Tartarus implements a dynamic quarantine technique to mitigate the spread of self-propagating malware on a production network. In order to better understand the spread and impact of Internet worms, Tartarus is used to construct a detailed demographic of potentially malicious hosts on the Internet. This host demographic is in turn used as a blacklist for firewall rule creation. The sources of malicious traffic are then illustrated through the use of a geolocation-based visualisation.
- Full Text:
- Date Issued: 2011
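The final step the Tartarus abstract describes, turning the host demographic into firewall rules, reduces to rendering one block rule per blacklisted source. The iptables command shape below is a common convention assumed for illustration; the abstract does not state which firewall or rule format Tartarus targets:

```python
def blacklist_to_iptables(hosts):
    """Render one DROP rule per unique blacklisted source address.
    The iptables syntax is an assumed example, not taken from the paper."""
    return [
        "iptables -A INPUT -s {} -j DROP".format(ip)
        for ip in sorted(set(hosts))  # deduplicate and order for stable output
    ]

rules = blacklist_to_iptables(["203.0.113.9", "198.51.100.4", "203.0.113.9"])
```

Deduplicating first matters in practice: a honeypot sees the same aggressive host many times, and repeated rules only slow down rule-set evaluation.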
Bandwidth management and monitoring for community networks
- Irwin, Barry V W, Siebörger, Ingrid, Wells, Daniel
- Authors: Irwin, Barry V W , Siebörger, Ingrid , Wells, Daniel
- Date: 2010
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428040 , vital:72482 , https://www.researchgate.net/profile/Ingrid-Sieboerger/publication/265121154_Bandwidth_management_and_monitoring_for_community_networks/links/5e538b85458515072db7a686/Bandwidth-management-and-monitoring-for-community-networks.pdf
- Description: This paper describes a custom-built system to replace existing routing solutions within an identified community network. The community network in question shares a VSAT Internet connection to provide Internet access to a number of schools and their surrounding communities. This connection provides a limited resource which needs to be managed in order to ensure equitable use by members of the community. The community network originally lacked any form of bandwidth management or monitoring, which often resulted in unfair use and abuse. The solution implemented is based on a client-server architecture. The Community Access Points (CAPs) are the client components which are located at each school, providing the computers and servers with access to the rest of the community network and the Internet. These nodes also perform a number of monitoring tasks for the computers at the schools. The server component is the Access Concentrator (AC), which connects the CAPs together using encrypted and authenticated PPPoE tunnels. The AC performs several additional monitoring functions, both on the individual links and on the upstream Internet connection. The AC provides a means of effectively and centrally managing and allocating Internet bandwidth between the schools. The system that was developed has a number of features, including Quality of Service adjustments limiting network usage and fairly billing each school for their Internet use. The system provides an effective means for sharing bandwidth between users in a community network.
- Full Text:
- Date Issued: 2010
Cyber security: Challenges and the way forward
- Ayofe, Azeez N, Irwin, Barry V W
- Authors: Ayofe, Azeez N , Irwin, Barry V W
- Date: 2010
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428613 , vital:72524 , https://d1wqtxts1xzle7.cloudfront.net/62565276/171920200330-53981-1mqgyr5.pdf?1585592737=andresponse-content-disposi-tion=inline%3B+filename%3DCYBER_SECURITY_CHALLENGES_AND_THE_WAY_FO.pdfandExpires=1714729368andSignature=dPUCAd1sMUF-gyDTkBFb2lzDvkVNpfp0sk1z-CdAeHH6O759dBiO-M158drmJsOo1XtOJBY4tNd8Um2gi11zw4U8yEzHO-bGUJGJTJcooTXaKwZLT-wPqS779Qo2oeiQOIiuAx6zSdcfSGjbDfFOL1YWV9UeKvhtcnGJ3p-CjJAhiPWJorGn1-z8mO6oouWzyJYc0hV0-Po8yywJD60eC2S6llQmfNRpX4otgq4fgZwZu4TEcMUWPfBzGPFPNYcCLfiQVK0YLV~XdTCWrhTlYPSMzVSs~DhQk9QPBU7IGmzQkGZo3UXnNu1slCVLb9Dqm~9DSbmttIXIDGYXEjP9l4w__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA
- Description: The high level of insecurity on the Internet is becoming so worrisome that transacting on the web has become a matter of doubt. Cybercrime is becoming ever more serious and prevalent. Findings from the 2002 Computer Crime and Security Survey show an upward trend that demonstrates a need for a timely review of existing approaches to fighting this new phenomenon in the information age. In this paper, we provide an overview of cybercrime and present an international perspective on fighting it. This work seeks to define the concept of cybercrime, explain the tools criminals use to perpetrate their crimes, identify reasons for cybercrime and how it can be eradicated, look at those involved and the reasons for their involvement, examine how best to detect a malicious email and, in conclusion, proffer recommendations that would help in checking the increasing rate of cybercrimes and criminals.
- Full Text:
- Date Issued: 2010
Data classification for artificial intelligence construct training to aid in network incident identification using network telescope data
- Cowie, Bradley, Irwin, Barry V W
- Authors: Cowie, Bradley , Irwin, Barry V W
- Date: 2010
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430125 , vital:72667 , https://doi.org/10.1145/1899503.1899544
- Description: This paper considers the complexities involved in obtaining training data for use by artificial intelligence constructs to identify potential network incidents using passive network telescope data. While a large amount of data obtained from network telescopes exists, this data is not currently marked for known incidents. Problems related to this marking process include the accuracy of the markings, the validity of the original data and the time involved. In an attempt to solve these issues two methods of training data generation are considered, namely manual identification and automated generation. The manual technique considers heuristics for finding network incidents, while the automated technique considers building simulated data sets using existing models of virus propagation and malicious activity. An example artificial intelligence system is then constructed using these marked datasets.
- Full Text:
- Date Issued: 2010
Parallel packet classification using GPU co-processors
- Nottingham, Alistair, Irwin, Barry V W
- Authors: Nottingham, Alistair , Irwin, Barry V W
- Date: 2010
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430250 , vital:72677 , https://doi.org/10.1145/1899503.1899529
- Description: In the domain of network security, packet filtering for classification purposes is of significant interest. Packet classification provides a mechanism for understanding the composition of packet streams arriving at distinct network interfaces, and is useful in diagnosing threats and uncovering vulnerabilities so as to maximise data integrity and system security. Traditional packet classifiers, such as PCAP, have utilised Control Flow Graphs (CFGs) in representing filter sets, due to both their amenability to optimisation and their inherent structural applicability to the metaphor of decision-based classification. Unfortunately, CFGs do not map well to cooperative processing implementations, and single-threaded CPU-based implementations have proven too slow for real-time classification against multiple arbitrary filters on next generation networks. In this paper, we consider a novel multithreaded classification algorithm, optimised for execution on GPU co-processors, intended to accelerate classification throughput and maximise processing efficiency in a highly parallel execution context.
- Full Text:
- Date Issued: 2010
A Comparison Of The Resource Requirements Of Snort And Bro In Production Networks
- Barnett, Richard J, Irwin, Barry V W
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430040 , vital:72661 , https://www.iadisportal.org/applied-computing-2009-proceedings
- Description: Intrusion Detection is essential in modern networking. However, with the increasing load on modern networks, the resource requirements of NIDS are significant. This paper explores and compares the requirements of Snort and Bro, and finds that Snort is more efficient at processing network traffic than Bro. It also finds that both systems are capable of analysing current network loads on commodity hardware, but may be unable to do so for higher bandwidth networks. This is beneficial in a South African context due to the increasing international bandwidth that will come online with the launch of the SEACOM cable, and local projects such as SANREN.
- Full Text:
- Date Issued: 2009
A Framework for the Rapid Development of Anomaly Detection Algorithms in Network Intrusion Detection Systems
- Barnett, Richard J, Irwin, Barry V W
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428644 , vital:72526 , https://www.researchgate.net/profile/Johan-Van-Niekerk-2/publication/220803295_E-mail_Security_awareness_at_Nelson_Mandela_Metropolitan_University_Registrar's_Division/links/0deec51909304b0ed8000000/E-mail-Security-awareness-at-Nelson-Mandela-Metropolitan-University-Registrars-Division.pdf#page=289
- Description: Most current Network Intrusion Detection Systems (NIDS) perform detection by matching traffic to a set of known signatures. These systems have well-defined mechanisms for the rapid creation and deployment of new signatures. However, despite their support for anomaly detection, this is usually limited and often requires a full recompilation of the system to deploy new algorithms.
- Full Text:
- Date Issued: 2009
An analysis of logical network distance on observed packet counts for network telescope data
- Irwin, Barry V W, Barnett, Richard J
- Authors: Irwin, Barry V W , Barnett, Richard J
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428090 , vital:72485 , https://www.researchgate.net/profile/Barry-Ir-win/publication/228765119_An_Analysis_of_Logical_Network_Distance_on_Observed_Packet_Counts_for_Network_Telescope_Data/links/53e9c5e80cf28f342f414988/An-Analysis-of-Logical-Network-Distance-on-Observed-Packet-Counts-for-Network-Telescope-Data.pdf
- Description: This paper investigates the relationship between the logical distance between two IP addresses on the Internet, and the number of packets captured by a network telescope listening on a network containing one of the addresses. The need for the computation of a manageable measure of quantification of this distance is presented, as an alternative to the raw difference that can be computed between two addresses using their integer representations. A number of graphical analysis tools and techniques are presented to aid in this analysis. Findings are presented based on a long baseline data set collected at Rhodes University over the last three years, using a dedicated Class C (256 IP address) sensor network, and comprising 19 million packets. Of this total, 27% by packet volume originate within the same natural class A network as the telescope, and as such can be seen to be logically close to the collector network.
- Full Text:
- Date Issued: 2009
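The raw difference this abstract refers to is the absolute difference between the 32-bit integer forms of two IPv4 addresses. The sketch below computes it, plus a log-scaled variant as one plausible example of a "manageable measure"; the paper's actual choice of measure is not reproduced here:

```python
import ipaddress
import math

def raw_distance(a: str, b: str) -> int:
    """Absolute difference of the 32-bit integer forms of two IPv4 addresses."""
    return abs(int(ipaddress.IPv4Address(a)) - int(ipaddress.IPv4Address(b)))

def log_distance(a: str, b: str) -> float:
    """Base-2 log of the raw distance: an illustrative compressed measure,
    not necessarily the one the paper uses."""
    d = raw_distance(a, b)
    return 0.0 if d == 0 else math.log2(d)

d = raw_distance("196.21.0.1", "196.21.0.255")  # two hosts in the same /24
```

The raw value spans 0 to over 4 billion, which is why some compression of the scale is needed before it can be plotted or binned sensibly against packet counts.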
Automated Firewall Rule Set Generation Through Passive Traffic Inspection
- Pranschke, Georg-Christian, Irwin, Barry V W, Barnett, Richard J
- Authors: Pranschke, Georg-Christian , Irwin, Barry V W , Barnett, Richard J
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428659 , vital:72527 , https://doi.org/10.1007/978-90-481-3660-5_56
- Description: Introducing firewalls and other choke point controls in existing networks is often problematic, because in the majority of cases there is already production traffic in place that cannot be interrupted. This often necessitates the time-consuming manual analysis of network traffic in order to ensure that when a new system is installed, there is no disruption to legitimate flows. To improve upon this situation it is proposed that a system facilitating network traffic analysis and firewall rule set generation is developed.
- Full Text:
- Date Issued: 2009
Evaluating text preprocessing to improve compression on maillogs
- Otten, Fred, Irwin, Barry V W, Thinyane, Hannah
- Authors: Otten, Fred , Irwin, Barry V W , Thinyane, Hannah
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430138 , vital:72668 , https://doi.org/10.1145/1632149.1632157
- Description: Maillogs contain important information about mail which has been sent or received. This information can be used for statistical purposes, to help prevent viruses or to help prevent spam. In order to satisfy regulations and follow good security practices, maillogs need to be monitored and archived. Since there is a large quantity of data, some form of data reduction is necessary. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data. Text preprocessing can be used to aid the compression of English text files. This paper evaluates whether text preprocessing, particularly word replacement, can be used to improve the compression of maillogs. It presents an algorithm for constructing a dictionary for word replacement and provides the results of experiments conducted using the ppmd, gzip, bzip2 and 7zip programs. These tests show that text preprocessing improves data compression on maillogs. Improvements of up to 56 percent in compression time and up to 32 percent in compression ratio are achieved. It also shows that a dictionary may be generated and used on other maillogs to yield reductions within half a percent of the results achieved for the maillog used to generate the dictionary.
- Full Text:
- Date Issued: 2009
Extending the NFComms framework for bulk data transfers
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430164 , vital:72670 , https://doi.org/10.1145/1632149.1632170
- Description: Packet analysis is an important aspect of network security, which typically relies on a flexible packet filtering system to extrapolate important packet information from each processed packet. Packet analysis is a computationally intensive, highly parallelisable task, and as such, classification of large packet sets, such as those collected by a network telescope, can require significant processing time. We wish to improve upon this, through parallel classification on a GPU. In this paper, we first consider the OpenCL architecture and its applicability to packet analysis. We then introduce a number of packet demultiplexing and routing algorithms, and finally present a discussion on how some of these techniques may be leveraged within a GPGPU context to improve packet classification speeds.
- Full Text:
- Date Issued: 2009
gPF: A GPU accelerated packet classification tool
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428103 , vital:72486 , https://d1wqtxts1xzle7.cloudfront.net/67098560/gPF_A_GPU_Accelerated_Packet_Classificat20210505-17707-zqqa4s.pdf?1620201469=andresponse-content-disposi-tion=inline%3B+filename%3DgPF_A_GPU_Accelerated_Packet_Classificat.pdfandExpires=1714733902andSignature=NQ~1DjH1XOuqF8u1Yq74XyG7kp~y0II81vu40SuWO2GQhSgToTHC7ynbAoP3MGv9do~bX1PCAp2Z2TCKUVHT7CmYNRxDmnpk5G4kefH--0VotMHVtFnHnf5Q9nhrp0MIgSxEhncOrlRx5K5sRhlLkyfDib3RS8Y8vu~FIPvm1DaZrfqCZSpXKmHh9r1etybRBRtUokzayPtgbhE41bQtW9wI8J4-JTQ9doyNC-JflFuEfUnhv5Phf45lr7TALm8G8nGZBp3z9-nSLZDxls2mvvVIANCdutyOMDnMDadGoqjIB2wYwUy~Fm424ZWj7fF89Ytj9xqIU63H4NFE2HodtQ__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA
- Description: This paper outlines the design of gPF, a fast packet classifier optimised for parallel execution on current generation commodity graphics hardware. Specifically, gPF leverages the potential for both the parallel classification of packets at runtime, and the use of evolutionary mechanisms, in the form of a GP-GPU genetic algorithm to produce contextually optimised filter permutations in order to reduce redundancy and improve the per-packet throughput rate of the resultant filter program. This paper demonstrates that these optimisations have significant potential for improving packet classification speeds, particularly with regard to bulk packet processing and saturated network environments.
- Full Text:
- Date Issued: 2009
Investigating the effect of Genetic Algorithms on Filter Optimisation Within Fast Packet Classifiers
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428674 , vital:72528 , https://www.researchgate.net/profile/Marijke-Coet-zee/publication/220803190_A_Framework_for_Web_Services_Security_Policy_Negotiation/links/0fcfd50f7d806aafc8000000/A-Framework-for-Web-Services-Security-Policy-Negotiation.pdf#page=119
- Description: Packet demultiplexing and analysis is a core concern for network security, and has hence inspired numerous optimisation attempts since their conception in early packet demultiplexing filters such as CSPF and BPF. These optimisations have generally, but not exclusively, focused on improving the speed of packet classification. Despite these improvements, however, packet filters require further optimisation in order to be effectively applied within next generation networks. One identified optimisation is that of reducing the average path length of the global filter by selecting an optimum filter permutation. Since redundant code generation does not change the order of computation, the initial filter order before filter optimisation affects the average path length of the resultant control-flow graph; thus selection of an optimum permutation of filters could provide significant performance improvements. Unfortunately, this problem is NP-Complete. In this paper, we consider using Genetic Algorithms to 'breed' an optimum filter permutation prior to redundant code elimination. Specifically, we aim to evaluate the effectiveness of such an optimisation in reducing filter control flow graphs.
- Full Text:
- Date Issued: 2009
Management, Processing and Analysis of Cryptographic Network Protocols
- Cowie, Bradley, Irwin, Barry V W, Barnett, Richard J
- Authors: Cowie, Bradley , Irwin, Barry V W , Barnett, Richard J
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428687 , vital:72529 , https://d1wqtxts1xzle7.cloudfront.net/30968790/ISSA2009Proceedings-libre.pdf?1393060231=andresponse-content-disposi-tion=inline%3B+filename%3DAN_ANALYSIS_OF_AUTHENTICATION_FOR_PASSIV.pdfandExpires=1714732172andSignature=Ei8RhR2pCSUNGCNE40DugEyFamcyTxPuuRq9gslD~WGlNqPEgG3FL7VFRQCKXhZBWyAfGRjMtBmNDJ7Sjsgex12WxW9Fj8XdpB7Bfz23FuLc-t2YRM-2joKOHJQLxWJlfZiOzxDvVGZeM3zCHj~f3NUeY1~n6PtVtLzNdL8glIg5dzDTTIE6ms2YlxmnO6JvlzQwOWdHaUbHsZzMGOV19UPtBk-UJzHSq3NRyPe4-XNZQLNK-mEEcMGsLk6nkyXIsW2QJ7gtKW1nNkr6EMkAGSOnDai~pSqzb2imspMnlPRigAPPISrNHO79rP51H9bu1WvbRZv1KVkGvM~sRmfl28A__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA#page=499
- Description: The use of cryptographic protocols as a means to provide security to web servers and services at the transport layer, by providing both encryption and authentication to data transfer, has become increasingly popular. However, we note that it is rather difficult to perform legitimate analysis, intrusion detection and debugging on cryptographic protocols, as the data that passes through is encrypted. In this paper we assume that we have legitimate access to the data and that we have the private key used in transactions, and thus we will be able to decrypt the data. The objective is to produce a suitable application framework that allows for easy recovery and secure storage of cryptographic keys, including appropriate tools to decapsulate traffic and to decrypt live packet streams or precaptured traffic contained in PCAP files. The resultant processing will then be able to provide a clear-text stream which can be used for further analysis.
- Full Text:
- Date Issued: 2009
Passive Traffic Inspection for Automated Firewall Rule Set Generation
- Pranschke, Georg-Christian, Irwin, Barry V W, Barnett, Richard J
- Authors: Pranschke, Georg-Christian , Irwin, Barry V W , Barnett, Richard J
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428114 , vital:72487 , https://d1wqtxts1xzle7.cloudfront.net/49200001/Automated_Firewall_Rule_Set_Generation_T20160928-12076-1n830lx-libre.pdf?1475130103=andresponse-content-disposi-tion=inline%3B+filename%3DAutomated_Firewall_Rule_Set_Generation_T.pdfandExpires=1714733377andSignature=Q0miMvZNpP7c60n42m54TvFG4hIdujVJBilbpvDKquBk54RPwU22pH6-40mpmOxIFBllKUmOgZfS9SwzuiANn-AZ2bhAELyZmf2bJ5MgceaYH5wnPjX9VzP04C2BACzhO5YutUfwkysburUx-zNdiemSofx2p1DwOszXaJNauYdP8RcHQmFl8aOnkoc3kmU02eKz8WiQISntJtu5Gpo8txP-Z6f1BEzvlVGd432tndhRwpsEVWGW43~oXsdaWQu72S8pTakgKPREqaD7CUHKMXiiUBfuiSj1nFo2n4xZQlFHqbMT7TAYzBPM0GObe~kBe5s2nY6dnOMUKUsSaeTUtqA__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA
- Description: The introduction of network filters and chokes such as firewalls in existing operational networks is often problematic, due to considerations that need to be made to minimise the interruption of existing legitimate traffic. This often necessitates the time-consuming manual analysis of network traffic over a period of time in order to generate and vet the rule bases to minimise disruption of legitimate flows. To improve upon this, a system facilitating network traffic analysis and firewall rule set generation is proposed. The system shall be capable of dealing with ever-increasing traffic volumes and help to provide and maintain high uptimes. A high-level overview of the design of the components is presented. Additions to the system are scoring metrics which may assist the administrator to optimise the rule sets for the most efficient matching of flows, based on traffic volume, frequency or packet count. A third-party package, Firewall Builder, is used to target the resultant rule sets to a number of different firewall and network filtering platforms.
- Full Text:
- Date Issued: 2009
Performance Effects of Concurrent Virtual Machine Execution in VMware Workstation 6
- Barnett, Richard J, Irwin, Barry V W
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429966 , vital:72655 , https://doi.org/10.1007/978-90-481-3660-5_56
- Description: The recent trend toward virtualized computing, both as a means of server consolidation and as a powerful desktop computing tool, has led to a wide variety of studies into the performance of hypervisor products. This study has investigated the scalability of VMware Workstation 6 on the desktop platform. We present comparative performance results for the concurrent execution of a number of virtual machines. A thorough statistical analysis of the performance results highlights the performance trends of different numbers of concurrent virtual machines and concludes that VMware Workstation can scale in certain contexts. We find that there are different performance benefits dependent on the application, and that memory-intensive applications perform less effectively than those applications which are IO-intensive. We also find that running concurrent virtual machines incurs a significant performance decrease, but that the drop thereafter is less significant.
- Full Text:
- Date Issued: 2009
Rich Representation and Visualisation of Time-Series Data
- Kerr, Simon, Foster, Gregory G, Irwin, Barry V W
- Authors: Kerr, Simon , Foster, Gregory G , Irwin, Barry V W
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428130 , vital:72488 , https://www.researchgate.net/profile/Barry-Ir-win/publication/265821926_Rich_Representation_and_Visualisation_of_Time-Series_Data/links/5548a1350cf26a7bf4daefb1/Rich-Representation-and-Visualisation-of-Time-Series-Data.pdf
- Description: Currently the majority of data is visualized using static graphs and tables. However, static graphs still leave much to be desired and provide only a small insight into trends and changes between values. We propose a move away from purely static representations of data towards a more fluid and understandable environment for data representation. This is achieved through the use of an application which animates time-based data. Animating time-based data allows one to see nuances within a dataset from a more comprehensive perspective. This is especially useful within the telecommunications industry, which is rich in time-based data. The application comprises two parts: the backend manages raw data, which is then passed to the frontend for animation. A play function allows one to play through a time series, which creates a fluid and dynamic environment for exploring data. Both the advantages and disadvantages of this approach are investigated, and an application is introduced which can be used to animate and explore datasets.
- Full Text:
- Date Issued: 2009
A Canonical Implementation Of The Advanced Encryption Standard On The Graphics Processing Unit
- Pilkington, Nick, Irwin, Barry V W
- Authors: Pilkington, Nick , Irwin, Barry V W
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430007 , vital:72659 , https://digifors.cs.up.ac.za/issa/2008/Proceedings/Research/47.pdf
- Description: This paper presents an implementation of the Advanced Encryption Standard (AES) on the graphics processing unit (GPU). It investigates the ease of implementation from first principles and the difficulties encountered. It also presents a performance analysis to evaluate whether the GPU is a viable option as a cryptographic platform. The AES implementation is found to yield orders of magnitude increased performance when compared to CPU-based implementations. Although the implementation introduces complications, these are quickly being mitigated by the growing accessibility provided by general-purpose programming on graphics processing units (GPGPU) frameworks such as NVIDIA's Compute Unified Device Architecture (CUDA) and AMD/ATI's Close to Metal (CTM).
- Full Text:
- Date Issued: 2008