Securing softswitches from malicious attacks
- Authors: Opie, Jake Weyman
- Date: 2007
- Subjects: Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4683 , http://hdl.handle.net/10962/d1007714 , Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Description: Traditionally, real-time communication, such as voice calls, has run on separate, closed networks. Of all the limitations that these networks had, the ability of malicious attacks to cripple communication was not a crucial one. This situation has changed radically now that real-time communication and data have merged to share the same network. The objective of this project is to investigate how to secure softswitches with functionality similar to Private Branch Exchanges (PBXs) against malicious attacks. The focus of the project is a practical investigation of how to secure ILANGA, an ASTERISK-based system under development at Rhodes University. This practical investigation is based on performing six varied experiments on the different components of ILANGA. Before the six experiments are performed, basic preliminary security measures and the restrictions placed on access to the database are discussed. The outcomes of these experiments are discussed and the precise reasons why these attacks were either successful or unsuccessful are given. Suggestions of a theoretical nature on how to defend against the successful attacks are also presented.
- Full Text:
- Date Issued: 2007
Studies related to the process of program development
- Authors: Williams, Morgan Howard
- Date: 1994
- Subjects: Computer programming
- Language: English
- Type: Thesis , Doctoral , DSc
- Identifier: vital:4680 , http://hdl.handle.net/10962/d1007235
- Description: The submitted work consists of a collection of publications arising from research carried out at Rhodes University (1970-1980) and at Heriot-Watt University (1980-1992). The theme of this research is the process of program development, i.e. the process of creating a computer program to solve some particular problem. The papers presented cover a number of different topics which relate to this process, viz. (a) Programming methodology - aspects of structured programming. (b) Properties of programming languages. (c) Formal specification of programming languages. (d) Compiler techniques. (e) Declarative programming languages. (f) Program development aids. (g) Automatic program generation. (h) Databases. (i) Algorithms and applications.
- Full Text:
- Date Issued: 1994
Designing and prototyping WebRTC and IMS integration using open source tools
- Authors: Motsumi, Tebagano Valerie
- Date: 2018
- Subjects: Internet Protocol multimedia subsystem , Session Initiation Protocol (Computer network protocol) , Computer software -- Development , Web Real-time Communications (WebRTC)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/63245 , vital:28386
- Description: WebRTC, or Web Real-time Communications, is a collection of web standards that detail the mechanisms, architectures and protocols that work together to deliver real-time multimedia services to the web browser. It represents a significant shift from the historical approach of using browser plugins, which, over time, have proven cumbersome and problematic. Furthermore, it adopts various Internet standards in areas such as identity management, peer-to-peer connectivity, data exchange and media encoding, to provide a system that is truly open and interoperable. Given that WebRTC enables the delivery of multimedia content to any Internet Protocol (IP)-enabled device capable of hosting a web browser, this technology could potentially be used and deployed over millions of smartphones, tablets and personal computers worldwide. This service and device convergence remains an important goal of telecommunication network operators who seek to enable it through a converged network that is based on the IP Multimedia Subsystem (IMS). IMS is an IP-based subsystem that sits at the core of a modern telecommunication network and acts as the main routing substrate for media services and applications such as those that WebRTC realises. The combination of WebRTC and IMS represents an attractive coupling, and as such, a protracted investigation could help to answer important questions around the technical challenges that are involved in their integration, and the merits of various design alternatives that present themselves. This thesis is the result of such an investigation and culminates in the presentation of a detailed architectural model that is validated with a prototypical implementation in an open source testbed. The model is built on six requirements which emerge from an analysis of the literature, including previous interventions in IMS networks and a key technical report on design alternatives. Furthermore, this thesis argues that the client architecture requires support for web-oriented signalling, identity and call handling techniques, leading to the potential for IMS networks to natively support these techniques as operator networks continue to grow and develop. The proposed model advocates the use of SIP over WebSockets for signalling and DTLS-SRTP for media to enable one-to-one communication, and can be extended through additional functions, resulting in a modular architecture. The model was implemented using open source tools which were assembled to create an experimental network testbed, and tests were conducted demonstrating successful cross-domain communication under various conditions. The thesis has a strong focus on enabling ordinary software developers to assemble a prototypical network such as the one that was assembled and aims to enable experimentation in application use cases for integrated environments.
- Full Text:
- Date Issued: 2018
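The abstract above notes that the proposed model uses SIP over WebSockets for signalling. The fragment below is a purely illustrative sketch of that idea, not code from the thesis: it assumes the third-party Python `websockets` package and a hypothetical gateway URI and user, and shows only the general shape of a SIP REGISTER carried over a WebSocket connection in the style of RFC 7118.

```python
# Illustrative sketch only: SIP REGISTER over a WebSocket connection.
# The URI, user and Call-ID are invented; the "websockets" package is assumed.
import asyncio
import websockets  # pip install websockets

SIP_REGISTER = (
    "REGISTER sip:example.org SIP/2.0\r\n"
    "Via: SIP/2.0/WSS client.invalid;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "To: <sip:alice@example.org>\r\n"
    "From: <sip:alice@example.org>;tag=49583\r\n"
    "Call-ID: 843817637684230@client.invalid\r\n"
    "CSeq: 1 REGISTER\r\n"
    "Contact: <sip:alice@client.invalid;transport=ws>\r\n"
    "Content-Length: 0\r\n\r\n"
)

async def register(uri: str) -> str:
    # RFC 7118 defines the "sip" WebSocket subprotocol for SIP signalling.
    async with websockets.connect(uri, subprotocols=["sip"]) as ws:
        await ws.send(SIP_REGISTER)
        return await ws.recv()  # expect a SIP response, e.g. 401 or 200

if __name__ == "__main__":
    print(asyncio.run(register("wss://sip-ws.example.org")))
```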
Evaluation of the effectiveness of small aperture network telescopes as IBR data sources
- Authors: Chindipha, Stones Dalitso
- Date: 2023-03-31
- Subjects: Computer networks Monitoring , Computer networks Security measures , Computer bootstrapping , Time-series analysis , Regression analysis , Mathematical models
- Language: English
- Type: Academic theses , Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/366264 , vital:65849 , DOI https://doi.org/10.21504/10962/366264
- Description: The use of network telescopes to collect unsolicited network traffic by monitoring unallocated address space has been in existence for over two decades. Past research has shown that there is a lot of activity happening in this unallocated space that needs monitoring, as it carries threat intelligence data that has proven to be very useful in the security field. Prior to the emergence of the Internet of Things (IoT), the commercialisation of IP addresses and the widespread use of mobile devices, there was a large pool of IPv4 addresses, and thus reserving IPv4 addresses for monitoring unsolicited activities in the unallocated space was not a problem. Now, preserving such IPv4 addresses purely for monitoring is increasingly difficult, as there are not enough free addresses in the IPv4 address space. This is the case because such monitoring is seen as a ’non-productive’ use of the IP addresses. This research addresses the problem brought forth by this IPv4 address space exhaustion in relation to Internet Background Radiation (IBR) monitoring. In order to address the research questions, this research developed four mathematical models: Absolute Mean Accuracy Percentage Score (AMAPS), Symmetric Absolute Mean Accuracy Percentage Score (SAMAPS), Standardised Mean Absolute Error (SMAE), and Standardised Mean Absolute Scaled Error (SMASE). These models are used to evaluate the research objectives and quantify the variations that exist between different samples. The sample sizes represent different lens sizes of the telescopes. The study has brought to light a time series plot that shows the expected proportion of unique source IP addresses collected over time. The study also imputed data from the smaller /24 IPv4 net-block subnets, regenerating the missing data points with bootstrapping to create confidence intervals (CIs). The findings from the simulated data support the findings computed from the models. The CIs offer a boost to decision making. Through a series of experiments with monthly and quarterly datasets, the study proposed using a 95% - 99% confidence level. It was known that large network telescopes collect more threat intelligence data than small-sized network telescopes; however, no study, to the best of our knowledge, had quantified this gap. With the findings from this study, small-sized network telescope users can now use their network telescopes with full knowledge of the gap that exists between the data collected by network telescopes of different sizes. , Thesis (PhD) -- Faculty of Science, Computer Science, 2023
- Full Text:
- Date Issued: 2023-03-31
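The abstract above describes regenerating missing data points with bootstrapping to form 95%-99% confidence intervals. The following is a minimal, generic percentile-bootstrap sketch using only the Python standard library; the sample counts and the choice of the mean as the statistic are invented for illustration and do not reproduce the AMAPS/SAMAPS/SMAE/SMASE models developed in the thesis.

```python
# Percentile bootstrap sketch: a confidence interval for a statistic computed
# from per-sensor samples. The data values are hypothetical.
import random
import statistics

random.seed(1)

# Hypothetical daily counts of unique source IPs seen by eight /24 sensors.
daily_unique_sources = [412, 388, 455, 430, 401, 476, 398, 441]

def bootstrap_ci(data, stat=statistics.mean, n_boot=10_000, level=0.95):
    """Percentile bootstrap confidence interval for `stat` over `data`."""
    estimates = sorted(
        stat(random.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = estimates[int((1 - level) / 2 * n_boot)]
    hi = estimates[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

print(bootstrap_ci(daily_unique_sources))              # 95% CI
print(bootstrap_ci(daily_unique_sources, level=0.99))  # 99% CI
```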
Evolving an efficient and effective off-the-shelf computing infrastructure for schools in rural areas of South Africa
- Authors: Siebörger, Ingrid Gisélle
- Date: 2017
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/14557 , vital:21938
- Description: Upliftment of rural areas and poverty alleviation are priorities for development in South Africa. Information and knowledge are key strategic resources for social and economic development, and ICTs act as tools to support them, enabling innovative and more cost-effective approaches. In order for ICT interventions to be possible, infrastructure has to be deployed. For the deployment to be effective and sustainable, the local community needs to be involved in shaping and supporting it. This study describes the technical work done in the Siyakhula Living Lab (SLL), a long-term ICT4D experiment in the Mbashe Municipality, with a focus on the deployment of ICT infrastructure in schools, for teaching and learning but also for use by the communities surrounding the schools. As a result of this work, computing infrastructure was deployed, in various phases, in 17 schools in the area and a “broadband island” connecting them was created. The dissertation reports on the initial deployment phases, discussing theoretical underpinnings and policies for using technology in education, as well as various computing and networking technologies and associated policies available and appropriate for use in rural South African schools. This information forms the backdrop of a survey conducted with teachers from six schools in the SLL, together with experimental work towards the provision of an evolved, efficient and effective off-the-shelf computing infrastructure in selected schools, in order to attempt to address the shortcomings of the computing infrastructure deployed initially in the SLL. The result of the study is the proposal of an evolved computing infrastructure model for use in rural South African schools.
- Full Text:
- Date Issued: 2017
Using semantic knowledge to improve compression on log files
- Authors: Otten, Frederick John
- Date: 2009 , 2008-11-19
- Subjects: Computer networks , Data compression (Computer science) , Semantics--Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4650 , http://hdl.handle.net/10962/d1006619 , Computer networks , Data compression (Computer science) , Semantics--Data processing
- Description: With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, there are many other compression programs which exist - each with their own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually use a similar format with a defined syntax. In the log files, not all the ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters. In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
- Full Text:
- Date Issued: 2009
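The abstract above describes preprocessors that replace timestamps and IP addresses with binary equivalents before compression. The snippet below is an illustrative sketch of that general idea using only the Python standard library; the sample log lines, the marker byte and the exact substitution scheme are assumptions for the example, not the preprocessors developed in the thesis.

```python
# Semantic preprocessing sketch: pack dotted-quad IPv4 addresses into their
# 4-byte binary form before gzip compression.
import gzip
import re
import socket

IP_RE = re.compile(rb"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def pack_ips(line: bytes) -> bytes:
    # A \x00 marker plus 4 raw bytes replaces up to 15 ASCII characters.
    return IP_RE.sub(lambda m: b"\x00" + socket.inet_aton(m.group().decode()), line)

# Hypothetical maillog-style lines with varying source addresses.
log_lines = [
    f"Oct 11 22:14:{i % 60:02d} mail postfix/smtpd: connect from unknown[192.168.10.{i % 250}]".encode()
    for i in range(1000)
]
plain = gzip.compress(b"\n".join(log_lines))
packed = gzip.compress(b"\n".join(pack_ips(line) for line in log_lines))
print(len(plain), len(packed))  # compare compressed sizes; the packed form is usually no larger
```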
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore unaddressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method / process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example, by compromising syslog servers, or going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, as well as misdirect attribution by planting false artefacts for forensic analysis and attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as: remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). These defences all have the potential to result in the attacker being discovered. Attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers. By employing various tactics, defenders are able to increase their chances of detecting attacks, and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion of the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days. This attack has a striking parallel in the approach used in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources can do in the future.
- Full Text:
- Date Issued: 2019
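Among the defensive tactics listed in the abstract above are honeytokens: planted artefacts that no legitimate user should ever touch, so any access to them is a strong intrusion signal. The sketch below is a toy illustration of that idea only; the account names, the alerting mechanism and the login-check function are invented for the example and are not taken from the thesis.

```python
# Toy honeytoken check: decoy accounts that are never issued to real users.
# Any authentication attempt against one is treated as a likely intrusion.
import logging

logging.basicConfig(level=logging.WARNING)

HONEYTOKEN_ACCOUNTS = {"backup_admin", "svc_legacy_db"}  # planted decoys

def check_login_attempt(username: str, source_ip: str) -> bool:
    """Return True if the attempt touched a honeytoken and should raise an alarm."""
    if username in HONEYTOKEN_ACCOUNTS:
        logging.warning("Honeytoken account %r used from %s - possible intrusion",
                        username, source_ip)
        return True
    return False

# Example: an attacker trying harvested credentials trips the alarm.
check_login_attempt("backup_admin", "203.0.113.7")
```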
High speed end-to-end connection management in a bridged IEEE 1394 network of professional audio devices
- Authors: Okai-Tettey, Harold A
- Date: 2006
- Subjects: IEEE 1394 (Standard) , Digital communications , Computer networks , Sound -- Recording and reproducing -- Digital techniques , Computer sound processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4653 , http://hdl.handle.net/10962/d1006638
- Description: A number of companies have developed a variety of network approaches to the transfer of audio and MIDI data. By doing this, they have addressed the configuration complications that were present when using direct patching for analogue audio, digital audio, word clock, and control connections. Along with their approaches, controlling software, usually running on a PC, is used to set up and manage audio routings from the outputs to the inputs of devices. However, one of the advantages of direct patching is the conceptual simplicity it provides for a user in connecting the plugs of devices: the ability to connect from the host plug of one device to the host plug of another. The connection management or routing applications of the current audio networks do not allow for such a capability, and instead employ what is referred to as a two-step approach to connection management. This two-step approach requires that devices be first configured at the transport layer of the network for input and output routings, after which the transmit and receive plugs of devices are manually configured to transmit or receive data. From a user’s point of view, it is desirable for the connection management or audio routing applications of the current audio networks to be able to establish routings directly between the host plugs of devices, and not the audio channels exposed by a network’s transport, as is currently the case. The main goal of this work has been to retain the conceptual simplicity of point-to-point connection management within digital audio networks, while gaining all the benefits that digital audio networking can offer.
- Full Text:
- Date Issued: 2006
Software quality assurance in a remote client/contractor context
- Authors: Black, Angus Hugh
- Date: 2006
- Subjects: Computer software -- Quality control , Software engineering , Information technology
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4648 , http://hdl.handle.net/10962/d1006615 , Computer software -- Quality control , Software engineering , Information technology
- Description: With the reliance on information technology, and the software that this technology utilizes, increasing every day, it is of paramount importance that the software developed be of an acceptable quality. This quality can be achieved through the utilization of various software engineering standards and guidelines. The question is: to what extent do these standards and guidelines need to be utilized, and how are they implemented? This research focuses on how guidelines developed by standardization bodies and the Unified Process developed by Rational can be integrated to achieve a suitable process and version control system within the context of a remote client/contractor small-team environment.
- Full Text:
- Date Issued: 2006
Parallel process placement
- Authors: Handler, Caroline
- Date: 1989
- Subjects: Parallel programming (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4568 , http://hdl.handle.net/10962/d1002033
- Description: This thesis investigates methods of automatic allocation of processes to available processors in a given network configuration. The research described covers the investigation of various algorithms for optimal process allocation. Among those researched were an algorithm which used a branch and bound technique, an algorithm based on graph theory, and a heuristic algorithm involving cluster analysis. These have been implemented and tested in conjunction with the gathering of performance statistics during program execution, for use in improving subsequent allocations. The system has been implemented on a network of loosely-coupled microcomputers using multi-port serial communication links to simulate a transputer network. The concurrent programming language occam has been implemented, replacing the explicit process allocation constructs with an automatic placement algorithm. This enables the source code to be completely separated from hardware considerations.
- Full Text:
- Date Issued: 1989
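The abstract above mentions heuristic algorithms for allocating processes to processors. As a purely illustrative aid, the sketch below shows a simple greedy load-balancing heuristic that assigns the heaviest unplaced process to the least-loaded processor; it ignores communication costs and is not one of the thesis algorithms, and the process weights and processor count are invented.

```python
# Greedy (LPT-style) placement heuristic: heaviest process to least-loaded processor.
import heapq

def greedy_placement(process_weights, num_processors):
    """Return a list of process-index lists, one per processor."""
    heap = [(0, p) for p in range(num_processors)]  # (current load, processor id)
    heapq.heapify(heap)
    placement = [[] for _ in range(num_processors)]
    # Placing heavier processes first tends to balance the final loads better.
    for idx in sorted(range(len(process_weights)),
                      key=lambda i: process_weights[i], reverse=True):
        load, proc = heapq.heappop(heap)
        placement[proc].append(idx)
        heapq.heappush(heap, (load + process_weights[idx], proc))
    return placement

print(greedy_placement([7, 3, 9, 2, 5, 4], num_processors=3))
```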
An alternative peripheral executive for the data general AOS/VS operating system
- Authors: Tennant, Robert Satchwell
- Date: 1990
- Subjects: Operating systems (Computers) , Computers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4566 , http://hdl.handle.net/10962/d1002031
- Full Text:
- Date Issued: 1990
CSP-i : an implementation of CSP
- Authors: Wrench, Karen Lee
- Date: 1987 , 2013-03-08
- Subjects: Synchronization--Computers , Programming languages (Electronic computers)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4579 , http://hdl.handle.net/10962/d1003124 , Synchronization--Computers , Programming languages (Electronic computers)
- Description: CSP (Communicating Sequential Processes) is a notation proposed by Hoare for expressing process communication and synchronization. Although this notation has been widely acclaimed, Hoare himself never implemented it as a computer language. He did, however, produce the necessary correctness proofs, and subsequently the notation has been adopted (in various guises) by the designers of other concurrent languages such as Ada and occam. Only two attempts have been made at a direct and precise implementation of CSP. On closer scrutiny, even these implementations are found to deviate from the specifications expounded by Hoare, and in so doing restrict the original proposal. This thesis comprises two main sections. The first of these includes a brief look at the primitives of concurrent programming, followed by a comparative study of the existing adaptations of CSP and other message passing languages. The latter section is devoted to a description of the author's attempt at an original implementation of the notation. The result of this attempt is the creation of the CSP-i language and a suitable environment for executing CSP-i programs on an IBM PC. The CSP-i implementation is comparable with other concurrent systems presently available. In some aspects, the primitives featured in CSP-i provide the user with a more efficient and concise notation for expressing concurrent algorithms than several other message-based languages, notably occam.
- Full Text:
- Date Issued: 1987
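CSP's central primitive, as described in the abstract above, is synchronous (rendezvous) message passing: a sender blocks until a receiver is ready to take the value. The toy Python class below illustrates that behaviour with two threads; it is an explanatory sketch, not CSP-i itself, and the channel and process names are invented.

```python
# Toy rendezvous channel in the spirit of CSP: send() does not return until a
# matching recv() has taken the value.
import queue
import threading

class Channel:
    def __init__(self):
        self._item = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, value):
        self._item.put(value)   # offer the value ...
        self._ack.get()         # ... and block until the receiver has taken it

    def recv(self):
        value = self._item.get()
        self._ack.put(None)     # release the blocked sender
        return value

chan = Channel()

def producer():
    for i in range(3):
        chan.send(i)            # blocks until the consumer is ready
        print(f"sent {i}")

def consumer():
    for _ in range(3):
        print(f"received {chan.recv()}")

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
```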
NFComms: A synchronous communication framework for the CPU-NFP heterogeneous system
- Authors: Pennefather, Sean
- Date: 2020
- Subjects: Network processors , Computer programming , Parallel processing (Electronic computers) , Netronome
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/144181 , vital:38318
- Description: This work explores the viability of using a Network Flow Processor (NFP), developed by Netronome, as a coprocessor for the construction of a CPU-NFP heterogeneous platform in the domain of general processing. When considering heterogeneous platforms involving architectures like the NFP, the communication framework provided is typically represented as virtual network interfaces and is thus not suitable for generic communication. To enable a CPU-NFP heterogeneous platform for use in the domain of general computing, a suitable generic communication framework is required. A feasibility study for a suitable communication medium between the two candidate architectures showed that a generic framework that conforms to the mechanisms dictated by Communicating Sequential Processes is achievable. The resulting NFComms framework, which facilitates inter- and intra-architecture communication through the use of synchronous message passing, supports up to 16 unidirectional channels and includes queuing mechanisms for transparently supporting concurrent streams exceeding the channel count. The framework has a minimum latency of between 15.5 μs and 18 μs per synchronous transaction and can sustain a peak throughput of up to 30 Gbit/s. The framework also supports a runtime for interacting with the Go programming language, allowing user-space processes to subscribe channels to the framework for interacting with processes executing on the NFP. The viability of utilising a heterogeneous CPU-NFP system for use in the domain of general and network computing was explored by introducing a set of problems or applications spanning general computing and network processing. These were implemented on the heterogeneous architecture and benchmarked against equivalent CPU-only and CPU/GPU solutions. The results recorded were used to form an opinion on the viability of using an NFP for general processing. It is the author’s opinion that, beyond very specific use cases, the NFP-400 is not currently a viable solution as a coprocessor in the field of general computing. This does not mean that the proposed framework or the concept of a heterogeneous CPU-NFP system should be discarded, as such a system does have acceptable use in the fields of network and stream processing. Additionally, when comparing the recorded limitations to those seen during the early stages of general-purpose GPU development, it is clear that general processing on the NFP is currently in a similar state.
- Full Text:
- Date Issued: 2020
An investigation into the deployment of IEEE 802.11 networks
- Authors: Janse van Rensburg, Johanna Hendrina
- Date: 2007
- Subjects: Local area networks (Computer networks)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4596 , http://hdl.handle.net/10962/d1004839 , Local area networks (Computer networks)
- Description: Currently, the IEEE 802.11 standard is the leading technology in the Wireless Local Area Network (WLAN) market. It provides flexibility and mobility to users, which, in turn, increase productivity. As opposed to traditional fixed Local Area Network (LAN) technologies, WLANs are easier to deploy and have lower installation costs. Unfortunately, there are problems inherent within the technology and standard that inhibit its performance. Technological problems can be attributed to the physical medium of a WLAN, the electromagnetic (EM) wave. Standards-based problems include security issues and the MAC layer design. However, the impact of these problems can be mitigated with proper planning and design of the WLAN. To do this, an understanding of WLAN issues and the use of WLAN software tools are necessary. This thesis discusses WLAN issues such as security and electromagnetic wave propagation and introduces software that can aid the planning, deployment and maintenance of a WLAN. Furthermore, the planning, implementation and auditing phases of a WLAN lifecycle are discussed. The aim is to provide an understanding of the complexities involved in deploying and maintaining a secure and reliable WLAN.
- Full Text:
- Date Issued: 2007
Algorithmic skeletons as a method of parallel programming
- Authors: Watkins, Rees Collyer
- Date: 1993
- Subjects: Parallel programming (Computer science) , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4609 , http://hdl.handle.net/10962/d1004889 , Parallel programming (Computer science) , Algorithms
- Description: A new style of abstraction for program development, based on the concept of algorithmic skeletons, has been proposed in the literature. The programmer is offered a variety of independent algorithmic skeletons, each of which describes the structure of a particular style of algorithm. The appropriate skeleton is used by the system to mould the solution. Parallel programs are particularly appropriate for this technique because of their complexity. This thesis investigates algorithmic skeletons as a method of hiding the complexities of parallel programming from users and guiding them towards efficient solutions. To explore this approach, this thesis describes the implementation and benchmarking of the divide-and-conquer and task-queue paradigms as skeletons. (An illustrative divide-and-conquer sketch follows this record.) All but one category of problem, as implemented in this thesis, scales well over eight processors. The rate of speed-up tails off when there are significant communication requirements. The results show that, with some user knowledge, efficient parallel programs can be developed using this method. The evaluation explores methods for fine-tuning some skeleton programs to achieve increased efficiency.
- Full Text:
- Date Issued: 1993
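To make the skeleton idea concrete, the following Go sketch shows a minimal, generic divide-and-conquer skeleton: the programmer supplies only the problem-specific pieces (base-case test, sequential solver, divide and combine functions) and the skeleton fixes the parallel structure. This is an illustrative sketch under those assumptions, not the implementation benchmarked in the thesis; all names are invented for the example.

```go
// Minimal generic divide-and-conquer skeleton using goroutines. The skeleton
// owns the parallel structure; the caller supplies only problem-specific code.
package main

import "fmt"

func DivideAndConquer[P, R any](
	p P,
	isBase func(P) bool,
	solve func(P) R,
	divide func(P) []P,
	combine func([]R) R,
) R {
	if isBase(p) {
		return solve(p) // base case solved sequentially
	}
	subs := divide(p)
	results := make([]R, len(subs))
	done := make(chan struct{})
	for i, sub := range subs {
		go func(i int, sub P) { // each sub-problem is solved in its own goroutine
			results[i] = DivideAndConquer(sub, isBase, solve, divide, combine)
			done <- struct{}{}
		}(i, sub)
	}
	for range subs {
		<-done // wait for all sub-problems before combining
	}
	return combine(results)
}

func main() {
	// Example instantiation: parallel summation of a slice.
	nums := []int{3, 1, 4, 1, 5, 9, 2, 6}

	sumSlice := func(xs []int) int {
		s := 0
		for _, x := range xs {
			s += x
		}
		return s
	}

	total := DivideAndConquer[[]int, int](
		nums,
		func(xs []int) bool { return len(xs) <= 2 },                                 // base case
		sumSlice,                                                                    // sequential solver
		func(xs []int) [][]int { m := len(xs) / 2; return [][]int{xs[:m], xs[m:]} }, // split in half
		func(rs []int) int { return rs[0] + rs[1] },                                 // combine results
	)
	fmt.Println("sum =", total) // prints: sum = 31
}
```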
Extensibility in ORDBMS databases : an exploration of the data cartridge mechanism in Oracle9i
- Authors: Ndakunda, Tulimevava Kaunapawa
- Date: 2013-06-18
- Subjects: Database management , Oracle (Computer file)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4686 , http://hdl.handle.net/10962/d1008098 , Database management , Oracle (Computer file)
- Description: To support current and emerging database applications, Object-Relational Database Management Systems (ORDBMS) provide mechanisms to extend the data storage capabilities and the functionality of the database with application-specific types and methods. Using these mechanisms, the database may contain user-defined data types, large objects (LOBs), external procedures, extensible indexing, query optimisation techniques and other features that are treated in the same way as built-in database features. The many extensibility options provided by the ORDBMS, however, raise several implementation challenges that are not always obvious. This thesis examines a few of the key challenges that arise when extending the Oracle database with new functionality. To realise the potential of extensibility in Oracle, the thesis uses the problem area of image retrieval as the main test domain. Current research efforts in image retrieval lag behind the required retrieval performance, but are continuously improving. As better retrieval techniques become available, it is important that they are integrated into the available database systems to facilitate improved retrieval. The thesis also reports on the practical experiences gained from integrating an extensible indexing scenario. Sample scenarios are integrated into the Oracle9i database using the data cartridge mechanism, which allows Oracle database functionality to be extended with new functional components. The integration demonstrates how additional functionality may be effectively applied to both general and specialised domains in the database. It also reveals alternative design options that allow data cartridge developers, most of whom are not database server experts, to extend the database. The thesis concludes with some of the key observations and options that designers must consider when extending the database with new functionality. The main challenges for developers are the learning curve required to understand the data cartridge framework and the ability to adapt already-developed code within the constraints of the data cartridge using the provided extensibility APIs. Maximum reusability relies on making good choices for the basic functions, out of which specialised functions can be built.
- Full Text:
The monitor and synchroniser concepts in the programming language CLANG
- Authors: Chalmers, Alan Gordon
- Date: 1985
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4616 , http://hdl.handle.net/10962/d1006132
- Full Text:
- Date Issued: 1985
GPU Accelerated protocol analysis for large and long-term traffic traces
- Authors: Nottingham, Alastair Timothy
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/910 , vital:20002
- Description: This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability, whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. (A conceptual multi-match classification sketch follows this record.) GPF+ tracks packet composition during execution, and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp-voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to speed up access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side I/O (600 MB/s) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
- Full Text:
- Date Issued: 2016
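As a purely conceptual aside, multi-match classification differs from conventional first-match filtering in that every filter is evaluated against every packet and all matches are recorded. The Go sketch below illustrates that idea on the CPU with a bitmask of matches; it bears no relation to the actual GPF+ CUDA kernels, DSL or state machine, and the Packet, Filter and classify names are invented for this example.

```go
// Conceptual CPU-side illustration of multi-match packet classification:
// all filters are evaluated and every match is recorded, rather than
// stopping at the first match. Names and fields are illustrative only.
package main

import "fmt"

// Packet is a simplified view of fields a classifier might extract.
type Packet struct {
	Protocol uint8  // e.g. 6 = TCP, 17 = UDP
	DstPort  uint16
}

// Filter pairs a name with a predicate over the extracted fields.
type Filter struct {
	Name  string
	Match func(Packet) bool
}

// classify returns a bitmask with bit i set when filters[i] matches.
func classify(p Packet, filters []Filter) uint64 {
	var mask uint64
	for i, f := range filters {
		if f.Match(p) {
			mask |= 1 << uint(i)
		}
	}
	return mask
}

func main() {
	filters := []Filter{
		{"tcp", func(p Packet) bool { return p.Protocol == 6 }},
		{"udp", func(p Packet) bool { return p.Protocol == 17 }},
		{"http", func(p Packet) bool { return p.Protocol == 6 && p.DstPort == 80 }},
	}
	pkt := Packet{Protocol: 6, DstPort: 80}
	mask := classify(pkt, filters)
	for i, f := range filters {
		if mask&(1<<uint(i)) != 0 {
			fmt.Println("matched:", f.Name) // prints: tcp, then http
		}
	}
}
```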
Network management for community networks
- Authors: Wells, Daniel David
- Date: 2010 , 2010-03-26
- Subjects: Computer networks -- Management , Internet -- South Africa , Internet -- Management , Broadband communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4643 , http://hdl.handle.net/10962/d1006587
- Description: Community networks (in South Africa and Africa) are often serviced by limited-bandwidth network backhauls. Relative to the basic needs of the community, this is an expensive ongoing concern. In many cases the Internet connection is shared among multiple sites. Community networks may also lack the technical personnel to maintain a network of this nature. Hence, there is a demand for a system which will monitor and manage bandwidth use, as well as network use. The proposed solution for community networks, and the focus of this dissertation, is a system of two parts. A Community Access Point (CAP) is located at each site within the community network. It is the site's router, providing the hosts and servers at that site with access to services on the community network and the Internet. The CAP provides a web-based interface (CAPgui) which allows configuration of the device and viewing of simple monitoring statistics. The Access Concentrator (AC) is the default router for the CAPs and the gateway to the Internet. It provides authenticated and encrypted communication between the network sites. The AC performs several monitoring functions, both for the individual sites and for the upstream Internet connection. The AC provides a means for centrally managing and effectively allocating Internet bandwidth by using the web-based interface (ACgui). Bandwidth use can be allocated per user, per host and per site. (An illustrative sketch of layered bandwidth accounting follows this record.) The system is maintainable, extendable and customisable for different network architectures. The system was deployed successfully to two community networks. The Centre of Excellence (CoE) testbed network is a peri-urban network deployment, whereas the Siyakhula Living Lab (SLL) network is a rural deployment. The results gathered show that the project was successful, as the deployed system is more robust and more manageable than the previous systems.
- Full Text:
- Date Issued: 2010
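As a loose illustration of allocating bandwidth at more than one level (per host and per site), the following Go sketch uses two token buckets and permits a transfer only when both have capacity. The bucket type, the rates and the way the check is combined are hypothetical and are not drawn from the CAP/AC implementation described in the dissertation.

```go
// Illustrative sketch of per-site and per-host bandwidth accounting with
// simple token buckets. All types, names and limits are hypothetical.
package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token bucket: tokens refill at rate bytes/second up to burst.
type bucket struct {
	rate, burst float64
	tokens      float64
	last        time.Time
}

func newBucket(rate, burst float64) *bucket {
	return &bucket{rate: rate, burst: burst, tokens: burst, last: time.Now()}
}

// allow reports whether n bytes may be sent now, consuming tokens if so.
func (b *bucket) allow(n float64, now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens < n {
		return false
	}
	b.tokens -= n
	return true
}

func main() {
	site := newBucket(512_000, 64_000) // site-wide cap: 512 kB/s
	host := newBucket(128_000, 16_000) // per-host cap: 128 kB/s

	// A transfer is permitted only if both the host and the site have capacity.
	// (A fuller implementation would refund host tokens if the site check fails.)
	now := time.Now()
	ok := host.allow(8_000, now) && site.allow(8_000, now)
	fmt.Println("8 kB transfer permitted:", ok)
}
```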
An investigation of online threat awareness and behaviour patterns amongst secondary school learners
- Authors: Irwin, Michael Padric
- Date: 2013 , 2013-04-29
- Subjects: Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4576 , http://hdl.handle.net/10962/d1002965 , Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Description: The research area of this work is online threat awareness within an information security context. The research was carried out on secondary school learners at boarding schools in Grahamstown. The participating learners were in Grades 8 to 12. The goals of the research included determining the actual levels of awareness, the difference between these and the self-perceived levels of the participants, the assessment of risk in terms of online behaviour, and the determination of any gender differences in the answers provided by the respondents. A review of relevant literature and similar studies was carried out, and data was collected from the participating schools via an online questionnaire. This data was analysed and discussed within the frameworks of threat awareness, online privacy, social media, sexting, cyberbullying and password habits. The concepts of information security and online privacy are present throughout these discussion chapters, providing the themes for linking the discussion points together. The results of this research show that the respondents have a high level of risk. This is due to the gaps identified between actual awareness and perception, as well as the exhibition of online behaviour patterns that are considered high risk. A strong need for the construction and adoption of threat awareness programmes by these and other schools is identified, as are areas of particular need for inclusion in such programmes. Some gender differences are present, but not to the extent that there is a significant difference between male and female respondents in terms of overall awareness, knowledge and behaviour.
- Full Text:
- Date Issued: 2013