A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828 , Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to identify notable observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. Although the video publishing tool was able to provide the necessary compression/decompression services, the implementation of alternative compression/decompression schemes would serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but further tests (particularly in low-bandwidth scenarios) are suggested in order to improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
- Full Text:
- Date Issued: 2009
A comparative analysis of Java and .NET mobile development environments for supporting mobile services
- Authors: Zhao, Xiaogeng
- Date: 2003 , 2013-05-23
- Subjects: Microsoft .NET , Java (Computer program language) , Mobile computing , Wireless communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4578 , http://hdl.handle.net/10962/d1003064 , Microsoft .NET , Java (Computer program language) , Mobile computing , Wireless communication systems
- Description: With the rapid development of wireless technologies, traditional mobile devices, such as pagers and cellular phones, have evolved from a purely communications and messaging-oriented medium to one that supports mobile data communication in general and acts as an application platform. As shown in a recent survey conducted by MDA, easy access to the present-day wireless Internet has resulted in mobile devices gaining more and more attention and popularity. The growth of and demand for mobile Web applications is expected to increase rapidly in the near future, as a range of software companies and mobile device manufacturers release increasingly accessible tools for creating mobile Web applications and services. From a variety of possible development environments of this kind, the author has selected and examined two leading contenders, the J2ME and the Microsoft .NET mobile Web application development environments. This document reports the product life cycle of pilot mobile Web applications, designed and implemented in each host environment in turn. A feature-by-feature investigation and comparison of the J2ME and .NET environments was carried out, covering the range of issues necessary for a complete mobile Web application development life cycle. The resulting analysis addresses features and efficiencies of the application development environment and the target deployment environment, the degree to which the resultant services are compatible on a variety of platforms, and the ease with which applications can be designed to be extensible. The thesis offers an objective evaluation of the J2ME and the .NET mobile development environments, which highlights their strengths and weaknesses, and suggests guidelines for designing, creating, and deploying high-quality mobile Web applications. The research uncovers no clear winner across all categories assessed. J2ME currently favours situations in which bandwidth is limited and client-side processing power is relatively sufficient, as it exploits the processing power of mobile devices over distributed network environments. .NET requires a less constrained network throughput, but performs adequately on clients with more limited processing power, supports a more diverse range of target platforms, and offers a development environment that is more efficient in terms of development time. Both technologies are likely to receive significant user support for some time.
- Full Text:
- Date Issued: 2003
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it shows that there is not yet a clear understanding as to why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided due to the complexities and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow for proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be further explored in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring being the unique feature.
- Full Text:
- Date Issued: 2019
A comparative study of the Linux and windows device driver architecture with a focus on IEEE1394 (high speed serial bus) drivers
- Authors: Tsegaye, Melekam Asrat
- Date: 2004
- Subjects: Microsoft Windows (Computer file) , Linux , Operating systems (Computers) , DOS device drivers (Computer programs) , Linux device drivers (Computer programs)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4591 , http://hdl.handle.net/10962/d1004829 , Microsoft Windows (Computer file) , Linux , Operating systems (Computers) , DOS device drivers (Computer programs) , Linux device drivers (Computer programs)
- Description: New hardware devices are continually being released to the public by hardware manufacturers around the world. For these new devices to be usable under a PC operating system, device drivers that extend the functionality of the target operating system have to be constructed. This work examines and compares the device driver architectures currently in use by two of the most widely used operating systems, Microsoft’s Windows and Linux. The IEEE1394 (high speed serial bus) device driver stacks on each operating system are examined and compared as an example of a major device driver stack implementation, including driver requirements for the upcoming IEEE1394.1 bridging standard.
- Full Text:
- Date Issued: 2004
A comparison of exact string search algorithms for deep packet inspection
- Authors: Hunt, Kieran
- Date: 2018
- Subjects: Algorithms , Firewalls (Computer security) , Computer networks -- Security measures , Intrusion detection systems (Computer security) , Deep Packet Inspection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60629 , vital:27807
- Description: Every day, computer networks throughout the world face a constant onslaught of attacks. To combat these, network administrators are forced to employ a multitude of mitigating measures. Devices such as firewalls and Intrusion Detection Systems are prevalent today and employ extensive Deep Packet Inspection to scrutinise each piece of network traffic. Systems such as these usually require specialised hardware to meet the demand imposed by high-throughput networks. Hardware like this is extremely expensive and singular in its function. It is with this in mind that string search algorithms are introduced. These algorithms have been proven to perform well when searching through large volumes of text and may be able to perform equally well in the context of Deep Packet Inspection. String search algorithms are designed to match a single pattern to a substring of a given piece of text. This is not unlike the heuristics employed by traditional Deep Packet Inspection systems. This research compares the performance of a large number of string search algorithms during packet processing. Deep Packet Inspection places stringent restrictions on the reliability and speed of the algorithms due to increased performance pressures. A test system had to be designed in order to properly test the string search algorithms in the context of Deep Packet Inspection. The system allowed for precise and repeatable tests of each algorithm, and for their subsequent comparison. Of the algorithms tested, the Horspool and Quick Search algorithms posted the best results for both speed and reliability (a brief sketch of the Horspool approach follows this record). The Not So Naive and Rabin-Karp algorithms were slowest overall.
- Full Text:
- Date Issued: 2018
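The Horspool algorithm named in the abstract is a convenient illustration of how such exact pattern matchers gain their speed. Below is a minimal sketch in Go (not the thesis's own implementation), assuming byte-oriented patterns and text; it shows the "bad character" shift table that lets the search window skip ahead on a mismatch. In a Deep Packet Inspection setting the same routine would be run over each packet payload rather than the literal strings used here.

```go
package main

import "fmt"

// horspool returns the index of the first occurrence of pattern in text,
// or -1 if pattern does not occur. On a mismatch, the window jumps by the
// shift associated with the text byte aligned with the last pattern position.
func horspool(text, pattern string) int {
	m, n := len(pattern), len(text)
	if m == 0 || m > n {
		return -1
	}
	// Default shift is the full pattern length; bytes that occur in the
	// pattern (except its last position) get smaller shifts.
	var shift [256]int
	for i := range shift {
		shift[i] = m
	}
	for i := 0; i < m-1; i++ {
		shift[pattern[i]] = m - 1 - i
	}
	for pos := 0; pos+m <= n; pos += shift[text[pos+m-1]] {
		// Compare the current window right-to-left.
		j := m - 1
		for j >= 0 && text[pos+j] == pattern[j] {
			j--
		}
		if j < 0 {
			return pos
		}
	}
	return -1
}

func main() {
	fmt.Println(horspool("GET /index.html HTTP/1.1", "index")) // 5
	fmt.Println(horspool("no match here", "xyz"))              // -1
}
```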
A comparison of open source and proprietary digital forensic software
- Authors: Sonnekus, Michael Hendrik
- Date: 2015
- Subjects: Computer crimes , Computer crimes -- Investigation , Electronic evidence , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4717 , http://hdl.handle.net/10962/d1017939
- Description: Scrutiny of the capabilities and accuracy of computer forensic tools is increasing as the number of incidents relying on digital evidence and the weight of that evidence increase. This thesis describes the capabilities of the leading proprietary and open source digital forensic tools. The capabilities of the tools were tested separately on digital media that had been formatted using Windows and Linux. Experiments were carried out with the intention of establishing whether the capabilities of open source computer forensic tools are similar to those of proprietary computer forensic tools, and whether these tools could complement one another. The tools were tested with regard to their capabilities to make and analyse digital forensic images in a forensically sound manner. The tests were carried out on each media type after deleting data from the media, and then repeated after formatting the media. The results of the experiments performed demonstrate that both proprietary and open source computer forensic tools have superior capabilities in different scenarios, and that the toolsets can be used to validate and complement one another. The implication of these findings is that investigators have an affordable means of validating their findings and are able to investigate digital media more effectively.
- Full Text:
- Date Issued: 2015
A comparison of web-based technologies to serve images from an Oracle9i database
- Authors: Swales, Dylan
- Date: 2004 , 2013-06-18
- Subjects: Active server pages , Microsoft .NET , JavaServer pages , Oracle (Computer file) , Internet searching , Web site development--Computer programs , World Wide Web , Online information services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4583 , http://hdl.handle.net/10962/d1004380 , Active server pages , Microsoft .NET , JavaServer pages , Oracle (Computer file) , Internet searching , Web site development--Computer programs , World Wide Web , Online information services
- Description: The nature of Internet and Intranet Web applications has changed from a static content-distribution medium into an interactive, dynamic medium, often used to serve multimedia from back-end object-relational databases to Web-enabled clients. Consequently, developers need to make an informed technological choice for developing software that supports a Web-based application for distributing multimedia over networks. This decision is based on several factors. Among the factors are ease of programming, richness of features, scalability, and performance. The research focuses on these key factors when distributing images from an Oracle9i database using Java Servlets, JSP, ASP, and ASP.NET as the server-side development technologies. Prototype applications are developed and tested within each technology: one for single image serving and the other for multiple image serving. A matrix of recommendations is provided to distinguish which technology, or combination of technologies, provides the best performance and development platform for image serving within the studied environment.
- Full Text:
- Date Issued: 2004
A convenient approach to the deterministic routing of MIDI messages
- Authors: Shaw, Brent Roy
- Date: 2018
- Subjects: MIDI (Standard) , Microcontrollers , XMOS Limited , Computer architecture , Embedded computer systems
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/63256 , vital:28387
- Description: This research investigates the design and development of a Wireless MIDI Connection Management solution in order to create a deterministic MIDI transmission system. An investigation of the MIDI protocol shows it to have certain limitations that can be overcome through the use of transmission solutions. These solutions can be used to improve on the versatility of MIDI while overcoming MIDI's notorious cable-length limitation. XMOS's deterministic XS1 microcontrollers are used to enable the design of a real-time system. The MIDINet system is investigated to identify both the strengths and weaknesses of such a connection management system, while other systems for the network transmission of MIDI messages are reviewed. These investigations lead to a design concept for a new network MIDI transmission system that allows for the remote management of connections. The design and subsequent implementation of both the transmission system and the connection management system are then detailed. A testing methodology is then devised to allow the newly created connection management system to be compared to the MIDINet system. The findings show the deterministic system to have lower latency than the MIDINet system, while utilising more compact and power-efficient hardware.
- Full Text:
- Date Issued: 2018
A decision-making model to guide securing blockchain deployments
- Authors: Cronje, Gerhard Roets
- Date: 2021-10-29
- Subjects: Blockchains (Databases) , Bitcoin , Cryptocurrencies , Distributed databases , Computer networks Security measures , Computer networks Security measures Decision making , Ethereum
- Language: English
- Type: Masters theses , text
- Identifier: http://hdl.handle.net/10962/188865 , vital:44793
- Description: Satoshi Nakamoto, the pseudonymous identity credited with the paper that sparked the implementation of Bitcoin, is famously quoted as remarking, electronically of course, that “If you don’t believe it or don’t get it, I don’t have time to try and convince you, sorry” (Tsapis, 2019, p. 1). What is noticeable, 12 years after the famed Satoshi paper that initiated Bitcoin (Nakamoto, 2008), is that blockchain at the very least has staying power and potentially wide application. A lesser-known figure, Marc Kenisberg, founder of Bitcoin Chaser, one of the many companies formed around the Bitcoin ecosystem, summarised it well, saying “…Blockchain is the tech - Bitcoin is merely the first mainstream manifestation of its potential” (Tsapis, 2019, p. 1). With blockchain still trying to reach its potential and still maturing on its way towards becoming a mainstream technology, the main question that arises for security professionals is: how do I ensure we do it securely? This research seeks to address that question by proposing a decision-making model that can be used by a security professional to guide them through ensuring appropriate security for blockchain deployments. This research is certainly not the first attempt at discussing the security of the blockchain and will not be the last, as the technology around blockchain and distributed ledger technology is still rapidly evolving. What this research does try to achieve is not to delve into extremely specific areas of blockchain security, or get bogged down in technical details, but to provide a reference framework that aims to cover all the major areas to be considered. The approach followed was to review the literature regarding blockchain and to identify the main security areas to be addressed. The research then proposes a decision-making model and tests the model against a fictitious but relevant real-world example. It concludes with learnings from this research. The reader can be the judge, but the model aims to be a practical, valuable resource that can be used by any security professional to navigate the security aspects logically and understandably when involved in a blockchain deployment. In contrast to the Satoshi quote, this research tries to convince the reader and assist him/her in understanding the security choices related to every blockchain deployment. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-10-29
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832 , Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice. These have been implemented by a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which is still under development. Although the main language of implementation is Java, the study includes services in C# and PHP and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards.
- Full Text:
- Date Issued: 2006
A development method for deriving reusable concurrent programs from verified CSP models
- Authors: Dibley, James
- Date: 2019
- Subjects: CSP (Computer program language) , Sequential processing (Computer science) , Go (Computer program language) , CSPIDER (Open source tool)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/72329 , vital:30035
- Description: This work proposes and demonstrates a novel method for software development that applies formal verification techniques to the design and implementation of concurrent programs. This method is supported by a new software tool, CSPIDER, which translates machine-readable Communicating Sequential Processes (CSP) models into encapsulated, reusable components coded in the Go programming language. In relation to existing CSP implementation techniques, this work is only the second to implement a translator, and it provides original support for some CSP language constructs and modelling approaches. The method is evaluated through three case studies: a concurrent sorting array, a trial-division prime number generator, and a component node for the Ricart-Agrawala distributed mutual exclusion algorithm. Each of these case studies presents the formal verification of safety and functional requirements through CSP model-checking, and it is shown that CSPIDER is capable of generating reusable implementations from each model. The Ricart-Agrawala case study demonstrates the application of the method to the design of a protocol component. This method maintains full compatibility with the primary CSP verification tool. Applying the CSPIDER tool requires minimal commitment to an explicitly defined modelling style and a very small set of pre-translation annotations, but all of these measures can be instated prior to verification. The Go code that CSPIDER produces requires no intervention before it may be used as a component within a larger development. The translator provides a traceable, structured implementation of the CSP model, automatically deriving formal parameters and a channel-based client interface from its interpretation of the CSP model (an illustrative sketch of such a channel-based component follows this record). Each case study demonstrates the use of the translated component within a simple test development.
- Full Text:
- Date Issued: 2019
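CSPIDER's generated code is not reproduced here; the sketch below, in Go, only illustrates the general shape such a translation targets: a CSP process realised as a goroutine whose client interface is a pair of channels. The process COPY = in?x -> out!x -> COPY and all identifiers are illustrative assumptions, not CSPIDER output.

```go
package main

import "fmt"

// Copy models the CSP process COPY = in?x -> out!x -> COPY as a goroutine
// that repeatedly accepts a value on in and offers it on out. The channels
// form the component's client interface; the names are illustrative only.
func Copy(in <-chan int, out chan<- int) {
	go func() {
		for x := range in {
			out <- x
		}
		close(out)
	}()
}

func main() {
	in := make(chan int)
	out := make(chan int)
	Copy(in, out)

	// A simple test environment: feed values in, read them back out.
	go func() {
		for _, v := range []int{3, 1, 2} {
			in <- v
		}
		close(in)
	}()

	for v := range out {
		fmt.Println(v)
	}
}
```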
A distributed approach to surround sound production
- Authors: Smith, Adrian Wilfrid
- Date: 1999
- Subjects: Surround-sound systems , Computer sound processing , Music -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4602 , http://hdl.handle.net/10962/d1004855 , Surround-sound systems , Computer sound processing , Music -- Data processing
- Description: The requirement for multi-channel surround sound in audio production applications is growing rapidly. Audio processing in these applications can be costly, particularly in multi-channel systems. A distributed approach is proposed for the development of a real-time spatialization system for surround sound music production, using Ambisonic surround sound methods. The latency in the system is analyzed, with a focus on the audio processing and network delays, in order to ascertain the feasibility of an enhanced, distributed real-time spatialization system.
- Full Text:
- Date Issued: 1999
A distributed Linda server on a network of heterogeneous processors
- Authors: Smith, Graham Leslie
- Date: 1993
- Subjects: LINDA (Computer system) , Parallel programming (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4610 , http://hdl.handle.net/10962/d1004890 , LINDA (Computer system) , Parallel programming (Computer science)
- Description: Linda is an approach to parallelism which relies on a virtual associative shared memory called tuple space. Tuple space is accessed through a small set of primitive operations and is conceptually easy to understand and manipulate (a brief sketch of these primitives follows this record). The physical implementation of a Linda tuple space may of course be completely different from the conceptual model. Rhodes has implemented versions of Linda on a ring of RS-232-joined PCs and on a cluster of T800 transputers with a single copy of tuple space on one transputer. Current research targets the implementation of a distributed Linda server on a network of heterogeneous processors. This work describes the design and implementation of a distributed Linda server. Emphasis is placed on aspects of the design which enhance portability and efficiency.
- Full Text:
- Date Issued: 1993
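To make the tuple-space model concrete, the sketch below gives a minimal, local (single-process) rendering of the classic Linda primitives out and in, written in Go. The distributed, heterogeneous server that the thesis describes is not reproduced; the types and the matching rule here are illustrative assumptions only.

```go
package main

import (
	"fmt"
	"sync"
)

// Tuple is an ordered list of values; nil fields in a template act as
// wildcards during matching. This is a local, in-memory sketch of the
// Linda model only.
type Tuple []interface{}

type TupleSpace struct {
	mu     sync.Mutex
	cond   *sync.Cond
	tuples []Tuple
}

func NewTupleSpace() *TupleSpace {
	ts := &TupleSpace{}
	ts.cond = sync.NewCond(&ts.mu)
	return ts
}

// Out deposits a tuple into the space (Linda's "out" primitive).
func (ts *TupleSpace) Out(t Tuple) {
	ts.mu.Lock()
	ts.tuples = append(ts.tuples, t)
	ts.mu.Unlock()
	ts.cond.Broadcast()
}

// In blocks until a tuple matching the template exists, then removes and
// returns it (Linda's "in" primitive); "rd" would be identical but without
// the removal.
func (ts *TupleSpace) In(template Tuple) Tuple {
	ts.mu.Lock()
	defer ts.mu.Unlock()
	for {
		for i, t := range ts.tuples {
			if matches(template, t) {
				ts.tuples = append(ts.tuples[:i], ts.tuples[i+1:]...)
				return t
			}
		}
		ts.cond.Wait()
	}
}

func matches(template, t Tuple) bool {
	if len(template) != len(t) {
		return false
	}
	for i, field := range template {
		if field != nil && field != t[i] {
			return false
		}
	}
	return true
}

func main() {
	ts := NewTupleSpace()
	go ts.Out(Tuple{"sum", 3, 4})
	// Template: match any tuple tagged "sum", wildcard the operands.
	fmt.Println(ts.In(Tuple{"sum", nil, nil})) // [sum 3 4]
}
```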
A formalised ontology for network attack classification
- Authors: Van Heerden, Renier Pelser
- Date: 2014
- Subjects: Computer networks -- Security measures , Computer security , Computer crimes -- Investigation , Computer crimes -- Prevention
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4691 , http://hdl.handle.net/10962/d1011603
- Description: One of the most popular attack vectors against computers is their network connections. Attacks on computers through their networks are commonplace and have various levels of complexity. This research describes network-based computer attacks in the form of a story, formally, and within an ontology. The ontology categorises network attacks, with attack scenarios as the focal class. This class consists of: Denial-of-Service, Industrial Espionage, Web Defacement, Unauthorised Data Access, Financial Theft, Industrial Sabotage, Cyber-Warfare, Resource Theft, System Compromise, and Runaway Malware. This ontology was developed by building a taxonomy and a temporal network attack model. Network attack instances (also known as individuals) are classified according to their respective attack scenarios, with the use of an automated reasoner within the ontology. The automated reasoner deductions are verified formally, and via the automated reasoner a relaxed set of scenarios is determined, which is relevant in a near real-time environment. A prototype system (called Aeneas) was developed to classify network-based attacks. Aeneas integrates the sensors into a detection system that can classify network attacks in a near real-time environment. To verify the ontology and the prototype Aeneas, a virtual test bed was developed in which network-based attacks were generated to verify the detection system. Aeneas was able to detect incoming attacks and classify them according to their scenario. The novel part of this research is the attack scenarios, which are described in the form of a story, formally, and in an ontology. The ontology is used in a novel way to determine to which class attack instances belong and how the network attack ontology is affected in a near real-time environment.
- Full Text:
- Date Issued: 2014
A framework for high speed lexical classification of malicious URLs
- Authors: Egan, Shaun Peter
- Date: 2014
- Subjects: Internet -- Security measures -- Research , Uniform Resource Identifiers -- Security measures -- Research , Neural networks (Computer science) -- Research , Computer security -- Research , Computer crimes -- Prevention , Phishing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4696 , http://hdl.handle.net/10962/d1011933 , Internet -- Security measures -- Research , Uniform Resource Identifiers -- Security measures -- Research , Neural networks (Computer science) -- Research , Computer security -- Research , Computer crimes -- Prevention , Phishing
- Description: Phishing attacks employ social engineering to target end-users, with the goal of stealing identifying or sensitive information. This information is used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting the resource. These URLs are obscured through several techniques which make automated detection difficult. Current methods used to detect malicious URLs face multiple problems which attackers use to their advantage. These problems include the time required to react to new attacks, shifts in trends in URL obfuscation, and usability problems caused by the latency of the lookups these approaches require. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple form of classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation and testing of these ANNs are gathered from PhishTank and Open Directory. Words selected from the different sections of the samples are used to create a Bag-of-Words (BOW), which is used as a binary input vector indicating the presence of a word for a given sample. Twenty additional features which measure lexical attributes of the sample are used to increase classification accuracy. A framework capable of generating these classifiers in an automated fashion is implemented. The classifiers are automatically stored on a remote update distribution service built to supply updates to classifier implementations. An example browser plugin is created that uses ANNs provided by this service; it is capable of both classifying URLs requested by a user in real time and blocking those requests. The framework is tested in terms of training time and classification accuracy, and the classification speed and the effectiveness of compression algorithms on the data required to distribute updates are also tested. It is concluded that it is possible to generate these ANNs frequently and in a form small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability. (A minimal sketch of the feature extraction and scoring idea follows this record.)
- Full Text:
- Date Issued: 2014
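As a rough sketch of the lexical approach the abstract describes, the snippet below builds a binary bag-of-words vector over a small, made-up vocabulary plus a few lexical features, and scores it with a single sigmoid neuron standing in for the trained ANN. The vocabulary, feature set and weights are illustrative assumptions, not the thesis's actual classifier.

```python
import math
import re
from urllib.parse import urlparse

# Hypothetical vocabulary; the real system selects words from the samples.
VOCAB = ["login", "secure", "account", "update", "paypal", "verify", "bank"]

def url_features(url: str) -> list[float]:
    """Binary bag-of-words vector plus a few lexical features for one URL."""
    parsed = urlparse(url)
    tokens = set(re.split(r"[\W_]+", (parsed.netloc + " " + parsed.path).lower()))
    bow = [1.0 if word in tokens else 0.0 for word in VOCAB]   # presence/absence of each word
    lexical = [
        float(len(url)),                       # overall URL length
        float(sum(c.isdigit() for c in url)),  # digit count
        float(parsed.netloc.count(".")),       # subdomain depth
        float(url.count("-")),                 # hyphen count
    ]
    return bow + lexical

def score(features: list[float], weights: list[float], bias: float = 0.0) -> float:
    """Single-neuron stand-in for the ANN: weighted sum passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    feats = url_features("http://secure-login.example123.com/paypal/verify")
    weights = [0.8] * len(VOCAB) + [0.01, 0.05, 0.3, 0.2]   # made-up weights
    print(round(score(feats, weights), 3))
```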
A framework for interpreting noisy, two-dimensional images, based on a fuzzification of programmed, attributed graph grammars
- Authors: Watkins, Gregory Shroll
- Date: 1998
- Subjects: Music -- Data processing , Computer sound processing , Artificial intelligence -- Musical applications , Fuzzy systems
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4604 , http://hdl.handle.net/10962/d1004862
- Description: This thesis investigates a fuzzy syntactic approach to the interpretation of noisy two-dimensional images. This approach is based on a modification of the attributed graph grammar formalism to utilise fuzzy membership functions in the applicability predicates. As far as we are aware, this represents the first such modification of graph grammars. Furthermore, we develop a method for programming the resultant fuzzy attributed graph grammars through the use of non-deterministic control diagrams. To do this, we modify the standard programming mechanism to allow it to cope with the fuzzy certainty values associated with productions in our grammar. Our objective was to develop a flexible framework which can be used for the recognition of a wide variety of image classes, and which is adept at dealing with noise in these images. Programmed graph grammars are specifically chosen for the ease with which they allow one to specify a new two-dimensional image class. We implement a prototype system for Optical Music Recognition using our framework. This system allows us to test the capabilities of the framework for coping with noise in the context of handwritten music score recognition. Preliminary results from the prototype system show that the framework copes well with noisy images.
- Full Text:
- Date Issued: 1998
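A minimal sketch of the core idea in this abstract, replacing a crisp applicability predicate with fuzzy membership functions, is shown below. The trapezoidal membership functions, the attribute names and the note-head production are hypothetical; the thesis's grammar operates on attributed graphs rather than bare dictionaries.

```python
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal fuzzy membership function with support [a, d] and plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def applicability(node_attrs: dict) -> float:
    """Fuzzy applicability predicate for a hypothetical 'note head' production.

    Instead of a crisp yes/no test on the node's attributes, each test yields
    a membership degree and the predicate returns their minimum (fuzzy AND),
    so the production carries a certainty value rather than a boolean.
    """
    roundness = trapezoid(node_attrs["roundness"], 0.5, 0.7, 1.0, 1.1)
    area = trapezoid(node_attrs["area"], 20, 40, 80, 120)
    return min(roundness, area)

if __name__ == "__main__":
    print(applicability({"roundness": 0.82, "area": 55}))   # 1.0 (clear match)
    print(applicability({"roundness": 0.55, "area": 15}))   # 0.0 (area outside support)
```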
A framework for malicious host fingerprinting using distributed network sensors
- Authors: Hunter, Samuel Oswald
- Date: 2018
- Subjects: Computer networks -- Security measures , Malware (Computer software) , Multisensor data fusion , Distributed Sensor Networks , Automated Reconnaissance Framework , Latency Based Multilateration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60653 , vital:27811
- Description: Numerous software agents exist and are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation, or reconnaissance, would be considered active analysis as opposed to the existing, mostly passive analysis. Unlike passive analysis, active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation grows. In addition, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform any data collection from various sources and across distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allow for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by these modules, an Automated Reconnaissance Framework (AR-Framework) was created. This framework was tasked with active and passive information collection and analysis of data in near-realtime, and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was that if sufficiently distinguishing characteristics of a host could be identified, then in combination they could act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed. To this end the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools able to illustrate this data in near-realtime and to provide various degrees of interaction to accommodate human interpretation of the data. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and the AR-Framework provided a unique perspective on a malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports and operating systems, location, and the inferred intent of these malicious hosts. The result expands our current understanding of malicious hosts on the Internet and enables further research in the area. (A minimal fingerprint-similarity sketch follows this record.)
- Full Text:
- Date Issued: 2018
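The fingerprinting hypothesis, that several host characteristics (including sensor-to-host latencies) can be combined into a reusable identifier, can be illustrated with the hypothetical similarity function below. The attribute set, weights and latency tolerance are assumptions for illustration only, not the scheme actually used by the AR-Framework.

```python
from dataclasses import dataclass, field

@dataclass
class HostFingerprint:
    open_ports: frozenset                     # e.g. frozenset({22, 80, 445})
    os_guess: str                             # e.g. "Linux"
    sensor_latencies_ms: dict = field(default_factory=dict)  # sensor id -> RTT in ms

def similarity(a: HostFingerprint, b: HostFingerprint,
               latency_tolerance_ms: float = 15.0) -> float:
    """Score how likely two observations describe the same host (0..1)."""
    # Jaccard similarity of the open-port sets.
    ports = len(a.open_ports & b.open_ports) / max(1, len(a.open_ports | b.open_ports))
    # Exact match on the OS guess.
    os_match = 1.0 if a.os_guess == b.os_guess else 0.0
    # Latency-based component: fraction of shared sensors whose RTTs agree.
    shared = set(a.sensor_latencies_ms) & set(b.sensor_latencies_ms)
    if shared:
        close = sum(abs(a.sensor_latencies_ms[s] - b.sensor_latencies_ms[s]) <= latency_tolerance_ms
                    for s in shared)
        latency = close / len(shared)
    else:
        latency = 0.0
    # Weights are invented for this sketch.
    return 0.4 * ports + 0.2 * os_match + 0.4 * latency

if __name__ == "__main__":
    seen_before = HostFingerprint(frozenset({22, 80}), "Linux", {"sensor-za": 180.0, "sensor-eu": 40.0})
    seen_now = HostFingerprint(frozenset({22, 80, 443}), "Linux", {"sensor-za": 176.0, "sensor-eu": 120.0})
    print(round(similarity(seen_before, seen_now), 2))   # 0.67
```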
A framework for responsive content adaptation in electronic display networks
- Authors: West, Philip
- Date: 2006
- Subjects: Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4589 , http://hdl.handle.net/10962/d1004824 , Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Description: Recent trends show an increase in the availability and functionality of handheld devices, wireless network technology, and electronic display networks. We propose the novel integration of these technologies to provide wireless access to content delivered to large-screen display systems. Content adaptation is used as a method of reformatting web pages to display more appropriately on handheld devices, and to remove unwanted content. A framework is presented that facilitates content adaptation, implemented as an adaptation layer, which is extended to provide personalization of adaptation settings and response to network conditions. The framework is implemented as a proxy server for a wireless network, and handles HTML and XML documents. Once a document has been requested by a user, the HTML/XML is retrieved and parsed, creating a Document Object Model tree representation. It is then altered according to the user's personal settings or predefined settings, based on current network usage and the network resources available. Three adaptation techniques were implemented: spatial representation, which generates an image map of the document; text summarization, which creates a tree-view representation of a document; and tag extraction, which replaces specific tags with links. Three proof-of-concept systems were developed in order to test the robustness of the framework: a system for use with digital slide shows, a digital signage system, and a generalized system for use with the Internet. Testing was performed by accessing sample web pages through the content adaptation proxy server. Tag extraction works correctly for all HTML and XML document structures, whereas spatial representation and text summarization are limited to a controlled subset. Results indicate that the adaptive system is able to reduce average bandwidth usage by decreasing the amount of data on the network, thereby allowing a greater number of users access to content. This suggests that responsive content adaptation has a positive influence on network performance metrics. (A simplified tag-extraction sketch follows this record.)
- Full Text:
- Date Issued: 2006
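Of the three adaptation techniques, tag extraction is the easiest to illustrate. The sketch below rewrites img tags as links using Python's standard html.parser; the real framework performs this inside a proxy on a full DOM tree, so this standalone rewriter is only an assumption-laden approximation.

```python
from html.parser import HTMLParser

class TagExtractor(HTMLParser):
    """Replace <img> tags with plain links so heavy content is fetched on demand."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src", "#")
            self.out.append(f'<a href="{src}">[image]</a>')   # the extracted replacement
        else:
            attr_text = "".join(
                f' {k}="{v}"' if v is not None else f" {k}" for k, v in attrs
            )
            self.out.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag != "img":
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def adapt(html: str) -> str:
    """Run the tag-extraction pass over an HTML fragment."""
    extractor = TagExtractor()
    extractor.feed(html)
    return "".join(extractor.out)

if __name__ == "__main__":
    print(adapt('<p>Weather map: <img src="/maps/today.png" alt="map"></p>'))
    # -> <p>Weather map: <a href="/maps/today.png">[image]</a></p>
```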
A framework for scoring and tagging NetFlow data
- Authors: Sweeney, Michael John
- Date: 2019
- Subjects: NetFlow , Big data , High performance computing , Event processing (Computer science)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/65022 , vital:28654
- Description: With the increase in link speeds and the growth of the Internet, the volume of NetFlow data generated has increased significantly over time, and processing these volumes has become a challenge; more specifically, a Big Data challenge. With the advent of technologies and architectures designed to handle Big Data volumes, researchers have investigated their application to the processing of NetFlow data. This work builds on prior work in which a scoring methodology was proposed for identifying anomalies in NetFlow, by proposing and implementing a system that allows for automatic, real-time scoring through the adoption of Big Data stream-processing architectures. The first part of the research looks at the means of event detection using the scoring approach and implements it as a number of individual, standalone components, each responsible for detecting and scoring a single type of flow trait. The second part is the implementation of these scoring components in a framework, named Themis, capable of handling high volumes of data with low-latency processing times. This was tackled using tools, technologies and architectural elements from the world of Big Data stream processing. The framework was shown to achieve good flow throughput at low processing latencies on a single low-end host. The successful demonstration of the framework on a single host opens the way to leveraging the scaling capabilities afforded by the architectures and technologies used. This gives weight to the possibility of using the framework for real-time threat detection using NetFlow data from larger networked environments. (A minimal per-flow scoring sketch follows this record.)
- Full Text:
- Date Issued: 2019
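A minimal sketch of the per-trait scoring idea is given below: each standalone component scores one trait of a NetFlow record and the totals are compared against a threshold. The traits, weights and threshold are invented for illustration; the actual framework (Themis) implements such components on a Big Data stream-processing architecture rather than over in-memory dictionaries.

```python
# Each scorer examines a single flow trait and contributes to the total score.
SUSPICIOUS_PORTS = {23, 445, 3389}

def score_port(flow: dict) -> float:
    """Score flows aimed at commonly abused destination ports."""
    return 5.0 if flow["dst_port"] in SUSPICIOUS_PORTS else 0.0

def score_tiny_flow(flow: dict) -> float:
    """Score very small flows, typical of scanning rather than real sessions."""
    return 3.0 if flow["bytes"] < 100 and flow["packets"] <= 2 else 0.0

def score_off_hours(flow: dict) -> float:
    """Score flows that start outside normal working hours."""
    return 1.0 if flow["start_hour"] < 6 or flow["start_hour"] > 22 else 0.0

SCORERS = [score_port, score_tiny_flow, score_off_hours]

def score_flow(flow: dict, threshold: float = 6.0) -> tuple[float, bool]:
    """Return the total score and whether the flow should be flagged."""
    total = sum(scorer(flow) for scorer in SCORERS)
    return total, total >= threshold

if __name__ == "__main__":
    flow = {"dst_port": 445, "bytes": 60, "packets": 1, "start_hour": 3}
    print(score_flow(flow))   # (9.0, True)
```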
A framework for the application of network telescope sensors in a global IP network
- Authors: Irwin, Barry Vivian William
- Date: 2011
- Subjects: Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Internet , Computer security , Computers -- Access control , Computer networks -- Security measures , Computer viruses , Malware (Computer software)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4593 , http://hdl.handle.net/10962/d1004835
- Description: The use of network telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months, collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network-topology levels. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted, and a discussion relating to observed bogon traffic is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-realtime analysis, and the second is a two-dimensional fractal-based plotting scheme which allows plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described, the use of which, it is hoped, will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation's integrated network security systems. (A minimal event-aggregation sketch follows this record.)
- Full Text:
- Date Issued: 2011
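A toy example of the low-level aggregation such an analysis starts from, counting telescope events per TCP destination port and per source /8 prefix, is sketched below. The event fields and sample records are hypothetical; a real dataset would come from packet captures taken on the /24 sensor.

```python
from collections import Counter

def summarise(events):
    """Count events per TCP destination port and per source /8 prefix."""
    by_port = Counter()
    by_src_slash8 = Counter()
    for ev in events:
        if ev["proto"] == "tcp":
            by_port[ev["dst_port"]] += 1
        by_src_slash8[ev["src_ip"].split(".")[0] + ".0.0.0/8"] += 1
    return by_port, by_src_slash8

if __name__ == "__main__":
    sample = [
        {"proto": "tcp", "dst_port": 445, "src_ip": "198.51.100.7"},
        {"proto": "tcp", "dst_port": 445, "src_ip": "203.0.113.9"},
        {"proto": "udp", "dst_port": 53,  "src_ip": "198.51.100.8"},
    ]
    ports, prefixes = summarise(sample)
    print(ports.most_common(1))      # [(445, 2)] -- e.g. Conficker-era 445/tcp activity
    print(prefixes.most_common(1))   # [('198.0.0.0/8', 2)]
```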