An exploration of the overlap between open source threat intelligence and active internet background radiation
- Authors: Pearson, Deon Turner
- Date: 2020
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Malware (Computer software) , TCP/IP (Computer network protocol) , Open source intelligence
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103802 , vital:32299
- Description: Organisations and individuals are facing increasingly persistent threats on the Internet from worms, port scanners, and malicious software (malware). These threats are constantly evolving as new attack techniques are discovered. To aid in the detection and prevention of such threats, and to stay ahead of the adversaries conducting the attacks, security specialists are utilising Threat Intelligence (TI) data in their defence strategies. TI data can be obtained from a variety of sources, such as private routers, firewall logs, public archives, and public or private network telescopes. However, given the rate and ease with which TI is produced and published, specifically Open Source Threat Intelligence (OSINT), its quality is dropping, resulting in fragmented, context-less and variable data. This research utilised two sets of TI data: a collection of OSINT and active Internet Background Radiation (IBR). The data was collected over a period of 12 months, from 37 publicly available OSINT datasets and five IBR datasets. Through the identification and analysis of common data between the OSINT and IBR datasets, this research was able to gain insight into how effective OSINT is at detecting and potentially reducing ongoing malicious Internet traffic. As part of this research, a minimal framework for the collection, processing/analysis, and distribution of OSINT was developed and tested. The research focused on exploring areas in common between the two datasets, with the intention of creating an enriched, contextualised, and reduced set of malicious source IP addresses that could be published for consumers to use in their own environments. The findings pointed towards a persistent group of IP addresses observed in both datasets over the period under research. Using these persistent IP addresses, the research was able to identify the specific services being targeted. Amongst these persistent IP addresses was significant traffic from Mirai-like IoT malware on ports 23/tcp and 2323/tcp, as well as general scanning activity on port 445/tcp. An illustrative sketch of this overlap analysis follows the record below.
- Full Text:
- Date Issued: 2020
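The core of the overlap analysis described above is set intersection over normalised source IP addresses, followed by a count of the services those sources target. Below is a minimal Python sketch of that idea, assuming each feed has already been reduced to one IP per line; the file names and the "ip,port" CSV layout are hypothetical illustrations, not artefacts of the thesis.

```python
# Sketch of the OSINT/IBR overlap analysis described above. Assumes each
# feed has been normalised to one source IP per line (file names and
# layouts are hypothetical).
from collections import Counter

def load_ips(path):
    """Read one IP address per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

osint_ips = load_ips("osint_sources.txt")   # union of the 37 OSINT feeds
ibr_ips = load_ips("ibr_telescope.txt")     # sources seen by the telescopes

overlap = osint_ips & ibr_ips               # reported by OSINT *and* observed in IBR

# Rank targeted services among the overlapping sources, given one
# "ip,port" record per IBR packet -- e.g. 23/tcp and 2323/tcp for
# Mirai-like traffic, 445/tcp for general scanning.
port_hits = Counter()
with open("ibr_packets.csv") as f:           # hypothetical "ip,port" CSV
    for line in f:
        ip, port = line.strip().split(",")
        if ip in overlap:
            port_hits[port] += 1

print(f"{len(overlap)} persistent sources; top ports: {port_hits.most_common(5)}")
```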
A comparison of open source and proprietary digital forensic software
- Authors: Sonnekus, Michael Hendrik
- Date: 2015
- Subjects: Computer crimes , Computer crimes -- Investigation , Electronic evidence , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4717 , http://hdl.handle.net/10962/d1017939
- Description: Scrutiny of the capabilities and accuracy of computer forensic tools is increasing as the number of incidents relying on digital evidence, and the weight of that evidence, increase. This thesis describes the capabilities of the leading proprietary and open source digital forensic tools. The capabilities of the tools were tested separately on digital media that had been formatted using Windows and Linux. Experiments were carried out with the intention of establishing whether the capabilities of open source computer forensic tools are similar to those of proprietary tools, and whether these tools could complement one another. The tools were tested with regard to their capabilities to make and analyse digital forensic images in a forensically sound manner. The tests were carried out on each media type after deleting data from the media, and then repeated after formatting the media. The results of the experiments demonstrate that proprietary and open source computer forensic tools each have superior capabilities in different scenarios, and that the toolsets can be used to validate and complement one another. The implication of these findings is that investigators have an affordable means of validating their findings and are able to investigate digital media more effectively. A small sketch of the image-verification step underlying such tests follows the record below.
- Full Text:
- Date Issued: 2015
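Both tool families are judged above on making and analysing images "in a forensically sound manner", which in practice hinges on the acquired image hashing identically to the source medium. The following is a small Python sketch of that verification step, under the assumption of direct read access to the raw device; the device and image paths are illustrative only.

```python
# Minimal sketch of the "forensically sound" imaging check: the acquired
# image must hash to the same value as the source medium.
import hashlib

def sha256_of(path, block_size=1 << 20):
    """Stream a file (or raw device) and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()

source = sha256_of("/dev/sdb")      # original medium (requires read access)
image = sha256_of("evidence.dd")    # image produced by the tool under test
print("verified" if source == image else "MISMATCH -- image is not sound")
```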
A study of real-time operating systems for microcomputers
- Authors: Wells, George Clifford
- Date: 1990
- Subjects: Operating systems (Computers) , Microcomputers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4611 , http://hdl.handle.net/10962/d1004896
- Description: This thesis describes the evaluation of four operating systems for microcomputers. The emphasis of the study is on the suitability of the operating systems for use in real-time applications, such as process control. The evaluation was performed in two sections. The first section was a quantitative assessment of the performance of the real-time features of the operating systems, performed using benchmarks. The criteria for the benchmarks and their design are discussed. The second section was a qualitative assessment of the suitability of the operating systems for the development and implementation of real-time systems. This was assessed through the implementation of a small simulation of a manufacturing process and its associated control system. The simulation was designed using the Ward and Mellor real-time design method, which was extended to handle the special case of a real-time simulation. The operating systems selected for the study covered a spectrum from general-purpose operating systems to small, specialised real-time operating systems. From the quantitative assessment it emerged that QNX (from Quantum Software Systems) had the best overall performance. Qualitatively, UNIX was found to offer the best system development environment, but it does not have the performance and characteristics required for real-time applications. This suggests that versions of UNIX adapted for real-time applications are worth careful consideration for use both as development systems and implementation systems. A toy analogue of one such benchmark follows the record below.
- Full Text:
- Date Issued: 1990
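The quantitative half of the study benchmarked real-time features such as response times. The thesis's benchmarks targeted 1990s microcomputer operating systems; as a loose modern analogue, the Python sketch below measures one comparable quantity, the jitter with which the host OS honours a requested sleep. It illustrates the kind of measurement involved, not the original benchmark suite.

```python
# Illustrative analogue of one class of real-time benchmark: measure how
# far the OS overshoots a requested sleep (scheduling jitter).
import time

def sleep_jitter(interval_s=0.001, samples=1000):
    worst = total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(interval_s)
        overshoot = (time.perf_counter() - start) - interval_s
        worst = max(worst, overshoot)
        total += overshoot
    return total / samples, worst

mean, worst = sleep_jitter()
print(f"mean overshoot {mean * 1e6:.0f} us, worst {worst * 1e6:.0f} us")
```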
Constructing a low-cost, open-source VoiceXML gateway
- Authors: King, Adam
- Date: 2007 , 2013-07-01
- Subjects: VoiceXML (Document markup language) , Asterisk (Computer file) , Internet telephony , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4585 , http://hdl.handle.net/10962/d1004735
- Description: Voice-enabled applications, applications that interact with a user via an audio channel, are used extensively today. Their use is growing as speech-related technologies improve, as speech is one of the most natural methods of interaction. They can provide customer support as IVRs, can be used as an assistive technology, or can become an aural interface to the Internet. Given that the telephone is used extensively throughout the globe, the number of potential users of voice-enabled applications is very high. VoiceXML is a popular, open, high-level, standard means of creating voice-enabled applications, designed to bring the benefits of web-based development to services. While VoiceXML is an ideal language for creating these applications, VoiceXML gateways, the hardware and software responsible for interpreting VoiceXML applications and interfacing with the PSTN, are still expensive, and so there is a need for a low-cost gateway. Asterisk, an open-source TDM/VoIP telephony platform, can be used as a low-cost PSTN interface. This thesis investigates adding a VoiceXML service to Asterisk, creating a low-cost VoiceXML prototype gateway which is able to render voice-enabled applications. Following the Component-Based Software Engineering (CBSE) paradigm, the VoiceXML gateway is divided into a set of components which are sourced from the open-source community and integrated to create the gateway. The browser requires a VoiceXML interpreter (OpenVXI), a Text-To-Speech engine (Festival) and a speech recognition engine (Sphinx 4). The integration of the components results in a low-cost, open-source VoiceXML gateway. System tests show that the integration of the components was successful, and that the system can handle concurrent calls. A fully compliant version of the gateway can be used in the real world to render voice-enabled applications at a low cost. A stub-level sketch of the component wiring follows the record below.
- Full Text:
- Date Issued: 2007
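The gateway above is assembled from three components, OpenVXI, Festival and Sphinx 4, driven from Asterisk. The sketch below shows the shape of one render/listen turn using stand-in Python classes; the real components are native libraries, so every interface here is a hypothetical stub, not the thesis's actual integration code.

```python
# Stub-level sketch of the CBSE component wiring described above. The
# three classes stand in for OpenVXI, Festival and Sphinx 4; all
# interfaces are hypothetical.
class Interpreter:                      # stands in for OpenVXI
    def next_prompt(self, doc, heard=None):
        return "Goodbye." if heard else "Welcome. Say 'balance' or 'agent'."

class TextToSpeech:                     # stands in for Festival
    def synthesise(self, text):
        return b"<pcm audio for: %s>" % text.encode()

class Recogniser:                       # stands in for Sphinx 4
    def recognise(self, audio):
        return "balance"

def call_turn(vxml_doc, caller_audio):
    """One render/listen turn of the loop Asterisk would drive per call."""
    vxi, tts, asr = Interpreter(), TextToSpeech(), Recogniser()
    prompt_audio = tts.synthesise(vxi.next_prompt(vxml_doc))  # play to caller
    heard = asr.recognise(caller_audio)                       # caller's reply
    return vxi.next_prompt(vxml_doc, heard)                   # advance dialogue

print(call_turn("app.vxml", b"<caller audio>"))
```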
Natural Language Processing with machine learning for anomaly detection on system call logs
- Authors: Goosen, Christo
- Date: 2023-10-13
- Subjects: Natural language processing (Computer science) , Machine learning , Information security , Anomaly detection (Computer security) , Host-based intrusion detection system
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/424699 , vital:72176
- Description: Host intrusion detection systems and machine learning have been studied for many years, especially on datasets like KDD99. Current research and systems focus on problems with low training and processing complexity, such as system call returns, which lack the system call arguments and potential traces of exploits run against a system. With respect to malware and vulnerabilities, signatures are relied upon, and the potential for natural language processing of the resulting logs and system call traces needs further experimentation. This research looks at unstructured raw system call traces from x86_64 GNU/Linux operating systems with natural language processing and supervised and unsupervised machine learning techniques to identify current and unseen threats. The research explores whether these tools are within the skill set of information security professionals, or require data science professionals. The research makes use of an academic and modern system call dataset from Leipzig University and applies two machine learning models based on decision trees. Random Forest as the supervised algorithm is compared to the unsupervised Isolation Forest algorithm, with each experiment repeated after hyper-parameter tuning. The research finds conclusive evidence that the Isolation Forest algorithm is effective, when paired with Principal Component Analysis, in identifying anomalies in the modern Leipzig Intrusion Detection Data Set (LID-DS) combined with samples of executed malware from the VirusTotal Academic dataset. The base or default model parameters produce sub-optimal results, whereas hyper-parameter tuning increases the accuracy to promising levels for anomaly and potential zero-day detection. A minimal sketch of such a pipeline follows the record below. , Thesis (MSc) -- Faculty of Science, Computer Science, 2023
- Full Text:
- Date Issued: 2023-10-13
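The pipeline described, treating each system call trace as a document, pairing dimensionality reduction with an Isolation Forest, can be sketched with scikit-learn as below. The toy traces stand in for LID-DS data, and the vectorisation choice (TF-IDF over whitespace-separated call names) is an assumption for illustration, not necessarily the thesis's feature representation.

```python
# Sketch of the anomaly-detection pipeline described: vectorise each
# system call trace, reduce with PCA, then score with an Isolation Forest.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

traces = [
    "open read read write close",
    "open read write close",
    "socket connect execve ptrace",   # anomalous-looking trace
]

X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(traces).toarray()
X = PCA(n_components=2).fit_transform(X)   # the PCA pairing from the thesis

clf = IsolationForest(n_estimators=100, contamination="auto", random_state=0)
print(clf.fit_predict(X))                   # -1 marks predicted anomalies
```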
Modelling parallel and distributed virtual reality systems for performance analysis and comparison
- Authors: Bangay, Shaun Douglas
- Date: 1997
- Subjects: Virtual reality , Computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4657 , http://hdl.handle.net/10962/d1006656
- Description: Most Virtual Reality systems employ some form of parallel processing, making use of multiple processors which are often distributed over large areas geographically, and which communicate via various forms of message passing. The approaches to parallel decomposition differ for each system, as do the performance implications of each approach. Previous comparisons have only identified and categorised the different approaches; none have examined the performance issues involved in the different parallel decompositions. Performance measurement for a Virtual Reality system differs from that of other parallel systems in that some measure of the delays involved with the interaction of the separate components is required, in addition to the measure of the throughput of the system. Existing performance analysis approaches are typically not well suited to providing both these measures. This thesis describes the development of a performance analysis technique that is able to provide measures of both interaction latency and cycle time for a model of a Virtual Reality system. This technique allows performance measures to be generated as symbolic expressions describing the relationships between the delays in the model. It automatically generates constraint regions, specifying the values of the system parameters for which performance characteristics change. The performance analysis technique shows strong agreement with values measured from implementations of three common decomposition strategies on two message passing architectures. The technique is successfully applied to a range of parallel decomposition strategies found in Parallel and Distributed Virtual Reality systems. For each system, the primary decomposition techniques are isolated and analysed to determine their performance characteristics. This analysis allows a comparison of the various decomposition techniques, and in many cases reveals trends in their behaviour that would have gone unnoticed with alternative analysis techniques. The work described in this thesis supports the performance analysis and comparison of Parallel and Distributed Virtual Reality systems. In addition, it acts as a reference, describing the performance characteristics of decomposition strategies used in Virtual Reality systems. A toy symbolic rendering of the two measures follows the record below.
- Full Text:
- Date Issued: 1997
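The technique described expresses cycle time and interaction latency symbolically over per-stage delays. As a toy rendering of that idea, assuming a simple three-stage pipeline decomposition (the stage names and numbers are illustrative, not the thesis's models), a sympy sketch:

```python
# Symbolic performance measures for a hypothetical three-stage pipeline:
# throughput is bounded by the slowest stage, while an interaction must
# traverse every stage before its effect is visible.
import sympy as sp

t_in, t_sim, t_rend = sp.symbols("t_in t_sim t_rend", positive=True)

cycle_time = sp.Max(t_in, t_sim, t_rend)   # slowest pipeline stage
latency = t_in + t_sim + t_rend            # sum over all stages

print(cycle_time, "|", latency)
# Evaluate under a hypothetical assignment (milliseconds) where rendering
# dominates -- one point inside a "rendering-bound" constraint region:
vals = {t_in: 2, t_sim: 5, t_rend: 12}
print(cycle_time.subs(vals), latency.subs(vals))   # 12 and 19
```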
Parallel implementation of a virtual reality system on a transputer architecture
- Authors: Bangay, Shaun Douglas
- Date: 1994 , 2012-10-11
- Subjects: Virtual reality , Computer simulation , Transputers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4668 , http://hdl.handle.net/10962/d1006687
- Description: A Virtual Reality is a computer model of an environment, actual or imagined, presented to a user in as realistic a fashion as possible. Stereo goggles may be used to provide the user with a view of the modelled environment from within the environment, while a data-glove is used to interact with the environment. To simulate reality on a computer, the machine has to produce realistic images rapidly. Such a requirement usually necessitates expensive equipment. This thesis presents an implementation of a virtual reality system on a transputer architecture. The system is general, and is intended to provide support for the development of various virtual environments. The three main components of the system are the output device drivers, the input device drivers, and the virtual world kernel. This last component is responsible for the simulation of the virtual world. The rendering system is described in detail. Various methods for implementing the components of the graphics pipeline are discussed. These are then generalised to make use of the facilities provided by the transputer processor for parallel processing. A number of different decomposition techniques are implemented and compared. The emphasis in this section is on the speed at which the world can be rendered, and the interaction latency involved. In the best case, where almost linear speedup is obtained, a world containing over 250 polygons is rendered at 32 frames/second. The bandwidth of the transputer links is the major factor limiting speedup. A description is given of an input device driver which makes use of a powerglove. Techniques for overcoming the limitations of this device, and for interacting with the virtual world, are discussed. The virtual world kernel is designed to make extensive use of the parallel processing facilities provided by transputers. It is capable of providing support for multiple worlds concurrently, and for multiple users interacting with these worlds. Two applications are described that were successfully implemented using this system. The design of the system is compared with other recently developed virtual reality systems. Features that are common or advantageous in each of the systems are discussed. The system described in this thesis compares favourably, particularly in its use of parallel processors. A toy model of the speedup/bandwidth trade-off follows the record below.
- Full Text:
- Date Issued: 1994
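A toy model of the reported behaviour, near-linear speedup until fixed-bandwidth link traffic dominates, is sketched below. All constants (per-polygon render and link costs) are invented for illustration and are not measurements from the thesis.

```python
# Toy model of the trade-off reported above: rendering work divides across
# processors, but link traffic stays serialised, capping the speedup.
def frame_time(processors, polygons=250, render_us=120.0, link_us_per_poly=9.0):
    compute = polygons * render_us / processors   # parallel rendering work
    comm = polygons * link_us_per_poly            # serialised link traffic
    return (compute + comm) / 1e6                 # seconds per frame

for p in (1, 2, 4, 8):
    print(p, "processors:", round(1.0 / frame_time(p)), "frames/second")
```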
A structural and functional specification of a SCIM for service interaction management and personalisation in the IMS
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2012
- Subjects: Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4606 , http://hdl.handle.net/10962/d1004864
- Description: The Internet Protocol Multimedia Subsystem (IMS) is a component of the 3G mobile network that has been specified by standards development organisations such as the 3GPP (3rd Generation Partnership Project) and ETSI (European Telecommunications Standards Institute). IMS seeks to guarantee that the telecommunication network of the future provides subscribers with seamless access to services across disparate networks. In order to achieve this, it defines a service architecture that hosts application servers that provide subscribers with value added services. Typically, an application server bundles all the functionality it needs to execute the services it delivers; however, this view is currently being challenged. It is now thought that services should be synthesised from simple building blocks called service capabilities. This decomposition would facilitate the re-use of service capabilities across multiple services and would support the creation of new services that could not have originally been conceived. The shift from monolithic services to those built from service capabilities poses a challenge to the current service model in IMS. To accommodate this, the 3GPP has defined an entity known as a service capability interaction manager (SCIM) that would be responsible for managing the interactions between service capabilities in order to realise complex services. Some of these interactions could potentially lead to undesirable results, which the SCIM must work to avoid. As an added requirement, it is believed that the network should allow policies to be applied to network services, which the SCIM should be responsible for enforcing. At the time of writing, the functional and structural architecture of the SCIM has not yet been standardised. This thesis explores the current service architecture of the IMS in detail. Proposals that address the structure and functions of the SCIM are carefully compared and contrasted. This investigation leads to the presentation of key aspects of the SCIM, and provides solutions that explain how it should interact with service capabilities, manage undesirable interactions, and factor user and network operator policies into its execution model. A modified design of the IMS service layer that embeds the SCIM is subsequently presented and described. The design uses existing IMS protocols and requires no change in the behaviour of the standard IMS entities. In order to develop a testbed for experimental verification of the design, the identification of suitable software platforms was required. This thesis presents some of the most popular platforms currently used by developers, such as the Open IMS Core and OpenSER, as well as an open-source, Java-based multimedia communication platform called Mobicents. As a precursor to the development of the SCIM, a converged multimedia service is presented that describes how a video streaming application leveraged by a web portal was implemented for an IMS testbed using Mobicents components. The Mobicents SIP Servlets container was subsequently used to model an initial prototype of the SCIM, using a multi-component telephony service to illustrate the proposed service execution model. The design focuses on SIP-based services only, but should also work for other types of IMS application servers. A toy sketch of the proposed execution model follows the record below.
- Full Text:
- Date Issued: 2012
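The execution model proposed for the SCIM, invoking service capabilities in sequence while enforcing user and operator policies and avoiding undesirable interactions, can be caricatured in a few lines of Python. The capability functions and the policy shape below are hypothetical; the actual prototype was built on the Mobicents SIP Servlets container.

```python
# Caricature of the SCIM execution model: compose a service from
# capability building blocks, applying policies before each invocation.
# Capability names and the policy shape are hypothetical.
def presence(ctx): ctx["callee_available"] = True; return ctx
def call_setup(ctx): ctx["call"] = "established"; return ctx
def billing(ctx): ctx["charged"] = True; return ctx

class SCIM:
    def __init__(self, policies):
        self.policies = policies        # e.g. user opt-outs, operator rules

    def execute(self, capabilities, ctx):
        for cap in capabilities:
            if not all(rule(cap, ctx) for rule in self.policies):
                continue                # policy blocks this capability
            ctx = cap(ctx)              # invoke the next building block
        return ctx

no_billing_for_test_users = lambda cap, ctx: not (
    cap is billing and ctx.get("user") == "test")

scim = SCIM([no_billing_for_test_users])
print(scim.execute([presence, call_setup, billing], {"user": "test"}))
```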
Prototyping a peer-to-peer session initiation protocol user agent
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2008 , 2008-03-10
- Subjects: Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4646 , http://hdl.handle.net/10962/d1006603
- Description: The Session Initiation Protocol (SIP) has in recent years become a popular protocol for the exchange of text, voice and video over IP networks. This thesis proposes the use of a class of structured peer-to-peer protocols, commonly known as Distributed Hash Tables (DHTs), to provide a SIP overlay with services such as end-point location management and message relay, in the absence of traditional, centralised resources such as SIP proxies and registrars. A peer-to-peer layer named OverCord, which allows interaction with any specific DHT protocol via the use of appropriate plug-ins, was designed, implemented and tested. This layer was then incorporated into a SIP user agent distributed by NIST (National Institute of Standards and Technology, USA). The modified user agent is capable of reliably establishing text, audio and video communication with similarly modified agents (peers) as well as with conventional, centralised SIP overlays. A toy sketch of DHT-based end-point location follows the record below.
- Full Text:
- Date Issued: 2008
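The end-point location service that the DHT overlay provides amounts to hashing a SIP address-of-record onto an identifier ring and registering it at the responsible node. The sketch below shows key placement with a one-hop successor lookup; OverCord itself delegates routing to pluggable DHT protocols, so this is an illustration of the principle rather than its implementation.

```python
# Sketch of DHT-backed SIP location: hash an address-of-record onto a
# ring and pick the successor node as its registrar. A real DHT would use
# Chord-style multi-hop routing; this one-hop version shows key placement.
import hashlib
from bisect import bisect_left

def ring_id(name, bits=32):
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << bits)

nodes = sorted(ring_id(n) for n in ["peer-a:5060", "peer-b:5060", "peer-c:5060"])

def successor(key):
    """First node clockwise from the key (wrapping around the ring)."""
    i = bisect_left(nodes, key)
    return nodes[i % len(nodes)]

aor = "sip:alice@example.org"
print(f"{aor} -> node {successor(ring_id(aor)):#010x}")
```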
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice. These have been implemented using a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which at the time was still under development. Although the main language of implementation is Java, the study includes services in C# and PHP, and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may result in a solution that achieves interoperability through core web-service standards. A toy contrast of the two invocation styles follows the record below.
- Full Text:
- Date Issued: 2006
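One contrast the survey draws, REST addressing resources directly versus protocol stacks that wrap operations in envelopes, can be made concrete with two request shapes. The sketch below builds (without sending) a REST GET and a SOAP POST for a hypothetical stock service; the endpoint URLs and message vocabulary are invented for illustration.

```python
# Two invocation styles for the same hypothetical service: REST addresses
# the resource by URI; SOAP posts an XML envelope to a single endpoint.
from urllib import request

# REST: the resource is addressed directly by URI.
rest_req = request.Request("http://example.org/stock/IBM", method="GET")

# SOAP: one POST endpoint, operation encoded in the envelope body.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body><GetStockPrice><Symbol>IBM</Symbol></GetStockPrice></soap:Body>
</soap:Envelope>"""
soap_req = request.Request("http://example.org/services/stock", method="POST",
                           data=soap_body.encode(),
                           headers={"Content-Type": "application/soap+xml"})

print(rest_req.full_url, "vs", soap_req.full_url)  # same service, two styles
```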
Investigating unimodal isolated signer-independent sign language recognition
- Authors: Marais, Marc Jason
- Date: 2024-04-04
- Subjects: Convolutional neural network , Sign language recognition , Human activity recognition , Pattern recognition systems , Neural networks (Computer science)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/435343 , vital:73149
- Description: Sign language serves as the mode of communication for the Deaf and Hard of Hearing community, embodying a rich linguistic and cultural heritage. Recent Sign Language Recognition (SLR) system developments aim to facilitate seamless communication between the Deaf community and the broader society. However, most existing systems are limited by signer-dependent models, hindering their adaptability to diverse signing styles and signers, thus impeding their practical implementation in real-world scenarios. This research explores various unimodal approaches, both pose-based and vision-based, for isolated signer-independent SLR using RGB video input on the LSA64 and AUTSL datasets. The unimodal RGB-only input strategy provides a realistic SLR setting where alternative data sources are either unavailable or necessitate specialised equipment. Through systematic testing scenarios, isolated signer-independent SLR experiments are conducted on both datasets, primarily focusing on AUTSL – a signer-independent dataset. The vision-based R(2+1)D-18 model emerged as the top performer, achieving 90.64% accuracy on the unseen AUTSL dataset test split, closely followed by the pose-based Spatio-Temporal Graph Convolutional Network (ST-GCN) model with an accuracy of 89.95%. Furthermore, these models achieved comparable accuracies at a significantly lower computational demand. Notably, the pose-based approach demonstrates robust generalisation to substantial background and signer variation, and demands significantly less computational power and training time than vision-based approaches. The proposed unimodal pose-based and vision-based systems were both concluded to be effective at classifying sign classes in the LSA64 and AUTSL datasets. A minimal sketch of adapting the vision-based model follows the record below. , Thesis (MSc) -- Faculty of Science, Ichthyology and Fisheries Science, 2024
- Full Text:
- Date Issued: 2024-04-04
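The top-performing vision-based model has an off-the-shelf implementation in torchvision, so adapting it to a sign-class output can be sketched in a few lines. The snippet assumes the torchvision >= 0.13 API and uses LSA64's 64 classes as an example; training, clip sampling and preprocessing are omitted.

```python
# Sketch of adapting torchvision's R(2+1)D-18 to sign classification.
# Clips follow the (batch, channels, frames, height, width) convention.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

num_classes = 64                        # e.g. the 64 signs of LSA64
model = r2plus1d_18(weights=None)       # or Kinetics-pretrained weights
model.fc = nn.Linear(model.fc.in_features, num_classes)

clip = torch.randn(2, 3, 16, 112, 112)  # two 16-frame RGB clips
print(model(clip).shape)                # torch.Size([2, 64])
```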
The implementation of a mobile application to decrease occupational sitting through goal setting and social comparison
- Authors: Tsaoane, Moipone Lipalesa
- Date: 2022-10-14
- Subjects: Sedentary behavior , Sitting position , Feedback , Mobile apps , Behavior modification , Agile software development
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/365544 , vital:65758
- Description: Background: Feedback proves to be a valuable tool in behaviour change, as it is said to increase compliance and improve the effectiveness of interventions. Interventions that focus on decreasing sedentary behaviour as a factor independent of physical activity are necessary, especially for office workers who spend most of their day seated. There is insufficient knowledge regarding the effectiveness of feedback as a tool to decrease sedentary behaviour. This project implemented a tool that can be used to determine this. To take advantage of the cost-effectiveness and scalability of digital technologies, a mobile application was selected as the mode of delivery. Method: The application was designed as an intervention, using the Theoretical Domains Framework. It was then implemented into a fully functioning application through an agile development process, using the Xamarin.Forms framework. Due to challenges with this framework, a second application was developed using the React Native framework. Pilot studies were used for testing, with the final one consisting of Rhodes University employees. Results: The Xamarin.Forms application proved to be unfeasible; some users experienced fatal errors and crashes. The React Native application worked as desired and produced accurate and consistent step count readings, proving feasible from a functionality standpoint. The agile methodology enabled the developer to focus on implementing and testing one component at a time, which made the development process more manageable. Conclusion: Future work must conduct empirical studies to determine if feedback is an effective tool compared to a control group, and which type of feedback (goal-setting or social comparison) is most effective. A toy sketch of the two feedback strategies follows the record below. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
- Date Issued: 2022-10-14
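The two feedback types under comparison, goal-setting and social comparison, reduce to simple functions of a user's daily activity summary. A toy Python sketch follows; thresholds, wording and peer data are invented for illustration, and the real applications were built in Xamarin.Forms and React Native rather than Python.

```python
# Toy versions of the two feedback strategies delivered by the app.
def goal_setting_feedback(steps, goal=8000):
    pct = 100 * steps / goal
    return f"You reached {pct:.0f}% of your {goal}-step goal today."

def social_comparison_feedback(steps, peer_steps):
    rank = 1 + sum(s > steps for s in peer_steps)   # 1 = most active
    return f"You rank {rank} of {len(peer_steps) + 1} among participants."

print(goal_setting_feedback(6200))
print(social_comparison_feedback(6200, [4100, 7300, 5900]))
```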
Leveraging LTSP to deploy a sustainable e-infrastructure for poor communities in South Africa
- Authors: Zvidzayi, Tichaona Manyara
- Date: 2022-10-14
- Subjects: Linux Terminal Server Project , Network computers , Thin client , Fat client , Cyberinfrastructure , Poverty reduction
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/365577 , vital:65761
- Description: Poverty alleviation is one of the main challenges the South African government is facing. Information and knowledge are key strategic resources for both social and economic development, and nowadays they most often rely on Information and Communication Technologies (ICTs). Poor communities have limited or no access to functioning e-infrastructure, which underpins ICT. The Siyakhula Living Lab (SLL) is a joint project between the universities of Rhodes and Fort Hare that has been running for over 15 years now. The SLL solution is currently implemented in schools in the Eastern Cape’s Dwesa-Mbhashe municipality as well as schools in Makhanda (formerly Grahamstown). Over the years, a number of blueprints for the meaningful connection of poor communities were developed. The research reported in this thesis sought to review and improve the SLL blueprint regarding fixed computing infrastructure (as opposed to networking and applications). The review confirmed the viability of the GNU/Linux Terminal Server Project (LTSP) based computing infrastructure deployed in schools to serve the surrounding community. In 2019 LTSP was redesigned and rewritten to improve on the previous version. Amongst other improvements, LTSP19+ has a smaller memory footprint and supports a graphical way to prepare and maintain the clients' image using virtual machines. These improvements increase the potential life of ICT projects implementing the SLL solution, increasing the participation of members of the community (especially teachers) in the maintenance of the computing installations. The review recommends switching from thin-client deployments to full ("thick") client deployments, still booting from the network and mounting their file systems on a central server. The switch is motivated by reasons ranging from cost-effectiveness to the ability to survive the sudden unavailability of the central server. From experience in the previous deployment, electrical power surge protection should be mandatory. Also, the UPS protecting the file system of the central server should be configured to start the shutdown immediately on electrical power loss, in order to protect the life of the UPS battery (and to make it possible to use cheaper UPSes that report power loss only over the network). The research study contributed to one real-life computing infrastructure deployment in the Ntsika school in Makhanda and one re-deployment in the Ngwane school in the Dwesa-Mbhashe area. For about two years, the research also supported continuous maintenance for the Ntsika, Ngwane and Mpume schools. A minimal sketch of the UPS shutdown policy follows the record below. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
- Date Issued: 2022-10-14
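The recommended UPS policy, shut down immediately on mains loss rather than running the battery down, can be sketched as a small monitor loop. The status source below is a hypothetical stand-in (a real deployment would query something like NUT's upsd); the "OB" flag mirrors NUT's on-battery status token, and the shutdown command assumes root privileges.

```python
# Sketch of the recommended UPS policy: begin an orderly shutdown as soon
# as mains power is lost, protecting the central server's file system and
# the UPS battery. The status source is a hypothetical stand-in.
import subprocess
import time

def on_battery():
    """Stand-in for a UPS status query (e.g. via NUT's upsc)."""
    with open("/tmp/ups_status") as f:        # hypothetical status file
        return f.read().strip() == "OB"       # "OB" = on battery

while True:
    if on_battery():
        subprocess.run(["shutdown", "-h", "now"])  # immediate orderly halt
        break
    time.sleep(5)
```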