A framework for using open source intelligence as a digital forensic investigative tool
- Authors: Rule, Samantha Elizabeth
- Date: 2015
- Subjects: Open source intelligence , Criminal investigation , Electronic evidence
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4715 , http://hdl.handle.net/10962/d1017937
- Description: The proliferation of the Internet has amplified the use of social networking sites by creating a platform that encourages individuals to share information. As a result there is a wealth of information that is publicly and easily accessible. This research explores whether open source intelligence (OSINT), which is freely available, could be used as a digital forensic investigative tool. A survey was created and sent to digital forensic investigators to establish whether they currently use OSINT when performing investigations. The survey results confirm that OSINT is being used by digital forensic investigators when performing investigations but there are currently no guidelines or frameworks available to support the use thereof. Additionally, the survey results showed a belief amongst those surveyed that evidence gleaned from OSINT sources is considered supplementary rather than evidentiary. The findings of this research led to the development of a framework that identifies and recommends key processes to follow when conducting OSINT investigations. The framework can assist digital forensic investigators to follow a structured and rigorous process, which may lead to the unanimous acceptance of information obtained via OSINT sources as evidentiary rather than supplementary in the near future.
- Full Text:
- Date Issued: 2015
A grid-based approach for the control and recall of the properties of IEEE 1394 audio devices
- Authors: Foulkes, Philip James
- Date: 2009
- Subjects: IEEE 1394 (Standard) , Computer sound processing , Digital communications , Local area networks (Computer networks) , Sound -- Recording and reproducing -- Digital techniques , Computational grids (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4594 , http://hdl.handle.net/10962/d1004836 , IEEE 1394 (Standard) , Computer sound processing , Digital communications , Local area networks (Computer networks) , Sound -- Recording and reproducing -- Digital techniques , Computational grids (Computer systems)
- Description: The control of modern audio studios is complex. Audio mixing desks have grown to the point where they contain thousands of parameters. The control surfaces of these devices do not reflect the routing and signal processing capabilities that the devices are capable of. Software audio mixing desk editors have been developed that allow for the remote control of these devices, but their graphical user interfaces retain the complexities of the audio mixing desk that they represent. In this thesis, we propose a grid approach to audio mixing. The developed grid audio mixing desk editor represents an audio mixing desk as a series of graphical routing matrices. These routing matrices expose the various signal processing points and signal flows that exist within an audio mixing desk. The routing matrices allow for audio signals to be routed within the device, and allow for the device’s parameters to be adjusted by selecting the appropriate signal processing points. With the use of the programming interfaces that are defined as part of the Studio Connections – Total Recall SDK, the audio mixing desk editor was integrated with compatible DAW applications to provide persistence of audio mixing desk parameter states. Many audio studios currently use digital networks to connect audio devices together. Audio and control signals are patched between devices through the use of software patchbays that run on computers. We propose a double grid-based FireWire patchbay aimed to simplify the patching of signals between audio devices on a FireWire network. The FireWire patchbay was implemented in such a way that it can host software device editors that are Studio Connections compatible. This has allowed software device editors to be associated with the devices that are represented on the FireWire patchbay, thus allowing for studio-wide control from a single application.
The double grid-based patchbay was implemented such that it can be hosted by compatible DAW applications. Through this, the double grid-based patchbay application is able to provide the DAW application with the state of the parameters of the devices in a studio, as well as the connections between them. The DAW application may save this state data to its native song files. This state data may be passed back to the double grid-based patchbay when the song file is reloaded at a later stage. This state data may then be used by the patchbay to restore the parameters of the patchbay and its device editors to a previous state. This restored state may then be transferred to the hardware devices being represented by the patchbay.
- Full Text:
- Date Issued: 2009
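The core idea of the grid editor above, a routing matrix of source/destination crosspoints, can be sketched as follows. This is an illustrative model only, not code from the thesis; the device and plug names are invented.

```python
# Sketch: a routing matrix as a grid of crosspoints between audio
# sources and destinations, the central abstraction of a grid patchbay.
class RoutingMatrix:
    def __init__(self, sources, destinations):
        self.sources = set(sources)
        self.destinations = set(destinations)
        self.connections = set()  # active (source, destination) crosspoints

    def connect(self, src, dst):
        """Activate a crosspoint, if both endpoints exist on the grid."""
        if src in self.sources and dst in self.destinations:
            self.connections.add((src, dst))

    def disconnect(self, src, dst):
        self.connections.discard((src, dst))

    def row(self, src):
        """Destinations currently fed by `src` (one row of the grid)."""
        return sorted(d for s, d in self.connections if s == src)

matrix = RoutingMatrix({"mixer.out1", "mic.out"}, {"daw.in1", "fx.in"})
matrix.connect("mixer.out1", "daw.in1")
print(matrix.row("mixer.out1"))  # → ['daw.in1']
```

A full implementation would additionally notify the hardware of each crosspoint change and expose the per-point parameters the abstract describes.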
A knowledge-oriented, context-sensitive architectural framework for service deployment in marginalized rural communities
- Authors: Thinyane, Mamello P
- Date: 2009
- Subjects: Information technology Expert systems (Computer science) Software architecture User interfaces (Computer systems) Ethnoscience Social networks Rural development Technical assistance -- Developing countries Information networks -- Developing countries
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4599 , http://hdl.handle.net/10962/d1004843
- Description: The notion of a global knowledge society is somewhat of a misnomer, because large portions of the global community are not participants in this global knowledge society, which is driven, shaped by, and socio-technically biased towards a small fraction of the global population. Information and Communication Technology (ICT) is culture-sensitive, and this is a dynamic that is largely ignored in the majority of ICT for Development (ICT4D) interventions, leading to the flaw of technological determinism and ultimately to the failure of the undertaken projects. The deployment of ICT solutions, in particular in the context of ICT4D, must be informed by the cultural and socio-technical profile of the deployment environments, and the solutions themselves must be developed with a focus on context-sensitivity and ethnocentricity. In this thesis, we investigate the viability of a software architectural framework for the development of ICT solutions that are context-sensitive and ethnocentric, and so aligned with the cultural and social dynamics within the environment of deployment. The conceptual framework, named PIASK, defines five tiers (presentation, interaction, access, social networking, and knowledge base) which allow for: behavioural completeness of the layer components; a modular and functionally decoupled architecture; and the flexibility to situate and contextualize the developed applications along the dimensions of the User Interface (UI), interaction modalities, usage metaphors, underlying Indigenous Knowledge (IK), and access protocols. We have developed a proof-of-concept service platform, called KnowNet, based on the PIASK architecture. KnowNet is built around the knowledge base layer, which consists of domain ontologies that encapsulate the knowledge in the platform, with an intrinsic flexibility to access secondary knowledge repositories.
The domain ontologies constructed (as examples) are for the provisioning of eServices to support societal activities (e.g. commerce, health, agriculture, medicine) within the rural and marginalized area of Dwesa, in the Eastern Cape province of South Africa. The social networking layer allows for situating the platform within the local social systems. Heterogeneity of user profiles and multiplicity of end-user devices are handled through the access and the presentation components, and the service logic is implemented by the interaction components. This service platform validates the PIASK architecture for end-to-end provisioning of multi-modal, heterogeneous, ontology-based services. The development of KnowNet was informed on one hand by the latest trends within service architectures, semantic web technologies and social applications, and on the other hand by context considerations based on the profile (IK systems dynamics, infrastructure, usability requirements) of the Dwesa community. The realization of the service platform is based on the JADE Multi-Agent System (MAS), and this shows the applicability and adequacy of MASs for service deployment in a rural context, at the same time providing key advantages such as platform fault-tolerance, robustness and flexibility. While the context of conceptualization of PIASK and the implementation of KnowNet is that of rurality and of ICT4D, the applicability of the architecture extends to other similarly heterogeneous and context-sensitive domains. KnowNet has been validated for functional and technical adequacy, and we have also undertaken an initial pre-validation for social context sensitivity.
We observe that the five-tier PIASK architecture provides an adequate framework for developing context-sensitive and ethnocentric software, by functionally separating and making explicit the social networking and access tier components while still maintaining the traditional separation of presentation, business logic and data components.
- Full Text:
- Date Issued: 2009
A longitudinal study of DNS traffic: understanding current DNS practice and abuse
- Authors: Van Zyl, Ignus
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3707 , vital:20537
- Description: This thesis examines a dataset spanning 21 months, containing 3.5 billion DNS packets. Traffic on TCP and UDP port 53 was captured on a production /24 IP block. The purpose of this thesis is twofold: the first is to create an understanding of current practice and behavior within the DNS infrastructure, the second to explore current threats faced by the DNS and the various systems that implement it. This is achieved by drawing on analysis and observations from the captured data. Aspects of the operation of DNS on the greater Internet are considered in this research with reference to the observed trends in the dataset. A thorough analysis of current DNS TTL implementation is made with respect to all response traffic, as well as sections looking at observed DNS TTL values for .za domain replies and NXDOMAIN-flagged replies. This thesis found that the TTL values implemented are much lower than has been recommended in previous years, and that the TTL decrease is prevalent in most, but not all, TTL implementations. With respect to the nature of DNS operations, this thesis also concerns itself with an analysis of the geolocation of authoritative servers for local (.za) domains, and offers further observations on the latency generated by the choice of authoritative server location for a given .za domain. It was found that the majority of .za domain authoritative servers are international, which results in latencies multiple times greater than those observed for local authoritative servers. Further analysis is done with respect to NXDOMAIN behavior captured across the dataset. These findings outline the cost of DNS misconfiguration as well as highlighting instances of NXDOMAIN generation through malicious practice. With respect to DNS abuses, original research on long-term scanning generated as a result of amplification attack activity on the greater Internet is presented.
Many instances of amplification domain scans were captured during the packet capture, and an attempt is made to correlate that activity temporally with known amplification attack reports. The final area that this thesis deals with is the relatively new field of bit-flipping and bitsquatting, delivering results on bitflip detection and evaluation over the course of the entire dataset. The detection methodology is outlined, and the final results are compared to findings given in recent bitflip literature.
- Full Text:
- Date Issued: 2016
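The bitsquatting idea the abstract above refers to rests on a simple mechanism: a single flipped bit in the ASCII encoding of a domain name can silently turn one valid name into another. A minimal sketch of candidate generation (illustrative only, not the thesis's detection code; the example label is arbitrary):

```python
# Sketch: enumerate domains that differ from a given label by exactly one
# flipped bit, keeping only results that are still valid lowercase
# hostname characters. Bitsquatters register such names to catch traffic
# from memory errors in clients and resolvers.
import string

def bitflip_candidates(domain: str):
    valid = set(string.ascii_lowercase + string.digits + "-")
    for i, ch in enumerate(domain):
        for bit in range(8):            # try flipping each bit of the byte
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in valid:        # discard '{', '~', uppercase, etc.
                yield domain[:i] + flipped + domain[i + 1:]

print(sorted(bitflip_candidates("za")))
# → ['ja', 'ra', 'xa', 'zc', 'ze', 'zi', 'zq']
```

Detection then amounts to the reverse: matching observed query names against the bitflip neighborhoods of popular domains.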
A machine-independent microprogram development system
- Authors: Ward, Michael John
- Date: 1987 , 2013-03-11
- Subjects: Microprogramming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4581 , http://hdl.handle.net/10962/d1003738 , Microprogramming
- Description: The aims of this project are twofold. They are firstly, to implement a microprogram development system that allows the programmer to write microcode for any microprogrammable machine, and secondly, to build a microprogrammable machine, incorporating the user friendliness of a simulator, while still providing the 'hands on' experience obtained from actual hardware. Microprogram development involves a two-stage process. The first step is to describe the target machine, using format descriptions and mnemonic-based template definitions. The second stage involves using the defined mnemonics to write the microcodes for the target machine. This includes an assembly phase to translate the mnemonics into the binary microinstructions. Three main components constitute the microprogrammable machine. The Arithmetic and Logic Unit (ALU) is built using chips from Advanced Micro Devices' Am2900 bit-slice family, the action of the Microprogram Control Unit (MCU) is simulated by software running on an IBM Personal Computer, and a section of the IBM PC's main memory acts as the Control Store (CS) for the system. The ALU is built on a prototyping card that plugs into one of the slots on the IBM PC's motherboard. A hardware simulator program, that produces the effect of the ALU, has also been developed. A small assembly language has been developed using the system, to test the various functions of the system. A mini-assembler has also been written to facilitate assembly of the above language. A group of honours students at Rhodes University tested the microprogram development system. Their ideas and suggestions have been tabulated in this report and some of them have been used to enhance the system's performance. The concept of allowing 'inline' microinstructions in the macroprogram is also investigated in this report and a method of implementing this is shown.
- Full Text:
- Date Issued: 1987
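The two-stage process described above can be sketched in miniature: a machine description maps each mnemonic to the bit fields it sets, and assembly ORs those fields into one binary microword. All field names, offsets, and values below are invented for illustration; the thesis's actual format-description language is not shown here.

```python
# Sketch: mnemonic-based template definitions, then assembly of mnemonics
# into a binary microinstruction. Each template entry is a list of
# (bit_offset, field_width, value) triples for the fields it sets.
TEMPLATES = {
    "ADD":  [(0, 4, 0b0011)],  # hypothetical ALU-function field
    "SRCA": [(4, 3, 0b001)],   # hypothetical A-operand select field
    "DSTQ": [(7, 2, 0b10)],    # hypothetical destination select field
}

def assemble(mnemonics):
    """Combine the field settings of several mnemonics into one microword."""
    word = 0
    for m in mnemonics:
        for offset, width, value in TEMPLATES[m]:
            assert value < (1 << width), f"{m}: value overflows its field"
            word |= value << offset
    return word

print(bin(assemble(["ADD", "SRCA", "DSTQ"])))  # → 0b100010011
```

A real system would also detect conflicting mnemonics that set overlapping fields, which is where the format description stage earns its keep.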
A mobile phone solution for ad-hoc hitch-hiking in South Africa
- Authors: Miteche, Sacha Patrick
- Date: 2014
- Subjects: Cell phones -- Information services , Cell phone users -- South Africa , Hitchhiking -- South Africa , Mobile communication systems -- Social aspects , Digital media -- South Africa , Information technology -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4702 , http://hdl.handle.net/10962/d1013340
- Description: The purpose of this study was to investigate the use of mobile phones in organizing ad-hoc vehicle ridesharing based on hitch-hiking trips involving private car drivers and commuters in South Africa. A study was conducted to learn how hitch-hiking trips are arranged in the urban and rural areas of the Eastern Cape. This involved carrying out interviews with hitch-hikers and participating in several trips. The study results provided the design specifications for a Dynamic Ridesharing System (DRS) tailor-made to the hitch-hiking culture of this context. The design of the DRS considered the delivery of the ad-hoc ridesharing service to the mobile phones anticipated to be owned by people who use hitch-hiking. The implementation of the system used the available open source solutions and guidelines under the Siyakhula Living Lab project, which promotes the use of Information and Communication Technology (ICT) in marginalized communities of South Africa. The developed prototype was tested in both simulated and live environments, followed by usability tests to establish the viability of the system. The results from the tests indicate an initial breakthrough in the process of modernizing the ad-hoc ridesharing of hitch-hiking which is used by a section of people in the urban and rural areas of South Africa.
- Full Text:
- Date Issued: 2014
A mobile toolkit and customised location server for the creation of cross-referencing location-based services
- Authors: Ndakunda, Shange-Ishiwa Tangeni
- Date: 2013
- Subjects: Location-based services -- Security measures , Mobile communication systems -- Security measures , Digital communications , Java (Computer program language) , Application software -- Development -- Computer programs , User interfaces (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4703 , http://hdl.handle.net/10962/d1013604
- Description: Although there are several Software Development Kits and Application Programming Interfaces for client-side location-based services development, they mostly involve the creation of self-referencing location-based services. Self-referencing location-based services include services such as geocoding, reverse geocoding, route management and navigation which focus on satisfying the location-based requirements of a single mobile device. There is a lack of open-source Software Development Kits for the development of client-side location-based services that are cross-referencing. Cross-referencing location-based services are designed for the sharing of location information amongst different entities on a given network. This project was undertaken to assemble, through incremental prototyping, a client-side Java Micro Edition location-based services Software Development Kit and a Mobicents location server to aid mobile network operators and developers alike in the quick creation of the transport and privacy protection of cross-referencing location-based applications on Session Initiation Protocol bearer networks. The privacy of the location information is protected using geolocation policies. Developers do not need to have an understanding of Session Initiation Protocol event signaling specifications or of the XML Configuration Access Protocol to use the tools that we put together. The developed tools are later consolidated using two sample applications, the friend-finder and child-tracker services. Developer guidelines are also provided, to aid in using the provided tools.
- Full Text:
- Date Issued: 2013
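As a rough illustration of the geolocation-policy idea described in this abstract, the sketch below only answers a watcher's location request when a policy permits it, and only at the granularity the policy allows. All names, the policy shape, and the data are invented for illustration and are not taken from the thesis.

```python
# Hypothetical geolocation-policy check (illustrative only).
POLICIES = {
    "alice": {"allowed_watchers": {"bob"}, "granularity": "city"},
}

LOCATIONS = {
    "alice": {"city": "Grahamstown", "exact": (-33.31, 26.52)},
}

def locate(watcher: str, target: str):
    """Return the target's location only if policy permits the watcher."""
    policy = POLICIES.get(target)
    if policy is None or watcher not in policy["allowed_watchers"]:
        return None  # request denied by the target's geolocation policy
    # Reveal only at the granularity the policy allows.
    return LOCATIONS[target][policy["granularity"]]
```

In this sketch an authorised watcher receives only the coarse "city" value, while an unlisted watcher receives nothing at all.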
A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment
- Authors: Tsegaye, Melekam Asrat
- Date: 2007
- Subjects: Visual programming (Computer science) Memory management (Computer science) Memory -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4640 , http://hdl.handle.net/10962/d1006563
- Description: Memory is a part of cognition. It is essential for an individual to function normally in society. It encompasses an individual's lifetime experience, thus defining his identity. This thesis develops the concept of a machine-based personal memory manager which captures and manages an individual's day-to-day external memories. Rather than accumulating large amounts of data which has to be mined for useful memories, the machine-based memory manager automatically organizes memories as they are captured to enable their quick retrieval and use. The main functions of the machine-based memory manager envisioned in this thesis are the support and the augmentation of an individual's biological memory system. In the thesis, a model for a machine-based memory manager is developed. A visual programming environment, which can be used to build context aware applications as well as a proof-of-concept machine-based memory manager, is conceptualized and implemented. An experimental machine-based memory manager is implemented and evaluated. The model describes a machine-based memory manager which manages an individual's external memories by context. It addresses the management of external memories which accumulate over long periods of time by proposing a context aware file system which automatically organizes external memories by context. It describes how personal memory management can be facilitated by machine using six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and the processes in which these entities participate (memory capture, memory encoding and decoding, memory decoding and retrieval). The visual programming environment represents a development tool which contains facilities that support context aware application programming. For example, it provides facilities which enable the definition and use of virtual sensors. 
It enables rapid programming with a focus on component re-use and dynamic composition of applications through a visual interface. The experimental machine-based memory manager serves as an example implementation of the machine-based memory manager which is described by the model developed in this thesis. The hardware used in its implementation consists of widely available components such as a camera, microphone and sub-notebook computer which are assembled in the form of a wearable computer. The software is constructed using the visual programming environment developed in this thesis. It contains multiple sensor drivers, context interpreters, a context aware file system as well as memory retrieval and presentation interfaces. The evaluation of the machine-based memory manager shows that it is possible to create a machine which monitors the states of an individual and his environment, and manages his external memories, thus supporting and augmenting his biological memory.
- Full Text:
- Date Issued: 2007
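The abstract above names six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and stresses that memories are organised by context as they are captured rather than mined later. A minimal sketch of that organising step, with all class and field names invented for illustration:

```python
# Hypothetical sketch: fragments are filed under their context descriptor
# at capture time, so retrieval by context needs no later mining.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ContextDescriptor:
    location: str
    activity: str

@dataclass
class MemoryFragment:
    payload: str                  # e.g. a path to a captured audio/video file
    context: ContextDescriptor

class MemoryManager:
    """Organises fragments by context as they arrive from memory producers."""
    def __init__(self) -> None:
        self._by_context: Dict[Tuple[str, str], List[MemoryFragment]] = {}

    def capture(self, fragment: MemoryFragment) -> None:
        key = (fragment.context.location, fragment.context.activity)
        self._by_context.setdefault(key, []).append(fragment)

    def retrieve(self, location: str, activity: str) -> List[MemoryFragment]:
        return self._by_context.get((location, activity), [])
```

A memory consumer would then query by context ("what did I record in the office during the meeting?") rather than scanning an undifferentiated archive.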
A multi-threading software countermeasure to mitigate side channel analysis in the time domain
- Authors: Frieslaar, Ibraheem
- Date: 2019
- Subjects: Computer security , Data encryption (Computer science) , Noise generators (Electronics)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71152 , vital:29790
- Description: This research is the first of its kind to investigate the utilisation of a multi-threading software-based countermeasure to mitigate Side Channel Analysis (SCA) attacks, with a particular focus on the AES-128 cryptographic algorithm. This investigation is novel, as, to our knowledge, there has not previously been a software-based countermeasure relying on multi-threading. The research has been tested on Atmel microcontrollers, as well as a more fully featured system in the form of the popular Raspberry Pi that utilises the ARM7 processor. The main contribution of this research is the introduction of a multi-threading software-based countermeasure used to mitigate SCA attacks on both an embedded device and a Raspberry Pi. These threads comprise various mathematical operations which are used to generate electromagnetic (EM) noise, resulting in the obfuscation of the execution of the AES-128 algorithm. A novel EM noise generator known as the FRIES noise generator is implemented to obfuscate data captured in the EM field. FRIES hides the execution of the AES-128 algorithm within the EM noise generated by the 512-bit Secure Hash Algorithm (SHA-512) from the libcrypto++ and OpenSSL libraries. In order to evaluate the proposed countermeasure, a novel attack methodology was developed whereby the entire secret AES-128 encryption key was recovered from a Raspberry Pi, which had not been achieved before. The FRIES noise generator was pitted against this new attack vector and other known noise generators. The results showed that the FRIES noise generator withstood this attack whilst other existing techniques still leaked secret information. The visual location of the AES-128 encryption algorithm in the EM spectrum and key recovery were prevented. 
These results demonstrated that the proposed multi-threading software-based countermeasure resisted both existing and new forms of attack, verifying that a multi-threading software-based countermeasure can serve to mitigate SCA attacks.
- Full Text:
- Date Issued: 2019
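The structure of the countermeasure described above — a sensitive operation executing while concurrent threads run unrelated hashing work as a noise source — can be sketched as follows. This is an illustrative skeleton only, not the thesis's FRIES implementation: the function names are invented, and a real countermeasure targets EM emissions on specific hardware rather than anything observable from this code.

```python
# Hypothetical sketch: run a sensitive operation while worker threads
# perform SHA-512 hashing in a loop as a concurrent noise source.
import hashlib
import threading

def noise_worker(stop: threading.Event) -> None:
    data = b"noise"
    while not stop.is_set():
        data = hashlib.sha512(data).digest()  # busy SHA-512 loop

def run_with_noise(sensitive_op, n_threads: int = 4):
    """Execute sensitive_op while n_threads noise threads are active."""
    stop = threading.Event()
    workers = [threading.Thread(target=noise_worker, args=(stop,))
               for _ in range(n_threads)]
    for w in workers:
        w.start()
    try:
        return sensitive_op()  # e.g. an AES-128 encryption call
    finally:
        stop.set()             # always wind the noise threads down
        for w in workers:
            w.join()
```

The intent is that an observer profiling the device during `sensitive_op` sees its activity overlaid with the hashing threads' activity.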
A multispectral and machine learning approach to early stress classification in plants
- Authors: Poole, Louise Carmen
- Date: 2022-04-06
- Subjects: Machine learning , Neural networks (Computer science) , Multispectral imaging , Image processing , Plant stress detection
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/232410 , vital:49989
- Description: Crop loss and failure can impact both a country’s economy and food security, often to devastating effects. As such, the importance of successfully detecting plant stresses early in their development is essential to minimize spread and damage to crop production. Identification of the stress and the stress-causing agent is the most critical and challenging step in plant and crop protection. With the development of and increase in ease of access to new equipment and technology in recent years, the use of spectroscopy in the early detection of plant diseases has become notably popular. This thesis narrows down the most suitable multispectral imaging techniques and machine learning algorithms for early stress detection. Datasets were collected of visible images and multispectral images. Dehydration was selected as the plant stress type for the main experiments, and data was collected from six plant species typically used in agriculture. Key contributions of this thesis include multispectral and visible datasets showing plant dehydration as well as a separate preliminary dataset on plant disease. Promising results on dehydration showed statistically significant accuracy improvements in the multispectral imaging compared to visible imaging for early stress detection, with multispectral input obtaining a 92.50% accuracy over visible input’s 77.50% on general plant species. The system was effective at stress detection on known plant species, with multispectral imaging introducing greater improvement to early stress detection than advanced stress detection. Furthermore, strong species discrimination was achieved when exclusively testing either early or advanced dehydration against healthy species. , Thesis (MSc) -- Faculty of Science, Ichthyology & Fisheries Sciences, 2022
- Full Text:
- Date Issued: 2022-04-06
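The comparison reported above (multispectral features outperforming visible-only features for the same classifier) can be illustrated with a deliberately tiny nearest-centroid classifier over per-band feature vectors, where a visible sample has 3 band values and a multispectral sample simply has more. The classifier choice and all data are invented for illustration; the thesis evaluates proper machine learning models on real imagery.

```python
# Hypothetical nearest-centroid sketch: each sample is a vector of
# per-band mean reflectances; the label of the closest class centroid wins.
from statistics import mean

def centroid(samples):
    """Component-wise mean of a list of equal-length band vectors."""
    return [mean(band) for band in zip(*samples)]

def classify(sample, centroids):
    """Return the label whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))
```

Adding extra spectral bands lengthens each vector, which is the sense in which multispectral input gives the same classifier more to discriminate on.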
A networking approach to sharing music studio resources
- Authors: Foss, Richard John
- Date: 1996
- Subjects: MIDI (Standard) Computer sound processing Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4659 , http://hdl.handle.net/10962/d1006660
- Description: This thesis investigates the extent to which networking technology can be used to provide remote workstation access to a pool of shared music studio resources. A pilot system is described in which MIDI messages, studio control data, and audio signals flow between the workstations and a studio server. A booking and timing facility avoids contention and allows for accurate reports of studio usage. The operation of the system has been evaluated in terms of its ability to satisfy three fundamental goals, namely the remote, shared and centralized access to studio resources. Three essential network configurations have been identified, incorporating a mix of star and bus topologies, and their relative potential for satisfying the fundamental goals has been highlighted.
- Full Text:
- Date Issued: 1996
A parser generator system to handle complete syntax
- Authors: Ossher, Harold Leon
- Date: 1982
- Subjects: Grammar, Comparative and general -- Syntax Parsing (Computer grammar) Programming languages (Electronic computers) Compilers (Computer programs)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4571 , http://hdl.handle.net/10962/d1002036
- Description: To define a language completely, it is necessary to define both its syntax and semantics. If these definitions are in a suitable form, the parser and code-generator of a compiler, respectively, can be generated from them. This thesis addresses the problem of syntax definition and automatic parser generation.
- Full Text:
- Date Issued: 1982
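The core idea in this abstract — deriving a parser automatically from a syntax definition rather than writing it by hand — can be shown with a toy table-driven recursive-descent parser. The grammar, token conventions, and function are invented for illustration and bear no relation to the thesis's actual parser generator system.

```python
# Hypothetical sketch: a grammar given as data drives a generic parser,
# so changing the language means changing the table, not the code.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],  # alternatives per nonterminal
    "term": [["num"]],
}

def parse(symbol, tokens, pos=0):
    """Return the position after matching `symbol`, or None on failure."""
    if symbol not in GRAMMAR:  # terminal: literal token or the `num` class
        if pos < len(tokens) and (symbol == tokens[pos] or
                                  (symbol == "num" and tokens[pos].isdigit())):
            return pos + 1
        return None
    for production in GRAMMAR[symbol]:  # try each alternative in order
        p = pos
        for sym in production:
            p = parse(sym, tokens, p)
            if p is None:
                break
        else:
            return p
    return None
```

A full input parses when the start symbol consumes every token, e.g. `parse("expr", ["1", "+", "2"])` returns the token count 3.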
A platform for computer-assisted multilingual literacy development
- Authors: Mudimba, Bwini Chizabubi
- Date: 2011
- Subjects: FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4600 , http://hdl.handle.net/10962/d1004850 , FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Description: FundaWethu is reading software that is designed to deliver reading lessons to Grade R-3 (foundation phase) children who are learning to read in a multilingual context. Starting from a premise that the system should be both educative and entertaining, the system allows literacy researchers or teachers to construct rich multimedia reading lessons, with text, pictures (possibly animated), and audio files. Using the design-based research methodology which is problem driven and iterative, we followed a user-centred design process in creating FundaWethu. To promote sustainability of the software, we chose to bring teachers on board as “co-designers” using the lesson authoring tool. We made the authoring tool simple enough for use by non-computer specialists, but expressive enough to enable a wide range of beginners' reading exercises to be constructed in a number of different languages (indigenous South African languages in particular). This project therefore centred on the use of design-based research to build FundaWethu, the design and construction of FundaWethu and the usability study carried out to determine the adequacy of FundaWethu.
- Full Text:
- Date Issued: 2011
A proxy approach to protocol interoperability within digital audio networks
- Authors: Igumbor, Osedum Peter
- Date: 2010
- Subjects: Digital communications , Local area networks (Computer networks) , Computer sound processing , Computer networks , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4601 , http://hdl.handle.net/10962/d1004852 , Digital communications , Local area networks (Computer networks) , Computer sound processing , Computer networks , Computer network protocols
- Description: Digital audio networks are becoming the preferred solution for the interconnection of professional audio devices. Prominent amongst their advantages are: reduced noise interference, signal multiplexing, and a reduction in the number of cables connecting networked devices. In the context of professional audio, digital networks have been used to connect devices including: mixers, effects units, preamplifiers, breakout boxes, computers, monitoring controllers, and synthesizers. Such networks are governed by protocols that define the connection management procedures and device synchronization processes of devices that conform to the protocols. A wide range of digital audio network control protocols exist, each defining specific hardware requirements of devices that conform to them. Device parameter control is achieved by sending a protocol message that indicates the target parameter, and the action that should be performed on the parameter. Typically, a device will conform to only one protocol. By implication, only devices that conform to a specific protocol can communicate with each other, and only a controller that conforms to the protocol can control such devices. This results in the isolation of devices that conform to disparate protocols, since devices of different protocols cannot communicate with each other. This is currently a challenge in the professional music industry, particularly where digital networks are used for audio device control. This investigation seeks to resolve the issue of interoperability between professional audio devices that conform to different digital audio network protocols. This thesis proposes the use of a proxy that allows for the translation of protocol messages, as a solution to the interoperability problem. The proxy abstracts devices of one protocol in terms of another, hence allowing all the networked devices to appear as conforming to the same protocol. 
The proxy receives messages on behalf of the abstracted device, and then fulfills them in accordance with the protocol that the abstracted device conforms to. Any number of protocol devices can be abstracted within such a proxy. This has the added advantage of allowing a common controller to control devices that conform to the different protocols.
- Full Text:
- Date Issued: 2010
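The proxy mechanism described in this abstract — receiving a message in one protocol and fulfilling it in the protocol the abstracted device actually speaks — can be sketched as below. Both message formats, the parameter names, and the translation table are invented for illustration; real audio control protocols are far richer than this.

```python
# Hypothetical sketch of per-parameter protocol translation in a proxy.
class ProtocolBDevice:
    """A device that only understands dict-shaped protocol-B messages."""
    def __init__(self) -> None:
        self.params = {"gain": 0}

    def handle_b(self, msg: dict) -> None:
        self.params[msg["param"]] = msg["value"]

class Proxy:
    """Receives protocol-A text commands on behalf of an abstracted B device."""
    A_TO_B_PARAMS = {"GAIN": "gain"}  # translation table, one entry per parameter

    def __init__(self, device: ProtocolBDevice) -> None:
        self.device = device

    def handle_a(self, msg: str) -> None:
        # Protocol A (invented here) uses text commands, e.g. "SET GAIN 7".
        _, param, value = msg.split()
        self.device.handle_b({"param": self.A_TO_B_PARAMS[param],
                              "value": int(value)})
```

A protocol-A controller addresses the proxy exactly as it would a native device, which is what lets one controller drive devices of different protocols.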
- Authors: Igumbor, Osedum Peter
- Date: 2010
- Subjects: Digital communications , Local area networks (Computer networks) , Computer sound processing , Computer networks , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4601 , http://hdl.handle.net/10962/d1004852 , Digital communications , Local area networks (Computer networks) , Computer sound processing , Computer networks , Computer network protocols
- Description: Digital audio networks are becoming the preferred solution for the interconnection of professional audio devices. Prominent amongst their advantages are: reduced noise interference, signal multiplexing, and a reduction in the number of cables connecting networked devices. In the context of professional audio, digital networks have been used to connect devices including: mixers, effects units, preamplifiers, breakout boxes, computers, monitoring controllers, and synthesizers. Such networks are governed by protocols that define the connection management rocedures, and device synchronization processes of devices that conform to the protocols. A wide range of digital audio network control protocols exist, each defining specific hardware requirements of devices that conform to them. Device parameter control is achieved by sending a protocol message that indicates the target parameter, and the action that should be performed on the parameter. Typically, a device will conform to only one protocol. By implication, only devices that conform to a specific protocol can communicate with each other, and only a controller that conforms to the protocol can control such devices. This results in the isolation of devices that conform to disparate protocols, since devices of different protocols cannot communicate with each other. This is currently a challenge in the professional music industry, particularly where digital networks are used for audio device control. This investigation seeks to resolve the issue of interoperability between professional audio devices that conform to different digital audio network protocols. This thesis proposes the use of a proxy that allows for the translation of protocol messages, as a solution to the interoperability problem. The proxy abstracts devices of one protocol in terms of another, hence allowing all the networked devices to appear as conforming to the same protocol. 
The proxy receives messages on behalf of the abstracted device, and then fulfills them in accordance with the protocol that the abstracted device conforms to. Any number of protocol devices can be abstracted within such a proxy. This has the added advantage of allowing a common controller to control devices that conform to the different protocols.
- Full Text:
- Date Issued: 2010
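The proxy idea in the abstract above can be illustrated with a minimal sketch: a device that understands only one message format is wrapped so that a controller speaking a different format can address it. Both protocols and all names here are invented for illustration; they are not taken from the thesis.

```python
# Illustrative sketch of a protocol-translation proxy. "Protocol A" (dict
# messages) and "Protocol B" (text commands) are hypothetical stand-ins for
# two incompatible audio control protocols.

class ProtocolADevice:
    """A device that only understands protocol-A style dict messages."""
    def __init__(self):
        self.params = {"gain": 0}

    def handle(self, message):
        # Protocol A message: {"param": ..., "action": "set"/"get", "value": ...}
        if message["action"] == "set":
            self.params[message["param"]] = message["value"]
        return self.params[message["param"]]

class Proxy:
    """Presents a protocol-A device as if it conformed to protocol B."""
    def __init__(self, device):
        self.device = device

    def receive_b(self, text):
        # Protocol B command: "SET gain 7" or "GET gain".
        # The proxy translates it into a protocol-A message and forwards it.
        parts = text.split()
        if parts[0] == "SET":
            msg = {"param": parts[1], "action": "set", "value": int(parts[2])}
        else:
            msg = {"param": parts[1], "action": "get"}
        return self.device.handle(msg)

proxy = Proxy(ProtocolADevice())
proxy.receive_b("SET gain 7")
print(proxy.receive_b("GET gain"))  # → 7; the controller never sees protocol A
```

The controller only ever issues protocol-B commands, so any number of devices of either protocol can sit behind such proxies and appear uniform, which mirrors the "common controller" advantage the abstract describes.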
A remote interactive music keyboard tuition system
- Authors: Newton, Mark Brian
- Date: 2005
- Subjects: Computer-assisted instruction , Keyboard instrument music -- Instruction and study , Music -- Computer assisted instruction , Music in education
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4603 , http://hdl.handle.net/10962/d1004860 , Computer-assisted instruction , Keyboard instrument music -- Instruction and study , Music -- Computer assisted instruction , Music in education
- Description: A networked multimedia system to assist teaching music keyboard skills to a class is described. Teaching practical music lessons requires a large amount of interaction between the teacher and student and is thus teacher intensive. Although there is a range of computer software available for learning how to play the keyboard, these programs cannot replace the guidance of a music teacher. The possibility of combining the music applications with video conferencing technology for use in a keyboard class is discussed. An ideal system is described that incorporates the benefits of video conferencing and music applications for use in a classroom. A design of the ideal system is described and implemented. Certain design and implementation decisions are explained and the performance of the implementation examined. The system would enable a music teacher to effectively teach a music class keyboard skills.
- Full Text:
- Date Issued: 2005
A review of the Siyakhula Living Lab’s network solution for Internet in marginalized communities
- Authors: Muchatibaya, Hilbert Munashe
- Date: 2022-10-14
- Subjects: Information and communication technologies for development , Information technology South Africa , Access network , User experience , Local area networks (Computer networks) South Africa
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/364943 , vital:65664
- Description: Changes within Information and Communication Technology (ICT) over the past decade required a review of the network layer component deployed in the Siyakhula Living Lab (SLL), a long-term joint venture between the Telkom Centres of Excellence hosted at the University of Fort Hare and Rhodes University in South Africa. The SLL's overall solution for sustainable Internet in poor communities consists of three main components – the computing infrastructure layer, the network layer, and the e-services layer. At the core of the network layer is the concept of the Broadband Island (BI), a high-speed local area network realized through easy-to-deploy wireless technologies that establish point-to-multipoint connections among schools within a limited geographical area. Schools within the Broadband Island then become Digital Access Nodes (DANs), with computing infrastructure that provides access to the network. The review, reported in this thesis, aimed to determine whether the model for the network layer was still able to meet the needs of marginalized communities in South Africa, given the recent changes in ICT. The research work used the living lab methodology – a grassroots, user-driven approach that emphasizes co-creation between the beneficiaries and external entities (researchers, industry partners and the government) – to do viability tests on the solution for the network component. The viability tests included lab and field experiments, to produce the qualitative and quantitative data needed to propose an updated blueprint. The results of the review found that the network topology used in the SLL's network, the BI, is still viable, while WiMAX is now outdated. Also, the in-network web cache, Squid, is no longer effective, given the switch to HTTPS and the pervasive presence of advertising. The solution to the first issue is outdoor Wi-Fi, a proven solution easily deployable in grass-roots fashion.
The second issue can be mitigated by leveraging Squid’s ‘bumping’ and splicing features; deploying a browser extension to make picture download optional; and using Pi-hole, a DNS sinkhole. Hopefully, the revised solution could become a component of the South African Government’s broadband plan, “SA Connect”. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
- Date Issued: 2022-10-14
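The ‘bumping’ and splicing approach mentioned in the abstract above refers to Squid's SSL-Bump feature, where the proxy peeks at the TLS handshake to learn the server name and then splices (tunnels) the connection through without decrypting it. A hypothetical squid.conf fragment illustrating the idea, not taken from the thesis, might look like:

```
# Illustrative squid.conf fragment (assumed configuration, not from the thesis):
# peek at the TLS ClientHello to learn the server name, then splice
# (pass through) the encrypted connection rather than decrypting it.
http_port 3128 ssl-bump cert=/etc/squid/proxy-ca.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all
```

Splicing preserves end-to-end encryption (the proxy never sees plaintext), which is why the thesis pairs it with separate measures, such as a browser extension and a DNS sinkhole, to reduce traffic rather than relying on caching alone.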
A risk-based framework for new products: a South African telecommunication’s study
- Authors: Jeffries, Michael
- Date: 2017
- Subjects: Telephone companies -- Risk management -- South Africa , Telephone companies -- South Africa -- Case studies , Telecommunication -- Security measures -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4765 , vital:20722
- Description: The integrated reports of Vodacom, Telkom and MTN — telecommunication organisations in South Africa — show that they are diversifying their product offerings from traditional voice and data services. These organisations are including new offerings covering financial, health, insurance and mobile education services. The potential exists for these organisations to launch products that are substandard and which either do not take into account customer needs or do not comply with current legislation or regulations. Most telecommunication organisations have a well-defined enterprise risk management programme to ensure compliance with King III; however, risk management processes specifically for new products and services might be lacking. Responsibility for the implementation of robust products usually resides with product managers; however, they do not always have the correct skill set to ensure adherence to governance requirements, and therefore might not be aware of which laws they are failing to adhere to, or fully understand the customers’ requirements. More complex products, additional competition, changes to technology and new business ventures have reinforced the need to manage risk on telecommunication products. Failure to take risk requirements into account could lead to potential fines and damage to the organisation’s reputation, which could lead to customers churning from these service providers. This research analyses three periods of data captured from a mobile telecommunication organisation to assess the current risk management maturity within the organisation’s product and service environment. Based on the analysis as well as industry best practices, a risk management framework for products is proposed that can assist product managers in analysing concepts to ensure adherence to governance requirements. This could help ensure that new product or service offerings in the marketplace do not create a perception of distrust among consumers.
- Full Text:
- Date Issued: 2017
A structural and functional specification of a SCIM for service interaction management and personalisation in the IMS
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2012
- Subjects: Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4606 , http://hdl.handle.net/10962/d1004864 , Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Description: The Internet Protocol Multimedia Subsystem (IMS) is a component of the 3G mobile network that has been specified by standards development organisations such as the 3GPP (3rd Generation Partnership Project) and ETSI (European Telecommunication Standards Institute). IMS seeks to guarantee that the telecommunication network of the future provides subscribers with seamless access to services across disparate networks. In order to achieve this, it defines a service architecture that hosts application servers that provide subscribers with value added services. Typically, an application server bundles all the functionality it needs to execute the services it delivers, however this view is currently being challenged. It is now thought that services should be synthesised from simple building blocks called service capabilities. This decomposition would facilitate the re-use of service capabilities across multiple services and would support the creation of new services that could not have originally been conceived. The shift from monolithic services to those built from service capabilities poses a challenge to the current service model in IMS. To accommodate this, the 3GPP has defined an entity known as a service capability interaction manager (SCIM) that would be responsible for managing the interactions between service capabilities in order to realise complex services. Some of these interactions could potentially lead to undesirable results, which the SCIM must work to avoid. As an added requirement, it is believed that the network should allow policies to be applied to network services which the SCIM should be responsible for enforcing. At the time of writing, the functional and structural architecture of the SCIM has not yet been standardised. This thesis explores the current service architecture of the IMS in detail. Proposals that address the structure and functions of the SCIM are carefully compared and contrasted.
This investigation leads to the presentation of key aspects of the SCIM, and provides solutions that explain how it should interact with service capabilities, manage undesirable interactions and factor user and network operator policies into its execution model. A modified design of the IMS service layer that embeds the SCIM is subsequently presented and described. The design uses existing IMS protocols and requires no change in the behaviour of the standard IMS entities. In order to develop a testbed for experimental verification of the design, the identification of suitable software platforms was required. This thesis presents some of the most popular platforms currently used by developers, such as the Open IMS Core and OpenSER, as well as an open source, Java-based multimedia communication platform called Mobicents. As a precursor to the development of the SCIM, a converged multimedia service is presented that describes how a video streaming application that is leveraged by a web portal was implemented for an IMS testbed using Mobicents components. The Mobicents SIP Servlets container was subsequently used to model an initial prototype of the SCIM, using a multi-component telephony service to illustrate the proposed service execution model. The design focuses on SIP-based services only, but should also work for other types of IMS application servers.
- Full Text:
- Date Issued: 2012
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce times to detect and prevent malware, while improving the consistency of categories and taxonomies applied to modern malware. Existing work related to malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of threats faced by users of the operating system. While available macOS-focused research consistently notes an increase in macOS users, devices and ultimately in threats, an opportunity exists to understand the real nature of threats faced by macOS users and suggest potential avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, as well as the application of prior automated malware categorisation work: AVClass to categorise, and SSDeep to cluster and report on observed data. With Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition seem to suggest the threat of malware on this increasingly popular platform should be growing and evolving accordingly. Findings suggest that the direction and nature of growth and evolution may not be entirely as clear as industry reports suggest. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware.
This provides a number of avenues for potential future work to compare and contrast with research on other platforms, as well as identification of key factors that may influence its growth in the future.
- Full Text:
- Date Issued: 2019
A study of real-time operating systems for microcomputers
- Authors: Wells, George Clifford
- Date: 1990
- Subjects: Operating systems (Computers) , Microcomputers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4611 , http://hdl.handle.net/10962/d1004896 , Operating systems (Computers) , Microcomputers
- Description: This thesis describes the evaluation of four operating systems for microcomputers. The emphasis of the study is on the suitability of the operating systems for use in real-time applications, such as process control. The evaluation was performed in two sections. The first section was a quantitative assessment of the performance of the real-time features of the operating system. This was performed using benchmarks. The criteria for the benchmarks and their design are discussed. The second section was a qualitative assessment of the suitability of the operating systems for the development and implementation of real-time systems. This was assessed through the implementation of a small simulation of a manufacturing process and its associated control system. The simulation was designed using the Ward and Mellor real-time design method which was extended to handle the special case of a real-time simulation. The operating systems which were selected for the study covered a spectrum from general purpose operating systems to small, specialised real-time operating systems. From the quantitative assessment it emerged that QNX (from Quantum Software Systems) had the best overall performance. Qualitatively, UNIX was found to offer the best system development environment, but it does not have the performance and the characteristics required for real-time applications. This suggests that versions of UNIX that are adapted for real-time applications are worth careful consideration for use both as development systems and implementation systems.
- Full Text:
- Date Issued: 1990
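The quantitative assessment described in the abstract above relied on benchmarks of real-time features. As a toy illustration of that kind of measurement (not the thesis's actual benchmarks, which are not reproduced here), one can measure how far a task's actual wake-up overshoots its requested sleep interval, a rough proxy for scheduler latency:

```python
# Toy benchmark sketch: worst-case wake-up overshoot over several rounds.
# Real RTOS benchmarks measure interrupt and task-switch latency directly;
# this simplified version only times sleep() overshoot from user space.
import time

def wakeup_overshoot(interval_s=0.01, rounds=50):
    worst = 0.0
    for _ in range(rounds):
        start = time.perf_counter()
        time.sleep(interval_s)
        overshoot = (time.perf_counter() - start) - interval_s
        worst = max(worst, overshoot)
    return worst  # seconds; lower and more consistent is better

print(f"worst-case wake-up overshoot: {wakeup_overshoot() * 1000:.3f} ms")
```

On a general-purpose OS this overshoot is typically variable and unbounded under load, which is exactly the kind of behaviour that distinguishes the general-purpose systems in the study from specialised real-time ones.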