Distributed authentication for resource control
- Authors: Burdis, Keith Robert
- Date: 2000
- Subjects: Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4630 , http://hdl.handle.net/10962/d1006512 , Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Description: This thesis examines distributed authentication in the process of controlling computing resources. We investigate user sign-on and two of the main authentication technologies that can be used to control a resource through authentication and to provide additional security services. The problems with the existing sign-on scenario are that users have too much credential information to manage and are prompted for this information too often. Single Sign-On (SSO) is a viable solution to this problem if physical procedures are introduced to minimise the risks associated with its use. The Generic Security Services API (GSS-API) provides security services in a manner independent of the environment in which these security services are used, encapsulating security functionality and insulating users from changes in security technology. The underlying security functionality is provided by GSS-API mechanisms. We developed the Secure Remote Password GSS-API Mechanism (SRPGM) to provide a mechanism that has low infrastructure requirements, is password-based and does not require the use of long-term asymmetric keys. We provide implementations of the Java GSS-API bindings and the LIPKEY and SRPGM GSS-API mechanisms. The Simple Authentication and Security Layer (SASL) provides security to connection-based Internet protocols. After finding deficiencies in existing SASL mechanisms we developed the Secure Remote Password SASL mechanism (SRP-SASL), which provides strong password-based authentication and countermeasures against known attacks while remaining simple and easy to implement. We provide implementations of the Java SASL binding and several SASL mechanisms, including SRP-SASL.
- Full Text:
- Date Issued: 2000
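The SRP-based design described in the abstract avoids storing passwords or long-term asymmetric keys on the server: the server keeps only a salt and a password verifier. The thesis code itself is not reproduced here; as a rough sketch of verifier generation in the style of RFC 2945, using SHA-256 and deliberately toy group parameters (real deployments would use the standardised groups of RFC 5054):

```python
import hashlib
import os

def srp_verifier(username: str, password: str, salt: bytes,
                 N: int, g: int) -> int:
    """Compute an SRP password verifier v = g^x mod N (RFC 2945 style).

    x is derived from the salt and the user's credentials, so the
    server can store (salt, v) instead of the password itself.
    """
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha256(salt + inner).digest(), "big")
    return pow(g, x, N)

# Toy modulus (the Mersenne prime 2**127 - 1) for illustration only;
# real deployments use the standardised groups from RFC 5054.
N = (1 << 127) - 1
g = 3

salt = os.urandom(16)
v = srp_verifier("alice", "correct horse", salt, N, g)
```

During login the client proves knowledge of the password without ever sending it; the server needs only (salt, v), which is what keeps the mechanism's infrastructure requirements low.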
Email meets issue-tracking: a prototype implementation
- Authors: Kwinana, Zukhanye N
- Date: 2006 , 2013-06-11
- Subjects: Microsoft Visual studio , Electronic mail systems , Computer networks , eXtreme programming , Computer software -- Development
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4614 , http://hdl.handle.net/10962/d1005644 , Microsoft Visual studio , Electronic mail systems , Computer networks , eXtreme programming , Computer software -- Development
- Description: The use of electronic mail (email) has evolved from sending simple messages to task delegation and management. Most mail clients, however, have not kept up with this evolution and as a result offer limited task management features. On the other hand, while issue tracking systems offer useful task management functionality, they are not as widespread as email and have a few drawbacks of their own. This thesis reports on the exploration of the integration of the ubiquitous nature of email with the task management features of issue-tracking systems. We explore this using simple ad-hoc as well as semi-automated tasks. With these two working together, tasks can be delegated from email clients without needing to switch between the two environments, bringing some of the benefits of issue tracking systems closer to email users. The system is developed using Microsoft Visual Studio .NET, with the code written in C#. The eXtreme Programming (XP) methodology was used during the development of the proof-of-concept prototype that demonstrates the integration of the two environments, because we were faced at first with vague requirements that were bound to change as we better understood the problem domain. XP allowed us to skip an extended and comprehensive initial design process and to develop the system incrementally, making refinements and extensions as we encountered the need for them. This alleviated the need to make upfront decisions based on minimal knowledge of what to expect during development. This thesis describes the implementation of the prototype and the decisions made with each step taken towards developing an email-based issue tracking system. With the two environments working together, we can now easily track issues from our email clients without needing to switch to another system.
- Full Text:
- Date Issued: 2006
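The prototype itself was written in C# against Visual Studio .NET; purely as an illustration of the core idea — mapping the fields of a delegating email onto an issue record — here is a sketch in Python, with an entirely hypothetical issue schema:

```python
from email.message import EmailMessage

def issue_from_email(msg: EmailMessage) -> dict:
    """Map an email to a minimal issue record (hypothetical schema)."""
    return {
        "title": msg["Subject"],
        "reporter": msg["From"],
        "assignee": msg["To"],
        "description": msg.get_content().strip(),
        "status": "open",
    }

# A task delegated by email becomes a trackable issue.
msg = EmailMessage()
msg["Subject"] = "Printer on floor 2 is jammed"
msg["From"] = "alice@example.com"
msg["To"] = "helpdesk@example.com"
msg.set_content("Paper jam in tray 1; please assign to facilities.")

issue = issue_from_email(msg)
```

The point of the integration is exactly this directness: the sender never leaves the mail client, yet the recipient side gains an issue that can be tracked, reassigned and closed.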
Novel approaches to the monitoring of computer networks
- Authors: Halse, G A
- Date: 2003
- Subjects: Computer networks , Computer networks -- Management , Computer networks -- South Africa -- Grahamstown , Rhodes University -- Information Technology Division
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4645 , http://hdl.handle.net/10962/d1006601
- Description: Traditional network monitoring techniques suffer from a number of limitations. They are usually designed to solve the most general case, and as a result often fall short of expectations. This project sets out to provide the network administrator with a set of alternative tools to solve specific, but common, problems. It uses the network at Rhodes University as a case study and addresses a number of issues that arise on this network. Four problematic areas are identified within this network: the automatic determination of network topology and layout, the tracking of network growth, the determination of the physical and logical locations of hosts on the network, and the need for intelligent fault reporting systems. These areas are chosen because other network monitoring techniques have failed to adequately address these problems, and because they present problems that are common across a large number of networks. Each area is examined separately and a solution is sought for each of the problems identified. As a result, a set of tools is developed to solve these problems using a number of novel network monitoring techniques. These tools are designed to be as portable as possible so as not to limit their use to the case study network. Their use within Rhodes, as well as their applicability to other situations, is discussed. In all cases, any limitations and shortfalls in the approaches that were employed are examined.
- Full Text:
- Date Issued: 2003
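One of the four problem areas above is tracking network growth. The thesis's actual tools are not reproduced here, but the core bookkeeping — diffing host inventories between periodic scans — can be sketched as:

```python
def network_growth(previous: set, current: set) -> dict:
    """Summarise growth between two host inventories.

    previous/current: sets of host identifiers (e.g. IP addresses)
    seen in consecutive network scans.
    """
    return {
        "added": sorted(current - previous),     # hosts new this period
        "removed": sorted(previous - current),   # hosts that disappeared
        "net_growth": len(current) - len(previous),
    }

# Hypothetical inventories from two monthly scans.
jan = {"10.0.0.1", "10.0.0.2", "10.0.0.5"}
feb = {"10.0.0.1", "10.0.0.2", "10.0.0.7", "10.0.0.9"}
report = network_growth(jan, feb)   # → one net new host
```

Accumulating such reports over time gives the administrator the growth trend that general-purpose monitors typically do not surface.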
Service provisioning in two open-source SIP implementations, CINEMA and VOCAL
- Authors: Hsieh, Ming Chih
- Date: 2013-06-18
- Subjects: Real-time data processing , Computer network protocols , Internet telephony , Digital telephone systems , Communication -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4687 , http://hdl.handle.net/10962/d1008195 , Real-time data processing , Computer network protocols , Internet telephony , Digital telephone systems , Communication -- Technological innovations
- Description: The distribution of real-time multimedia streams is seen nowadays as the next step forward for the Internet. One of the most obvious uses of such streams is to support telephony over the Internet, replacing and improving traditional telephony. This thesis investigates the development and deployment of services in two Internet telephony environments, namely CINEMA (Columbia InterNet Extensible Multimedia Architecture) and VOCAL (Vovida Open Communication Application Library), both based on the Session Initiation Protocol (SIP) and open-sourced. A classification of services is proposed, which divides services into two large groups: basic and advanced services. Basic services are services such as making point-to-point calls, registering with the server and making calls via the server. Any other service is considered an advanced service. Advanced services are divided into four categories: Call Related, Interactive, Internetworking and Hybrid. New services were implemented for the Call Related, Interactive and Internetworking categories. First, features involving call blocking, call screening and missed calls were implemented in the two environments in order to investigate Call Related services. Next, a notification feature was implemented in both environments in order to investigate Interactive services. Finally, a translator between MGCP and SIP was developed to investigate an Internetworking service in the VOCAL environment. The practical implementation of the new features just described was used to answer questions about the location of the services, as well as the level of required expertise and the ease or difficulty experienced in creating services in each of the two environments.
- Full Text:
Simplified menu-driven data analysis tool with macro-like automation
- Authors: Kazembe, Luntha
- Date: 2022-10-14
- Subjects: Data analysis , Macro instructions (Electronic computers) , Quantitative research Software , Python (Computer program language) , Scripting languages (Computer science)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/362905 , vital:65373
- Description: This study seeks to improve the data analysis process for individuals and small businesses with limited resources by developing a simplified data analysis software tool that allows users to carry out data analysis effectively and efficiently. Design considerations were identified to address limitations common in such environments: these included making the tool easy to use, requiring only a basic understanding of the data analysis process, designing the tool in a manner that minimises computing resource requirements and user interaction, and implementing it using Python, which is open-source, effective and efficient in processing data. We develop a prototype simplified data analysis tool as a proof-of-concept. The tool has two components, namely core elements, which provide functionality for the data analysis process including data collection, transformations, analysis and visualizations, and automation and performance enhancements, which improve the data analysis process. The automation enhancements consist of the record and playback macro feature, while the performance enhancements include multiprocessing and multi-threading abilities. The data analysis software was developed to analyse various alpha-numeric data formats using a variety of statistical and mathematical techniques. The record and playback macro feature enhances the data analysis process by saving users time and computing resources when analysing large volumes of data or carrying out repetitive data analysis tasks. The feature has two components, namely the record component, used to record data analysis steps, and the playback component, used to execute recorded steps. The simplified data analysis tool also implements parallelization, which allows users to carry out two or more analysis tasks at a time; this improves productivity, as users can do other tasks while the tool processes data using recorded steps in the background.
The tool was created and subsequently tested using common analysis scenarios applied to network data, log data and stock data. Results show that decision-making requirements, such as the need for accurate information, can be satisfied using this analysis tool. Based on the functionality implemented, similar analysis functionality to that provided by Microsoft Excel is available, but in a simplified manner. Moreover, a more sophisticated macro functionality is provided for the execution of repetitive tasks using the recording feature. Overall, the study found that the simplified data analysis tool is functional, usable, scalable and efficient, and can carry out multiple analysis tasks simultaneously. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
- Date Issued: 2022-10-14
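The record-and-playback idea described above can be sketched independently of the thesis's implementation: record a named sequence of analysis steps once, then replay them against new datasets. A minimal Python illustration (the real tool records concrete analysis operations, not bare callables as here):

```python
class MacroRecorder:
    """Record named analysis steps, then replay them on new data.

    A minimal sketch of the record-and-playback macro idea; each
    step is a function that takes a dataset and returns a dataset.
    """
    def __init__(self):
        self.steps = []

    def record(self, name, func):
        """Append a named step to the recorded macro."""
        self.steps.append((name, func))

    def playback(self, data):
        """Apply every recorded step, in order, to fresh data."""
        for name, func in self.steps:
            data = func(data)
        return data

rec = MacroRecorder()
rec.record("drop negatives", lambda xs: [x for x in xs if x >= 0])
rec.record("scale by 10", lambda xs: [10 * x for x in xs])

result = rec.playback([3, -1, 4])   # → [30, 40]
```

Because a recorded macro is just data, it can be replayed in a background thread or worker process — which is where the tool's multiprocessing and multi-threading enhancements come in.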
A multi-threading software countermeasure to mitigate side channel analysis in the time domain
- Authors: Frieslaar, Ibraheem
- Date: 2019
- Subjects: Computer security , Data encryption (Computer science) , Noise generators (Electronics)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71152 , vital:29790
- Description: This research is the first of its kind to investigate the use of a multi-threading software-based countermeasure to mitigate Side Channel Analysis (SCA) attacks, with a particular focus on the AES-128 cryptographic algorithm. This investigation is novel, as to our knowledge no software-based countermeasure relying on multi-threading has been proposed before. The research has been tested on Atmel microcontrollers, as well as on a more fully featured system in the form of the popular Raspberry Pi, which utilises the ARM7 processor. The main contribution of this research is the introduction of a multi-threading software-based countermeasure used to mitigate SCA attacks on both an embedded device and a Raspberry Pi. The countermeasure's threads perform various mathematical operations that generate electromagnetic (EM) noise, obfuscating the execution of the AES-128 algorithm. A novel EM noise generator known as the FRIES noise generator is implemented to obfuscate data captured in the EM field. FRIES hides the execution of the AES-128 algorithm within the EM noise generated by the SHA-512 Secure Hash Algorithm from the libcrypto++ and OpenSSL libraries. In order to evaluate the proposed countermeasure, a novel attack methodology was developed in which the entire secret AES-128 encryption key was recovered from a Raspberry Pi, something that had not been achieved before. The FRIES noise generator was pitted against this new attack vector and other known noise generators. The results showed that the FRIES noise generator withstood this attack while other existing techniques still leaked secret information. Visual identification of the AES-128 encryption algorithm in the EM spectrum, and recovery of the key, were prevented.
These results demonstrated that the proposed multi-threading software-based countermeasure is resistant to existing and new forms of attack, verifying that such a countermeasure can serve to mitigate SCA attacks.
- Full Text:
- Date Issued: 2019
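The countermeasure's structure — noise-generating threads running concurrently with the sensitive computation — can be sketched in Python. This is an illustration of the shape of the idea only: Python threads under a single interpreter are not equivalent to the native threads used in the thesis, and the hash call below merely stands in for the AES-128 encryption being protected:

```python
import hashlib
import os
import threading

def sha512_noise(stop: threading.Event) -> None:
    """Busy-loop of chained SHA-512 digests, mimicking the kind of
    workload FRIES uses to generate background (EM) activity."""
    buf = os.urandom(64)
    while not stop.is_set():
        buf = hashlib.sha512(buf).digest()

def run_with_noise(sensitive_op, n_threads: int = 4):
    """Run sensitive_op while noise threads execute concurrently."""
    stop = threading.Event()
    workers = [threading.Thread(target=sha512_noise, args=(stop,))
               for _ in range(n_threads)]
    for w in workers:
        w.start()
    try:
        return sensitive_op()          # the operation being hidden
    finally:
        stop.set()                     # always tear the noise down
        for w in workers:
            w.join()

# Stand-in for the AES-128 encryption the thesis protects.
result = run_with_noise(lambda: hashlib.sha256(b"plaintext").hexdigest())
```

The thesis's FRIES generator likewise derives its noise from SHA-512 computations (via libcrypto++ and OpenSSL), so that the target algorithm's EM signature is buried in hashing activity.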
Visual based finger interactions for mobile phones
- Authors: Kerr, Simon
- Date: 2010 , 2010-03-15
- Subjects: User interfaces (Computer systems) , Mobile communication systems -- Design and construction , Cell phones -- Software , Mobile communication systems -- Technological innovations , Information display systems , Cell phones -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4652 , http://hdl.handle.net/10962/d1006621 , User interfaces (Computer systems) , Mobile communication systems -- Design and construction , Cell phones -- Software , Mobile communication systems -- Technological innovations , Information display systems , Cell phones -- Technological innovations
- Description: Vision-based technology such as motion detection has long been limited to the domain of powerful, processor-intensive systems such as desktop PCs and specialist hardware solutions. With the advent of much faster mobile phone processors and memory, a plethora of feature-rich software and hardware is being deployed onto the mobile platform, most notably onto high-powered devices called smart phones. Interaction interfaces such as touchscreens allow for improved usability but obscure the phone’s screen. Since the majority of smart phones are equipped with cameras, it has become feasible to combine their powerful processors, large memory capacity and the camera to support new ways of interacting with the phone which do not obscure the screen. However, it is not clear whether these processor-intensive visual interactions can in fact be run at an acceptable speed on current mobile handsets, or whether they will offer the user a better experience than the number pad and direction keys present on the majority of mobile phones. A vision-based finger interaction technique is proposed which uses the back-of-device camera to track the user’s finger. This allows the user to interact with the mobile phone through mouse-based movements, gestures and steering-based interactions. A simple colour thresholding algorithm was implemented in Java, Python and C++. Various benchmarks and tests conducted on a Nokia N95 smart phone revealed that, on current hardware and with current programming environments, only native C++ yields performance plausible for real-time interactions (a key requirement for vision-based interactions). It is also shown that different lighting levels and background environments affect the accuracy of the system, with background and finger contrast playing a large role.
Finally, a user study was conducted to ascertain users’ overall satisfaction with keypad interactions versus the finger interaction techniques, concluding that the new finger interaction technique is well suited to steering-based interactions and, in time, mouse-style movements. Simple navigation is better suited to the directional keypad.
- Full Text:
- Date Issued: 2010
File integrity checking
- Authors: Motara, Yusuf Moosa
- Date: 2006
- Subjects: Linux , Operating systems (Computers) , Database design , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4682 , http://hdl.handle.net/10962/d1007701 , Linux , Operating systems (Computers) , Database design , Computer security
- Description: This thesis looks at file execution as an attack vector that leads to the execution of unauthorized code. File integrity checking is examined as a means of removing this attack vector, and the design, implementation, and evaluation of a best-of-breed file integrity checker for the Linux operating system are undertaken. We conclude that the resultant file integrity checker does succeed in removing file execution as an attack vector, does so at a computational cost that is negligible, and displays innovative and useful features that are not currently found in any other Linux file integrity checker.
- Full Text:
- Date Issued: 2006
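The core idea of file integrity checking described above can be sketched in a few lines. This is an illustrative sketch, not the thesis tool: record a cryptographic digest per file, then refuse "execution" of any file whose current digest no longer matches the recorded baseline.

```python
# Sketch of a file integrity checker's core (hypothetical API and paths).

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class IntegrityDB:
    def __init__(self):
        self.baseline = {}                # path -> recorded digest

    def register(self, path, data):
        self.baseline[path] = digest(data)

    def may_execute(self, path, data):
        """Only files whose contents still match the baseline may run."""
        return self.baseline.get(path) == digest(data)

db = IntegrityDB()
db.register("/bin/tool", b"original binary")
print(db.may_execute("/bin/tool", b"original binary"))   # True
print(db.may_execute("/bin/tool", b"tampered binary"))   # False
```

A real checker hooks this decision into the kernel's execution path; the negligible cost reported in the abstract comes from hashing being cheap relative to loading the binary itself.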
Network simulation for professional audio networks
- Authors: Otten, Fred
- Date: 2015
- Subjects: Sound engineers , Ethernet (Local area network system) , Computer networks , Computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4713 , http://hdl.handle.net/10962/d1017935
- Description: Audio Engineers are required to design and deploy large multi-channel sound systems which meet a set of requirements and use networking technologies such as Firewire and Ethernet AVB. Bandwidth utilisation and parameter groupings are among the factors which need to be considered in these designs. An implementation of an extensible, generic simulation framework would allow audio engineers to easily compare protocols and networking technologies and get near real-time responses with regard to bandwidth utilisation. Our hypothesis is that an application-level capability can be developed which uses a network simulation framework to enable this process and enhances the audio engineer’s experience of designing and configuring a network. This thesis presents a new, extensible simulation framework which can be utilised to simulate professional audio networks. This framework is utilised to develop an application, AudioNetSim, based on the requirements of an audio engineer. The thesis describes the AudioNetSim models and implementations for Ethernet AVB, Firewire and the AES-64 control protocol. AudioNetSim enables bandwidth usage determination for any network configuration and connection scenario and is used to compare Firewire and Ethernet AVB bandwidth utilisation. It also applies graph theory to the circular join problem and provides a solution to detect circular joins.
- Full Text:
- Date Issued: 2015
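As the abstract notes, the circular join problem can be treated with graph theory: a circular join is a cycle in the directed graph of audio connections. The sketch below uses depth-first search with three node colours; it is a generic illustration of the reduction, not AudioNetSim's actual algorithm.

```python
# Cycle detection over a directed graph of audio connections (illustrative).

def has_circular_join(connections):
    """connections: dict mapping a device to the list of devices it feeds."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / finished
    state = {node: WHITE for node in connections}

    def dfs(node):
        state[node] = GREY
        for nxt in connections.get(node, []):
            s = state.get(nxt, BLACK)     # devices with no outputs act as sinks
            if s == GREY:                 # back edge: the path loops onto itself
                return True
            if s == WHITE and dfs(nxt):
                return True
        state[node] = BLACK
        return False

    return any(state[n] == WHITE and dfs(n) for n in list(connections))

print(has_circular_join({"mixer": ["amp"], "amp": ["mixer"]}))   # True
print(has_circular_join({"mixer": ["amp"], "amp": []}))          # False
```

Running the check before committing a new connection lets a design tool reject the join that would close the loop.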
Adaptive flow management of multimedia data with a variable quality of service
- Authors: Littlejohn, Paul Stephen
- Date: 1999
- Subjects: Multimedia systems , Multimedia systems -- Evaluation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4605 , http://hdl.handle.net/10962/d1004863 , Multimedia systems , Multimedia systems -- Evaluation
- Description: Much of the current research involving the delivery of multimedia data focuses on the need to maintain a constant Quality of Service (QoS) throughout the lifetime of the connection. Delivery of a constant QoS requires that a guaranteed bandwidth is available for the entire connection. Techniques such as resource reservation are able to provide for this. These approaches work well across networks that are fairly homogeneous, and which have sufficient resources to sustain the guarantees, but are not currently viable over either heterogeneous or unreliable networks. To cater for the great number of networks (including the Internet) which do not conform to the ideal conditions required by constant Quality of Service mechanisms, this thesis proposes a different approach: dynamically adjusting the QoS in response to changing network conditions. Instead of optimizing the Quality of Service, the approach used in this thesis seeks to ensure the delivery of the information, at the best possible quality, as determined by the carrying ability of the poorest segment in the network link. To illustrate and examine this model, a service-adaptive system is described, which allows for the streaming of multimedia audio data across a network using the Real-time Transport Protocol (RTP). This application continually adjusts its service requests in response to the current network conditions. A client/server model is outlined whereby the server attempts to provide scalable media content, in this case audio data, to a client at the highest possible Quality of Service. The thesis presents and evaluates a number of renegotiation methods for adjusting the Quality of Service between the client and server. An Adjusted QoS renegotiation algorithm is suggested, which delivers the best possible quality within an acceptable loss boundary.
- Full Text:
- Date Issued: 1999
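The adaptive-QoS idea above can be sketched as a simple renegotiation loop: the client reports packet loss and the server steps the stream quality down or up so that loss stays within an acceptable boundary. The quality levels, thresholds and step sizes here are hypothetical, not the thesis's Adjusted QoS algorithm.

```python
# Sketch of loss-driven QoS renegotiation (all constants are invented).

QUALITY_LEVELS = [8, 16, 32, 64, 128]   # audio bitrates in kbit/s (illustrative)

def renegotiate(level_idx, loss_pct, max_loss=5.0, upgrade_below=1.0):
    """Return a new index into QUALITY_LEVELS given the observed loss."""
    if loss_pct > max_loss and level_idx > 0:
        return level_idx - 1            # degrade: the network is struggling
    if loss_pct < upgrade_below and level_idx < len(QUALITY_LEVELS) - 1:
        return level_idx + 1            # probe upward: the network has headroom
    return level_idx                    # hold: loss is inside the boundary

idx = 4                                 # start at 128 kbit/s
idx = renegotiate(idx, loss_pct=12.0)   # heavy loss -> drop to 64 kbit/s
idx = renegotiate(idx, loss_pct=0.2)    # clean link -> back up to 128 kbit/s
print(QUALITY_LEVELS[idx])              # 128
```

The quality delivered thus tracks the carrying ability of the poorest segment of the link, which is the behaviour the abstract describes.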
Preimages for SHA-1
- Authors: Motara, Yusuf Moosa
- Date: 2018
- Subjects: Data encryption (Computer science) , Computer security -- Software , Hashing (Computer science) , Data compression (Computer science) , Preimage , Secure Hash Algorithm 1 (SHA-1)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57885 , vital:27004
- Description: This research explores the problem of finding a preimage — an input that, when passed through a particular function, will result in a pre-specified output — for the compression function of the SHA-1 cryptographic hash. This problem is much more difficult than the problem of finding a collision for a hash function, and preimage attacks for very few popular hash functions are known. The research begins by introducing the field and giving an overview of the existing work in the area. A thorough analysis of the compression function is made, resulting in alternative formulations for both parts of the function, and both statistical and theoretical tools to determine the difficulty of the SHA-1 preimage problem. Different representations (And-Inverter Graph, Binary Decision Diagram, Conjunctive Normal Form, Constraint Satisfaction form, and Disjunctive Normal Form) and associated tools to manipulate and/or analyse these representations are then applied and explored, and results are collected and interpreted. In conclusion, the SHA-1 preimage problem remains unsolved and insoluble for the foreseeable future. The primary issue is one of efficient representation; despite a promising theoretical difficulty, both the diffusion characteristics and the depth of the tree stand in the way of efficient search. Despite this, the research served to confirm and quantify the difficulty of the problem both theoretically, using Schaefer's Theorem, and practically, in the context of different representations.
- Full Text:
- Date Issued: 2018
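The preimage problem defined above — find an input x with H(x) equal to a given target — is infeasible for the full SHA-1 function, as the thesis concludes. Purely to illustrate why the search space makes the full problem intractable, the sketch below brute-forces a preimage for a drastically truncated (16-bit) SHA-1 output; each extra bit of output doubles the expected search.

```python
# Brute-force preimage search against a 16-bit truncation of SHA-1
# (an illustration of exponential cost, not an attack on SHA-1 itself).

import hashlib

def sha1_trunc16(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()[:2]        # keep only the first 16 bits

def find_preimage(target: bytes, limit=1 << 20):
    for i in range(limit):                        # exhaustive search over inputs
        candidate = i.to_bytes(4, "big")
        if sha1_trunc16(candidate) == target:
            return candidate
    return None

target = sha1_trunc16(b"\x00\x00\x00\x2a")
found = find_preimage(target)
print(found is not None and sha1_trunc16(found) == target)   # True
```

A 16-bit target falls in about 2^16 tries; the full 160-bit output would require on the order of 2^160, which is why the thesis turns to structural representations (AIGs, BDDs, CNF, and so on) rather than search.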
Behavioural model debugging in Linda
- Authors: Sewry, David Andrew
- Date: 1994
- Subjects: LINDA (Computer system) , Debugging in computer science
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4674 , http://hdl.handle.net/10962/d1006697
- Description: This thesis investigates event-based behavioural model debugging in Linda. A study is presented of the Linda parallel programming paradigm, its amenability to debugging, and a model for debugging Linda programs using Milner's CCS. In support of the construction of expected behaviour models, a Linda program specification language is proposed. A behaviour recognition engine that is based on such specifications is also discussed. It is shown that Linda's distinctive characteristics make it amenable to debugging without the usual problems associated with parallel debuggers. Furthermore, it is shown that a behavioural model debugger, based on the proposed specification language, effectively exploits the debugging opportunity. The ideas developed in the thesis are demonstrated in an experimental Modula-2 Linda system.
- Full Text:
- Date Issued: 1994
An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment
- Authors: Miles, Shaun Graeme
- Date: 2012-06-20
- Subjects: Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4656 , http://hdl.handle.net/10962/d1006653 , Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Description: This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling the subsequent usage of that information. To address this issue a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through observation of the environment, is a concern for users and systems. By observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, an attacker can infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims for use in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability.
The framework is developed from a Utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
- Full Text:
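The multi-factor principle described above — compromise requires defeating every factor, so attacker work grows sharply with each one — can be sketched as follows. The toy one-time-code generator and all names here are hypothetical; they merely illustrate a password factor combined with a code delivered over a second channel such as a GSM text message.

```python
# Two-factor check sketch: password hash AND HMAC-based one-time code must match.

import hashlib
import hmac

def otp(secret: bytes, counter: int) -> str:
    """A toy HMAC-based one-time code (6 digits), in the spirit of HOTP."""
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    return str(int.from_bytes(mac[:4], "big") % 1_000_000).zfill(6)

def authenticate(password, code, *, stored_hash, secret, counter):
    pw_ok = hashlib.sha256(password.encode()).hexdigest() == stored_hash
    code_ok = hmac.compare_digest(code, otp(secret, counter))
    return pw_ok and code_ok          # both factors must pass

stored = hashlib.sha256(b"s3cret").hexdigest()
good_code = otp(b"shared-key", 7)
wrong_code = "000000" if good_code != "000000" else "111111"
print(authenticate("s3cret", good_code,
                   stored_hash=stored, secret=b"shared-key", counter=7))   # True
print(authenticate("s3cret", wrong_code,
                   stored_hash=stored, secret=b"shared-key", counter=7))   # False
```

Guessing the password alone no longer suffices; an attacker must also intercept or predict the second-channel code, which is the non-linear increase in work the abstract refers to.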
An analysis of malware evasion techniques against modern AV engines
- Authors: Haffejee, Jameel
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20979 , http://hdl.handle.net/10962/5821
- Description: This research empirically tested the response of antivirus applications to binaries that use virus-like evasion techniques. In order to achieve this, a number of binaries are processed using a number of evasion methods and are then deployed against several antivirus engines. The research also documents the process of setting up an environment for testing antivirus engines, including building the evasion techniques used in the tests. The results of the empirical tests illustrate that an attacker can evade multiple antivirus engines without much effort using well-known evasion techniques. Furthermore, some antivirus engines may respond to the occurrence of an evasion technique instead of the presence of any malicious code. In practical terms, this shows that while antivirus applications are useful for protecting against known threats, their effectiveness against unknown or modified threats is limited.
- Full Text:
- Date Issued: 2015
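A toy example makes the class of evasions tested above concrete: a scanner that looks for a fixed byte signature misses the same payload once it has been trivially XOR-encoded. The signature and payload bytes are invented for the illustration and do not correspond to any real malware or antivirus engine.

```python
# Signature-based detection defeated by trivial XOR encoding (illustrative only).

SIGNATURE = b"\xde\xad\xbe\xef"       # hypothetical known-malware byte pattern

def scan(binary: bytes) -> bool:
    """Return True if the known signature appears anywhere in the binary."""
    return SIGNATURE in binary

def xor_encode(data: bytes, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in data)

payload = b"header" + SIGNATURE + b"trailer"
print(scan(payload))                  # True  - detected as-is
print(scan(xor_encode(payload)))      # False - same payload, signature evaded
```

The encoded binary would decode itself at run time, which is why the research finds behaviour-based responses matter: the bytes on disk no longer resemble the known sample.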
Securing software development using developer access control
- Authors: Ongers, Grant
- Date: 2020
- Subjects: Computer software -- Development , Computers -- Access control , Computer security -- Software , Computer networks -- Security measures , Source code (Computer science) , Plug-ins (Computer programs) , Data encryption (Computer science) , Network Access Control , Data Loss Prevention , Google’s BeyondCorp , Confidentiality, Integrity and Availability (CIA) triad
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/149022 , vital:38796
- Description: This research is aimed at software development companies and highlights the unique information security concerns in the context of a non-malicious software developer’s work environment; furthermore, it explores an application-driven solution which focuses specifically on providing developer environments with access control for source code repositories. In order to achieve that, five goals were defined, as discussed in section 1.3. The application designed to provide the developer environment with access control to source code repositories was modelled on lessons taken from the principles of Network Access Control (NAC), Data Loss Prevention (DLP), and Google’s BeyondCorp (GBC) for zero-trust end-user computing. The intention of this research is to provide software developers with maximum access to source code without compromising Confidentiality, as per the Confidentiality, Integrity and Availability (CIA) triad. Employing data gleaned from examining the characteristics of DLP, NAC, and BeyondCorp, proof-of-concept code was developed to regulate access to the developer’s environment and source code. The system required sufficient flexibility to support the diversity of software development environments. In order to achieve this, a modular design was selected. The system comprised a client-side agent and a plug-in-ready server component. The client-side agent mounts and dismounts encrypted volumes containing source code. Furthermore, it provides the server with information about the client that is demanded by plug-ins. The server-side service provides encryption keys to facilitate the mounting of the volumes and, through plug-ins, asks questions of the client agent to determine whether access should be granted. The solution was then tested with integration and system testing. There were plans to have it used by development teams, who would then be surveyed as to their view of the proof of concept, but this proved impossible.
The conclusion provides a basis by which organisations that develop software can better balance the two corners of the CIA triad most often in conflict: Confidentiality of their source code against the Availability of the same to developers.
- Full Text:
- Date Issued: 2020
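The plug-in-driven access decision described above can be sketched as follows: the server releases the volume-encryption key only if every registered policy plug-in approves the client's reported state. The plug-in checks, volume names and key material here are hypothetical illustrations, not the thesis's proof-of-concept code.

```python
# Key-release-on-policy sketch for a plug-in-ready access control server.

class KeyServer:
    def __init__(self, volume_keys):
        self.volume_keys = volume_keys    # volume id -> encryption key
        self.plugins = []                 # each: callable(client_info) -> bool

    def register_plugin(self, check):
        self.plugins.append(check)

    def request_key(self, volume_id, client_info):
        """Release the key only when all plug-ins approve the client."""
        if all(check(client_info) for check in self.plugins):
            return self.volume_keys.get(volume_id)
        return None                       # access denied: volume stays locked

server = KeyServer({"repo-vol": b"0123456789abcdef"})
server.register_plugin(lambda c: c.get("disk_encrypted", False))
server.register_plugin(lambda c: c.get("on_corporate_network", False))

trusted = {"disk_encrypted": True, "on_corporate_network": True}
rogue = {"disk_encrypted": True, "on_corporate_network": False}
print(server.request_key("repo-vol", trusted) is not None)   # True
print(server.request_key("repo-vol", rogue))                 # None
```

Without the key the client agent cannot mount the encrypted source volume, so denial of a single policy check keeps Confidentiality intact while approved developers retain full Availability.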
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won in defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it is clear that there is not yet a good understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided due to the complexities and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow for this framework to be tested on more than three families, but the exploratory work was promising and should be further explored in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment.
The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring of the mix being the unique feature.
- Full Text:
- Date Issued: 2019
Prototyping a peer-to-peer session initiation protocol user agent
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2008 , 2008-03-10
- Subjects: Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4646 , http://hdl.handle.net/10962/d1006603 , Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Description: The Session Initiation Protocol (SIP) has in recent years become a popular protocol for the exchange of text, voice and video over IP networks. This thesis proposes the use of a class of structured peer-to-peer protocols - commonly known as Distributed Hash Tables (DHTs) - to provide a SIP overlay with services such as end-point location management and message relay, in the absence of traditional, centralised resources such as SIP proxies and registrars. A peer-to-peer layer named OverCord, which allows interaction with any specific DHT protocol via the use of appropriate plug-ins, was designed, implemented and tested. This layer was then incorporated into a SIP user agent distributed by NIST (National Institute of Standards and Technology, USA). The modified user agent is capable of reliably establishing text, audio and video communication with similarly modified agents (peers) as well as with conventional, centralised SIP overlays.
- Full Text:
- Date Issued: 2008
An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization
- Authors: Hedges, M L
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2992 , vital:20350
- Description: This thesis investigates the feasibility of using gestures as a means of control for localizing three-dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition, translating human gestures into control messages for the prototype system, which in turn performs actions based on the recognized gestures. The term distributed in the context of this system refers to the audio processing capacity: the prototype system partitions and allocates the processing load between a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled via a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost-efficient in comparison to professional audio hardware, and is also commercially available to end users. The successful implementation and the results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems. This removes the constraint of a centralized mixing console with a predetermined speaker configuration.
- Full Text:
- Date Issued: 2016
Bluetooth audio and video streaming on the J2ME platform
- Authors: Sahd, Curtis Lee
- Date: 2011 , 2010-09-09
- Subjects: Bluetooth technology , Mobile communication systems , Communication -- Technological innovations , Communication -- Network analysis , Wireless communication systems , L2TP (Computer network protocol) , Computer network protocols , Streaming audio , Streaming video
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4633 , http://hdl.handle.net/10962/d1006521 , Bluetooth technology , Mobile communication systems , Communication -- Technological innovations , Communication -- Network analysis , Wireless communication systems , L2TP (Computer network protocol) , Computer network protocols , Streaming audio , Streaming video
- Description: With the increase in bandwidth, more widespread distribution of media, and the increased capability of mobile devices, multimedia streaming has not only become feasible, but more economical in terms of the space occupied by the media file and the costs involved in attaining it. Although much attention has been paid to peer-to-peer media streaming over the Internet using HTTP and RTSP, little research has focussed on the use of the Bluetooth protocol for streaming audio and video between mobile devices. This project investigates the feasibility of Bluetooth as a protocol for audio and video streaming between mobile phones using the J2ME platform, through the analysis of Bluetooth protocols, media formats, optimum packet sizes, and the effects of distance on transfer speed. A comparison was made between RFCOMM and L2CAP to determine which protocol could support the fastest transfer speed between two mobile devices. The L2CAP protocol proved to be the most suitable, providing average transfer rates of 136.17 KBps. Using this protocol, a second experiment was undertaken to determine the most suitable media format for streaming in terms of file size, bandwidth usage, quality, and ease of implementation. Of the eight media formats investigated, the MP3 format provided the smallest file size, smallest bandwidth usage, best quality and highest ease of implementation. Another experiment was conducted to determine the optimum packet size for transfer between devices. A trade-off was found between packet size and the quality of the sound file, with the highest transfer rates being recorded at an MTU size of 668 bytes (136.58 KBps). The class of Bluetooth transmitter typically used in mobile devices (class 2) produces a weak signal and is adversely affected by distance. As such, the final investigation was aimed at determining the effects of distance on audio streaming and playback.
As expected, when devices were situated close to each other, the transfer speeds obtained were higher than when devices were far apart. Readings were taken at varying distances (1-15 metres), with erratic transfer speeds observed from 7 metres onwards. This research showed that audio streaming on the J2ME platform is feasible; however, using the currently available class of Bluetooth transmitter, video streaming is not. Video files were only playable once the entire media file had been transferred.
- Full Text:
- Date Issued: 2011
A Framework for using Open Source intelligence as a Digital Forensic Investigative tool
- Authors: Rule, Samantha Elizabeth
- Date: 2015
- Subjects: Open source intelligence , Criminal investigation , Electronic evidence
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4715 , http://hdl.handle.net/10962/d1017937
- Description: The proliferation of the Internet has amplified the use of social networking sites by creating a platform that encourages individuals to share information. As a result there is a wealth of information that is publicly and easily accessible. This research explores whether open source intelligence (OSINT), which is freely available, could be used as a digital forensic investigative tool. A survey was created and sent to digital forensic investigators to establish whether they currently use OSINT when performing investigations. The survey results confirm that OSINT is being used by digital forensic investigators when performing investigations, but there are currently no guidelines or frameworks available to support its use. Additionally, the survey results showed a belief amongst those surveyed that evidence gleaned from OSINT sources is considered supplementary rather than evidentiary. The findings of this research led to the development of a framework that identifies and recommends key processes to follow when conducting OSINT investigations. The framework can assist digital forensic investigators to follow a structured and rigorous process, which may lead to the unanimous acceptance of information obtained via OSINT sources as evidentiary rather than supplementary in the near future.
- Full Text:
- Date Issued: 2015