An investigation into the prevalence and growth of phishing attacks against South African financial targets
- Authors: Lala, Darshan Magan
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3157 , vital:20379
- Description: Phishing in the electronic communications medium is the act of sending unsolicited email messages that masquerade as communications from a reputable business. The objective is to deceive the recipient into divulging personal and sensitive information such as bank account details, credit card numbers and passwords. Financial services are the most common targets for scammers, and phishing attacks have caused South African businesses and consumers substantial financial losses. This research investigated existing literature to understand the basic concepts of email, phishing and spam, and how these fit together. The research also examines the growth of phishing worldwide, and in particular against South African targets. A quantitative study is performed and reported on; this involves the study and analysis of phishing statistics in a data set provided by the South African Anti-Phishing Working Group. The data set contains phishing URL information, the country code where each site was hosted, the targeted company name, IP address information and the timestamp of each phishing site. The data set covers 161 different phishing targets. The research primarily focuses on the trend in phishing attacks against six South African financial institutions, and correlates this with the overall global trend using statistical analysis. The results from the study of the data set are compared to existing statistics and literature regarding the prevalence and growth of phishing in South Africa. The question this research answers is whether or not the prevalence and growth of phishing in South Africa correlates with the global trend in phishing attacks. The findings indicate that correlations exist between some of the South African phishing targets and global phishing trends.
- Full Text:
- Date Issued: 2016
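The core statistical comparison described above can be sketched in a few lines. This is a hedged illustration only: the CSV layout and column names ("timestamp", "target") are assumptions for the example, not the schema of the Anti-Phishing Working Group data set, and the institution names are placeholders.

```python
# Sketch of a per-target vs. global trend correlation, assuming a CSV
# export with hypothetical "timestamp" and "target" columns.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("phishing_sites.csv", parse_dates=["timestamp"])

# Monthly counts of reported phishing sites, one column per target.
monthly = (df.set_index("timestamp")
             .groupby("target")
             .resample("M")
             .size()
             .unstack(level=0, fill_value=0))

sa_targets = ["BankA", "BankB"]      # placeholders for the six SA institutions
global_trend = monthly.sum(axis=1)   # all 161 targets combined

for target in sa_targets:
    r, p = pearsonr(monthly[target], global_trend)
    print(f"{target}: r={r:.2f}, p={p:.3f}")
```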
An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization
- Authors: Hedges, M L
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2992 , vital:20350
- Description: This thesis investigates the feasibility of using gestures as a means of control for localizing three dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition; the prototype system translates recognized human gestures into control messages and performs actions based on them. The term distributed in the context of this system refers to the audio processing capacity: the prototype system partitions and allocates the processing load between a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available to end users. The successful implementation and results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems, and removes the constraint of a centralized mixing console with a predetermined speaker configuration.
- Full Text:
- Date Issued: 2016
An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization
- Authors: Hedges, Mitchell Lawrence
- Date: 2016
- Subjects: Human-computer interaction , Acoustic localization , Sound -- Equipment and supplies , Acoustical engineering , Surround-sound systems , Wireless sensor nodes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4724 , http://hdl.handle.net/10962/d1020615
- Description: This thesis investigates the feasibility of using gestures as a means of control for localizing three dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition; the prototype system translates recognized human gestures into control messages and performs actions based on them. The term distributed in the context of this system refers to the audio processing capacity: the prototype system partitions and allocates the processing load between a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available to end users. The successful implementation and results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems, and removes the constraint of a centralized mixing console with a predetermined speaker configuration.
- Full Text:
- Date Issued: 2016
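To make the control path concrete, the sketch below shows the general shape of gesture-driven source positioning: a normalised hand coordinate from a gesture sensor is scaled into room coordinates and sent to a mixing endpoint as a control message. Everything here is an illustrative assumption; the thesis's actual system uses Kinect gesture recognition and Ethernet AVB control, and its message format is not reproduced.

```python
# Illustrative sketch only: maps a tracked hand position onto a 3D source
# position and sends it to a hypothetical mixing endpoint over UDP. The
# real prototype drives XMOS AVB endpoints; this message format is invented.
import json
import socket

def hand_to_position(hand_xyz, room_dims=(5.0, 3.0, 4.0)):
    """Scale normalised sensor coordinates (0..1) into room coordinates."""
    return [c * d for c, d in zip(hand_xyz, room_dims)]

def send_source_position(sock, endpoint, source_id, position):
    msg = json.dumps({"source": source_id, "pos": position}).encode()
    sock.sendto(msg, endpoint)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pos = hand_to_position((0.5, 0.8, 0.2))    # one normalised hand sample
send_source_position(sock, ("192.0.2.10", 9000), source_id=1, position=pos)
```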
Detecting derivative malware samples using deobfuscation-assisted similarity analysis
- Authors: Wrench, Peter Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/383 , vital:19954
- Description: The overwhelming popularity of PHP as a hosting platform has made it the language of choice for developers of Remote Access Trojans (RATs or web shells) and other malicious software. These shells are typically used to compromise and monetise web platforms by providing the attacker with basic remote access to the system, including file transfer, command execution, network reconnaissance, and database connectivity. Once infected, compromised systems can be used to defraud users by hosting phishing sites, performing Distributed Denial of Service attacks, or serving as anonymous platforms for sending spam and other malfeasance. The vast majority of these threats are largely derivative, incorporating core capabilities found in more established RATs such as c99 and r57. Authors of malicious software routinely produce new shell variants by modifying the behaviours of these ubiquitous RATs, either to add desired functionality or to avoid detection by signature-based detection systems. Once these modified shells are eventually identified (or additional functionality is required), the process of shell adaptation begins again. The end result of this iterative process is a web of separate but related shell variants, many of which are at least partially derived from one of the more popular and influential RATs. In response to the problem outlined above, the author set out to design and implement a system capable of circumventing common obfuscation techniques and identifying derivative malware samples in a given collection. To begin with, a decoder component was developed to syntactically deobfuscate and normalise PHP code by detecting and reversing idiomatic obfuscation constructs, and to apply uniform formatting conventions to all system inputs. A unified malware analysis framework, called Viper, was then extended to create a modular similarity analysis system comprising individual feature extraction modules, modules responsible for batch processing, a matrix module for comparing sample features, and two visualisation modules capable of generating visual representations of shell similarity. The principal conclusion of the research was that the deobfuscation performed by the decoder component prior to analysis dramatically improved the observed levels of similarity between test samples. This in turn allowed the modular similarity analysis system to identify derivative clusters (or families) within a large collection of shells more accurately. Techniques for isolating and re-rendering these clusters were also developed and demonstrated to be effective at increasing the amount of detail available for evaluating the relative magnitudes of the relationships within each cluster.
- Full Text:
- Date Issued: 2016
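The two stages of the system, deobfuscation followed by similarity analysis, can be illustrated with a toy version of each. This sketch is not the thesis's decoder or Viper's API: it unwraps only one idiomatic PHP construct, eval(gzinflate(base64_decode('...'))), and uses a plain sequence matcher for similarity.

```python
# Toy versions of the two stages: reverse one common PHP obfuscation
# construct, then compare the deobfuscated sources.
import base64
import difflib
import re
import zlib

EVAL_RE = re.compile(r"eval\(gzinflate\(base64_decode\('([^']+)'\)\)\);")

def deobfuscate(php_source: str) -> str:
    """Repeatedly unwrap eval(gzinflate(base64_decode('...'))) layers."""
    while (m := EVAL_RE.search(php_source)):
        # PHP's gzinflate() is raw DEFLATE, hence the -15 window bits.
        payload = zlib.decompress(base64.b64decode(m.group(1)), -15).decode()
        php_source = php_source[:m.start()] + payload + php_source[m.end():]
    return php_source

def similarity(a: str, b: str) -> float:
    """Similarity ratio between two shells after deobfuscation."""
    return difflib.SequenceMatcher(None, deobfuscate(a), deobfuscate(b)).ratio()
```

As in the thesis's findings, comparing samples after unwrapping the obfuscation layers yields far higher similarity scores than comparing the obfuscated sources directly.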
FRAME: frame routing and manipulation engine
- Authors: Pennefather, Sean Niel
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3608 , vital:20529
- Description: This research reports on the design and implementation of FRAME: an embedded hardware network processing platform designed to perform network frame manipulation and monitoring at line speeds compliant with the IEEE 802.3 Ethernet standard. The system provides frame manipulation functionality to aid in the development and implementation of network testing environments. Platform cost and ease of use were both considered during design, resulting in the fabrication of hardware and the development of Link, a Domain Specific Language used to create custom applications that are compatible with the platform. The functionality of the resulting platform is shown through conformance testing of the designed modules and through application examples. Throughput testing showed that the peak throughput achievable by the platform is limited to 86.4 Mbit/s, comparable to commodity 100 Mbit/s hardware, and the total cost of the prototype platform ranged between $220 and $254.
- Full Text:
- Date Issued: 2016
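The Link DSL's syntax is not given in the abstract, so the sketch below illustrates in Python the kind of per-frame rule such a platform applies: parse the Ethernet header, match on a field, and rewrite part of the frame. The rule itself is an invented example.

```python
# Hypothetical per-frame rule: rewrite the destination MAC of IPv4 frames.
import struct

def rewrite_frame(frame: bytes, new_dst: bytes, match_ethertype=0x0800) -> bytes:
    dst, src, ethertype = struct.unpack_from("!6s6sH", frame)
    if ethertype == match_ethertype:      # IPv4 frames only
        return new_dst + frame[6:]        # swap in the new destination MAC
    return frame

# 60-byte minimal frame: broadcast dst, example src, EtherType 0x0800.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"\x00" * 46
out = rewrite_frame(frame, bytes.fromhex("aabbccddeeff"))
assert out[:6] == bytes.fromhex("aabbccddeeff")
```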
GPU Accelerated protocol analysis for large and long-term traffic traces
- Authors: Nottingham, Alastair Timothy
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/910 , vital:20002
- Description: This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution, and adjusts processing at runtime through warp-voting to avoid redundant memory transactions and unnecessary computation. GPF+ additionally incorporates a 128-bit in-thread cache, implemented through register shuffling, to accelerate access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded, high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side I/O (600 MB/s) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. The results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
- Full Text:
- Date Issued: 2016
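The classifier's central operation, multi-match filter predicate evaluation over raw packet bytes, can be shown on the CPU in a few lines. The filters and byte offsets below are invented examples (and assume no IP options), not the thesis's DSL or its register-based state machine encoding; GPF+ evaluates the equivalent predicates massively in parallel on the GPU.

```python
# CPU sketch of multi-match classification over raw Ethernet frames.
def field(packet: bytes, offset: int, length: int) -> int:
    return int.from_bytes(packet[offset:offset + length], "big")

FILTERS = {
    "ipv4": lambda p: field(p, 12, 2) == 0x0800,   # EtherType
    "tcp":  lambda p: field(p, 12, 2) == 0x0800 and field(p, 23, 1) == 6,
    "http": lambda p: field(p, 12, 2) == 0x0800 and field(p, 23, 1) == 6
                      and field(p, 36, 2) == 80,   # dst port, no IP options
}

def classify(packet: bytes) -> set:
    """Return every matching filter name (multi-match, not first-match)."""
    return {name for name, pred in FILTERS.items() if pred(packet)}
```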
Information security concerns around enterprise bring your own device adoption in South African higher education institutions
- Authors: Sauls, Gershwin Ashton
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3619 , vital:20530
- Description: The research carried out in this thesis is an investigation into the information security concerns around the use of personally-owned mobile devices within South African universities. This concept, more commonly known as Bring Your Own Device (BYOD), has raised many data loss concerns for organizational IT departments across various industries worldwide. Universities as institutions are designed to facilitate research and learning and, as such, have a strong culture of information sharing, which complicates the management of these data loss concerns even further. The objectives of the research were therefore to determine the acceptance levels of BYOD within South African universities in relation to the perceived security risks. Thereafter, an investigation into which security practices, if any, South African universities are using to minimize the information security concerns was carried out by means of a targeted online questionnaire. An extensive literature review was first carried out to evaluate the motivation for the research and to assess the advantages of using smartphones and tablet PCs for work-related purposes. Thereafter, to determine security concerns, other surveys and related work were consulted to determine the relevant questions for the online questionnaire. Comprehensive academic studies concerning the security aspects of BYOD within organizations were very limited, and for this reason the research took on a highly exploratory design. Finally, the research deliberated on the results of the online questionnaire and concluded with a strategy for the implementation of a mobile device security strategy for using personally-owned devices in a work-related environment.
- Full Text:
- Date Issued: 2016
Internal fingerprint extraction
- Authors: Darlow, Luke Nicholas
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2959 , vital:20347
- Description: Fingerprints are a non-invasive biometric that possesses significant advantages. However, they are subject to surface erosion and damage, distortion upon scanning, and fingerprint spoofing. The internal fingerprint exists as the undulations of the papillary junction - an intermediary layer of skin - and provides a solution to these disadvantages. Optical coherence tomography (OCT) is used to capture the internal fingerprint. A depth profile of the papillary junction throughout the OCT scans is first constructed using fuzzy c-means clustering and a fine-tuning procedure. This information is then used to define localised regions over which to average pixels for the resultant internal fingerprint. When compared to a ground-truth internal fingerprint zone, the internal fingerprint zone detected automatically is within the measured bounds of human error. With a mean-squared error of 21.3 and a structural similarity of 96.4%, the internal fingerprint zone was successfully found and described. The extracted fingerprints exceed their surface counterparts with respect to orientation certainty and NFIQ scores (both of which are respected fingerprint quality assessment criteria). Internal-to-surface fingerprint correspondence and internal fingerprint cross-correspondence were also measured. A larger scanned region is shown to be advantageous, as internal fingerprints extracted from these scans have good surface correspondence (75% had at least one true match with a surface counterpart). It is also evidenced that internal fingerprints can constitute a fingerprint database: 96% of the internal fingerprints extracted had at least one corresponding match with another internal fingerprint. When compared to surface fingerprints cropped to match the internal fingerprints' representative area and locality, the internal fingerprints outperformed these cropped surface counterparts. The internal fingerprint is an attractive biometric solution. This research develops a novel approach to extracting the internal fingerprint and is an asset to the further development of technologies surrounding fingerprint extraction from OCT scans. No earlier work has extracted or tested the internal fingerprint to the degree that this research has.
- Full Text:
- Date Issued: 2016
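The two figures quoted for the zone comparison, a mean-squared error of 21.3 and a structural similarity of 96.4%, correspond to standard image-comparison metrics that can be computed as below. This is a hedged sketch: the SSIM here is the single-window global form (Wang et al., 2004), which may differ from the exact variant used in the thesis, and the zone-detection pipeline itself is not reproduced.

```python
# Plain-NumPy versions of the two metrics used to compare fingerprint zones.
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean-squared error between two equally sized grayscale images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def ssim_global(a: np.ndarray, b: np.ndarray, L: float = 255.0) -> float:
    """Single-window SSIM; common implementations slide a local window."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    cov = np.mean((a - a.mean()) * (b - b.mean()))
    num = (2 * a.mean() * b.mean() + c1) * (2 * cov + c2)
    den = (a.mean() ** 2 + b.mean() ** 2 + c1) * (a.var() + b.var() + c2)
    return float(num / den)
```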
Selecting and augmenting a FOSS development and deployment environment for personalized video-oriented services in a Telco context
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one factor that led telecom companies to search for solutions for delivering innovative video services, using the different access technologies they manage and leveraging their capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services and not just TV. When IPTV was introduced, many thought that it would recover the revenue that telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, the IPTV implementation has not shown as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one reason could be the technical challenge that IPTV poses to service developers. One of the main motivations for the research reported in this thesis was to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivates how the IPTV architecture, with some modification, can be a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. As a result, various video-oriented services were first developed and implementation-related issues were analyzed. This helped us to identify the problems that service developers face, including the requirement to use a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface for implementing them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can be used as the basis for a good service development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and supports the various protocols needed in a consistent manner, or offers an easy way to include support for them. We used Mobicents to compose the environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues with regard to support from streaming servers in general, and open source streaming servers in particular, as well as with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers and implemented some of the features we proposed in it.
- Full Text:
- Date Issued: 2016
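The relay at the heart of an RTSP proxy can be reduced to a few lines: accept a client connection, open a connection to the origin server, and shuttle bytes in both directions. The sketch below shows only that skeleton; the thesis's proxy layers its protocol-level augmentations on top of something like this, and the host and port values are examples.

```python
# Bare-bones TCP relay of the kind an RTSP proxy is built around.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    while (data := src.recv(4096)):
        dst.sendall(data)

def serve(listen_port: int, origin: tuple) -> None:
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", listen_port))
    server.listen()
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(origin)
        # Relay in both directions; RTSP-aware rewriting would hook in here.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# serve(8554, ("rtsp-origin.example", 554))   # 554 is the standard RTSP port
```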
The design, development and evaluation of cross-platform mobile applications and services supporting social accountability monitoring
- Authors: Reynell, Edward Robin
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3652 , vital:20533
- Description: Local government processes require meaningful and effective participation from both citizens and their governments in order to remain truly democratic. This project investigates the use of mobile phones as a tool for supporting this participation. MobiSAM, a system which aims to enhance the Social Accountability Monitoring (SAM) methodology at local government level, has been designed and implemented. The research presented in this thesis examines tools and techniques for the development of cross-platform client applications, allowing access to the MobiSAM service across heterogeneous mobile platforms, handsets and interaction styles. Particular attention is paid to providing an easily navigated user interface (UI), as well as offering clear and concise visualisation capabilities. Depending on the host device, interactivity is also included within these visualisations, potentially helping provide further insight into the visualised data. Guided by the results obtained from a comprehensive baseline study of the Grahamstown area, steps are taken in an attempt to lower the barrier of entry to using the MobiSAM service, potentially maximising its market reach. These include extending client application support to all identified mobile platforms (including feature phones); providing multi-language UIs (in English, isiXhosa and Afrikaans); and ensuring client application data usage is kept to a minimum. The particular strengths of a given device are also leveraged, such as its camera capabilities and built-in Global Positioning System (GPS) module, potentially allowing for more effective engagement with local municipalities. Additionally, a Short Message Service (SMS) gateway is developed, allowing all Global System for Mobile Communications (GSM) compatible handsets access to the MobiSAM service via traditional SMS. Following an iterative, user-centred design process, a thorough evaluation of the client application is also performed to gather feedback relating to its navigation and visualisation capabilities, the results of which are used to further refine its design. A comparative usability evaluation using two different versions of the cross-platform client application is also undertaken, highlighting the perceived memorability, learnability and satisfaction of each. Results from the evaluation reveal which version of the client application is to be deployed during future pilot studies.
- Full Text:
- Date Issued: 2016
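An SMS gateway of the kind described ultimately reduces incoming messages to keyword-routed service calls. The sketch below shows that routing step only; the command keywords and replies are invented for illustration and are not MobiSAM's actual message format.

```python
# Hypothetical keyword router for incoming SMS messages.
def handle_poll(args):
    return f"Recorded vote for option {args[0]}" if args else "Usage: POLL <option>"

def handle_report(args):
    return "Your report has been logged with the municipality."

HANDLERS = {"POLL": handle_poll, "REPORT": handle_report}

def route_sms(sender: str, body: str) -> str:
    parts = body.split()
    if not parts:
        return "Empty message."
    handler = HANDLERS.get(parts[0].upper())
    return handler(parts[1:]) if handler else "Unknown command."

assert route_sms("+27821234567", "poll 2") == "Recorded vote for option 2"
```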
The development of a discovery and control environment for networked audio devices based on a study of current audio control protocols
- Authors: Eales, Andrew Arnold
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/539 , vital:19968
- Description: This dissertation develops a standard device model for networked audio devices and introduces a novel discovery and control environment that uses the developed device model. The proposed standard device model is derived from a study of current audio control protocols. Both the functional capabilities and design principles of audio control protocols are investigated with an emphasis on Open Sound Control, SNMP and IEC-62379, AES64, CopperLan and UPnP. An abstract model of networked audio devices is developed, and the model is implemented in each of the previously mentioned control protocols. This model is also used within a novel discovery and control environment designed around a distributed associative memory termed an object space. This environment challenges the accepted notions of the functionality provided by a control protocol. The study concludes by comparing the salient features of the different control protocols encountered in this study. Different approaches to control protocol design are considered, and several design heuristics for control protocols are proposed.
- Full Text:
- Date Issued: 2016
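The object space named above is a tuple-space-style associative memory: devices write descriptive tuples into the space, and controllers discover them by reading with patterns. The sketch below is a minimal, in-process illustration of that idea only; the environment in the dissertation is distributed, and the tuple layout here is an invented example.

```python
# Minimal in-process object space with pattern-matched reads.
ANY = object()   # wildcard: matches any field value

class ObjectSpace:
    def __init__(self):
        self._tuples = []

    def write(self, *tup):
        self._tuples.append(tup)

    def read(self, *pattern):
        """Return all tuples matching the pattern (ANY matches anything)."""
        return [t for t in self._tuples
                if len(t) == len(pattern)
                and all(p is ANY or p == f for p, f in zip(pattern, t))]

space = ObjectSpace()
space.write("device", "mixer-01", "inputs", 8)    # a device announces itself
print(space.read("device", ANY, "inputs", ANY))   # discovery by pattern
```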
Toward an automated botnet analysis framework: a DarkComet case study
- Authors: du Bruyn, Jeremy Cecil
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2937 , vital:20344
- Full Text:
- Date Issued: 2016
A comparison of open source and proprietary digital forensic software
- Authors: Sonnekus, Michael Hendrik
- Date: 2015
- Subjects: Computer crimes , Computer crimes -- Investigation , Electronic evidence , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4717 , http://hdl.handle.net/10962/d1017939
- Description: Scrutiny of the capabilities and accuracy of computer forensic tools is increasing as the number of incidents relying on digital evidence, and the weight of that evidence, increase. This thesis describes the capabilities of the leading proprietary and open source digital forensic tools. The capabilities of the tools were tested separately on digital media that had been formatted using Windows and Linux. Experiments were carried out with the intention of establishing whether the capabilities of open source computer forensic tools are similar to those of proprietary computer forensic tools, and whether these tools could complement one another. The tools were tested with regard to their ability to create and analyse digital forensic images in a forensically sound manner. The tests were carried out on each media type after deleting data from the media, and then repeated after formatting the media. The results of the experiments demonstrate that both proprietary and open source computer forensic tools have superior capabilities in different scenarios, and that the toolsets can be used to validate and complement one another. The implication of these findings is that investigators have an affordable means of validating their findings and are able to investigate digital media more effectively.
- Full Text:
- Date Issued: 2015
A Framework for using Open Source intelligence as a Digital Forensic Investigative tool
- Authors: Rule, Samantha Elizabeth
- Date: 2015
- Subjects: Open source intelligence , Criminal investigation , Electronic evidence
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4715 , http://hdl.handle.net/10962/d1017937
- Description: The proliferation of the Internet has amplified the use of social networking sites by creating a platform that encourages individuals to share information. As a result there is a wealth of information that is publicly and easily accessible. This research explores whether open source intelligence (OSINT), which is freely available, could be used as a digital forensic investigative tool. A survey was created and sent to digital forensic investigators to establish whether they currently use OSINT when performing investigations. The survey results confirm that OSINT is being used by digital forensic investigators when performing investigations, but that there are currently no guidelines or frameworks available to support its use. Additionally, the survey results showed a belief amongst those surveyed that evidence gleaned from OSINT sources is considered supplementary rather than evidentiary. The findings of this research led to the development of a framework that identifies and recommends key processes to follow when conducting OSINT investigations. The framework can assist digital forensic investigators to follow a structured and rigorous process, which may lead to the unanimous acceptance of information obtained via OSINT sources as evidentiary rather than supplementary in the near future.
- Full Text:
- Date Issued: 2015
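One concrete process such a framework can prescribe is provenance capture: every collected open source is recorded with a retrieval timestamp and a content hash so the evidence can later be verified. The record format below is an illustrative assumption, not taken from the thesis.

```python
# Collect an online source and preserve its provenance for later verification.
import hashlib
import json
from datetime import datetime, timezone
from urllib.request import urlopen

def collect(url: str) -> dict:
    content = urlopen(url).read()
    return {
        "url": url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
        "size": len(content),
    }

record = collect("https://example.com/")
print(json.dumps(record, indent=2))
```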
Amber: a zero-interaction honeypot with distributed intelligence
- Authors: Schoeman, Adam
- Date: 2015
- Subjects: Security systems -- Security measures , Computer viruses , Intrusion detection systems (Computer security) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4716 , http://hdl.handle.net/10962/d1017938
- Description: For the greater part, security controls are based on the principle of Decision through Detection (DtD). The exception is a honeypot, which analyses interactions between a third party and itself while occupying a piece of unused information space. As honeypots are not located on productive information resources, any interaction with them can be assumed to be non-productive. This allows the honeypot to make decisions based simply on the presence of data, rather than on its behaviour. However, owing to limited human capital, honeypot uptake in the South African market has been underwhelming. Amber attempts to change this by offering a zero-interaction security system that uses the honeypot approach of Decision through Presence (DtP) to generate a blacklist of third parties, which can be passed on to a network enforcer. Empirical testing demonstrated the usefulness of this alternative, low-cost approach to defending networks. The functionality of the system was also extended by installing nodes in different geographical locations and streaming their detections into the central Amber hive. (A minimal DtP sketch follows this record.)
- Full Text:
- Date Issued: 2015
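The core DtP idea can be pictured with a minimal sketch; this is an assumption-laden simplification, not Amber's actual implementation or its hive protocol. A listener occupies an otherwise unused address and port, so the mere presence of a connection is sufficient grounds to blacklist the source; the port and output file below are illustrative.

```python
# Minimal Decision through Presence sensor (illustrative, not Amber itself).
import socket
from datetime import datetime, timezone

BIND_ADDR = ("0.0.0.0", 2222)   # hypothetical port on unused address space
BLACKLIST = "blacklist.txt"     # hand-off point for a network enforcer

def run_sensor() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(BIND_ADDR)
        srv.listen()
        while True:
            conn, (ip, _port) = srv.accept()
            conn.close()        # zero interaction: never talk back
            stamp = datetime.now(timezone.utc).isoformat()
            with open(BLACKLIST, "a") as f:
                f.write(f"{ip} {stamp}\n")
            print(f"blacklisted {ip} at {stamp}")

if __name__ == "__main__":
    run_sensor()
```

Because the listener occupies unused space, no behavioural analysis is needed: any connection at all is treated as hostile, which is what keeps the human cost of running such a sensor low.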
An analysis of malware evasion techniques against modern AV engines
- Authors: Haffejee, Jameel
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20979 , http://hdl.handle.net/10962/5821
- Description: This research empirically tested the response of antivirus applications to binaries that use virus-like evasion techniques. To achieve this, a set of binaries was processed using several evasion methods and then deployed against a range of antivirus engines. The research also documents the process of setting up an environment for testing antivirus engines, including building the evasion techniques used in the tests. The results illustrate that an attacker can evade multiple antivirus engines without much effort using well-known evasion techniques. Furthermore, some antivirus engines may respond to the occurrence of an evasion technique rather than to the presence of any malicious code. In practical terms, this shows that while antivirus applications are useful for protecting against known threats, their effectiveness against unknown or modified threats is limited. (A sketch of a simple scanning harness follows this record.)
- Full Text:
- Date Issued: 2015
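The testing environment described can be pictured with a small harness like the sketch below; it is illustrative only, and the engines, samples and evasion builds used in the research are not reproduced here. The harness scans a directory of samples with a command-line engine, using ClamAV's clamscan as a stand-in, and records each verdict (clamscan exits 0 for clean and 1 for a detection).

```python
# Illustrative AV test harness (assumed setup, not the thesis environment).
import subprocess
from pathlib import Path

SAMPLES = Path("samples")  # hypothetical directory of test binaries

def scan_with_clamav(sample: Path) -> bool:
    """Return True if clamscan flags the sample as malicious."""
    result = subprocess.run(
        ["clamscan", "--no-summary", str(sample)],
        capture_output=True, text=True,
    )
    return result.returncode == 1  # 0 = clean, 1 = detected, 2 = error

if __name__ == "__main__":
    for sample in sorted(SAMPLES.iterdir()):
        verdict = "detected" if scan_with_clamav(sample) else "missed"
        print(f"{sample.name}: {verdict}")
```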
An analysis of the risk exposure of adopting IPv6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: The greatly increased address pool of IPv6 changes its resource impact on the enterprise: if not adequately addressed, risks that are locally significant under IPv4 can become risks that impact the enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes on the enterprise environment, which may negatively affect organisational security if the nuances of IPv6 are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks following the introduction of IPv6. Global trends are discussed to provide insight into and background on the IPv6 research space, and an analysis of the current state of readiness in enterprise networks motivates the value of this research. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling, and to protect the enterprise access layer, are discussed. A series of case studies identifies and analyses the impact of certain changes in the IPv6 protocol on enterprise networks, and identifies mitigation techniques to reduce risk. (A sketch of one such base control follows this record.)
- Full Text:
- Date Issued: 2015
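As one illustration of such a base control (a hedged sketch, not a control specified in the thesis), the snippet below watches for IPv6-in-IPv4 tunnelling that would bypass IPv4-only controls: 6in4/6to4 traffic is carried as IP protocol 41 and Teredo as UDP port 3544, so seeing either on a network where tunnelling is not sanctioned is an early warning. It assumes the scapy library and packet-capture privileges.

```python
# Illustrative detector for unsanctioned IPv6 tunnelling over IPv4.
from scapy.all import IP, sniff  # pip install scapy; run with capture rights

def flag_tunnel(pkt) -> None:
    """Report the IPv4 endpoints of a suspected tunnel packet."""
    if IP in pkt:
        print(f"possible IPv6 tunnel traffic: {pkt[IP].src} -> {pkt[IP].dst}")

if __name__ == "__main__":
    # BPF filter: protocol-41 encapsulation (6in4/6to4) or Teredo's UDP port
    sniff(filter="ip proto 41 or udp port 3544", prn=flag_tunnel, store=False)
```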
An investigation into the role played by perceived security concerns in the adoption of mobile money services: a Zimbabwean case study
- Authors: Madebwe, Charles
- Date: 2015
- Subjects: Banks and banking, Mobile -- Zimbabwe , Global system for mobile communications , Cell phones -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4711 , http://hdl.handle.net/10962/d1017933
- Description: The ubiquitous nature and popularity of mobile phones have led to opportunistic value added services (VAS), such as mobile money, being implemented on top of them. Several studies have examined the factors that influence the adoption of mobile money and other information systems. This thesis looks at the factors determining the uptake of mobile money over cellular networks, with special emphasis on perceived security, although other factors, namely perceived usefulness, perceived ease of use, perceived trust and perceived cost, were also examined. The research further looks at the security threats introduced to mobile money by the nature, architecture, standards and protocols of the Global System for Mobile Communications (GSM). The model employed for this research was the Technology Acceptance Model (TAM). A literature review of GSM security was conducted, and data was collected from a sample population around Harare, Zimbabwe using physical questionnaires. Statistical tests were performed on the collected data to determine the significance of each construct to mobile money adoption. The research found a positive correlation between perceived security concerns and the adoption of mobile money services over cellular networks. Perceived usefulness was found to be the most important factor in the adoption of mobile money. The research also found that customers need to trust the network service provider and the systems in use before they will adopt mobile money. Other factors driving consumer adoption were found to be perceived ease of use and perceived cost. The findings show that players who intend to introduce mobile money should strive to offer secure, useful and trustworthy systems without making the service expensive or difficult to use. The literature review also showed that there is a possibility of compromising mobile money transactions carried out over GSM. (A sketch of such a construct-significance test follows this record.)
- Full Text:
- Date Issued: 2015
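As an illustration of the kind of statistical test described (a sketch with hypothetical column names, not the thesis data or its exact analysis), the snippet below computes a Pearson correlation between each TAM construct and adoption intention across survey respondents, reporting the significance of each.

```python
# Illustrative TAM construct-significance test (hypothetical data file).
import pandas as pd
from scipy.stats import pearsonr  # pip install pandas scipy

# Each row is one respondent; scores are Likert-scale means per construct.
df = pd.read_csv("survey_responses.csv")  # hypothetical file and columns

for construct in ["perceived_security", "perceived_usefulness",
                  "perceived_ease_of_use", "perceived_trust", "perceived_cost"]:
    r, p = pearsonr(df[construct], df["adoption_intention"])
    print(f"{construct}: r={r:.2f}, p={p:.4f}")
```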
An investigation of ISO/IEC 27001 adoption in South Africa
- Authors: Coetzer, Christo
- Date: 2015
- Subjects: ISO 27001 Standard , Information technology -- Security measures , Computer security , Data protection
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4720 , http://hdl.handle.net/10962/d1018669
- Description: The research objective of this study is to investigate the low adoption of the ISO/IEC 27001 standard in South African organisations. This study does not differentiate between the ISO/IEC 27001:2005 and ISO/IEC 27001:2013 versions, as the focus is on adoption of the ISO/IEC 27001 standard. A survey-based research design was selected as the data collection method. The research instruments used in this study include a web-based questionnaire and in-person interviews with the participants. Based on the findings of this research, the organisations that participated in this study have an understanding of the ISO/IEC 27001 standard; however, fewer than a quarter of these have fully adopted the ISO/IEC 27001 standard. Furthermore, the main business objectives for organisations that have adopted the ISO/IEC 27001 standard were to ensure legal and regulatory compliance, and to fulfil client requirements. An Information Security Management System management guide based on the ISO/IEC 27001 Plan-Do-Check-Act model is developed to help organisations interested in the standard move towards ISO/IEC 27001 compliance.
- Full Text:
- Date Issued: 2015
Building an e-health system for health awareness campaigns in poor areas
- Authors: Gremu, Chikumbutso David
- Date: 2015
- Subjects: National health services -- South Africa , Medical informatics , Public health -- Information services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4708 , http://hdl.handle.net/10962/d1017930
- Description: Appropriate e-services, as well as revenue generation capabilities, are key to the deployment and sustainability of ICT installations in poor areas, which are particularly common in developing countries. E-Health is a promising area for e-services that are both important to the population in those areas and potentially of direct interest to National Health Organizations, which already spend money on health campaigns there. This thesis focuses on the design, implementation and full functional testing of HealthAware, an application that allows health organizations to set up targeted awareness campaigns for poor areas. Requirements for such an application are very specific, starting from the fact that the preparation of a campaign and its execution/consumption happen in two different environments from a technological and social point of view. Part of the research work for this thesis was to make these requirements explicit and then use them in the design. This phase was facilitated by the fact that the work was executed within the context of the Siyakhula Living Lab (SLL; www.siyakhulaLL.org), which has accumulated multi-year experience of ICT deployment in such areas. Based on the requirements found, HealthAware comprises two web-based Java applications that run in a peer-to-peer fashion. The first component, the Dashboard, is used to create, manage and publish information for conducting awareness campaigns or surveys. The second component, HealthMessenger, facilitates users' access to the campaigns or surveys created using the Dashboard. HealthMessenger is hosted on TeleWeaver, while the Dashboard is hosted independently and communicates with HealthMessenger through web services. TeleWeaver is an application integration platform developed within the SLL to host software applications for poor areas. Using a core service of TeleWeaver, the profile service, which contains all the users' defining elements, campaigns and surveys can be targeted easily and effectively, for example to match specific demographics or geographic locations. Revenue generation is attained by logging the interactions of target users with the applications in TeleWeaver, from which billing data is generated according to the specific contractual agreements with the National Health Organization. From a general point of view, HealthAware contributes to the concrete realization of a bidirectional access channel between health organizations and users in poor communities, which not only allows the communication of appropriate content in both directions, but is also 'monetized' and in so doing becomes a revenue generator. (A sketch of the web-service interaction follows this record.)
- Full Text:
- Date Issued: 2015
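The Dashboard-to-HealthMessenger interaction can be pictured as a simple web-service call. The sketch below is entirely hypothetical: the thesis does not specify the endpoint, payload or port, so every name here is an assumption used purely for illustration, and Python stands in for the Java used in the actual system.

```python
# Hypothetical sketch of the Dashboard publishing a campaign to HealthMessenger.
import json
import urllib.request

def publish_campaign(base_url: str, campaign: dict) -> int:
    """POST a campaign definition to an assumed HealthMessenger endpoint."""
    req = urllib.request.Request(
        f"{base_url}/campaigns",              # hypothetical endpoint
        data=json.dumps(campaign).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    status = publish_campaign("http://localhost:8080", {
        "title": "Hand-washing awareness",
        "target_demographic": "adults",       # matched via the profile service
        "messages": ["Wash hands with soap for 20 seconds."],
    })
    print("publish status:", status)
```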