A longitudinal study of DNS traffic: understanding current DNS practice and abuse
- Authors: Van Zyl, Ignus
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3707 , vital:20537
- Description: This thesis examines a dataset spanning 21 months, containing 3.5 billion DNS packets. Traffic on TCP and UDP port 53 was captured on a production /24 IP block. The purpose of this thesis is twofold: first, to create an understanding of current practice and behavior within the DNS infrastructure; second, to explore current threats faced by the DNS and the various systems that implement it. This is achieved by drawing on analysis and observations from the captured data. Aspects of the operation of DNS on the greater Internet are considered in this research with reference to the observed trends in the dataset. A thorough analysis of current DNS TTL implementation is made with respect to all response traffic, as well as sections looking at observed DNS TTL values for .za domain replies and NXDOMAIN-flagged replies. This thesis found that TTL values implemented are much lower than has been recommended in previous years, and that the TTL decrease is prevalent in most, but not all, TTL implementations. With respect to the nature of DNS operations, this thesis also concerns itself with an analysis of the geolocation of authoritative servers for local (.za) domains, and offers further observations on the latency generated by the choice of authoritative server location for a given .za domain. It was found that the majority of .za domain authoritative servers are international, which results in latencies multiple times greater than observed latencies for local authoritative servers. Further analysis is done with respect to NXDOMAIN behavior captured across the dataset. These findings outlined the cost of DNS misconfiguration as well as highlighting instances of NXDOMAIN generation through malicious practice. With respect to DNS abuses, original research on long-term scanning generated as a result of amplification attack activity on the greater Internet is presented.
Many instances of amplification domain scans were captured, and an attempt is made to correlate that activity temporally with known amplification attack reports. The final area that this thesis deals with is the relatively new field of bitflipping and bitsquatting, delivering results on bitflip detection and evaluation over the course of the entire dataset. The detection methodology is outlined, and the final results are compared to findings given in recent bitflip literature.
- Full Text:
- Date Issued: 2016
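The bitsquatting idea behind this record's detection work rests on a simple mechanism: a single flipped bit in memory can silently turn one domain name into another valid one. A minimal sketch of candidate generation is shown below; it is illustrative only and not the thesis's actual detection methodology, and the validity filter (alphanumerics, hyphen, and dot) and lowercase folding are assumptions of this sketch.

```python
def bitflip_variants(domain: str) -> set:
    """Generate every domain reachable from `domain` by a single bit flip."""
    variants = set()
    data = domain.encode("ascii")
    for i in range(len(data)):
        for bit in range(8):
            flipped = bytearray(data)
            flipped[i] ^= 1 << bit  # flip exactly one bit
            # keep only variants that still look like plausible DNS names
            if all(b < 128 and (chr(b).isalnum() or chr(b) in "-.")
                   for b in flipped):
                v = flipped.decode("ascii").lower()
                if v != domain:
                    variants.add(v)
    return variants
```

A detector built this way would precompute the variant set for a monitored domain and flag any captured query whose qname falls inside it.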
An investigation into the prevalence and growth of phishing attacks against South African financial targets
- Authors: Lala, Darshan Magan
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3157 , vital:20379
- Description: Phishing in the electronic communications medium is the act of sending unsolicited email messages with the intention of masquerading as a reputed business. The objective is to deceive the recipient into divulging personal and sensitive information such as bank account details, credit card numbers and passwords. Financial services are the most common targets for scammers. Phishing attacks in South Africa have cost businesses and consumers substantial financial losses. This research investigated existing literature to understand the basic concepts of email, phishing, spam and how these fit together. The research also looks into the increasing growth of phishing worldwide and in particular against South African targets. A quantitative study is performed and reported on; this involves the study and analysis of phishing statistics in a data set provided by the South African Anti-Phishing Working Group. The data set contains phishing URL information, country code where the site has been hosted, targeted company name, IP address information and timestamp of the phishing site. The data set contains 161 different phishing targets. The research primarily focuses on the trend in phishing attacks against six South African based financial institutions, but also correlates this with the overall global trend using statistical analysis. The results from the study of the data set are compared to existing statistics and literature regarding the prevalence and growth of phishing in South Africa. The question that this research answers is whether or not the prevalence and growth of phishing in South Africa correlates with the global trend in phishing attacks. The findings indicate that certain correlations exist between some of the South African phishing targets and global phishing trends.
- Full Text:
- Date Issued: 2016
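The correlation question this record answers can be illustrated with a Pearson product-moment coefficient over paired time series. The sketch below uses invented monthly counts, not figures from the Anti-Phishing Working Group data set; it only shows the shape of the statistical test.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical monthly phishing-site counts (not from the actual data set)
sa_counts     = [12, 18, 9, 22, 30, 25]
global_counts = [410, 520, 380, 610, 750, 640]
r = pearson(sa_counts, global_counts)
```

A coefficient near 1.0 would indicate that the local series tracks the global trend, which is the kind of relationship the thesis tests per target institution.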
An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization
- Authors: Hedges, M L
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2992 , vital:20350
- Description: This thesis investigates the feasibility of using gestures as a means of control for localizing three dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition, which translates human gestures into control messages for the prototype system, which in turn performs actions based on the recognized gestures. The term distributed in the context of this system refers to the audio processing capacity. The prototype system partitions and allocates the processing load between a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available for end users. The successful implementation and results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems. This removes the constraint of a centralized mixing console with a predetermined speaker configuration.
- Full Text:
- Date Issued: 2016
An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization
- Authors: Hedges, Mitchell Lawrence
- Date: 2016
- Subjects: Human-computer interaction , Acoustic localization , Sound -- Equipment and supplies , Acoustical engineering , Surround-sound systems , Wireless sensor nodes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4724 , http://hdl.handle.net/10962/d1020615
- Description: This thesis investigates the feasibility of using gestures as a means of control for localizing three dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition, which translates human gestures into control messages for the prototype system, which in turn performs actions based on the recognized gestures. The term distributed in the context of this system refers to the audio processing capacity. The prototype system partitions and allocates the processing load between a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available for end users. The successful implementation and results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems. This removes the constraint of a centralized mixing console with a predetermined speaker configuration.
- Full Text:
- Date Issued: 2016
Detecting derivative malware samples using deobfuscation-assisted similarity analysis
- Authors: Wrench, Peter Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/383 , vital:19954
- Description: The overwhelming popularity of PHP as a hosting platform has made it the language of choice for developers of Remote Access Trojans (RATs or web shells) and other malicious software. These shells are typically used to compromise and monetise web platforms by providing the attacker with basic remote access to the system, including file transfer, command execution, network reconnaissance, and database connectivity. Once infected, compromised systems can be used to defraud users by hosting phishing sites, performing Distributed Denial of Service attacks, or serving as anonymous platforms for sending spam or other malfeasance. The vast majority of these threats are largely derivative, incorporating core capabilities found in more established RATs such as c99 and r57. Authors of malicious software routinely produce new shell variants by modifying the behaviours of these ubiquitous RATs, either to add desired functionality or to avoid detection by signature-based detection systems. Once these modified shells are eventually identified (or additional functionality is required), the process of shell adaptation begins again. The end result of this iterative process is a web of separate but related shell variants, many of which are at least partially derived from one of the more popular and influential RATs. In response to the problem outlined above, the author set out to design and implement a system capable of circumventing common obfuscation techniques and identifying derivative malware samples in a given collection. To begin with, a decoder component was developed to syntactically deobfuscate and normalise PHP code by detecting and reversing idiomatic obfuscation constructs, and to apply uniform formatting conventions to all system inputs.
A unified malware analysis framework, called Viper, was then extended to create a modular similarity analysis system comprising individual feature extraction modules, modules responsible for batch processing, a matrix module for comparing sample features, and two visualisation modules capable of generating visual representations of shell similarity. The principal conclusion of the research was that the deobfuscation performed by the decoder component prior to analysis dramatically improved the observed levels of similarity between test samples. This in turn allowed the modular similarity analysis system to identify derivative clusters (or families) within a large collection of shells more accurately. Techniques for isolating and re-rendering these clusters were also developed and demonstrated to be effective at increasing the amount of detail available for evaluating the relative magnitudes of the relationships within each cluster.
- Full Text:
- Date Issued: 2016
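The central claim of this record, that normalisation before comparison raises observed similarity between derivative samples, can be sketched in a few lines. The normalisation here (whitespace collapsing and case folding) is a crude stand-in for the thesis's far more thorough decoder stage, and the use of `difflib` is this sketch's choice, not Viper's actual feature-matrix approach.

```python
import difflib
import re

def normalise(php_source: str) -> str:
    # crude normalisation standing in for the decoder component:
    # collapse all whitespace runs and fold case
    return re.sub(r"\s+", " ", php_source).strip().lower()

def similarity(a: str, b: str) -> float:
    # ratio in [0, 1]; 1.0 means identical after normalisation
    return difflib.SequenceMatcher(None, normalise(a), normalise(b)).ratio()

# two trivially reformatted variants of the same one-line shell
shell_a = "<?php  eval($_POST['cmd']); ?>"
shell_b = "<?php\neval($_POST['cmd']);\n?>"
```

Even this toy normaliser makes superficially different variants compare as identical, which is the effect the decoder component produces at scale.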
FRAME: frame routing and manipulation engine
- Authors: Pennefather, Sean Niel
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3608 , vital:20529
- Description: This research reports on the design and implementation of FRAME: an embedded hardware network processing platform designed to perform network frame manipulation and monitoring. This is possible at line speeds compliant with the IEEE 802.3 Ethernet standard. The system provides frame manipulation functionality to aid in the development and implementation of network testing environments. Platform cost and ease of use are both considered during design, resulting in the fabrication of hardware and the development of Link, a Domain Specific Language used to create custom applications that are compatible with the platform. Functionality of the resulting platform is shown through conformance testing of designed modules and application examples. Throughput testing showed that the peak throughput achievable by the platform is limited to 86.4 Mbit/s, comparable to commodity 100 Mbit hardware, and the total cost of the prototype platform ranged between $220 and $254.
- Full Text:
- Date Issued: 2016
GPU Accelerated protocol analysis for large and long-term traffic traces
- Authors: Nottingham, Alastair Timothy
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/910 , vital:20002
- Description: This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability, whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively-parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution, and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp-voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to accelerate access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600MBps) in all tests.
GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
- Full Text:
- Date Issued: 2016
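The layered field extraction that a packet classifier performs can be shown at a tiny, sequential scale: read the EtherType from a raw Ethernet frame, then, for IPv4, read the protocol field at its fixed offset. This is only a CPU-side sketch of the general idea; GPF+ itself evaluates such predicates massively in parallel on the GPU, and the field offsets used here are the standard Ethernet/IPv4 ones, not anything specific to the thesis.

```python
import struct

def classify(frame: bytes) -> dict:
    """Minimal sequential classifier over one raw Ethernet frame."""
    # EtherType lives at bytes 12-13 of the Ethernet header (big-endian)
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x0800:              # not IPv4
        return {"ethertype": ethertype}
    # IPv4 protocol field is byte 9 of the IP header (after the 14-byte
    # Ethernet header); 17 = UDP, 6 = TCP
    proto = frame[14 + 9]
    return {"ethertype": ethertype, "proto": proto}
```

A real trace-analysis pipeline would run a predicate like this per packet over billions of records, which is precisely the workload GPF+ moves onto the GPU.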
Information security concerns around enterprise bring your own device adoption in South African higher education institutions
- Authors: Sauls, Gershwin Ashton
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3619 , vital:20530
- Description: The research carried out in this thesis is an investigation into the information security concerns around the use of personally-owned mobile devices within South African universities. This concept, which is more commonly known as Bring Your Own Device or BYOD, has raised many data loss concerns for organizational IT departments across various industries worldwide. Universities as institutions are designed to facilitate research and learning and as such have a strong culture toward the sharing of information, which complicates management of these data loss concerns even further. As such, the objectives of the research were to determine the acceptance levels of BYOD within South African universities in relation to the perceived security risks. Thereafter, an investigation into which security practices, if any, South African universities are using to minimize the information security concerns was carried out by means of a targeted online questionnaire. An extensive literature review was first carried out to evaluate the motivation for the research and to assess the advantages of using smartphones and tablet PCs for work-related purposes. Thereafter, to determine security concerns, other surveys and related work were consulted to determine the relevant questions needed by the online questionnaire. The quantity of comprehensive academic studies concerning the security aspects of BYOD within organizations was very limited, and for this reason the research took on a highly exploratory design. Finally, the research deliberated on the results of the online questionnaire and concluded with a strategy for the implementation of mobile device security when using personally-owned devices in a work-related environment.
- Full Text:
- Date Issued: 2016
Internal fingerprint extraction
- Authors: Darlow, Luke Nicholas
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2959 , vital:20347
- Description: Fingerprints are a non-invasive biometric that possesses significant advantages. However, they are subject to surface erosion and damage, distortion upon scanning, and are vulnerable to fingerprint spoofing. The internal fingerprint exists as the undulations of the papillary junction, an intermediary layer of skin, and provides a solution to these disadvantages. Optical coherence tomography (OCT) is used to capture the internal fingerprint. A depth profile of the papillary junction throughout the OCT scans is first constructed using fuzzy c-means clustering and a fine-tuning procedure. This information is then used to define localised regions over which to average pixels for the resultant internal fingerprint. When compared to a ground-truth internal fingerprint zone, the internal fingerprint zone detected automatically is within the measured bounds of human error. With a mean-squared-error of 21.3 and structural similarity of 96.4%, the internal fingerprint zone was successfully found and described. The extracted fingerprints exceed their surface counterparts with respect to orientation certainty and NFIQ scores (both of which are respected fingerprint quality assessment criteria). Internal-to-surface fingerprint correspondence and internal fingerprint cross-correspondence were also measured. A larger scanned region is shown to be advantageous, as internal fingerprints extracted from these scans have good surface correspondence (75% had at least one true match with a surface counterpart). It is also evidenced that internal fingerprints can constitute a fingerprint database: 96% of the internal fingerprints extracted had at least one corresponding match with another internal fingerprint. When compared to surface fingerprints cropped to match the internal fingerprints' representative area and locality, the internal fingerprints outperformed these cropped surface counterparts. The internal fingerprint is an attractive biometric solution. 
This research develops a novel approach to extracting the internal fingerprint and is an asset to the further development of technologies surrounding fingerprint extraction from OCT scans. No earlier work has extracted or tested the internal fingerprint to the degree that this research has.
- Full Text:
- Date Issued: 2016
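The Darlow abstract names fuzzy c-means clustering as the tool used to build the depth profile of the papillary junction from OCT scans. As a hedged illustration of the general technique only (not the thesis's actual implementation; the function name, cluster count, and data are illustrative), a minimal 1-D fuzzy c-means sketch:

```python
import numpy as np

def fuzzy_c_means(samples, n_clusters=2, m=2.0, max_iter=100, tol=1e-6, seed=0):
    """Minimal fuzzy c-means for 1-D samples (e.g. per-depth OCT intensities).

    Returns (centres, memberships), where memberships[i, k] is the degree to
    which sample i belongs to cluster k (each row sums to 1).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(samples, dtype=float).reshape(-1, 1)   # (N, 1)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # normalise rows
    for _ in range(max_iter):
        um = u ** m
        # Cluster centres: membership-weighted means of the samples.
        centres = (um.T @ x) / um.sum(axis=0)[:, None]    # (c, 1)
        # Distance of each sample to each centre (epsilon avoids div-by-zero).
        d = np.abs(x - centres.T) + 1e-12                 # (N, c)
        # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u_new = 1.0 / ratio.sum(axis=2)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centres.ravel(), u

# Two well-separated intensity groups should yield centres near 1.0 and 5.0.
centres, memberships = fuzzy_c_means([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
```

In the thesis's setting the soft memberships, rather than a hard assignment, are what make a subsequent fine-tuning step natural: boundary pixels carry graded cluster affinity.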
Selecting and augmenting a FOSS development and deployment environment for personalized video-oriented services in a Telco context
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one contributing factor that led telecom companies to search for solutions to deliver innovative video services, using the different access technologies they manage and leveraging their capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services and not just television. When IPTV was introduced, many thought that it would recover the revenue that telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, IPTV has not shown as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one is the technical challenge that IPTV poses to service developers. One of the main motivations for the research reported in this thesis was to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivates how the IPTV architecture, with some modification, can be a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. As a result, various video-oriented services were first developed and implementation-related issues were analyzed. 
This helped us to identify problems that service developers face, including the requirement to use a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface to implement them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can be used as the basis for a good service development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for the quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and either supports the various protocols needed in a consistent manner or offers an easy way to add support for them. We used Mobicents to compose an environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues with the support offered by streaming servers in general, and open source streaming servers in particular, as well as with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers and implemented some of the features we proposed in it.
- Full Text:
- Date Issued: 2016
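The Shibeshi abstract describes an RTSP proxy built to augment streaming servers. As a hedged sketch of one small building block such a proxy needs (not the thesis's code; the function name is illustrative), here is a parser for an RTSP/1.0 request line in the `METHOD uri RTSP/1.0` form defined by RFC 2326:

```python
# RTSP/1.0 methods defined in RFC 2326 that a simple proxy might route on.
RTSP_METHODS = {"OPTIONS", "DESCRIBE", "ANNOUNCE", "SETUP",
                "PLAY", "PAUSE", "TEARDOWN", "GET_PARAMETER",
                "SET_PARAMETER", "REDIRECT", "RECORD"}

def parse_rtsp_request_line(line):
    """Parse an RTSP request line, e.g. 'SETUP rtsp://host/track1 RTSP/1.0'.

    Returns (method, uri, version); raises ValueError for non-RTSP input,
    which is how a proxy would reject stray HTTP or garbage traffic.
    """
    parts = line.strip().split(" ", 2)
    if len(parts) != 3:
        raise ValueError(f"malformed request line: {line!r}")
    method, uri, version = parts
    if method not in RTSP_METHODS or not version.startswith("RTSP/"):
        raise ValueError(f"not an RTSP request line: {line!r}")
    return method, uri, version
```

A proxy would dispatch on the returned method, for example passing `DESCRIBE` upstream while intercepting `SETUP` to rewrite transport parameters.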
The design, development and evaluation of cross-platform mobile applications and services supporting social accountability monitoring
- Authors: Reynell, Edward Robin
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3652 , vital:20533
- Description: Local government processes require meaningful and effective participation from both citizens and their governments in order to remain truly democratic. This project investigates the use of mobile phones as a tool for supporting this participation. MobiSAM, a system which aims to enhance the Social Accountability Monitoring (SAM) methodology at local government level, has been designed and implemented. The research presented in this thesis examines tools and techniques for the development of cross-platform client applications, allowing access to the MobiSAM service, across heterogeneous mobile platforms, handsets and interaction styles. Particular attention is paid to providing an easily navigated user interface (UI), as well as offering clear and concise visualisation capabilities. Depending on the host device, interactivity is also included within these visualisations, potentially helping provide further insight into the visualised data. Guided by the results obtained from a comprehensive baseline study of the Grahamstown area, steps are taken in an attempt to lower the barrier of entry to using the MobiSAM service, potentially maximising its market reach. These include extending client application support to all identified mobile platforms (including feature phones); providing multi-language UIs (in English, isiXhosa and Afrikaans); as well as ensuring client application data usage is kept to a minimum. The particular strengths of a given device are also leveraged, such as its camera capabilities and built-in Global Positioning System (GPS) module, potentially allowing for more effective engagement with local municipalities. Additionally, a Short Message Service (SMS) gateway is developed, allowing all Global System for Mobile Communications (GSM) compatible handsets access to the MobiSAM service via traditional SMS. 
Following an iterative, user-centred design process, a thorough evaluation of the client application is also performed, in an attempt to gather feedback relating to the navigation and visualisation capabilities; the results are used to further refine its design. A comparative usability evaluation using two different versions of the cross-platform client application is also undertaken, highlighting the perceived memorability, learnability and satisfaction of each. The results of the evaluation reveal which version of the client application is to be deployed during future pilot studies.
- Full Text:
- Date Issued: 2016
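The Reynell abstract describes an SMS gateway that gives any GSM handset access to the MobiSAM service. As a purely hypothetical sketch of the keyword-dispatch pattern such a gateway typically uses (the keywords, replies, and function name are invented for illustration and are not MobiSAM's actual command set):

```python
def route_sms(message):
    """Toy dispatcher for an SMS gateway: map a keyword-prefixed text
    message to a service query. Keywords and replies are hypothetical."""
    handlers = {
        # keyword -> handler taking the remainder of the message as argument
        "WATER": lambda args: f"Latest water report for ward {args or '?'}",
        "POLL":  lambda args: f"Current poll results: {args or 'all polls'}",
    }
    keyword, _, args = message.strip().partition(" ")
    handler = handlers.get(keyword.upper())
    if handler is None:
        return "Unknown command. Try: WATER <ward>, POLL <id>"
    return handler(args.strip())
```

The point of the pattern is that a plain-SMS front end imposes its own constraints: one round trip per message, no session state on the handset, and replies short enough for a single SMS.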
The development of a discovery and control environment for networked audio devices based on a study of current audio control protocols
- Authors: Eales, Andrew Arnold
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/539 , vital:19968
- Description: This dissertation develops a standard device model for networked audio devices and introduces a novel discovery and control environment that uses the developed device model. The proposed standard device model is derived from a study of current audio control protocols. Both the functional capabilities and design principles of audio control protocols are investigated with an emphasis on Open Sound Control, SNMP and IEC-62379, AES64, CopperLan and UPnP. An abstract model of networked audio devices is developed, and the model is implemented in each of the previously mentioned control protocols. This model is also used within a novel discovery and control environment designed around a distributed associative memory termed an object space. This environment challenges the accepted notions of the functionality provided by a control protocol. The study concludes by comparing the salient features of the different control protocols encountered in this study. Different approaches to control protocol design are considered, and several design heuristics for control protocols are proposed.
- Full Text:
- Date Issued: 2016
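The Eales abstract builds its discovery and control environment around an object space, a distributed associative memory in the tuple-space tradition. As a hedged, single-process sketch of that coordination style only (class and method names are illustrative, and a real object space would be networked and support blocking reads):

```python
import threading

class ObjectSpace:
    """Toy associative memory: clients coordinate by writing tuples and
    retrieving them by pattern (None = wildcard), never by addressing each
    other directly. This decoupling is what lets devices be discovered."""

    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    @staticmethod
    def _matches(template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def write(self, tup):
        with self._lock:
            self._tuples.append(tuple(tup))

    def read(self, template):
        """Return a matching tuple without removing it, or None."""
        with self._lock:
            for tup in self._tuples:
                if self._matches(template, tup):
                    return tup
        return None

    def take(self, template):
        """Remove and return a matching tuple, or None."""
        with self._lock:
            for i, tup in enumerate(self._tuples):
                if self._matches(template, tup):
                    return self._tuples.pop(i)
        return None

# A device publishes a parameter; a controller discovers it by pattern alone.
space = ObjectSpace()
space.write(("mixer-1", "channel/3/gain", -12.0))
found = space.read(("mixer-1", None, None))
```

The contrast with the surveyed protocols is that here discovery and control collapse into one operation: matching a template against shared state.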
Toward an automated botnet analysis framework: a darkcomet case-study
- Authors: du Bruyn, Jeremy Cecil
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2937 , vital:20344
- Full Text:
- Date Issued: 2016