A formalised ontology for network attack classification
- Authors: Van Heerden, Renier Pelser
- Date: 2014
- Subjects: Computer networks -- Security measures , Computer security , Computer crimes -- Investigation , Computer crimes -- Prevention
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4691 , http://hdl.handle.net/10962/d1011603
- Description: One of the most popular attack vectors against computers is their network connection. Attacks on computers through their networks are commonplace and have various levels of complexity. This research describes network-based computer attacks in three ways: as a story, formally, and within an ontology. The ontology categorises network attacks, with attack scenarios as the focal class. This class consists of: Denial-of-Service, Industrial Espionage, Web Defacement, Unauthorised Data Access, Financial Theft, Industrial Sabotage, Cyber-Warfare, Resource Theft, System Compromise, and Runaway Malware. This ontology was developed by building a taxonomy and a temporal network attack model. Network attack instances (also known as individuals) are classified according to their respective attack scenarios, with the use of an automated reasoner within the ontology. The automated reasoner's deductions are verified formally; and via the automated reasoner, a relaxed set of scenarios is determined, which is relevant in a near real-time environment. A prototype system (called Aeneas) was developed to classify network-based attacks. Aeneas integrates sensors into a detection system that can classify network attacks in a near real-time environment. To verify the ontology and the prototype Aeneas, a virtual test bed was developed in which network-based attacks were generated. Aeneas was able to detect incoming attacks and classify them according to their scenario. The novel part of this research is the attack scenarios that are described in the form of a story, as well as formally and in an ontology. The ontology is used in a novel way to determine to which class attack instances belong and how the network attack ontology is affected in a near real-time environment.
- Full Text:
- Date Issued: 2014
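The abstract above describes classifying attack instances into scenario classes with an automated reasoner. As a loose illustration only (not the thesis's OWL ontology, its reasoner, or the Aeneas prototype), the Python sketch below classifies an attack instance by checking its observed features against hypothetical defining features for a few of the ten scenario classes; every feature name here is invented for the example.

```python
# Toy stand-in for ontology-based classification: a scenario "class" is
# defined by a set of required features, and an instance belongs to every
# scenario whose definition it satisfies. Feature names are hypothetical.
SCENARIOS = {
    "Denial-of-Service":        {"floods_target", "degrades_availability"},
    "Web Defacement":           {"gains_web_access", "modifies_web_content"},
    "Unauthorised Data Access": {"bypasses_access_control", "reads_protected_data"},
    "Resource Theft":           {"gains_system_access", "consumes_resources"},
}

def classify(observed: set) -> list:
    """Return every scenario whose defining features are all observed."""
    return [name for name, required in SCENARIOS.items() if required <= observed]

# An instance that floods a target and degrades availability:
print(classify({"floods_target", "degrades_availability"}))
# ['Denial-of-Service']
```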
A framework for high speed lexical classification of malicious URLs
- Authors: Egan, Shaun Peter
- Date: 2014
- Subjects: Internet -- Security measures -- Research , Uniform Resource Identifiers -- Security measures -- Research , Neural networks (Computer science) -- Research , Computer security -- Research , Computer crimes -- Prevention , Phishing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4696 , http://hdl.handle.net/10962/d1011933 , Internet -- Security measures -- Research , Uniform Resource Identifiers -- Security measures -- Research , Neural networks (Computer science) -- Research , Computer security -- Research , Computer crimes -- Prevention , Phishing
- Description: Phishing attacks employ social engineering to target end-users, with the goal of stealing identifying or sensitive information. This information is used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting the resource. These URLs are obscured through the use of several techniques which make automated detection difficult. Current methods used to detect malicious URLs face multiple problems which attackers use to their advantage. These problems include the time required to react to new attacks, shifts in trends in URL obfuscation, and usability problems caused by the latency incurred by the lookups required by these approaches. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple method of classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation and testing of these ANNs are gathered from PhishTank and the Open Directory. Words selected from the different sections of the samples are used to create a 'Bag-of-Words' (BOW), which is used as a binary input vector indicating the presence of a word for a given sample. Twenty additional features which measure lexical attributes of the sample are used to increase classification accuracy. A framework that is capable of generating these classifiers in an automated fashion is implemented. These classifiers are automatically stored on a remote update distribution service which has been built to supply updates to classifier implementations. An example browser plugin is created that uses ANNs provided by this service; it is capable both of classifying URLs requested by a user in real time and of blocking these requests. The framework is tested in terms of training time and classification accuracy. Classification speed and the effectiveness of compression algorithms on the data required to distribute updates are also tested. It is concluded that it is possible to generate these ANNs frequently, and in a form that is small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability.
- Full Text:
- Date Issued: 2014
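Based only on the feature scheme sketched in the abstract (a binary bag-of-words plus lexical attributes feeding a neural network), here is a minimal, hypothetical Python illustration using scikit-learn. The toy vocabulary, the four lexical features, and the sample URLs are invented; the thesis's actual twenty features and training corpus differ.

```python
# Sketch of lexical URL classification: a binary BOW over URL tokens plus a
# few lexical attributes, fed to a small neural network. Illustrative only.
import re
from sklearn.neural_network import MLPClassifier

VOCAB = ["login", "secure", "update", "account", "paypal"]  # toy vocabulary

def features(url: str) -> list:
    tokens = set(re.split(r"[\W_]+", url.lower()))
    bow = [1.0 if w in tokens else 0.0 for w in VOCAB]      # binary BOW vector
    lexical = [float(len(url)),                             # URL length
               float(url.count(".")),                       # number of dots
               float(url.count("-")),                       # number of hyphens
               float(any(c.isdigit() for c in url))]        # contains digits
    return bow + lexical

urls = ["http://paypal-secure-login.example.com/update",
        "https://www.rhodes.ac.za/"]
labels = [1, 0]  # 1 = malicious, 0 = benign (toy training set)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit([features(u) for u in urls], labels)
print(clf.predict([features("http://account-update.example.net/login")]))
```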
A mobile phone solution for ad-hoc hitch-hiking in South Africa
- Authors: Miteche, Sacha Patrick
- Date: 2014
- Subjects: Cell phones -- Information services , Cell phone users -- South Africa , Hitchhiking -- South Africa , Mobile communication systems -- Social aspects , Digital media -- South Africa , Information technology -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4702 , http://hdl.handle.net/10962/d1013340
- Description: The purpose of this study was to investigate the use of mobile phones in organizing ad-hoc vehicle ridesharing based on hitch-hiking trips involving private car drivers and commuters in South Africa. A study was conducted to learn how hitch-hiking trips are arranged in the urban and rural areas of the Eastern Cape. This involved carrying out interviews with hitch-hikers and participating in several trips. The study results provided the design specifications for a Dynamic Ridesharing System (DRS) tailor-made to the hitch-hiking culture of this context. The design of the DRS considered the delivery of the ad-hoc ridesharing service to the mobile phones anticipated to be owned by people who use hitch-hiking. The implementation of the system used the available open source solutions and guidelines under the Siyakhula Living Lab project, which promotes the use of Information and Communication Technology (ICT) in marginalized communities of South Africa. The developed prototype was tested in both simulated and live environments, followed by usability tests to establish the viability of the system. The results from the tests indicate an initial breakthrough in modernizing the ad-hoc ridesharing of hitch-hiking, which is used by a section of people in the urban and rural areas of South Africa.
- Full Text:
- Date Issued: 2014
A study of South African computer users' password usage habits and attitude towards password security
- Authors: Friendman, Brandon
- Date: 2014
- Subjects: Computers -- Access control -- Passwords , Computer users -- Attitudes , Internet -- Access control , Internet -- Security measures , Internet -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: vital:4700
- Description: The challenge of having to create and remember a secure password for each user account has become a problem for many computer users and can lead to bad password management practices. Simpler and less secure passwords are often selected and are regularly reused across multiple user accounts. Computer users within corporations and institutions are subject to password policies, policies which require users to create passwords of a specified length and composition and to change passwords regularly. These policies often prevent users from reusing previously selected passwords. Security vendors and professionals have sought to improve or even replace password authentication. Technologies such as multi-factor authentication and single sign-on have been developed to complement or even replace password authentication. The objective of the study was to investigate the password habits of South African computer and internet users. The aim was to assess their attitudes toward password security, to determine whether password policies affect the manner in which they manage their passwords, and to investigate their exposure to alternate authentication technologies. The results from the online survey demonstrated that password practices of the participants across their professional and personal contexts were generally insecure. Participants often used shorter, simpler and ultimately less secure passwords. Participants would try to memorise all of their passwords or reuse the same password on most of their accounts. Many participants had not received any security awareness training, and additional security technologies (such as multi-factor authentication or password managers) were seldom used or provided to them. The password policies encountered by the participants in their organisations did little to encourage more secure password practices. Users lacked knowledge and understanding of password security, having received little or no training pertaining to it.
- Full Text:
- Date Issued: 2014
An examination of validation practices in relation to the forensic acquisition of digital evidence in South Africa
- Authors: Jordaan, Jason
- Date: 2014
- Subjects: Electronic evidence , Evidence, Criminal , Forensic sciences , Evidence, Criminal -- South Africa -- Law and legislation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4706 , http://hdl.handle.net/10962/d1016361
- Description: The acquisition of digital evidence is the most crucial part of the entire digital forensics process. During this process, digital evidence is acquired in a forensically sound manner to ensure the legal admissibility and reliability of that evidence in court. In the acquisition process, various hardware or software tools are used to acquire the digital evidence. All of the digital forensic standards relating to the acquisition of digital evidence require that the hardware and software tools used in the acquisition process are validated as functioning correctly and reliably, as this lends credibility to the evidence in court. In fact, the Electronic Communications and Transactions Act 25 of 2002 in South Africa specifically requires courts to consider issues such as reliability and the manner in which the integrity of digital evidence is ensured when assessing the evidential weight of digital evidence. Previous research into quality assurance in the practice of digital forensics in South Africa identified that, in general, tool validation was not performed, and as such a hypothesis was proposed that digital forensic practitioners in South Africa make use of hardware and/or software tools for the forensic acquisition of digital evidence whose validity and/or reliability cannot be objectively proven. As such, any digital evidence preserved using those tools is potentially unreliable. This hypothesis was tested in the research through a survey of digital forensic practitioners in South Africa. The research established that the majority of digital forensic practitioners do not use tools in the forensic acquisition of digital evidence that can be proven to be validated and/or reliable. While just under a fifth of digital forensic practitioners can provide some proof of validation and/or reliability, the proof of validation does not meet formal international standards. In essence, this means that digital evidence which is preserved through the use of specific hardware and/or software tools, for subsequent presentation and reliance upon as evidence in a court of law, is preserved by tools whose objective and scientific validity has not been determined. Since South African courts must consider reliability in terms of Section 15(3) of the Electronic Communications and Transactions Act 25 of 2002 in assessing the weight of digital evidence, this requirement is undermined by the current state of practice among digital forensic practitioners in South Africa.
- Full Text:
- Date Issued: 2014
An exploration into the use of webinjects by financial malware
- Authors: Forrester, Jock Ingram
- Date: 2014
- Subjects: Malware (Computer software) -- Analysis , Internet fraud , Computer crimes , Computer security , Electronic commerce
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4697 , http://hdl.handle.net/10962/d1012079 , Malware (Computer software) -- Analysis , Internet fraud , Computer crimes , Computer security , Electronic commerce
- Description: As the number of computing devices connected to the Internet increases and the Internet itself becomes more pervasive, so does the opportunity for criminals to use these devices in cybercrimes. Supporting the increase in cybercrime is the growth and maturity of the digital underground economy with strong links to its more visible and physical counterpart. The digital underground economy provides software and related services to equip the entrepreneurial cybercriminal with the appropriate skills and required tools. Financial malware, particularly the capability for injection of code into web browsers, has become one of the more profitable cybercrime tool sets due to its versatility and adaptability when targeting clients of institutions with an online presence, both in and outside of the financial industry. There are numerous families of financial malware available for use, with perhaps the most prevalent being Zeus and SpyEye. Criminals create (or purchase) and grow botnets of computing devices infected with financial malware that has been configured to attack clients of certain websites. In the research data set there are 483 configuration files containing approximately 40 000 webinjects that were captured from various financial malware botnets between October 2010 and June 2012. They were processed and analysed to determine the methods used by criminals to defraud either the user of the computing device, or the institution of which the user is a client. The configuration files contain the injection code that is executed in the web browser to create a surrogate interface, which is then used by the criminal to interact with the user and institution in order to commit fraud. Demographics on the captured data set are presented and case studies are documented based on the various methods used to defraud and bypass financial security controls across multiple industries. The case studies cover techniques used in social engineering, bypassing security controls and automated transfers.
- Full Text:
- Date Issued: 2014
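To make the "configuration files containing webinjects" concrete: Zeus-family configurations are widely documented as set_url / data_before / data_inject / data_after blocks. Below is a small, hypothetical Python sketch of tallying targeted URL patterns from such text; the sample entry is fabricated for illustration and is not drawn from the research data set.

```python
# Sketch: count webinject targets in Zeus/SpyEye-style configuration text.
# Each entry names a target URL pattern (set_url) and HTML fragments to
# inject around matched page content (data_before / data_inject blocks).
from collections import Counter

sample = """
set_url https://bank.example/login* GP
data_before
name="password"*</td>
data_end
data_inject
<tr><td>PIN:</td><td><input type="text" name="pin"></td></tr>
data_end
data_after
data_end
"""

def targets(config_text: str):
    # Yield the URL pattern of every set_url entry in the config.
    for line in config_text.splitlines():
        if line.startswith("set_url"):
            yield line.split()[1]

print(Counter(targets(sample)))
# Counter({'https://bank.example/login*': 1})
```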
An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing ant-like agents
- Authors: Chibaya, Colin
- Date: 2014
- Subjects: Ants -- Behavior -- Computer programs , Insects -- Behavior -- Computer programs , Ant communities -- Behavior , Insect societies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4698 , http://hdl.handle.net/10962/d1012965
- Description: Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of swarm members). What does each swarm member do which allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication, indirectly mediated via the environment, arises) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at individual levels. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta-information which spells out how and when primitive behaviours are used). Various permutations of XSets are possible, and these define the search space for best performer XSets for particular tasks. Genetic programming principles are proposed as a search strategy for best performer XSets that would allow particular emergent behaviour to occur. XSets in the search space are evolved over various genetic generations and tested for their ability to allow path finding (as proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve. Best performer XSets for the path finding task are identified and reported. We validate the results yielded when best performer XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time. Commonly, the simulation results pass most statistical tests. The last aspect we study is the application of best performer XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' abilities to allow multiple-target location (ant agents' abilities to locate continuous regions of targets), and finds that best performer XSets are problem independent. However, both categories of XSets are sensitive to changes in agent density. We test the influences of individual primitive behaviours, and the effects of the sequences of primitive behaviours, on the indices of merit of XSets, and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effects of pheromone dissipation on the indices of merit of stigmergic XSets are also scrutinized. Precisely, dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices. It successfully puts these primitive behaviours together into XSets, which characterize a language for programming ant-like devices towards desired emergent behaviour. This XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved.
- Full Text:
- Date Issued: 2014
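As a rough illustration of the search procedure the abstract describes (genetic-programming-style evolution of XSets ranked by an index of merit), the sketch below evolves lists of primitive-behaviour names against a stand-in fitness function. The behaviour names, genetic operators, and fitness are all hypothetical; the thesis scores XSets by simulating ant agents on emergent tasks such as path finding.

```python
# Toy evolutionary search over "XSets" (here, fixed-length sequences of
# primitive behaviour names). A real index of merit would come from running
# an ant-agent simulation; this stand-in just rewards pheromone use.
import random

PRIMITIVES = ["move_random", "follow_pheromone", "drop_pheromone",
              "turn_home", "pick_up", "put_down"]

def random_xset(length=4):
    return [random.choice(PRIMITIVES) for _ in range(length)]

def index_of_merit(xset):
    return sum(1 for p in xset if "pheromone" in p)  # placeholder fitness

def evolve(pop_size=20, generations=30):
    pop = [random_xset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=index_of_merit, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                # point mutation
                child[random.randrange(len(child))] = random.choice(PRIMITIVES)
            children.append(child)
        pop = parents + children
    return max(pop, key=index_of_merit)

print(evolve())
```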
An investigation of parameter relationships in a high-speed digital multimedia environment
- Authors: Chigwamba, Nyasha
- Date: 2014
- Subjects: Multimedia communications , Digital communications , Local area networks (Computer networks) , Computer network architectures , Computer network protocols , Computer sound processing , Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4725 , http://hdl.handle.net/10962/d1021153
- Description: With the rapid adoption of multimedia network technologies, a number of companies and standards bodies are introducing technologies that enhance user experience in networked multimedia environments. These technologies focus on device discovery, connection management, control, and monitoring. This study focused on control and monitoring. Multimedia networks make it possible for devices that are part of the same network to reside in different physical locations. These devices contain parameters that are used to control particular features, such as speaker volume, bass, amplifier gain, and video resolution. It is often necessary for changes in one parameter to affect other parameters, such as a synchronised change between volume and bass parameters, or collective control of multiple parameters. Thus, relationships are required between the parameters. In addition, some devices contain parameters, such as voltage, temperature, and audio level, that require constant monitoring to enable corrective action when thresholds are exceeded. Therefore, a mechanism for monitoring networked devices is required. This thesis proposes relationships that are essential for the proper functioning of a multimedia network and that should, therefore, be incorporated in standard form into a protocol, such that all devices can depend on them. Implementation mechanisms for these relationships were created. Parameter grouping and monitoring capabilities within mixing console implementations and existing control protocols were reviewed. A number of requirements for parameter grouping and monitoring were derived from this review. These requirements include a formal classification of relationship types, the ability to create relationships between parameters with different underlying value units, the ability to create relationships between parameters residing on different devices on a network, and the use of an event-driven mechanism for parameter monitoring. These requirements were the criteria used to govern the implementation mechanisms that were created as part of this study. Parameter grouping and monitoring mechanisms were implemented for the XFN protocol. The mechanisms implemented fulfil the requirements derived from the review of capabilities of mixing consoles and existing control protocols. The formal classification of relationship types was implemented within XFN parameters using lists that keep track of the relationships between each XFN parameter and other XFN parameters that reside on the same device or on other devices on the network. A common value unit, known as the global unit, was defined for use as the value format within value update messages between XFN parameters that have relationships. Mapping tables were used to translate the global unit values to application-specific (universal) units, such as decibels (dB). A mechanism for bulk parameter retrieval within the XFN protocol was augmented to produce an event-driven mechanism for parameter monitoring. These implementation mechanisms were applied to an XFN-protocol-compliant graphical control application to demonstrate their usage within an end user context. At the time of this study, the XFN protocol was undergoing standardisation within the Audio Engineering Society. The AES-64 standard has now been approved. Most of the implementation mechanisms resulting from this study have been incorporated into this standard.
- Full Text:
- Date Issued: 2014
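The grouping mechanism described above (relationship lists kept per parameter, value updates exchanged in a common "global unit", and mapping tables translating to device-specific units) can be illustrated with a small hypothetical Python sketch. The class and method names are invented; this is not the AES-64/XFN message format.

```python
# Minimal sketch of parameter grouping with a shared global unit. Each
# parameter stores functions mapping its device units to/from the global
# unit and a list of related peers that receive value updates.
class Parameter:
    def __init__(self, name, to_global, from_global):
        self.name = name
        self.to_global = to_global      # device units -> global unit
        self.from_global = from_global  # global unit -> device units
        self.peers = []                 # related parameters (possibly remote)
        self.value = 0.0

    def link(self, other):
        self.peers.append(other)

    def set_value(self, value):
        self.value = value
        g = self.to_global(value)       # updates travel in global units
        for peer in self.peers:
            peer.value = peer.from_global(g)

# A dB fader (-60..0 dB) linked to a 0..1 linear gain control.
fader = Parameter("volume_dB", lambda db: (db + 60) / 60, lambda g: g * 60 - 60)
gain  = Parameter("gain_lin",  lambda v: v,               lambda g: g)
fader.link(gain)
fader.set_value(-6.0)
print(round(gain.value, 2))  # 0.9
```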
An investigation of protocol command translation as a means to enable interoperability between networked audio devices
- Authors: Igumbor, Osedum Peter
- Date: 2014
- Subjects: Streaming audio , Data transmission systems , Computer network protocols , Computer networks -- Management , Command languages (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4689 , http://hdl.handle.net/10962/d1011128
- Description: Digital audio networks allow multiple channels of audio to be streamed between devices. This eliminates the need for many different cables to route audio between devices. An added advantage of digital audio networks is the ability to configure and control the networked devices from a common control point. Common control of networked devices enables a sound engineer to establish and destroy audio stream connections between networked devices that are distances apart. On a digital audio network, an audio transport technology enables the exchange of data streams. Typically, an audio transport technology is capable of transporting both control messages and audio data streams. There exist a number of audio transport technologies. Some of these technologies implement data transport by exchanging OSI/ISO layer 2 data frames, while others transport data within OSI/ISO layer 3 packets. There are some approaches to achieving interoperability between devices that utilize different audio transport technologies. A digital audio device typically implements an audio control protocol, which enables it to process configuration and control messages from a remote controller. An audio control protocol also defines the structure of the messages that are exchanged between compliant devices. There are currently a wide range of audio control protocols. Some audio control protocols utilize layer 3 audio transport technology, while others utilize layer 2 audio transport technology. An audio device can only communicate with other devices that implement the same control protocol, irrespective of a common transport technology that connects the devices. The existence of different audio control protocols among devices on a network results in a situation where the devices are unable to communicate with each other. Furthermore, a single control application is unable to establish or destroy audio stream connections between the networked devices, since they implement different control protocols. When an audio engineer is designing an audio network installation, this interoperability challenge restricts the choice of devices that can be included. Even when audio transport interoperability has been achieved, common control of the devices remains a challenge. This research investigates protocol command translation as a means to enable interoperability between networked audio devices that implement different audio control protocols. It proposes the use of a command translator that is capable of receiving messages conforming to one protocol from any of the networked devices, translating the received message to conform to a different control protocol, then transmitting the translated message to the intended target which understands the translated protocol message. In so doing, the command translator enables common control of the networked devices, since a control application is able to configure and control devices that conform to different protocols by utilizing the command translator to perform appropriate protocol translation.
- Full Text:
- Date Issued: 2014
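The command-translator idea reduces to: parse an incoming message into a protocol-neutral form, then re-emit it in the target protocol. A minimal hypothetical sketch follows, with both message formats invented for illustration; real audio control protocols differ in structure and transport.

```python
# Sketch of protocol command translation: protocol A messages are parsed
# into a neutral command dictionary, which is then serialised as a
# protocol B message. Both wire formats here are made up.
def parse_proto_a(msg: str) -> dict:
    # e.g. "SET dev1/out/1/volume 0.8"
    verb, path, value = msg.split()
    return {"op": verb.lower(), "target": path, "value": float(value)}

def emit_proto_b(cmd: dict) -> str:
    # e.g. -> "dev1/out/1/volume=0.800;op=set"
    return f"{cmd['target']}={cmd['value']:.3f};op={cmd['op']}"

def translate(msg: str) -> str:
    """Receive a protocol A message, return the protocol B equivalent."""
    return emit_proto_b(parse_proto_a(msg))

print(translate("SET dev1/out/1/volume 0.8"))
# dev1/out/1/volume=0.800;op=set
```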
An investigation of the XMOS XS1 architecture as a platform for development of audio control standards
- Authors: Dibley, James
- Date: 2014
- Subjects: Microcontrollers -- Research , Streaming audio -- Standards -- Research , Computer sound processing -- Research , Computer network protocols -- Standards -- Research , Communication -- Technological innovations -- Research
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4694 , http://hdl.handle.net/10962/d1011789 , Microcontrollers -- Research , Streaming audio -- Standards -- Research , Computer sound processing -- Research , Computer network protocols -- Standards -- Research , Communication -- Technological innovations -- Research
- Description: This thesis investigates the feasibility of using a new microcontroller architecture, the XMOS XS1, in the research and development of control standards for audio distribution networks. This investigation is conducted in the context of an emerging audio distribution network standard, Ethernet Audio/Video Bridging ('Ethernet AVB'), and an emerging audio control standard, AES-64. The thesis describes these emerging standards, the XMOS XS1 architecture (including its associated programming language, XC), and the open-source implementation of an Ethernet AVB streaming audio device based on the XMOS XS1 architecture. It is shown how the XMOS XS1 architecture and its associated features, focusing on the XC language's mechanisms for concurrency, event-driven programming, and integration of C software modules, enable a powerful implementation of the AES-64 control standard. Feasibility is demonstrated by the implementation of an AES-64 protocol stack and its integration into an XMOS XS1-based Ethernet AVB streaming audio device, providing control of Ethernet AVB features and audio hardware, as well as implementations of advanced AES-64 control mechanisms. It is demonstrated that the XMOS XS1 architecture is a compelling platform for the development of audio control standards, and has enabled the implementation of AES-64 connection management and control over standards-compliant Ethernet AVB streaming audio devices where no such implementation previously existed. The research additionally describes a linear design method for applications based on the XMOS XS1 architecture, and provides a baseline implementation reference for the AES-64 control standard where none previously existed.
- Full Text:
- Date Issued: 2014
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or to find appropriate optimisation guidance. Through what was learned in accelerating three problems (a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort), problem attributes have been identified that can assist in evaluating the difficulty of accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in GPU acceleration difficulty evaluation (a hypothetical scoring sketch follows this record). This framework further provides directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove a useful resource for new or novice GPGPU developers in evaluating potential problems for GPU acceleration.
- Full Text:
- Date Issued: 2014
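As a purely hypothetical illustration of how the seven identified attributes might drive a classification, the sketch below assigns each attribute a score from 0 (GPU-friendly) to 2 (GPU-hostile) and buckets the total. The scale, the equal weighting, and the thresholds are invented for this example and do not reproduce the framework developed in the thesis.

```python
# Hypothetical attribute-based difficulty scoring. The attribute names come
# from the abstract above; the 0-2 scale, equal weights, and thresholds are
# invented for illustration and are not the thesis's framework.
ATTRIBUTES = [
    "inherent_parallelism",               # 0 = embarrassingly parallel
    "branch_divergence",                  # 0 = uniform control flow
    "problem_size",                       # 0 = large enough to fill the GPU
    "required_computational_parallelism",
    "memory_access_pattern_regularity",   # 0 = coalesced, 2 = random
    "data_transfer_overhead",             # 0 = negligible host-device traffic
    "thread_cooperation",                 # 0 = fully independent threads
]

def classify(scores: dict) -> str:
    """Bucket a candidate problem by its summed attribute scores (0-2 each)."""
    total = sum(scores[a] for a in ATTRIBUTES)
    if total <= 4:
        return "straightforward to accelerate"
    if total <= 9:
        return "moderate effort; targeted optimisation needed"
    return "hard; may resist effective GPU acceleration"

# Hypothetical scores for a sort-like problem needing heavy thread
# cooperation and irregular memory access.
example = {a: 0 for a in ATTRIBUTES}
example["thread_cooperation"] = 2
example["memory_access_pattern_regularity"] = 2
example["required_computational_parallelism"] = 1
print(classify(example))  # moderate effort; targeted optimisation needed
```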
Cloud information security : a higher education perspective
- Authors: Van der Schyff, Karl Izak
- Date: 2014
- Subjects: Cloud computing -- Security measures , Information technology -- Security measures , Data protection , Internet in higher education , Education, Higher -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4692 , http://hdl.handle.net/10962/d1011607 , Cloud computing -- Security measures , Information technology -- Security measures , Data protection , Internet in higher education , Education, Higher -- Technological innovations
- Description: In recent years higher education institutions have come under increasing financial pressure. This has prompted universities to investigate not only more cost-effective means of delivering course content and maintaining research output, but also the administrative functions that accompany them. As such, many South African universities have either adopted or are in the process of adopting some form of cloud computing, given the recent drop in bandwidth costs. However, this adoption process has raised concerns about the security of cloud-based information, and this has, in some cases, had a negative impact on the adoption process. In an effort to study these concerns, many researchers have employed a positivist approach with little, if any, focus on the operational context of these universities. Moreover, there has been very little research specifically within the South African context. This study addresses some of these concerns by investigating the threats and the security incident response life cycle within a higher education cloud. This was done by initially conducting a small-scale survey, followed by a detailed thematic analysis of twelve interviews from three South African universities. The identified themes and their corresponding analyses and interpretation contribute on both a practical and a theoretical level: the practical contributions comprise a set of security-driven criteria for selecting cloud providers, as well as recommendations for universities that have adopted or are in the process of adopting cloud computing; theoretically, several conceptual frameworks are offered, conveying the researcher's understanding of how these practical concepts relate to each other and to the concepts that constitute the research questions of this study.
- Full Text:
- Date Issued: 2014
Correlation and comparative analysis of traffic across five network telescopes
- Authors: Nkhumeleni, Thizwilondi Moses
- Date: 2014
- Subjects: Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4693 , http://hdl.handle.net/10962/d1011668 , Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Description: Monitoring unused IP address space by using network telescopes provides a favourable environment for researchers to study and detect malware, worms, denial of service and scanning activities. Research in the field of network telescopes has progressed over the past decade, resulting in the development of an increased number of overlapping datasets. Rhodes University's network of telescope sensors has continued to grow, with additional network telescopes being brought online. At the time of writing, Rhodes University has a distributed network of five relatively small /24 network telescopes. With five network telescope sensors available, this research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. To aid summarisation and visualisation, time series representing time-based traffic activity are constructed. By employing an iterative experimental process on the captured traffic, two natural categories of the five network telescopes are identified. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was found between telescope sensors in each category (a brief numpy illustration of this comparison follows this record). Weak to moderate correlation was calculated when comparing the category A and category B network telescopes' datasets. Results improved significantly when TCP traffic was studied separately: moderate to strong correlation coefficients were calculated in each category when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlated traffic activity across all sensors. The results confirmed the visually observed similarity of traffic between telescope sensors within the same category, and quantified the correlation of the network telescopes' traffic activity.
- Full Text:
- Date Issued: 2014
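The lag-zero cross-correlation used to compare sensor pairs can be illustrated with a short numpy sketch. The hourly packet counts below are synthetic, assuming two sensors that share a common activity component and one independent sensor; none of the numbers come from the telescope datasets described above.

```python
# Illustrative lag-zero cross-correlation between synthetic per-hour packet
# count series. The data are fabricated and are not drawn from the five
# Rhodes University telescope datasets.
import numpy as np

def correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation at lag zero between two equal-length series."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

rng = np.random.default_rng(0)
base = rng.poisson(lam=50, size=168)             # a week of hourly counts
sensor_a = base + rng.poisson(lam=5, size=168)   # shares the base activity
sensor_b = base + rng.poisson(lam=5, size=168)   # same category as sensor_a
sensor_c = rng.poisson(lam=55, size=168)         # independent sensor

print(correlation(sensor_a, sensor_b))  # strong (about 0.9)
print(correlation(sensor_a, sensor_c))  # near zero
```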
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094 , Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and increasingly extensive use of mobile devices does not come without a degree of risk, hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and the deficiencies of common theories is proposed. Two sets of qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two sets of qualitative studies, uncovered inconsistencies between the technology implementations and the business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative studies, which bridges the gap between the technology implementations and business requirements. Future work that could be conducted in light of the findings and the comments from this research is also considered.
- Full Text:
- Date Issued: 2014
DNS traffic based classifiers for the automatic classification of botnet domains
- Authors: Stalmans, Etienne Raymond
- Date: 2014
- Subjects: Denial of service attacks -- Research , Computer security -- Research , Internet -- Security measures -- Research , Malware (Computer software) , Spam (Electronic mail) , Phishing , Command and control systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4684 , http://hdl.handle.net/10962/d1007739
- Description: Networks of maliciously compromised computers, known as botnets, consisting of thousands of hosts, have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks, and mount Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communications channels between the C2 nodes and endpoints employ numerous detection avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent detection avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures that in turn can be used to detect DNS domains being used for C2 operation. This report details research conducted into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers; traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers explored employs frequency analysis to detect the algorithmically generated domain names used by botnets (a minimal sketch of this idea follows this record); these classifiers were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers, and it is shown that using these characteristics Fast-Flux domains can be accurately identified and differentiated from legitimate domains (such as Content Distribution Networks, which exhibit similar behaviour). The final set of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve; it is shown that botnet C2 servers can be detected solely on the basis of their geographic location, and this technique clearly distinguishes between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of classifying DNS traffic is examined and shown to be at an acceptable level.
- Full Text:
- Date Issued: 2014
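A minimal sketch of the frequency-analysis idea behind the first set of classifiers: score a domain by how far its letter distribution strays from typical English letter frequencies, with algorithmically generated names tending to score higher. The frequency table, scoring function, and example domains are all illustrative assumptions, not the classifiers built in this work.

```python
# Minimal frequency-analysis sketch for flagging algorithmically generated
# domain names. The frequency table and scoring are illustrative only and
# do not reproduce the classifiers developed in the thesis.
from collections import Counter

# Approximate English letter frequencies, in percent.
ENGLISH = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def dga_score(domain: str) -> float:
    """Sum of absolute deviations from English letter frequencies."""
    letters = [c for c in domain.lower() if c.isalpha()]
    if not letters:
        return 0.0
    counts = Counter(letters)
    return sum(abs(100 * counts.get(ch, 0) / len(letters) - freq)
               for ch, freq in ENGLISH.items())

# Higher scores suggest a less English-like, possibly generated, name.
for name in ("university", "xkqzjwvpda"):
    print(name, round(dga_score(name), 1))
```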
The role of computational thinking in introductory computer science
- Authors: Gouws, Lindsey Ann
- Date: 2014
- Subjects: Computer science , Computational complexity , Problem solving -- Study and teaching
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4690 , http://hdl.handle.net/10962/d1011152 , Computer science , Computational complexity , Problem solving -- Study and teaching
- Description: Computational thinking (CT) is gaining recognition as an important skill for students, both in computer science and in other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course, and there is little consensus on what exactly CT entails and how to teach and evaluate it. This research addresses the lack of resources for integrating CT into the introductory computer science curriculum. The question that we aim to answer is whether CT can be evaluated in a meaningful way. A CT framework that outlines the skills and techniques comprising CT and describes the nature of student engagement was developed; this is used as the basis for this research. An assessment (CT test) was then created to gauge the ability of incoming students, and a CT-specific computer game was developed based on the analysis of an existing game. A set of problem-solving strategies and practice activities were then recommended based on criteria defined in the framework. The results revealed that the CT abilities of first-year university students are relatively poor, but that the students' scores for the CT test could be used as a predictor of their future success in computer science courses. The framework developed for this research proved successful when applied to the test, the computer game evaluation, and the classification of strategies and activities. Through this research, we established that CT is a skill that first-year computer science students lack, and that using CT exercises alongside traditional programming instruction can improve students' learning experiences.
- Full Text:
- Date Issued: 2014
Web-based M-learning system for ad-hoc learning of mathematical concepts amongst first year students at the University of Namibia
- Authors: Ntinda, Maria Ndapewa
- Date: 2014
- Subjects: Mathematics -- Study and teaching (Higher) -- Namibia , Mathematics -- Technological innovations , Mobile communication systems in education , Teaching -- Aids and devices , Educational innovations , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4701 , http://hdl.handle.net/10962/d1013174
- Description: In the last decade there has been an increase in the number of web-enabled mobile devices, offering a new platform that can be targeted for the development of learning applications. Worldwide, developers have taken the initiative in developing mobile learning (M-learning) systems to provide students with access to learning materials regardless of time and location. The purpose of this study was to investigate whether it is viable for first-year students enrolled at the University of Namibia (UNAM) to use mobile phones for ad-hoc learning of mathematical concepts. A system, EnjoyMath, was designed and implemented to assist students in preparing for tests and examinations, reviewing content, and reinforcing knowledge acquired during traditional classroom interactions. The EnjoyMath system was designed and implemented using the Human Centred Design (HCD) methodology, and two iterations of its four-step cycle were completed in this study. Due to the distance between UNAM and Rhodes University (where the researcher was based), the researcher could not always work closely with the UNAM students. Students from the Extended Study Unit (ESU) at Rhodes University were therefore selected in the first iteration of the project, owing to their proximity to the researcher and their demographic similarity to first-year UNAM students, while the UNAM students were targeted in the second iteration of the study. This thesis presents the outcomes of the two pre-intervention studies of first-year students' perceptions of M-learning, conducted at Rhodes University and UNAM. The results of the pre-intervention studies showed that the students are enthusiastic about using an M-learning system, because it would allow them to put in more time practising their skills whenever and wherever they are. Moreover, the thesis presents the stages undertaken to develop the EnjoyMath system using Open Source Software (PHP and MySQL). The results of a post-intervention user study conducted with participants at UNAM, which ascertained the participants' perception of the usability of the EnjoyMath system, are also presented in this thesis. The EnjoyMath system was perceived by the participants to be "passable"; hence an M-learning system could be used to complement an E-learning system at UNAM.
- Full Text:
- Date Issued: 2014