Information technology audits in South African higher education institutions
- Authors: Angus, Lynne
- Date: 2013 , 2013-09-11
- Subjects: Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4615 , http://hdl.handle.net/10962/d1006023
- Description: The use of technology for competitive advantage has become a necessity, not only for corporate organisations, but for higher education institutions (HEIs) as well. Consequently, corporate organisations and HEIs alike must be equipped to protect against the pervasive nature of technology. To do this, they implement controls and undergo audits to ensure these controls are implemented correctly. Although HEIs are a different kind of entity to corporate organisations, HEI information technology (IT) audits are based on the same criteria as those for corporate organisations. The primary aim of this research, therefore, was to develop a set of IT control criteria that are relevant to be tested in IT audits for South African HEIs. The research method used was the Delphi technique. Data was collected, analysed, and used as feedback on which to progress to the next round of data collection. Two lists were obtained: a list of the top IT controls relevant to be tested at any organisation, and a list of the top IT controls relevant to be tested at a South African HEI. Comparison of the two lists shows that although there are some differences in the ranking of criteria used to audit corporate organisations as opposed to HEIs, the final two lists of criteria do not differ significantly. Therefore, it was shown that the same broad IT controls are required to be tested in an IT audit for a South African HEI. However, this research suggests that the risk weighting put on particular IT controls should possibly differ for HEIs, as HEIs face differing IT risks. If further studies can be established which cater for more specific controls, then the combined effect of this study and future ones will be a valuable contribution to knowledge for IT audits in a South African higher education context.
- Full Text:
- Date Issued: 2013
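The Delphi rounds described in the abstract above (collect expert ratings, analyse, feed the results back, re-rank) can be sketched as a simple rank aggregation. The control names and expert scores below are hypothetical illustrations, not data from the thesis.

```python
# Hypothetical sketch of one Delphi round: experts score candidate IT
# controls, scores are aggregated, and the ranked list is fed back for
# the next round. Control names and ratings are illustrative only.

def rank_controls(ratings: dict[str, list[int]]) -> list[tuple[str, float]]:
    """Aggregate expert ratings per control and rank by mean score."""
    means = {control: sum(scores) / len(scores)
             for control, scores in ratings.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

round_1 = {
    "Access control": [5, 4, 5],
    "Change management": [3, 4, 4],
    "Backup and recovery": [4, 5, 4],
}
ranked = rank_controls(round_1)
top_control, top_score = ranked[0]  # fed back to experts for the next round
```

In a real Delphi study the ranked list, with some measure of consensus, would be returned to the panel for further rounds until rankings stabilise.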
Towards a capability maturity model for a cyber range
- Authors: Aschmann, Michael Joseph
- Date: 2020
- Subjects: Computer software -- Development , Computer security
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163142 , vital:41013
- Description: This work describes research undertaken towards the development of a Capability Maturity Model (CMM) for Cyber Ranges (CRs) focused on cyber security. Global cyber security needs are on the rise, and the need for attribution within the cyber domain is of particular concern. This has prompted major efforts to enhance cyber capabilities within organisations to increase their total cyber resilience posture. These efforts include, but are not limited to, the testing of computational devices, networks, and applications, and cyber skills training focused on prevention, detection and cyber attack response. A cyber range allows for the testing of the computational environment. By developing cyber events within a confined virtual or sand-boxed cyber environment, a cyber range can prepare the next generation of cyber security specialists to handle a variety of potential cyber attacks. Cyber ranges have different purposes, each designed to fulfil a different computational testing and cyber training goal; consequently, cyber ranges can vary greatly in the level of variety, capability, maturity and complexity. As cyber ranges proliferate and become increasingly valued as tools for cyber security, a method to classify or rate them becomes essential. Yet while a universal criterion for measuring cyber ranges in terms of their capability maturity levels becomes more critical, there are currently very limited resources for researchers aiming to perform this kind of work. For this reason, this work proposes and describes a CMM, designed to give organisations the ability to benchmark the capability maturity of a given cyber range. This research adopted a synthesised approach to the development of a CMM, grounded in prior research and focused on the production of a conceptual model that provides a useful level of abstraction.
To achieve this goal, the core capability elements of a cyber range are defined with their relative importance, allowing for the development of a proposed classification of cyber range levels. An analysis of data gathered during the course of an expert review, together with other research, further supported the development of the conceptual model. In the context of cyber range capability, classification will include the ability of the cyber range to perform its functions optimally with different core capability elements, focusing on the Measurement of Capability (MoC) with its elements, namely effect, performance, and threat ability. Cyber range maturity can evolve over time and can be defined through the Measurement of Maturity (MoM) with its elements, namely people, processes, and technology. The combination of these measurements utilising the CMM for a CR determines the capability maturity level of a CR. The primary outcome of this research is the proposed level-based CMM framework for a cyber range, developed using adopted and synthesised CMMs, the analysis of an expert review, and the mapping of the results.
- Full Text:
- Date Issued: 2020
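The combination of Measurement of Capability (MoC) and Measurement of Maturity (MoM) element scores into a single maturity level, as described in the abstract above, might be sketched as follows. The five-point scale and the floor-of-the-mean combination rule are assumptions for illustration; the thesis's actual combination rule may differ.

```python
# Hypothetical sketch of combining MoC element scores (effect,
# performance, threat ability) and MoM element scores (people,
# processes, technology) into one CMM level. The 1-5 scale and the
# floor-of-the-mean rule are assumptions, not the thesis's method.

MOC_ELEMENTS = ("effect", "performance", "threat_ability")
MOM_ELEMENTS = ("people", "processes", "technology")

def cmm_level(moc: dict[str, int], mom: dict[str, int]) -> int:
    """Combine MoC and MoM element scores (1-5) into a maturity level."""
    scores = [moc[e] for e in MOC_ELEMENTS] + [mom[e] for e in MOM_ELEMENTS]
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("element scores must be on a 1-5 scale")
    return sum(scores) // len(scores)  # floor of the mean

level = cmm_level(
    {"effect": 3, "performance": 4, "threat_ability": 2},
    {"people": 3, "processes": 3, "technology": 4},
)
```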
Cogitator : a parallel, fuzzy, database-driven expert system
- Authors: Baise, Paul
- Date: 1994 , 2012-10-08
- Subjects: Expert systems (Computer science) , Artificial intelligence -- Computer programs , System design , Cogitator (Computer system)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4667 , http://hdl.handle.net/10962/d1006684
- Description: The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable obstacle being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst having no serious disadvantages.
- Full Text:
- Date Issued: 1994
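As a generic illustration of the fuzzy reasoning a system like Cogitator draws on, the sketch below shows a triangular membership function and a min-based rule firing. This is standard fuzzy-logic machinery, not a reproduction of Cogitator's actual inference design; the variables and thresholds are invented.

```python
# Generic fuzzy-logic sketch: a triangular membership function and a
# min-based conjunction, of the kind a fuzzy expert system might use.
# All names and numbers here are illustrative assumptions.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Rule: IF temperature IS high AND load IS heavy THEN risk IS elevated.
temp_high = triangular(78.0, 70.0, 85.0, 100.0)   # 8/15, about 0.53
load_heavy = triangular(0.9, 0.5, 1.0, 1.5)       # 0.8
risk = min(temp_high, load_heavy)                 # min models fuzzy AND
```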
Modelling parallel and distributed virtual reality systems for performance analysis and comparison
- Authors: Bangay, Shaun Douglas
- Date: 1997
- Subjects: Virtual reality , Computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4657 , http://hdl.handle.net/10962/d1006656
- Description: Most Virtual Reality systems employ some form of parallel processing, making use of multiple processors which are often distributed over large areas geographically, and which communicate via various forms of message passing. The approaches to parallel decomposition differ for each system, as do the performance implications of each approach. Previous comparisons have only identified and categorized the different approaches. None have examined the performance issues involved in the different parallel decompositions. Performance measurement for a Virtual Reality system differs from that of other parallel systems in that some measure of the delays involved with the interaction of the separate components is required, in addition to the measure of the throughput of the system. Existing performance analysis approaches are typically not well suited to providing both these measures. This thesis describes the development of a performance analysis technique that is able to provide measures of both interaction latency and cycle time for a model of a Virtual Reality system. This technique allows performance measures to be generated as symbolic expressions describing the relationships between the delays in the model. It automatically generates constraint regions, specifying the values of the system parameters for which performance characteristics change. The performance analysis technique shows strong agreement with values measured from implementation of three common decomposition strategies on two message passing architectures. The technique is successfully applied to a range of parallel decomposition strategies found in Parallel and Distributed Virtual Reality systems. For each system, the primary decomposition techniques are isolated and analysed to determine their performance characteristics. 
This analysis allows a comparison of the various decomposition techniques, and in many cases reveals trends in their behaviour that would have gone unnoticed with alternative analysis techniques. The work described in this thesis supports the Performance Analysis and Comparison of Parallel and Distributed Virtual Reality systems. In addition it acts as a reference, describing the performance characteristics of decomposition strategies used in Virtual Reality systems.
- Full Text:
- Date Issued: 1997
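The idea of a constraint region, where performance characteristics change as system parameters vary, can be illustrated numerically. The delay model below (a computation term shrinking with processor count, a communication term growing with it) and its figures are assumptions for illustration, not the thesis's actual symbolic expressions.

```python
# Numeric sketch of a constraint region: below some processor count the
# computation term of the cycle time dominates, above it communication
# does. The model t_render/n + t_comm*n and the figures are assumptions.

def cycle_time(n: int, t_render: float = 64.0, t_comm: float = 1.0) -> float:
    """Assumed per-frame cycle time on n processors (arbitrary units)."""
    return t_render / n + t_comm * n

# The processor count minimising cycle time marks the boundary of the
# constraint region for these parameter values.
best_n = min(range(1, 33), key=cycle_time)  # 8 for the defaults above
```

A symbolic treatment, as in the thesis, would instead keep the delay terms as variables and derive the boundary as an expression in them.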
Parallel implementation of a virtual reality system on a transputer architecture
- Authors: Bangay, Shaun Douglas
- Date: 1994 , 2012-10-11
- Subjects: Virtual reality , Computer simulation , Transputers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4668 , http://hdl.handle.net/10962/d1006687
- Description: A Virtual Reality is a computer model of an environment, actual or imagined, presented to a user in as realistic a fashion as possible. Stereo goggles may be used to provide the user with a view of the modelled environment from within the environment, while a data-glove is used to interact with the environment. To simulate reality on a computer, the machine has to produce realistic images rapidly. Such a requirement usually necessitates expensive equipment. This thesis presents an implementation of a virtual reality system on a transputer architecture. The system is general, and is intended to provide support for the development of various virtual environments. The three main components of the system are the output device drivers, the input device drivers, and the virtual world kernel. This last component is responsible for the simulation of the virtual world. The rendering system is described in detail. Various methods for implementing the components of the graphics pipeline are discussed. These are then generalised to make use of the facilities provided by the transputer processor for parallel processing. A number of different decomposition techniques are implemented and compared. The emphasis in this section is on the speed at which the world can be rendered, and the interaction latency involved. In the best case, where almost linear speedup is obtained, a world containing over 250 polygons is rendered at 32 frames/second. The bandwidth of the transputer links is the major factor limiting speedup. A description is given of an input device driver which makes use of a powerglove. Techniques for overcoming the limitations of this device, and for interacting with the virtual world, are discussed. The virtual world kernel is designed to make extensive use of the parallel processing facilities provided by transputers. It is capable of providing support for multiple worlds concurrently, and for multiple users interacting with these worlds.
Two applications are described that were successfully implemented using this system. The design of the system is compared with other recently developed virtual reality systems. Features that are common or advantageous in each of the systems are discussed. The system described in this thesis compares favourably, particularly in its use of parallel processors.
- Full Text:
- Date Issued: 1994
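The rendering figures quoted above (over 250 polygons at 32 frames/second in the best case) can be related by back-of-the-envelope arithmetic. The per-polygon cost, processor count, and link overhead below are hypothetical values chosen only to make the arithmetic land near the quoted frame rate; they are not measurements from the thesis.

```python
# Back-of-the-envelope sketch relating per-frame work to frame rate for
# an ideally parallel renderer with a fixed communication cost. All
# figures are hypothetical, not the thesis's measurements.

def frame_rate(polygons: int, us_per_polygon: float, processors: int,
               link_overhead_us: float) -> float:
    """Frames/second for ideally divided render work plus fixed link cost."""
    frame_time_us = polygons * us_per_polygon / processors + link_overhead_us
    return 1_000_000 / frame_time_us

fps = frame_rate(polygons=256, us_per_polygon=400.0, processors=4,
                 link_overhead_us=5_650.0)
# 256 * 400 / 4 + 5650 = 31250 us per frame, i.e. 32 frames/second
```

The fixed `link_overhead_us` term is the sketch's stand-in for the link bandwidth limit the abstract identifies: as `processors` grows, that term comes to dominate and speedup flattens.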
An analysis of the risk exposure of adopting IPV6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: The increased address pool of IPv6 presents changes in resource impact to the Enterprise that, if not adequately addressed, can change risks that are locally significant in IPv4 to risks that can impact the Enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes in the Enterprise environment, which may negatively impact organisational security if the IPv6 nuances are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks with the introduction of IPv6. The global trends are discussed to provide insight and background to the IPv6 research space. Analysing the current state of readiness in enterprise networks quantifies the value of developing this thesis. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling, and the protection of the enterprise access layer, are discussed. A series of case studies is presented, identifying and analysing the impact of certain changes in the IPv6 protocol on enterprise networks. The case studies also identify mitigation techniques to reduce risk.
- Full Text:
- Date Issued: 2015
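One base control against abuse of IPv6 through tunnelling, of the kind the abstract mentions, is recognising addresses in the well-known transition-tunnel prefixes: 6to4 uses 2002::/16 and Teredo uses 2001::/32. A minimal sketch (a real enterprise control would inspect live traffic, not address strings):

```python
# Minimal sketch of one tunnelling control: flag IPv6 addresses that
# fall in the well-known 6to4 (2002::/16) and Teredo (2001::/32)
# transition prefixes.
import ipaddress

TUNNEL_PREFIXES = (
    ipaddress.ip_network("2002::/16"),  # 6to4
    ipaddress.ip_network("2001::/32"),  # Teredo
)

def is_tunnel_address(addr: str) -> bool:
    """True if addr is an IPv6 address inside a known tunnel prefix."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 6 and any(ip in net for net in TUNNEL_PREFIXES)

flagged = is_tunnel_address("2002:c000:204::1")  # a 6to4 address
```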
Targeted attack detection by means of free and open source solutions
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements for various areas, such as retail and medical services. As part of compliance it may be required to have infrastructure in place to monitor the activities in the environment to ensure that the relevant data and environment are sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM QRadar, LogRhythm, or Splunk. However, these come at significant cost and aren't suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation for the detection of activities that place the environment in danger. This usually requires vendor implementation assistance, which also comes at a cost. Alternatively, there are open source solutions that provide the required functionality. This research demonstrates building an open source solution, with minimal to no cost for hardware or software, while still maintaining the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which is a combination of OSSEC and the ELK stack, integrated with a Network Intrusion Detection System (NIDS). The success of the integration is determined by measuring positive attack detection for each of the different configuration options. To perform the testing, a deliberately vulnerable platform named Metasploitable was used as a victim host. The victim host's vulnerabilities were created specifically to serve as targets for Metasploit. The attacks were generated by utilising the Metasploit Framework on a prebuilt Kali Linux host.
- Full Text:
- Date Issued: 2019
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements for various areas, such as retail and medical services. As part of compliance it may be required to have infrastructure in place to monitor the activities in the environment to ensure that the relevant data and environment is sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM's Qradar, Logrythm, or Splunk. However, these come at significant cost and arent suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation for detection of activities that place the environment in danger. This usually requires vendor implementation assistance that also would come at a cost. Alternatively, there are open source solutions that provide the required functionality. This research will demonstrate building an open source solution, with minimal to no cost for hardware or software, while still maintaining the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which is a combination of OSSEC and the ELK stack, integrated with an Network Intrusion Detection System (NIDS). The success of the integration, is determined by measuring postive attack detection based on each different configuration options. To perform the testing, a deliberately vulnerable platform named Metasploitable will be used as a victim host. The victim host vulnerabilities were created specifically to serve as target for Metasploit. The attacks were generated by utilising Metasploit Framework on a prebuilt Kali Linux host.
- Full Text:
- Date Issued: 2019
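As a sketch of the detection-measurement step described in the abstract above: once a SIEM such as Wazuh writes its alerts as JSON lines, the positive-detection rate for a given configuration can be computed by counting the alerts that match the rules triggered by the launched attacks. The field names and rule IDs below are illustrative assumptions, not the exact Wazuh schema:

```python
import json

# Hypothetical sample of alert lines, shaped loosely like a SIEM's
# JSON alert log (field names here are assumptions, not the exact
# Wazuh alerts.json schema).
raw_alerts = [
    '{"rule": {"id": "5710", "description": "sshd: login attempt with non-existent user"}, "srcip": "10.0.0.5"}',
    '{"rule": {"id": "31101", "description": "Web server 400 error code"}, "srcip": "10.0.0.5"}',
    '{"rule": {"id": "5710", "description": "sshd: login attempt with non-existent user"}, "srcip": "10.0.0.9"}',
]

# Rule IDs treated as positive detections for the attacks launched
# against the victim host (illustrative values).
attack_rule_ids = {"5710"}

def detection_rate(lines, attack_ids):
    """Return (positives, total) for alerts matching the attack rules."""
    positives = 0
    for line in lines:
        alert = json.loads(line)
        if alert["rule"]["id"] in attack_ids:
            positives += 1
    return positives, len(lines)

hits, total = detection_rate(raw_alerts, attack_rule_ids)
print(f"{hits}/{total} alerts matched known attack rules")  # 2/3
```

Repeating this count over each NIDS/configuration option gives the per-configuration detection figures the study compares.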
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a boundless list of services and opportunities, generating boundless sources of information which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in static and mobile environments; this fact alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats on corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of the cyber threat. Penetration tests and vulnerability assessments are undervalued, seen as a formality, and not used to increase system security. If used regularly, computer systems would be more secure and attacks minimized. With the growth in technology, industries all over the globe have become fully dependent on information systems in doing their day-to-day business. As technology evolves and new technology becomes available, the risk of the dangers which accompany it grows as well. For industry to protect itself against this growth in technology, personnel with a certain skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is on the rise as well. It is estimated that there is a global shortage of one million cyber security professionals.
What is the reason for this skills shortage? Will it be possible to close this gap? This study identifies the skills gap and possible ways to close it. Research was conducted on international cyber security standards, cyber security training at universities, and international certification, focusing specifically on penetration testing, as well as on the needs of industry when recruiting new penetration testers, finishing with suggestions on how to fill possible gaps in the skills market.
- Full Text:
- Date Issued: 2021
Search engine poisoning and its prevalence in modern search engines
- Authors: Blaauw, Pieter
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Rootkits (Computer software) , Spyware (Computer software)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4572 , http://hdl.handle.net/10962/d1002037
- Description: The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, as well as a look into the motives for running these campaigns. Three high profile case studies are examined, and the various Search Engine Poisoning campaigns associated with these case studies are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
- Full Text:
- Date Issued: 2013
Software quality assurance in a remote client/contractor context
- Authors: Black, Angus Hugh
- Date: 2006
- Subjects: Computer software -- Quality control , Software engineering , Information technology
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4648 , http://hdl.handle.net/10962/d1006615 , Computer software -- Quality control , Software engineering , Information technology
- Description: With the reliance on information technology, and the software that this technology utilizes, increasing every day, it is of paramount importance that software developed be of an acceptable quality. This quality can be achieved through the utilization of various software engineering standards and guidelines. The question is, to what extent do these standards and guidelines need to be utilized, and how are they implemented? This research focuses on how guidelines developed by standardization bodies and the Unified Process developed by Rational can be integrated to achieve a suitable process and version control system within the context of a remote client/contractor small team environment.
- Full Text:
- Date Issued: 2006
The P.R.O. expert system shell
- Authors: Bradshaw, John
- Date: 1987 , 2013-04-03
- Subjects: Expert systems (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4617 , http://hdl.handle.net/10962/d1006302 , Expert systems (Computer science)
- Description: This thesis reports the research which led to the development of the P.R.O. Expert System Shell. The P.R.O. System is primarily, though not exclusively, designed for use in ecological domains. In the light of two specific expert systems, the RCS (River Conservation Status) and the Aquaculture systems, which were developed as part of this research, a number of areas of importance have been identified. The most significant of these is the need to handle uncertainty effectively. The style of knowledge representation to be implemented also plays an important role. After consulting the relevant literature and the available microcomputer expert system shells, a number of ideas have been included in the P.R.O. System. The P.R.O. System is a backward chaining, production system based expert system shell. It embodies a simple but effective method of handling uncertainty. An important feature of this method is that it takes cognizance of the different relative importances of the conditions which need to be satisfied before a conclusion can be reached. The knowledge base consists of more than rules and questions. It also contains meta-knowledge, which is used by the inference engine. The P.R.O. System has been designed to be of practical use. Its strongest recommendations are, therefore, that the two non-trivial systems which have been implemented in it have been accepted by the experts and their peers as systems which produce good, accurate answers.
- Full Text:
- Date Issued: 1987
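The weighted-condition uncertainty handling described in the abstract above can be illustrated with a minimal backward-chaining sketch. The rule, weights, and fact names here are hypothetical (the thesis's actual uncertainty calculus is not given in the abstract); the sketch shows only the general idea of a conclusion's certainty depending on the relative importances of its conditions:

```python
# Hypothetical knowledge base: each rule maps a goal to weighted
# conditions, with weights expressing relative importance (summing to 1).
rules = {
    "high_conservation_status": [
        ("water_quality_good", 0.5),
        ("riparian_zone_intact", 0.3),
        ("no_alien_fish", 0.2),
    ],
}

# Certainties supplied by the user during a consultation (made-up values).
facts = {"water_quality_good": 0.9, "riparian_zone_intact": 0.6, "no_alien_fish": 1.0}

def certainty(goal):
    """Backward-chain: a goal's certainty is the importance-weighted
    sum of the certainties of its conditions."""
    if goal in facts:
        return facts[goal]
    conditions = rules.get(goal)
    if conditions is None:
        return 0.0  # no rule and no fact: treat as unknown
    return sum(weight * certainty(cond) for cond, weight in conditions)

print(round(certainty("high_conservation_status"), 2))  # 0.83
```

A heavily weighted condition (water quality, at 0.5) thus moves the conclusion's certainty far more than a minor one, which is the behaviour the abstract attributes to the P.R.O. method.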
Models of internet connectivity for secondary schools in the Grahamstown circuit
- Authors: Brandt, Ingrid Gisélle
- Date: 2006
- Subjects: Internet in education -- South Africa -- Grahamstown , Computer networks -- South Africa -- Grahamstown , Information technology -- Study and teaching (Secondary) -- South Africa -- Grahamstown , Information technology -- South Africa -- Grahamstown , Computer-assisted instruction -- South Africa -- Grahamstown , Telecommunication in education -- South Africa -- Grahamstown , Education, Secondary -- South Africa -- Grahamstown -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4641 , http://hdl.handle.net/10962/d1006566 , Internet in education -- South Africa -- Grahamstown , Computer networks -- South Africa -- Grahamstown , Information technology -- Study and teaching (Secondary) -- South Africa -- Grahamstown , Information technology -- South Africa -- Grahamstown , Computer-assisted instruction -- South Africa -- Grahamstown , Telecommunication in education -- South Africa -- Grahamstown , Education, Secondary -- South Africa -- Grahamstown -- Data processing
- Description: Information and Communication Technologies (ICTs) are becoming more pervasive in South African schools and are increasingly considered valuable tools in education, promoting the development of higher cognitive processes and allowing teachers and learners access to a plethora of information. This study investigates models of Internet connectivity for secondary schools in the Grahamstown Circuit. The various networking technologies currently available to South African schools, or likely to become available to South African schools in the future, are described along with the telecommunications legislation which governs their use in South Africa. Furthermore, current ICT in education projects taking place in South Africa are described together with current ICT in education policy in South Africa. This information forms the backdrop of a detailed schools survey that was conducted at all 13 secondary schools in the Grahamstown Circuit and enriched with experimental work in the provision of metropolitan network links to selected schools, mostly via Wi-Fi. The result of the investigation is the proposal of a Grahamstown Circuit Metropolitan Education Network.
- Full Text:
- Date Issued: 2006
Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level
- Authors: Brown, Dane
- Date: 2018
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63470 , vital:28414
- Description: Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is of a complex nature, and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to effectively fuse their feature vectors for a complementary result. The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms. A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better adaptation to dynamic lighting conditions, while reducing feature dimensionality to the benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets, using four classifiers, namely Eigen, Fisher, Local Binary Pattern Histogram and linear Support Vector Machine, on various feature extraction methods. The recognition performance of the proposed algorithms is shown to outperform the vast majority of related studies when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap such as fewer training samples or data with a greater number of classes. A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions that occur when differences in the image data quality of two modalities are substantial. However, when two poor quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach.
Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation to better understand and construct biometric systems that are effective in a variety of applications.
- Full Text:
- Date Issued: 2018
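Feature-level fusion of the kind evaluated above is, at its simplest, per-modality normalisation followed by concatenation, so that a single classifier sees one combined vector. A minimal sketch (the feature values are made up, and the thesis's actual normalisation scheme is not specified in the abstract):

```python
def normalize(v):
    """Min-max normalise a feature vector to [0, 1] so that modalities
    with larger numeric ranges do not dominate the fused vector."""
    lo, hi = min(v), max(v)
    if hi == lo:
        return [0.0 for _ in v]
    return [(x - lo) / (hi - lo) for x in v]

def fuse(*feature_vectors):
    """Feature-level fusion: concatenate the normalised per-modality
    vectors into one vector for a downstream classifier."""
    fused = []
    for v in feature_vectors:
        fused.extend(normalize(v))
    return fused

face = [12.0, 40.0, 25.0]  # hypothetical face features
palm = [0.2, 0.8]          # hypothetical palmprint features
print(fuse(face, palm))
```

The fused vector's higher dimensionality is exactly why the abstract stresses dimensionality reduction at extraction time: without it, the concatenated vector invites classifier overfitting.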
The automatic generation of code generators with particular reference to cobol
- Authors: Bulmer, Allan Roy
- Date: 1980 , 2013-03-15
- Subjects: Code generators , COBOL (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4613 , http://hdl.handle.net/10962/d1004913 , Code generators , COBOL (Computer program language)
- Full Text:
- Date Issued: 1980
Distributed authentication for resource control
- Authors: Burdis, Keith Robert
- Date: 2000
- Subjects: Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4630 , http://hdl.handle.net/10962/d1006512 , Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Description: This thesis examines distributed authentication in the process of controlling computing resources. We investigate user sign-on and two of the main authentication technologies that can be used to control a resource through authentication and providing additional security services. The problems with the existing sign-on scenario are that users have too much credential information to manage and are prompted for this information too often. Single Sign-On (SSO) is a viable solution to this problem if physical procedures are introduced to minimise the risks associated with its use. The Generic Security Services API (GSS-API) provides security services in a manner independent of the environment in which these security services are used, encapsulating security functionality and insulating users from changes in security technology. The underlying security functionality is provided by GSS-API mechanisms. We developed the Secure Remote Password GSS-API Mechanism (SRPGM) to provide a mechanism that has low infrastructure requirements, is password-based and does not require the use of long-term asymmetric keys. We provide implementations of the Java GSS-API bindings and the LIPKEY and SRPGM GSS-API mechanisms. The Simple Authentication and Security Layer (SASL) provides security to connection-based Internet protocols. After finding deficiencies in existing SASL mechanisms we developed the Secure Remote Password SASL mechanism (SRP-SASL) that provides strong password-based authentication and countermeasures against known attacks, while still being simple and easy to implement. We provide implementations of the Java SASL binding and several SASL mechanisms, including SRP-SASL.
- Full Text:
- Date Issued: 2000
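At the core of the Secure Remote Password protocol underlying SRPGM and SRP-SASL is a password verifier that the server stores in place of the password itself, which is what lets the mechanism avoid long-term asymmetric keys. A minimal sketch of the verifier computation (the hash construction here is a simplification of the standard SRP-6a derivation, which uses SHA-1 over a slightly different encoding; the group parameters are intended to be the 1024-bit example group from RFC 5054):

```python
import hashlib
import secrets

# Group parameters: a large safe prime N and generator g
# (intended to be the 1024-bit group from RFC 5054).
N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576"
    "D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD1"
    "5DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC"
    "68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3",
    16,
)
g = 2

def srp_verifier(username, password, salt):
    """Compute an SRP-style password verifier v = g^x mod N,
    with x derived from the salt, username and password."""
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha256(salt + inner).digest(), "big")
    return pow(g, x, N)

salt = secrets.token_bytes(16)
v = srp_verifier("alice", "correct horse", salt)
# The server stores (username, salt, v); the password never leaves
# the client, and v alone does not reveal it without a dictionary attack.
print(0 < v < N)  # True
```

Because authentication then proceeds as a zero-knowledge-style exchange over v, a server compromise leaks only the verifier, not reusable credentials.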
Log analysis aided by latent semantic mapping
- Authors: Buys, Stephanus
- Date: 2013 , 2013-04-14
- Subjects: Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4575 , http://hdl.handle.net/10962/d1002963 , Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Description: In an age of zero-day exploits and increased on-line attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response, forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the increased volumes at which events are often generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in the formats or contents of the events. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events in order for security analysis and automation to proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be an effective method of, among other things, enabling word clustering, document clustering, topic clustering and semantic inference. This research is an investigation into the practical application of LSM in the discipline of Security Event Analysis, showing the value of using LSM to assist practitioners in identifying types of events, classifying events as belonging to certain sources or technologies and disambiguating different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that resulted in improved efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption and use of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language or computer-learning processing techniques. 
- Full Text:
- Date Issued: 2013
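The latent semantic approach summarised in the abstract above can be illustrated with a minimal sketch. This is not the thesis's implementation: the sample events, the function names and the simple whitespace tokeniser are all assumptions. The sketch only shows the core idea of projecting a term-event matrix into a low-rank latent space (via truncated SVD) so that similar log events land close together.

```python
# Illustrative sketch (not the thesis code): mapping log events into a
# low-rank latent space so that similar events cluster together.
import numpy as np

def latent_vectors(events, k=2):
    """Project whitespace-tokenised events into a k-dimensional latent space."""
    docs = [e.split() for e in events]
    vocab = sorted({t for d in docs for t in d})
    # Term-event count matrix (rows: terms, columns: events).
    A = np.zeros((len(vocab), len(events)))
    for j, d in enumerate(docs):
        for t in d:
            A[vocab.index(t), j] += 1.0
    # Truncated SVD: keep only the k strongest latent dimensions.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per event

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical sample events: two sshd failures and one firewall drop.
events = [
    "sshd failed password for root from 10.0.0.1",
    "sshd failed password for admin from 10.0.0.2",
    "firewall dropped tcp packet from 10.0.0.3",
]
v = latent_vectors(events)
# The two sshd events share most tokens, so their latent vectors should be
# far more similar to each other than either is to the firewall event.
```

In the latent space, cosine similarity between event vectors gives a simple basis for the kind of event identification and disambiguation the abstract describes.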
Explanation in rule-based expert systems
- Authors: Carden, Kenneth John
- Date: 1988
- Subjects: Expert systems (Computer science) , Ecology -- Research
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4569 , http://hdl.handle.net/10962/d1002034
- Description: The ability of an expert system to explain its reasoning is fundamental to the system's credibility. Explanations become even more vital in systems which use methods of uncertainty propagation. The research documented here describes the development of an explanation subsystem which interfaces with the P.R.O. Expert System Toolkit. This toolkit has been used in the development of three small ecological expert systems. The project involved adapting the results of research in the field of explanation generation to the requirements of the ecologist users. The subsystem contains two major components. The first lists the rules that fired during a consultation. The second comprises routines that quantify the effect of the answers given to questions on the system's conclusions; these routines can be used to perform sensitivity analyses on the answers given. The incorporation of such routines in small expert systems is novel.
- Full Text:
- Date Issued: 1988
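The first explanation component described above, listing the rules that fired during a consultation, can be sketched with a toy forward-chaining engine. This is not the P.R.O. Expert System Toolkit: the rule format, the rule names, the ecological sample facts and the `consult` function are illustrative assumptions only.

```python
# Toy forward-chaining engine (illustrative, not the P.R.O. toolkit):
# the `fired` list is the raw material for a "rules that fired" explanation.
def consult(facts, rules):
    """Apply rules until none fire; return (derived facts, fired-rule trace)."""
    facts, fired = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, (conditions, conclusion) in rules.items():
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                fired.append(name)   # record the firing for the explanation
                changed = True
    return facts, fired

# Hypothetical ecological rules: condition set -> conclusion.
rules = {
    "R1": ({"wetland", "reeds"}, "habitat-suitable"),
    "R2": ({"habitat-suitable", "sightings"}, "species-present"),
}
facts, fired = consult({"wetland", "reeds", "sightings"}, rules)
# `fired` now lists R1 then R2, in firing order, ready to be shown to the user.
```

Re-running `consult` with one answer changed and comparing the traces is the essence of the sensitivity analysis the abstract mentions.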
An exploratory investigation into an Integrated Vulnerability and Patch Management Framework
- Authors: Carstens, Duane
- Date: 2021-04
- Subjects: Computer security , Computer security -- Management , Computer networks -- Security measures , Patch Management , Integrated Vulnerability
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/177940 , vital:42892
- Description: In the rapidly changing world of cybersecurity, the constant increase in vulnerabilities continues to be a prevalent issue for many organisations. Malicious actors are aware that most organisations cannot timeously patch known vulnerabilities and are ill-prepared to protect against newly created vulnerabilities for which a signature or patch does not yet exist. Consequently, information security personnel face ongoing challenges in mitigating these risks. In this research, the problem of remediation in a world of increasing vulnerabilities is considered. The current paradigm of vulnerability and patch management is reviewed using a pragmatic approach to all variables associated with these services and practices, and, as a result, what is and is not working in terms of remediation is understood. In addition, a taxonomy is created to provide a graphical representation of all variables associated with vulnerability and patch management, based on existing literature. Frameworks currently used in industry to create an effective engagement model between vulnerability and patch management services are considered. The link between quantifying a threat, vulnerability and consequence; what Microsoft makes available for patching; and the action plan for resulting vulnerabilities is explored. Furthermore, the processes and means of communication between these services are investigated to ensure effective remediation of vulnerabilities, ultimately improving an organisation's security risk posture. To measure the security risk posture effectively, progress across these services is tracked through a single averaged measurement metric. The outcome of the research highlights influencing factors that affect successful vulnerability management, in line with themes identified in the research taxonomy. These influencing factors are, however, significantly undermined when staff within the same organisation lack a clear and consistent understanding of their roles, organisational capabilities, and objectives for effective vulnerability and patch management. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-04
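The thesis's actual averaged measurement metric is not reproduced in the abstract. Purely as an illustration of the idea, a posture score can be sketched as a plain average of per-service remediation fractions; the function name, the service names and the averaging scheme are all assumptions, not the thesis's definition.

```python
# Hedged sketch of the *idea* of a single averaged metric across services
# (the thesis's own metric definition is not reproduced here).
def posture_score(service_completion):
    """service_completion: {service: fraction of items remediated, 0.0-1.0}.

    Returns one number summarising remediation progress across services.
    """
    if not service_completion:
        return 0.0
    return sum(service_completion.values()) / len(service_completion)

# Hypothetical snapshot: averaging the two services yields a single score
# (here roughly 0.70) that can be tracked over time.
score = posture_score({"vulnerability_mgmt": 0.80, "patch_mgmt": 0.60})
```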
Minimal motion capture with inverse kinematics for articulated human figure animation
- Authors: Casanueva, Luis
- Date: 2000
- Subjects: Virtual reality , Image processing -- Digital techniques
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4620 , http://hdl.handle.net/10962/d1006485 , Virtual reality , Image processing -- Digital techniques
- Description: Animating an articulated figure usually requires expensive hardware in terms of motion capture equipment, processing power and rendering power. This implies a high-cost system and thus rules out the use of personal computers to drive avatars in virtual environments. We propose a system to animate an articulated human upper body in real time, using minimal motion capture trackers to provide position and orientation for the limbs. The system has to drive an avatar in a virtual environment on a low-end computer, and the cost of the motion capture equipment must be relatively low (hence the use of minimal trackers). We discuss the various types of motion capture equipment and choose electromagnetic trackers, which are adequate for our requirements while being reasonably priced. We also discuss the use of inverse kinematics to solve for the articulated chains making up the topology of the articulated figure. Furthermore, we offer a method to describe articulated chains, as well as a process to specify the reach of chains of up to four links with various levels of redundancy for use in articulated figures. We then provide various types of constraints to reduce the redundancy of under-constrained articulated chains, specifically the chains found in an articulated human upper body. These methods include a way to resolve the redundancy in the orientation of the neck link, as well as three different methods to resolve the redundancy of the articulated human arm. The first method eliminates a degree of freedom from the chain, thus reducing its redundancy. The second method calculates the elevation angle of the elbow position from the elevation angle of the hand. The third method determines the actual position of the elbow from an average of previous elbow positions, indexed by the position and orientation of the hand; these previous positions are captured during the calibration process. 
The redundancy of the neck is easily resolved owing to the small amount of redundancy in the chain. When solving the arm, the first method, which should give a perfect result in theory, gives a poor result in practice due to the limitations of both the motion capture equipment and the design. The second method provides an adequate estimate of the redundant elbow position in most cases, although it fails in some; still, it benefits from a simple approach and requires very little calibration. The third method is the most accurate of the three for the position of the redundant elbow, although it too fails in some cases and requires a long calibration session for each user. The last two methods allow the calibration data to be reused in later sessions, considerably reducing the calibration required. In combination with a virtual reality system, these processes allow the real-time animation of an articulated figure to drive avatars in virtual environments, or low-quality animation on a low-end computer.
- Full Text:
- Date Issued: 2000
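The arm-redundancy methods summarised above build on standard two-link inverse kinematics. As a hedged illustration only (planar rather than 3D, and not the thesis's code; the function names and the elbow-up convention are assumptions), the elbow angle can be solved with the law of cosines and the elbow then placed by forward kinematics:

```python
# Minimal planar two-link IK sketch (illustrative, not the thesis code):
# solve the elbow angle via the law of cosines, then place the elbow.
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Return (shoulder_angle, elbow_angle) reaching target (x, y).

    l1, l2 are the upper-arm and forearm lengths; elbow_up picks one of
    the two mirror-image solutions (the redundancy discussed above).
    """
    d2 = x * x + y * y
    c = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
    c = max(-1.0, min(1.0, c))                        # clamp: unreachable targets
    q2 = math.acos(c) * (1.0 if elbow_up else -1.0)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def elbow_position(q1, l1):
    """Forward kinematics for the elbow joint (shoulder at the origin)."""
    return l1 * math.cos(q1), l1 * math.sin(q1)

# Example: unit-length segments reaching for the hand at (1, 1); the
# forward-kinematics check below confirms the wrist lands on the target.
q1, q2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
ex, ey = elbow_position(q1, 1.0)
```

In 3D the elbow additionally swings on a circle about the shoulder-hand axis; the thesis's second method fixes that remaining freedom from the hand's elevation angle, which this planar sketch sidesteps.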
The monitor and synchroniser concepts in the programming language CLANG
- Authors: Chalmers, Alan Gordon
- Date: 1985
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4616 , http://hdl.handle.net/10962/d1006132
- Full Text:
- Date Issued: 1985