The ISO/IEC 27002 and ISO/IEC 27799 information security management standards: a comparative analysis from a healthcare perspective
- Authors: Ngqondi, Tembisa Grace
- Date: 2009
- Subjects: Computer security , Computer networks -- Security measures -- Standards , Data protection -- Management -- Standards
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9765 , http://hdl.handle.net/10948/1066 , Computer security , Computer networks -- Security measures -- Standards , Data protection -- Management -- Standards
- Description: Technological change has become significant in the health sector and is an area of concern with regard to securing health information assets. Health information systems hosting personal health information expose these information assets to ever-evolving threats. This information includes aspects of an extremely sensitive nature; for example, a particular patient may have a history of drug abuse, which would be reflected in the patient’s medical record. The private nature of patient information places a higher demand on the need to ensure privacy. Ensuring that the security and privacy of health information remain intact is therefore vital in the healthcare environment. In order to protect information appropriately and effectively, good information security management practices should be followed. To this end, the International Organization for Standardization (ISO) published a code of practice for information security management, namely ISO 27002 (2005). This standard is widely used in industry, but it is a generic standard aimed at all industries and therefore does not consider the unique security needs of a particular environment. Because of the unique nature of personal health information and its security and privacy requirements, the need for a healthcare sector-specific standard for information security management was identified. ISO 27799 was therefore published as an industry-specific variant of ISO 27002, geared towards addressing security requirements in health informatics; it serves as an implementation guide for ISO 27002 when implemented in the health sector. The publication of ISO 27799 is considered a positive development in the quest to improve health information security. However, the question arises whether ISO 27799 sufficiently addresses the security needs of the healthcare domain.
The extensive use of ISO 27002 implies that many proponents of this standard in healthcare now have to ensure that they meet the (assumed) increased requirements of ISO 27799. The purpose of this research is therefore to conduct a comprehensive comparison of the ISO 27002 and ISO 27799 standards to determine whether ISO 27799 serves the specific needs of the health sector from an information security management point of view.
- Full Text:
- Date Issued: 2009
Towards a capability maturity model for a cyber range
- Authors: Aschmann, Michael Joseph
- Date: 2020
- Subjects: Computer software -- Development , Computer security
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163142 , vital:41013
- Description: This work describes research undertaken towards the development of a Capability Maturity Model (CMM) for Cyber Ranges (CRs) focused on cyber security. Global cyber security needs are on the rise, and the need for attribution within the cyber domain is of particular concern. This has prompted major efforts to enhance cyber capabilities within organisations to increase their total cyber resilience posture. These efforts include, but are not limited to, the testing of computational devices, networks, and applications, and cyber skills training focused on prevention, detection and cyber attack response. A cyber range allows for the testing of the computational environment. By developing cyber events within a confined virtual or sand-boxed cyber environment, a cyber range can prepare the next generation of cyber security specialists to handle a variety of potential cyber attacks. Cyber ranges have different purposes, each designed to fulfil a different computational testing and cyber training goal; consequently, cyber ranges can vary greatly in their level of variety, capability, maturity and complexity. As cyber ranges proliferate and become increasingly valued as tools for cyber security, a method to classify or rate them becomes essential. Yet while universal criteria for measuring cyber ranges in terms of their capability maturity levels become more critical, there are currently very limited resources for researchers aiming to perform this kind of work. For this reason, this work proposes and describes a CMM designed to give organisations the ability to benchmark the capability maturity of a given cyber range. This research adopted a synthesised approach to the development of a CMM, grounded in prior research and focused on the production of a conceptual model that provides a useful level of abstraction.
In order to achieve this goal, the core capability elements of a cyber range are defined along with their relative importance, allowing for the development of a proposed classification of cyber range levels. An analysis of data gathered during the course of an expert review, together with other research, further supported the development of the conceptual model. In the context of cyber range capability, classification includes the ability of the cyber range to perform its functions optimally with different core capability elements, focusing on the Measurement of Capability (MoC) with its elements, namely effect, performance and threat ability. Cyber range maturity can evolve over time and can be defined through the Measurement of Maturity (MoM) with its elements, namely people, processes and technology. The combination of these measurements, utilising the CMM for a CR, determines the capability maturity level of a CR. The primary outcome of this research is the proposed level-based CMM framework for a cyber range, developed using adopted and synthesised CMMs, the analysis of an expert review, and the mapping of the results.
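The abstract does not give a concrete scoring formula, but the idea of combining the MoC elements (effect, performance, threat ability) and the MoM elements (people, processes, technology) into a single maturity level can be sketched as follows. The element names follow the abstract; the equal weighting, the 0-to-1 element scores, and the five-level thresholds are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative sketch: combining Measurement of Capability (MoC) and
# Measurement of Maturity (MoM) element scores into one maturity level.
# Element names follow the abstract; weights and thresholds are assumed.

MOC_ELEMENTS = ("effect", "performance", "threat_ability")
MOM_ELEMENTS = ("people", "processes", "technology")

def maturity_level(moc: dict, mom: dict) -> int:
    """Map 0-1 element scores to an assumed five-level CMM scale."""
    moc_score = sum(moc[e] for e in MOC_ELEMENTS) / len(MOC_ELEMENTS)
    mom_score = sum(mom[e] for e in MOM_ELEMENTS) / len(MOM_ELEMENTS)
    combined = (moc_score + mom_score) / 2  # assumed equal weighting
    # Assumed cut-offs for levels 1 (initial) through 5 (optimising).
    thresholds = (0.2, 0.4, 0.6, 0.8)
    return 1 + sum(combined >= t for t in thresholds)

level = maturity_level(
    {"effect": 0.75, "performance": 0.5, "threat_ability": 0.25},
    {"people": 1.0, "processes": 0.5, "technology": 0.75},
)  # combined score 0.625, i.e. level 4 on the assumed scale
```

Any real benchmarking exercise would replace the uniform averaging with the relative importance of the core capability elements that the thesis derives from its expert review.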
- Full Text:
- Date Issued: 2020
Towards a collection of cost-effective technologies in support of the NIST cybersecurity framework
- Authors: Shackleton, Bruce Michael Stuart
- Date: 2018
- Subjects: National Institute of Standards and Technology (U.S.) , Computer security , Computer networks Security measures , Small business Information technology Cost effectiveness , Open source software
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/62494 , vital:28199
- Description: The NIST Cybersecurity Framework (CSF) is a specific risk and cybersecurity framework. It provides guidance on controls that can be implemented to help improve an organisation’s cybersecurity risk posture. The CSF Functions consist of Identify, Protect, Detect, Respond, and Recover. Like most Information Technology (IT) frameworks, there are elements of people, processes, and technology, and the same elements are required to successfully implement the NIST CSF. This research focuses specifically on the technology element. While there are many commercial technologies available to a small to medium-sized business, the costs can be prohibitively expensive. Therefore, this research investigates cost-effective technologies and assesses their alignment to the NIST CSF. The assessment was made against the NIST CSF subcategories. Each subcategory was analysed to identify where a technology would likely be required. The framework provides a list of Informative References, which were used to create high-level technology categories and to identify the technical controls against which the technologies were measured. The technologies tested were either open source or proprietary. All open source technologies tested were free to use or have a free community edition. Proprietary technologies had to be free to use or considered generally available to most organisations, such as components contained within Microsoft platforms. The results from the experimentation demonstrated that there are multiple cost-effective technologies that can support the NIST CSF. Once all technologies were tested, the NIST CSF was extended: two new columns were added, namely high-level technology category and tested technology. The columns were populated with output from the research. This extended framework begins an initial collection of cost-effective technologies in support of the NIST CSF.
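The two-column extension described in the abstract can be pictured as a simple tabular structure. The subcategory identifiers below are genuine NIST CSF v1.x IDs, but the technology entries are illustrative placeholders chosen for this sketch, not the thesis's actual test results.

```python
# Sketch of the extended NIST CSF: each subcategory row gains a
# "high-level technology category" and a "tested technology" column.
# Subcategory IDs are real CSF identifiers; the technology entries
# are illustrative examples only, not the thesis's findings.

extended_csf = [
    {"function": "Protect", "subcategory": "PR.DS-1",
     "tech_category": "Disk encryption", "tested_technology": "VeraCrypt"},
    {"function": "Detect", "subcategory": "DE.CM-1",
     "tech_category": "Network monitoring", "tested_technology": "Security Onion"},
]

def technologies_for(function_name: str) -> list:
    """List the tested technologies recorded under one CSF Function."""
    return [row["tested_technology"] for row in extended_csf
            if row["function"] == function_name]
```

A structure like this makes it straightforward to query the collection by Function or subcategory when selecting cost-effective tooling.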
- Full Text:
- Date Issued: 2018
Towards a framework for building security operation centers
- Authors: Jacobs, Pierre Conrad
- Date: 2015
- Subjects: Security systems industry , Systems engineering , Expert systems (Computer science) , COBIT (Information technology management standard) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4710 , http://hdl.handle.net/10962/d1017932
- Description: In this thesis a framework for Security Operation Centers (SOCs) is proposed. It was developed by utilising Systems Engineering best practices, combined with industry-accepted standards and frameworks, such as the TM Forum’s eTOM framework, CoBIT, ITIL, and ISO/IEC 27002:2005. This framework encompasses the design considerations, the operational considerations and the means to measure the effectiveness and efficiency of SOCs. The intent is to provide guidance to consumers on how to compare and measure the capabilities of SOCs provided by disparate service providers, and to provide service providers (internal and external) with a framework to use when building and improving their offerings. The importance of providing a consistent, measurable and guaranteed service to customers is growing, as there is an increased focus on the holistic management of security. This has in turn resulted in an increased number of both internal and managed service provider solutions. While some frameworks exist for designing, building and operating specific security technologies used within SOCs, we did not find any comprehensive framework for designing, building and managing SOCs. Consequently, consumers of SOCs do not enjoy a consistent experience from vendors, and may experience inconsistent services from geographically dispersed offerings provided by the same vendor.
- Full Text:
- Date Issued: 2015
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore not addressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method / process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example, by compromising syslog servers, or by going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, as well as to misdirect attribution by planting false artefacts for forensic analysis and by attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic).
These defences all have the potential to result in the attacker being discovered; attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down, and learn from attackers. By employing various tactics, defenders are able to increase their chance of detecting attacks and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion of the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days. This attack has a striking parallel in the approach used in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources can do in the future.
- Full Text:
- Date Issued: 2019
WSP3: a web service model for personal privacy protection
- Authors: Ophoff, Jacobus Albertus
- Date: 2003
- Subjects: Data protection , Computer security , Privacy, Right of
- Language: English
- Type: Thesis , Masters , MTech (Information Technology)
- Identifier: vital:10798 , http://hdl.handle.net/10948/272 , Data protection , Computer security , Privacy, Right of
- Description: The prevalent use of the Internet not only brings with it numerous advantages, but also some drawbacks. The biggest of these problems is the threat to the individual’s personal privacy. This privacy issue is playing a growing role with respect to technological advancements. While new service-based technologies are considerably increasing the scope of information flow, the cost is a loss of control over personal information and therefore privacy. Existing privacy protection measures might fail to provide effective privacy protection in these new environments. This dissertation focuses on the use of new technologies to improve the levels of personal privacy. In this regard the WSP3 (Web Service Model for Personal Privacy Protection) model is formulated. This model proposes a privacy protection scheme using Web Services. Having received tremendous industry backing, Web Services is a very topical technology, promising much in the evolution of the Internet. In our society privacy is highly valued and a very important issue, and protecting personal privacy in environments using new technologies is crucial for their future success. These facts, combined with the fact that the WSP3 model focuses on Web Service environments, lead to the following realizations for the model: the WSP3 model provides users with control over their personal information and allows them to express their desired level of privacy; parties requiring access to a user’s information are explicitly defined by the user, as is the information available to them; the WSP3 model utilizes a Web Services architecture to provide privacy protection, and integrates security techniques, such as cryptography, into the architecture as required; and the WSP3 model integrates with current standards to maintain their benefits, allowing the implementation of the model in any environment supporting these base technologies.
In addition, the research involves the development of a prototype according to the model. This prototype serves to present a proof-of-concept by illustrating the WSP3 model and all the technologies involved. The WSP3 model gives users control over their privacy and allows everyone to decide their own level of protection. By incorporating Web Services, the model also shows how new technologies can be used to offer solutions to existing problem areas.
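The core idea that a user explicitly defines which parties may access which items of personal information can be sketched as a minimal policy check. The policy structure, party names and information items below are illustrative assumptions for this sketch, not the actual WSP3 schema or prototype.

```python
# Minimal sketch of the WSP3 principle: the user explicitly grants each
# party access to specific items of personal information, and anything
# not granted is denied. Names and items here are hypothetical examples.

user_policy = {
    "dr_smith": {"medical_history", "contact_details"},
    "insurer": {"contact_details"},
}

def may_access(party: str, item: str) -> bool:
    """True only if the user has explicitly granted this party the item."""
    return item in user_policy.get(party, set())
```

In the actual model this decision would sit behind a Web Service interface, with cryptographic techniques protecting the information in transit, but the default-deny evaluation shown here is the essential control the abstract describes.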
- Full Text:
- Date Issued: 2003