The analysis of a computer music network and the implementation of essential subsystems
- Authors: Wilks, Antony John
- Date: 1995
- Subjects: Computer networks , Computer music , MIDI (Standard)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4666 , http://hdl.handle.net/10962/d1006682 , Computer networks , Computer music , MIDI (Standard)
- Description: The inability to share resources in commercial and institutional computer music studios results in non-optimal resource utilisation. The use of computers to process, store and communicate data can be extended within these studios, to provide the capability of sharing resources amongst their users. This thesis describes a computer music network which was designed for this purpose. Certain devices had to be custom built for the implementation of the network. The thesis discusses the design and construction of these devices.
- Full Text:
- Date Issued: 1995
Design, evaluation and comparison of evolution and reinforcement learning models
- Authors: Mclean, Clinton Brett
- Date: 2002
- Subjects: Evolutionary computation , Neural networks (Computer science) , Reinforcement learning
- Language: English
- Type: Thesis , Masters , MEcon
- Identifier: vital:4625 , http://hdl.handle.net/10962/d1006493
- Description: This work presents the design, evaluation and comparison of evolution and reinforcement learning models, in isolation and combined in Darwinian and Lamarckian frameworks, with particular emphasis on their adaptive behaviour in response to environments that become increasingly unstable. Our ultimate objective is to determine whether hybrid models of evolution and learning can demonstrate adaptive qualities beyond those of such models applied in isolation. This work demonstrates the limitations of evolution, reinforcement learning and Lamarckian models in dealing with increasingly unstable environments, while noting the effective ability of a Darwinian model to assimilate increasing levels of instability. This is shown to be a result of the Darwinian evolution model's ability to separate learning at two levels: the population's experience of the environment over the course of many generations, and the individual's experience of the environment over the course of its lifetime. Thus, knowledge relating to the general characteristics of the environment over many generations can be maintained in the population's genotypes, with phenotype (reinforcement) learning being utilized to adapt a particular agent to the particular characteristics of its environment. Lamarckian evolution, by contrast, is shown to demonstrate adaptive characteristics that are highly effective in stable environments. Selection and reproduction combined with reinforcement learning create a model that is able to utilize useful knowledge produced by reinforcements, as opposed to random mutations, to accelerate the search process. As a result, the influence of individual learning on the population's evolution is shown to be more successful when applied in the more direct Lamarckian form. Based on our results demonstrating the success of Lamarckian strategies in stable environments and Darwinian strategies in unstable environments, hybrid Darwinian/Lamarckian models are created with a view to combining the advantages of both forms of evolution to produce a superior adaptive capability. Our investigation demonstrates that such hybrid models can effectively combine the adaptive advantages of both Darwinian and Lamarckian evolution to provide a more effective capability of adapting to a range of conditions, from stable to unstable, appropriately adjusting the required degree of inheritance in response to the requirements of the environment. (An illustrative sketch of the two inheritance modes follows this record.)
- Full Text:
- Date Issued: 2002
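The Darwinian/Lamarckian distinction described in the abstract is easy to see in miniature. The following is an illustration only, not the thesis's model: a crude hill-climber stands in for reinforcement learning, and a `lamarckian` flag decides whether offspring inherit the learned weights or the unmodified genotype; all names and parameters are invented for this sketch.

```python
# Illustration only (not the thesis's model): Darwinian vs Lamarckian
# inheritance when each individual also learns during its lifetime.
import random

def lifetime_learning(weights, env, steps=50):
    """Crude stand-in for reinforcement learning: hill-climb on the reward."""
    best = list(weights)
    for _ in range(steps):
        trial = [w + random.gauss(0, 0.05) for w in best]
        if env(trial) > env(best):
            best = trial
    return best

def evolve(env, lamarckian, pop_size=20, gens=30, n=5):
    pop = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        learned = [lifetime_learning(ind, env) for ind in pop]
        # Selection uses the learned (phenotype) fitness in both modes.
        ranked = sorted(zip(learned, pop), key=lambda lg: env(lg[0]), reverse=True)
        # Lamarckian: offspring inherit the learned weights;
        # Darwinian: offspring inherit the unmodified genotype.
        parents = [l if lamarckian else g for l, g in ranked[:pop_size // 2]]
        pop = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=env)

env = lambda w: -sum((x - 0.5) ** 2 for x in w)   # a toy, stationary environment
print(env(evolve(env, lamarckian=True)), env(evolve(env, lamarckian=False)))
```

In a stationary environment like this toy one, the Lamarckian mode tends to converge faster, since learned improvements are written straight back into the genotype; the separation of levels that the abstract credits for Darwinian robustness only pays off once `env` changes between generations.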
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a boundless list of services and opportunities, generating boundless sources of information, which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in both static and mobile environments; this alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats on corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of the cyber threat. Penetration tests and vulnerability assessments are undervalued: they are seen as a formality and are not used to increase system security. Used regularly, they make computer systems more secure and minimise attacks. With the growth in technology, industries all over the globe have become fully dependent on information systems in doing their day-to-day business. As technology evolves and new technology becomes available, the risk of the dangers that come with it grows. For industry to protect itself against this growth in technology, personnel with a particular skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is on the rise as well. It is estimated that there is a shortage of one million cyber security professionals globally. What is the reason for this skills shortage? Will it be possible to close this skills gap? This study identifies the skills gap and possible ways to close it. The research examines international cyber security standards, cyber security training at universities and international certification, focusing specifically on penetration testing, evaluates the needs of industry when recruiting new penetration testers, and concludes with suggestions on how to fill possible gaps in the skills market.
- Full Text:
- Date Issued: 2021
Investigating the viability of a framework for small scale, easily deployable and extensible hotspot management systems
- Authors: Thinyane, Mamello P
- Date: 2006
- Subjects: Local area networks (Computer networks) , Computer networks -- Management , Computer network architectures , Computer network protocols , Wireless communication systems , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4638 , http://hdl.handle.net/10962/d1006553
- Description: The proliferation of PALs (Public Access Locations) is fuelling the development of new standards, protocols, services, and applications for WLANs (Wireless Local Area Networks). PALs are set up at public locations to meet continually changing, multi-service, multi-protocol user requirements. This research investigates the essential infrastructural requirements that will enable further proliferation of PALs, and consequently facilitate ubiquitous computing. Based on these requirements, an extensible architectural framework for PAL management systems that inherently facilitates the provisioning of multiple services and multiple protocols on PALs is derived. The ensuing framework, which is called Xobogel, is based on the microkernel architectural pattern and the IPDR (Internet Protocol Data Record) specification. Xobogel takes into consideration and supports the implementation of diverse business models for PALs, in respect of distinct environmental factors. It also facilitates next-generation network service usage accounting through a simple, flexible, and extensible XML-based usage record (an illustrative sketch follows this record). The framework is subsequently validated for service element extensibility and simplicity through the design, implementation, and experimental deployment of SEHS (Small Extensible Hotspot System), a system based on the framework. The robustness and scalability of the framework is observed to be sufficient for SMME deployment, withstanding the stress testing experiments performed on SEHS. The range of service element and charging modules implemented confirms an acceptable level of flexibility and extensibility within the framework.
- Full Text:
- Date Issued: 2006
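As a hedged illustration of the XML-based usage accounting mentioned above, the sketch below generates an IPDR-style usage record in Python; the element names are invented for this example and are not Xobogel's actual schema.

```python
# Hypothetical IPDR-style usage record; element names are illustrative only.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def usage_record(user, service_element, units, charge):
    rec = ET.Element("UsageRecord")
    ET.SubElement(rec, "User").text = user
    ET.SubElement(rec, "ServiceElement").text = service_element  # e.g. "http-access"
    ET.SubElement(rec, "Units").text = str(units)                # e.g. megabytes used
    ET.SubElement(rec, "Charge").text = f"{charge:.2f}"
    ET.SubElement(rec, "Timestamp").text = datetime.now(timezone.utc).isoformat()
    return ET.tostring(rec, encoding="unicode")

print(usage_record("alice", "http-access", 12, 3.50))
# New service elements only require new child elements, which is what
# makes an XML record of this shape straightforward to extend.
```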
Designing and implementing a virtual reality interaction framework
- Authors: Rorke, Michael
- Date: 2000
- Subjects: Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4623 , http://hdl.handle.net/10962/d1006491 , Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Description: Virtual Reality offers the possibility for humans to interact in a more natural way with the computer and its applications. Currently, Virtual Reality is used mainly in the field of visualisation, where 3D graphics allow users to more easily view complex sets of data or structures. The field of interaction in Virtual Reality has been largely neglected, due mainly to problems with input devices and equipment costs. Recent research has aimed to overcome these interaction problems, thereby creating a usable interaction platform for Virtual Reality. This thesis presents a background to the field of interaction in Virtual Reality. It goes on to propose a generic framework for the implementation of common interaction techniques into a homogeneous application development environment. This framework adds a new layer to the standard Virtual Reality toolkit – the interaction abstraction layer, or interactor layer. This separation is in line with current HCI practices. The interactor layer is further divided into specific sections – input component, interaction component, system component, intermediaries, entities and widgets (a simplified sketch of this layering follows this record). Each of these performs a specific function, with clearly defined interfaces between the different components to promote easy object-oriented implementation of the framework. The validity of the framework is shown by comparison with accepted taxonomies in the area of Virtual Reality interaction, thus demonstrating that the framework covers all the relevant factors involved in the field. Furthermore, the thesis describes an implementation of this framework. The implementation was completed using the Rhodes University CoRgi Virtual Reality toolkit. Several postgraduate students in the Rhodes University Computer Science Department utilised the framework implementation to develop a set of case studies. These case studies demonstrate the practical use of the framework to create useful Virtual Reality applications, as well as demonstrating the generic nature of the framework and its extensibility to handle new interaction techniques. Finally, the generic nature of the framework is further demonstrated by moving it from the standard CoRgi Virtual Reality toolkit to a distributed version of this toolkit. The distributed implementation of the framework utilises the Common Object Request Broker Architecture (CORBA) to implement the distribution of the objects in the system. Using this distributed implementation, we are able to ascertain that CORBA is useful in the field of distributed real-time Virtual Reality, even taking into account the extra overhead introduced by the additional abstraction layer. We conclude from this thesis that it is important to abstract the interaction layer from the other layers of a Virtual Reality toolkit in order to provide a consistent interface to developers. We have shown that our framework is implementable and useful in the field, making it easier for developers to include interaction in their Virtual Reality applications. Our framework is able to handle all the current aspects of interaction in Virtual Reality, as well as being general enough to implement future interaction techniques. The framework is also applicable to different Virtual Reality toolkits and development platforms, making it ideal for developing general, cross-platform interactive Virtual Reality applications.
- Full Text:
- Date Issued: 2000
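To make the interactor-layer separation concrete, here is a deliberately simplified, illustration-only sketch of the pipeline the abstract names (input component, intermediary, interaction component, entity); the behaviour is invented and bears no relation to the CoRgi implementation's actual interfaces.

```python
# Illustration only: the layered input -> intermediary -> interaction flow.
class Entity:
    """A selectable object in the virtual world."""
    def __init__(self, name):
        self.name, self.position = name, (0.0, 0.0, 0.0)

class InputComponent:
    """Wraps a physical device and emits normalised events."""
    def poll(self):
        return {"device": "wand", "move": (0.1, 0.0, 0.0)}  # canned event

class Intermediary:
    """Maps raw input events onto an interaction technique's vocabulary."""
    def translate(self, event):
        return ("translate", event["move"])

class InteractionComponent:
    """Applies an interaction technique to the selected entity."""
    def apply(self, action, entity):
        kind, delta = action
        if kind == "translate":
            entity.position = tuple(p + d for p, d in zip(entity.position, delta))

entity = Entity("cube")
action = Intermediary().translate(InputComponent().poll())
InteractionComponent().apply(action, entity)
print(entity.position)   # (0.1, 0.0, 0.0)
```

The point of the separation is the one the abstract makes: a new device only touches `InputComponent`, and a new interaction technique only touches `Intermediary` and `InteractionComponent`, leaving entities untouched.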
An integration of reduction and logic for programming languages
- Authors: Wright, David A
- Date: 1988
- Subjects: Logic programming languages , Programming languages (Electronic computers)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4570 , http://hdl.handle.net/10962/d1002035
- Description: A new declarative language is presented which captures the expressibility of both logic programming languages and functional languages. This is achieved by conditional graph rewriting, with full unification as the parameter passing mechanism (an illustrative unification sketch follows this record). The syntax and semantics are described both formally and informally, and examples are offered to support the expressibility claim made above. The language design is of further interest due to its uniformity and the inclusion of a novel mechanism for type inference in the presence of derived type hierarchies.
- Full Text:
- Date Issued: 1988
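For readers unfamiliar with "full unification as the parameter passing mechanism", the sketch below is a textbook-style unifier, not the thesis's implementation; the term representation (upper-case strings as variables, tuples as compound terms) is an assumption made for this sketch, and the occurs check is omitted for brevity.

```python
# Textbook-style unification; term representation is an assumption.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()   # variables start upper-case

def walk(t, subst):
    """Follow variable bindings until a non-variable or unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b                                # no occurs check, for brevity
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("cons", "X", ("nil",)), ("cons", "1", ("nil",))))  # {'X': '1'}
```

Passing parameters by unification, rather than one-way pattern matching, is what lets a single definition run "backwards" as well as forwards — the logic-programming half of the expressibility claim.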
Categorising Network Telescope data using big data enrichment techniques
- Authors: Davis, Michael Reginald
- Date: 2019
- Subjects: Denial of service attacks , Big data , Computer networks -- Security measures
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92941 , vital:30766
- Description: Network Telescopes, Internet backbone sampling, IDS and other forms of network-sourced Threat Intelligence provide researchers with insight into the methods and intent of remote entities by capturing network traffic and analysing the resulting data. This analysis and determination of intent is made difficult by the large amounts of potentially malicious traffic, coupled with the limited amount of knowledge that can be attributed to the source of the incoming data, as the source is known only by its IP address. Due to the lack of commonly available tooling, many researchers start this analysis from the beginning and so repeat and re-iterate previous research as the bulk of their work. As a result, new insight into methods and approaches of analysis is gained at a high cost. Our research approaches this problem by using additional knowledge about the source IP address, such as open ports, reverse and forward DNS, BGP routing tables and more, to enhance the researcher's ability to understand the traffic source. The research is a BigData experiment, in which large (hundreds of GB) datasets are merged with a two-month section of Network Telescope data using a set of Python scripts (an illustrative sketch of such an enrichment join follows this record). The results are written to a Google BigQuery database table. Analysis of the network data is greatly simplified, with questions about the nature of the source, such as its device class (home routing device or server), potential vulnerabilities (open telnet ports or databases) and location, becoming relatively easy to answer. Using this approach, researchers can focus on the questions that need answering and efficiently address them. This research could be taken further by using additional data sources such as geo-location, WHOIS lookups, Threat Intelligence feeds and many others. Other potential areas of research include real-time categorisation of incoming packets, in order to better inform alerting and reporting systems' configuration. In conclusion, categorising Network Telescope data in this way provides insight into the intent of the (apparent) originator and as such is a valuable tool for those seeking to understand the purpose and intent of arriving packets. In particular, the ability to remove packets categorised as non-malicious (e.g. those in the Research category) from the data eliminates a known source of 'noise' from the data. This allows the researcher to focus their efforts in a more productive manner.
- Full Text:
- Date Issued: 2019
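The enrichment step lends itself to a compact illustration. The sketch below is an assumption-laden stand-in for the kind of join described: the file and column names are invented, and a local pandas merge replaces the BigQuery pipeline the thesis actually used.

```python
# Illustrative enrichment join; filenames and columns are invented.
import pandas as pd

telescope = pd.read_csv("telescope.csv")    # e.g. columns: ts, src_ip, dst_port
scan_data = pd.read_csv("open_ports.csv")   # e.g. columns: src_ip, open_ports, device_class

# Left join keeps every telescope packet, enriched where scan data exists.
enriched = telescope.merge(scan_data, on="src_ip", how="left")

# Once enriched, questions about the source become simple filters, e.g.
# packets from apparent home routers with telnet exposed (crude substring match):
routers = enriched[(enriched["device_class"] == "home_router") &
                   (enriched["open_ports"].str.contains("23", na=False))]
print(len(routers))
```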
Automated grid fault detection and repair
- Authors: Luyt, Leslie
- Date: 2012 , 2012-05-24
- Subjects: Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4670 , http://hdl.handle.net/10962/d1006693 , Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Description: With the rise in interest in the field of grid and cloud computing, it is becoming increasingly necessary for the grid to be easily maintainable. This maintenance of the grid and grid services can be made easier by using an automated system to monitor and repair the grid as necessary. We propose a novel system to perform automated monitoring and repair of grid systems. To the best of our knowledge, no such systems exist. The results show that certain faults can be easily detected and repaired.
- Full Text:
- Date Issued: 2012
A mobile phone solution for ad-hoc hitch-hiking in South Africa
- Authors: Miteche, Sacha Patrick
- Date: 2014
- Subjects: Cell phones -- Information services , Cell phone users -- South Africa , Hitchhiking -- South Africa , Mobile communication systems -- Social aspects , Digital media -- South Africa , Information technology -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4702 , http://hdl.handle.net/10962/d1013340
- Description: The purpose of this study was to investigate the use of mobile phones in organizing ad-hoc vehicle ridesharing based on hitch-hiking trips involving private car drivers and commuters in South Africa. A study was conducted to learn how hitch-hiking trips are arranged in the urban and rural areas of the Eastern Cape. This involved carrying out interviews with hitch-hikers and participating in several trips. The study results provided the design specifications for a Dynamic Ridesharing System (DRS) tailor-made to the hitch-hiking culture of this context. The design of the DRS considered the delivery of the ad-hoc ridesharing service to the mobile phones expected to be owned by people who use hitch-hiking. The implementation of the system used the available open source solutions and guidelines under the Siyakhula Living Lab project, which promotes the use of Information and Communication Technology (ICT) in marginalized communities of South Africa. The developed prototype was tested in both simulated and live environments, and then subjected to usability tests to establish the viability of the system. The results from the tests indicate an initial breakthrough in the process of modernizing the ad-hoc ridesharing of hitch-hiking used by a section of people in the urban and rural areas of South Africa.
- Full Text:
- Date Issued: 2014
An investigation into the role played by perceived security concerns in the adoption of mobile money services : a Zimbabwean case study
- Authors: Madebwe, Charles
- Date: 2015
- Subjects: Banks and banking, Mobile -- Zimbabwe , Global system for mobile communications , Cell phones -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4711 , http://hdl.handle.net/10962/d1017933
- Description: The ubiquitous nature of mobile phones and their popularity has led to opportunistic value added services (VAS), such as mobile money, riding on this phenomenon to be implemented. Several studies have been done to find factors that influence the adoption of mobile money and other information systems. The thesis looks at factors determining the uptake of mobile money over cellular networks, with a special emphasis on aspects relating to perceived security, although other factors, namely perceived usefulness, perceived ease of use, perceived trust and perceived cost, were also examined. The research further looks at the security threats introduced to mobile money by virtue of the nature, architecture, standards and protocols of the Global System for Mobile Communications (GSM). The model employed for this research was the Technology Acceptance Model (TAM). A literature review was conducted on the security of GSM. Data was collected from a sample population around Harare, Zimbabwe, using physical questionnaires. Statistical tests were performed on the collected data to find the significance of each construct to mobile money adoption (an illustrative sketch of such a test follows this record). The research found a positive correlation between perceived security concerns and the adoption of mobile money services over cellular networks. Perceived usefulness was found to be the most important factor in the adoption of mobile money. The research also found that customers need to trust the network service provider and the systems in use for them to adopt mobile money. Other factors driving consumer adoption were found to be perceived ease of use and perceived cost. The findings show that players who intend to introduce mobile money should strive to offer secure and useful systems that are trustworthy, without making the service expensive or difficult to use. The literature review showed that it is possible to compromise mobile money transactions carried out over GSM.
- Full Text:
- Date Issued: 2015
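As an illustration of the kind of construct-significance test the abstract mentions, the sketch below runs a Pearson correlation on invented Likert-scale scores; the numbers are fabricated for demonstration and are not the thesis's survey data.

```python
# Illustration only: construct significance via Pearson correlation on
# invented Likert-scale (1-5) responses, not the thesis's survey data.
from scipy.stats import pearsonr

perceived_security = [4, 3, 5, 2, 4, 5, 3, 4, 2, 5]
adoption_intention = [4, 2, 5, 2, 3, 5, 3, 4, 1, 5]

r, p = pearsonr(perceived_security, adoption_intention)
print(f"r = {r:.2f}, p = {p:.4f}")  # p < 0.05 suggests the construct is significant
```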
Towards a capability maturity model for a cyber range
- Authors: Aschmann, Michael Joseph
- Date: 2020
- Subjects: Computer software -- Development , Computer security
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163142 , vital:41013
- Description: This work describes research undertaken towards the development of a Capability Maturity Model (CMM) for Cyber Ranges (CRs) focused on cyber security. Global cyber security needs are on the rise, and the need for attribution within the cyber domain is of particular concern. This has prompted major efforts to enhance cyber capabilities within organisations to increase their total cyber resilience posture. These efforts include, but are not limited to, the testing of computational devices, networks, and applications, and cyber skills training focused on prevention, detection and cyber attack response. A cyber range allows for the testing of the computational environment. By developing cyber events within a confined virtual or sand-boxed cyber environment, a cyber range can prepare the next generation of cyber security specialists to handle a variety of potential cyber attacks. Cyber ranges have different purposes, each designed to fulfil a different computational testing and cyber training goal; consequently, cyber ranges can vary greatly in their level of variety, capability, maturity and complexity. As cyber ranges proliferate and become more and more valued as tools for cyber security, a method to classify or rate them becomes essential. Yet while universal criteria for measuring cyber ranges in terms of their capability maturity levels become more critical, there are currently very limited resources for researchers aiming to perform this kind of work. For this reason, this work proposes and describes a CMM, designed to give organisations the ability to benchmark the capability maturity of a given cyber range. This research adopted a synthesised approach to the development of a CMM, grounded in prior research and focused on the production of a conceptual model that provides a useful level of abstraction. In order to achieve this goal, the core capability elements of a cyber range are defined with their relative importance, allowing for the development of a proposed classification of cyber range levels. An analysis of data gathered during the course of an expert review, together with other research, further supported the development of the conceptual model. In the context of cyber range capability, classification includes the ability of the cyber range to perform its functions optimally with different core capability elements, focusing on the Measurement of Capability (MoC) with its elements, namely effect, performance and threat ability. Cyber range maturity can evolve over time and can be defined through the Measurement of Maturity (MoM) with its elements, namely people, processes and technology. The combination of these measurements, utilising the CMM for a CR, determines the capability maturity level of a CR (an illustrative scoring sketch follows this record). The primary outcome of this research is the proposed level-based CMM framework for a cyber range, developed using adopted and synthesised CMMs, the analysis of an expert review, and the mapping of the results.
- Full Text:
- Date Issued: 2020
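A minimal scoring sketch of how MoC and MoM element scores might combine into a maturity level. The element names follow the abstract; the 0-5 scale, equal weighting and level thresholds are assumptions made for illustration, not the thesis's model.

```python
# Illustrative CMM scoring; weights and thresholds are assumptions.
MOC = ("effect", "performance", "threat_ability")   # Measurement of Capability
MOM = ("people", "processes", "technology")         # Measurement of Maturity

def maturity_level(scores, thresholds=(1.0, 2.0, 3.0, 4.0)):
    """Map element scores (0-5 each) to a capability maturity level 1-5."""
    moc = sum(scores[k] for k in MOC) / len(MOC)
    mom = sum(scores[k] for k in MOM) / len(MOM)
    combined = (moc + mom) / 2          # equal weighting is an assumption
    return 1 + sum(combined >= t for t in thresholds)

example = {"effect": 3, "performance": 4, "threat_ability": 2,
           "people": 3, "processes": 2, "technology": 4}
print(maturity_level(example))          # -> 4 (combined score 3.0)
```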
COIN : a customisable, incentive driven video on demand framework for low-cost IPTV services
- Authors: Musvibe, Ray
- Date: 2012 , 2012-03-02
- Subjects: Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4654 , http://hdl.handle.net/10962/d1006650 , Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Description: There has been a significant rise in the provision of television and video services over IP (IPTV) in recent years. Increasing network capacity and falling bandwidth costs have made it both technically and economically feasible for service providers to deliver IPTV services. Several telecommunications (telco) operators worldwide are rolling out IPTV solutions and view IPTV as a major service differentiator and alternative revenue source. The main challenge that IPTV providers currently face, however, is the increasingly congested television service provider market, which also includes Internet Television. IPTV solutions therefore need strong service differentiators to succeed, and will doubtless sell faster if they are more affordable. Advertising has already been used in many service sectors to help lower service costs, including traditional broadcast television. This thesis therefore explores the role that advertising can play in helping to lower the cost of IPTV services and to incentivise IPTV billing. Another approach that IPTV providers can use to help sell their product is to address the growing need for control by today's multimedia users. This thesis will therefore explore the varied approaches that can be used to achieve viewer-focused IPTV implementations. To further lower the cost of IPTV services, telcos can also turn to low-cost, open source platforms for service delivery. The adoption of low-cost infrastructure by telcos can lead to reduced Capital Expenditure (CAPEX), which in turn can lead to lower service fees, and ultimately to higher subscriptions and revenue. Therefore, in this thesis, the author proposes a CustOmisable, INcentive (COIN) driven Video on Demand (VoD) framework to be developed and deployed using the Mobicents Communication Platform, an open source service creation and execution platform. The COIN framework aims to provide a viewer-focused, economically competitive service that combines the potential cost savings of using free and open source software (FOSS) with an innovative, incentive-driven billing approach. The project also aims to evaluate whether the Mobicents Platform is a suitable service creation and execution platform for the proposed framework. Additionally, the proposed implementation aims to be interoperable with other IPTV implementations, and hence follows current IPTV standardisation architectures and trends. The service testbed and its implementation are described in detail, and only free and open source software is used; this is to enable its easy duplication and extension for future research.
- Full Text:
- Date Issued: 2012
A platform for computer-assisted multilingual literacy development
- Authors: Mudimba, Bwini Chizabubi
- Date: 2011
- Subjects: FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4600 , http://hdl.handle.net/10962/d1004850 , FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Description: FundaWethu is reading software designed to deliver reading lessons to Grade R-3 (foundation phase) children who are learning to read in a multilingual context. Starting from the premise that the system should be both educative and entertaining, it allows literacy researchers or teachers to construct rich multimedia reading lessons, with text, pictures (possibly animated), and audio files. Using the design-based research methodology, which is problem-driven and iterative, we followed a user-centred design process in creating FundaWethu. To promote the sustainability of the software, we chose to bring teachers on board as “co-designers” using the lesson authoring tool. We made the authoring tool simple enough for use by non-computer specialists, but expressive enough to enable a wide range of beginners' reading exercises to be constructed in a number of different languages (indigenous South African languages in particular). This project therefore centred on the use of design-based research to build FundaWethu: its design and construction, and the usability study carried out to determine its adequacy. (A hedged sketch of the kind of lesson structure such an authoring tool might produce follows this record.)
- Full Text:
- Date Issued: 2011
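The authoring tool described above produces lessons that combine text, pictures and audio in a chosen language. A minimal sketch of such a lesson model follows, with all names and fields assumed for illustration rather than taken from FundaWethu.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of a multilingual lesson model: an ordered list of multimedia
// "frames", each pairing text with optional picture and audio files, tagged
// with the lesson's language. Every identifier here is an assumption.
public class LessonSketch {

    // One screen of a lesson: text plus optional media (null if absent).
    record Frame(String text, String imageFile, String audioFile) {}

    static class Lesson {
        final String title;
        final String languageCode;              // e.g. "xh" for isiXhosa
        final List<Frame> frames = new ArrayList<>();

        Lesson(String title, String languageCode) {
            this.title = title;
            this.languageCode = languageCode;
        }
    }

    public static void main(String[] args) {
        Lesson lesson = new Lesson("Amagama okuqala", "xh");   // hypothetical lesson
        lesson.frames.add(new Frame("u-a-e-i-o", null, "vowels.wav"));
        lesson.frames.add(new Frame("inja", "dog.png", "inja.wav"));
        System.out.println(lesson.title + " (" + lesson.languageCode + "): "
                + lesson.frames.size() + " frames");
    }
}
```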
An investigation into the viability of deploying thin client technology to support effective learning in a disadvantaged, rural high school setting
- Authors: Ndwe, Tembalethu Jama
- Date: 2002
- Subjects: Network computers , Education -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4627 , http://hdl.handle.net/10962/d1006500 , Network computers , Education -- Data processing
- Description: Computer Based Training offers many attractive learning opportunities for high school pupils. Its deployment in economically depressed and educationally marginalised rural schools is extremely uncommon, due to the high technology skills and costs involved in its deployment and ongoing maintenance. This thesis puts forward thin client technology as a potential solution to the needs of education environments of this kind. A functional business case is developed and evaluated in this thesis, based upon a requirements analysis of media delivery in learning, and upon formal cost/performance models and a deployment field trial. Because of the economic constraints of the envisaged deployment area in rural education, an industrial field trial is used, and the aspects of this trial that can be carried over to the rural school situation have been used to assess performance and cost indicators. Our study finds that thin client technology could be deployed and maintained more cost-effectively than conventional fat client solutions in rural schools, that it is capable of supporting the learning elements needed in this deployment area, and that it is able to deliver the predominantly text-based applications currently being used in schools. However, we find that technological improvements are needed before future multimedia-intensive applications can be adequately supported. (A hedged sketch of the style of cost comparison such a business case rests on follows this record.)
- Full Text:
- Date Issued: 2002
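The business case above rests on formal cost/performance models comparing thin and fat client deployments. A minimal sketch of that style of comparison follows, with entirely made-up prices, maintenance figures and lab size.

```java
// Hedged sketch of a total-cost-of-ownership comparison for a school lab.
// Thin terminals are assumed cheap per seat but need a capable central
// server; fat clients cost more per seat and more to maintain on site.
// All inputs are illustrative, not the thesis's measured figures.
public class LabCostSketch {

    static double totalCost(int seats, double perSeatHardware, double serverHardware,
                            double perSeatAnnualMaintenance, int years) {
        return seats * perSeatHardware + serverHardware
             + seats * perSeatAnnualMaintenance * years;
    }

    public static void main(String[] args) {
        int seats = 30, years = 5;
        double thin = totalCost(seats, 1500, 25000, 100, years);  // assumed figures
        double fat  = totalCost(seats, 6000,     0, 400, years);  // assumed figures
        System.out.printf("Thin-client lab: %.0f%nFat-client lab:  %.0f%n", thin, fat);
    }
}
```

Under these assumed figures the single server cost is amortised across all seats, which is how a thin client lab can come out cheaper over its lifetime despite the extra central hardware.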
Amber : a zero-interaction honeypot with distributed intelligence
- Authors: Schoeman, Adam
- Date: 2015
- Subjects: Security systems -- Security measures , Computer viruses , Intrusion detection systems (Computer security) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4716 , http://hdl.handle.net/10962/d1017938
- Description: For the greater part, security controls are based on the principle of Decision through Detection (DtD). The exception to this is a honeypot, which analyses interactions between a third party and itself while occupying a piece of unused information space. As honeypots are not located on productive information resources, any interaction with them can be assumed to be non-productive. This allows the honeypot to make decisions based simply on the presence of data, rather than on the behaviour of the data. However, due to limited human resources, the uptake of honeypots in the South African market has been underwhelming. Amber attempts to change this by offering a zero-interaction security system, which uses the honeypot approach of Decision through Presence (DtP) to generate a blacklist of third parties that can be passed on to a network enforcer. Empirical testing has proved the usefulness of this alternative, low-cost approach in defending networks. The functionality of the system was also extended by installing nodes in different geographical locations and streaming their detections into the central Amber hive. (A hedged sketch of the DtP idea follows this record.)
- Full Text:
- Date Issued: 2015
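Decision through Presence reduces to a very small loop: since no legitimate party ever contacts the honeypot, any connection at all is grounds for blacklisting its source. A minimal sketch follows, with the port and plumbing assumed for illustration; it makes no claim to mirror Amber's actual implementation.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of Decision through Presence: the mere presence of a
// connection to an otherwise-unused address is enough to blacklist its
// source. No payload inspection is needed or performed.
public class DtpSketch {
    private static final Set<String> blacklist = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) throws IOException {
        try (ServerSocket honeypot = new ServerSocket(2222)) {  // assumed unused port
            while (true) {
                try (Socket peer = honeypot.accept()) {
                    String src = peer.getInetAddress().getHostAddress();
                    if (blacklist.add(src)) {
                        // In Amber, a detection like this would be streamed to
                        // the central hive and passed on to a network enforcer.
                        System.out.println("blacklisted " + src);
                    }
                }
            }
        }
    }
}
```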
Explanation in rule-based expert systems
- Authors: Carden, Kenneth John
- Date: 1988
- Subjects: Expert systems (Computer science) Ecology -- Research
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4569 , http://hdl.handle.net/10962/d1002034
- Description: The ability of an expert system to explain its reasoning is fundamental to the system's credibility. Explanations become even more vital in systems which use methods of uncertainty propagation. The research documented here describes the development of an explanation subsystem which interfaces with the P.R.O. Expert System Toolkit. This toolkit has been used in the development of three small ecological expert systems. This project has involved adapting the results of research in the field of explanation generation to the requirements of the ecologist users. The subsystem contains two major components. The first lists the rules that fired during a consultation. The second comprises routines responsible for quantifying the effects of the answers given to questions on the system's conclusions. These latter routines can be used to perform sensitivity analyses on the answers given. The incorporation of such routines in small expert systems is novel. (A hedged sketch of a rule-firing trace of the kind the first component lists follows this record.)
- Full Text:
- Date Issued: 1988
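The first of the two components above, a listing of the rules that fired, can be pictured as a simple trace kept during the consultation. A minimal sketch follows; the rule representation and certainty figures are illustrative assumptions, not the P.R.O. toolkit's actual structures.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of an explanation trace: each rule firing is recorded as it
// happens, then listed back in firing order to show how conclusions arose.
public class ExplanationSketch {
    record FiredRule(String name, String conclusion, double certainty) {}

    private final List<FiredRule> trace = new ArrayList<>();

    void recordFiring(String name, String conclusion, double certainty) {
        trace.add(new FiredRule(name, conclusion, certainty));
    }

    // Lists, in firing order, how each conclusion was reached.
    void explain() {
        for (FiredRule r : trace)
            System.out.printf("%s concluded '%s' (certainty %.2f)%n",
                              r.name(), r.conclusion(), r.certainty());
    }

    public static void main(String[] args) {
        ExplanationSketch consultation = new ExplanationSketch();
        consultation.recordFiring("R12", "site is waterlogged", 0.8);   // made-up rules
        consultation.recordFiring("R31", "reeds likely present", 0.6);
        consultation.explain();
    }
}
```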
OVR : a novel architecture for voice-based applications
- Authors: Maema, Mathe
- Date: 2011 , 2011-04-01
- Subjects: Telephone systems -- Research , User interfaces (Computer systems) -- Research , Expert systems (Computer science) , Artificial intelligence , VoiceXML (Document markup language) , Application software -- Development
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4671 , http://hdl.handle.net/10962/d1006694 , Telephone systems -- Research , User interfaces (Computer systems) -- Research , Expert systems (Computer science) , Artificial intelligence , VoiceXML (Document markup language) , Application software -- Development
- Description: Despite the inherent limitation of accessing information serially, voice applications are growing in popularity as computing technologies advance. This is a positive development, because voice communication offers a number of benefits over other forms of communication. For example, voice may be better for delivering services to users whose eyes and hands may be engaged in other activities (e.g. driving), or to semi-literate or illiterate users. This thesis proposes a knowledge-based architecture for building voice applications to help reduce the limitations of serial access to information. The proposed architecture, called OVR (Ontologies, VoiceXML and Reasoners), uses a rich backend that represents knowledge via ontologies and utilises reasoning engines to reason with it, in order to generate intelligent behaviour. Ontologies were chosen over other knowledge representation formalisms because of their expressivity and executable format, and because current trends suggest a general shift towards the use of ontologies in many systems used for information storing and sharing. For the frontend, the architecture uses VoiceXML, the emerging de facto standard for voice-automated applications. A functional prototype was built for an initial validation of the architecture. The system is a simple voice application to help locate information about service providers that offer HIV (Human Immunodeficiency Virus) testing. We called this implementation HTLS (HIV Testing Locator System). The functional prototype was implemented using a number of technologies. The OWL API, a Java interface designed to facilitate the manipulation of ontologies authored in OWL, was used to build a customised query interface for HTLS. The Pellet reasoner was used for supporting queries to the knowledge base, and Drools (the JBoss rule engine) was used for processing dialog rules. VXI was used as the VoiceXML browser, and an experimental softswitch called iLanga served as the bridge to the telephony system. (At the heart of iLanga is Asterisk, a well-known PBX-in-a-box.) HTLS behaved properly under system testing, providing the sought initial validation of OVR. (A hedged sketch of the kind of ontology query the backend performs follows this record.)
- Full Text:
- Date Issued: 2011
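The backend query path described above, loading an ontology and asking a reasoner for the instances of a class, can be sketched with the OWL API. The ontology file, IRI and class name below are assumptions, OWL API 4.x is assumed, and the OWL API's bundled structural reasoner stands in for Pellet, which the thesis actually uses.

```java
import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;

// Hedged sketch of an ontology-backed lookup: load an ontology, then ask a
// reasoner for all named individuals of a class. "htls.owl" and the
// TestingProvider class are hypothetical stand-ins for the HTLS knowledge base.
public class OntologyQuerySketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager mgr = OWLManager.createOWLOntologyManager();
        OWLOntology ont = mgr.loadOntologyFromOntologyDocument(new File("htls.owl"));

        OWLClass provider = mgr.getOWLDataFactory().getOWLClass(
                IRI.create("http://example.org/htls#TestingProvider"));  // assumed IRI

        OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(ont);
        for (OWLNamedIndividual ind : reasoner.getInstances(provider, false).getFlattened())
            System.out.println(ind.getIRI().getShortForm());  // candidate providers
    }
}
```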
Selecting and augmenting a FOSS development and deployment environment for personalized video-oriented services in a Telco context
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one contributing factor that led telecom companies to search for solutions for delivering innovative video services, using the different access technologies they manage and leveraging their capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services, not just TV. When IPTV was introduced, many thought that it would bring back the revenue telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, IPTV has not seen as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one is the technical challenge that IPTV poses to service developers. One of the main aims of the research reported in this thesis was therefore to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivates how the IPTV architecture, with some modification, can be a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. Various video-oriented services were first developed and their implementation-related issues analysed. This helped us to identify problems that service developers face, including the requirement to utilise a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface to implement them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can be used as the basis for a good service development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for the quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and supports the various protocols needed in a consistent manner, or offers an easy way to include support for them. We used Mobicents to compose an environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues with regard to support from streaming servers in general, and open source streaming servers in particular, as well as with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers, and implemented some of the features we proposed in it. (A hedged sketch of such a proxy's skeleton follows this record.)
- Full Text:
- Date Issued: 2016
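An RTSP proxy of the kind described above sits between the client and the streaming server, which is where missing server features can be patched in. A minimal sketch of the skeleton follows: a plain byte relay, with host, port and pass-through behaviour assumed for illustration. A real proxy would parse and rewrite RTSP messages at the marked point rather than copy bytes untouched.

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

// Hedged sketch of an RTSP proxy skeleton: accept a client on a local RTSP
// port and relay traffic to a real streaming server in both directions.
// "media.example.org" and the ports are illustrative assumptions.
public class RtspProxySketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(8554)) {   // local RTSP port
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket("media.example.org", 554);
                pump(client, server);   // client requests -> server
                pump(server, client);   // server replies  -> client
            }
        }
    }

    // Copies one direction on its own thread. A real proxy would parse and
    // rewrite RTSP messages here, adding the features the server lacks,
    // instead of passing bytes through untouched.
    static void pump(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                in.transferTo(out);   // Java 9+
            } catch (IOException ignored) { /* peer closed; drop connection */ }
        }).start();
    }
}
```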
An empirical, in-depth investigation into service creation in H.323 Version 4 Networks
- Authors: Penton, Jason Barry
- Date: 2003 , 2013-05-24
- Subjects: Computer programming , Computer networks , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4681 , http://hdl.handle.net/10962/d1007637 , Computer programming , Computer networks , Computer network protocols
- Description: Over the past few years there has been an increasing tendency to carry voice on IP networks as opposed to the PSTN and other switched circuit networks. Initially this trend was favoured for its reduced costs, but it came at the expense of the quality of the voice communications. Switched circuit networks have therefore remained the preferred carrier-grade voice communication networks, but this is again changing. Improved quality of service (QoS) for real-time traffic on IP networks is one factor contributing to the anticipated future of the IP network as a carrier of carrier-grade voice communications. Another is the possibility of creating a new range of innovative, state-of-the-art telephony and communications services that leverage the intelligence and flexibility of the IP network. The latter has yet to be fully explored. Various protocols exist that facilitate the transport of voice and other media on IP networks. The best known and most widely supported of these is H.323. This work presents and discusses H.323 version 4 service creation. It also categorises the various H.323 services and presents the mechanisms provided by H.323 version 4 that facilitated the development of the three services I have developed: EmailReader, Telgo323 and CANS.
- Full Text:
- Date Issued: 2003
Limiting vulnerability exposure through effective patch management: threat mitigation through vulnerability remediation
- Authors: White, Dominic Stjohn Dolin
- Date: 2007 , 2007-02-08
- Subjects: Computer networks -- Security measures , Computer viruses , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4629 , http://hdl.handle.net/10962/d1006510 , Computer networks -- Security measures , Computer viruses , Computer security
- Description: This document aims to provide a complete discussion of vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement its own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime, and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to mitigate vulnerabilities most effectively and timeously. The final chapters discuss the technical aspects of automating parts of the policies described. In particular, the concept of 'defence in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document concludes that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis. (A hedged sketch of one such progress metric follows this record.)
- Full Text:
- Date Issued: 2007
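One of the metrics the policy chapters call for can be as simple as the exposure window between a vulnerability's disclosure and the deployment of its patch, averaged across incidents. A minimal sketch follows, with made-up dates; the thesis does not prescribe this particular formula.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hedged sketch of a patch-management progress metric: days of exposure
// between disclosure and patch deployment. All dates are illustrative.
public class ExposureMetricSketch {
    static long exposureDays(LocalDate disclosed, LocalDate patched) {
        return ChronoUnit.DAYS.between(disclosed, patched);
    }

    public static void main(String[] args) {
        long a = exposureDays(LocalDate.of(2007, 1, 9),  LocalDate.of(2007, 1, 23));  // 14
        long b = exposureDays(LocalDate.of(2007, 2, 13), LocalDate.of(2007, 2, 20));  //  7
        System.out.println("mean exposure window (days): " + (a + b) / 2.0);          // 10.5
    }
}
```

Tracked over successive patch cycles, a falling mean would indicate that the policy is shortening the window attackers have to exploit known vulnerabilities.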