Building a flexible and inexpensive multi-layer switch for software-defined networks
- Authors: Magwenzi, Tinashe
- Date: 2020
- Subjects: Software-defined networking (Computer network technology) , Telecommunication -- Switching systems , OpenFlow (Computer network protocol) , Local area networks (Computer networks)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/142841 , vital:38122
- Description: Software-Defined Networking (SDN) is a paradigm that enables programmable networks by separating the control logic from the forwarding functions. This separation is a departure from the traditional architecture. Much of the work on SDN-enabled devices has concentrated on high-end, high-speed networks (tens to hundreds of Gbit/s) rather than the relatively low-bandwidth links (tens of Mbit/s to a few Gbit/s) that are common, for example, in South Africa. As SDN becomes more widely accepted, owing to its advantages over traditional networks, it has been adopted for industrial purposes such as networking in data centres and by network providers. The demand for programmable networks is increasing but is limited by the ability of providers to upgrade their infrastructure. In addition, as access to the Internet has become less expensive, Internet use is increasing in academic institutions, NGOs, and small to medium enterprises. This thesis details a means of building and managing a small-scale Software-Defined Network using commodity hardware and open source tools. Core to the SDN network illustrated in this thesis is a prototype multi-layer SDN switch. The proposed device is targeted at lower-bandwidth communication (relative to commercially produced high-speed SDN-enabled devices). The prototype multi-layer switch was shown to achieve data rates of up to 99.998%, average latencies under 40µs during forwarding/switching and under 100µs during routing for packet sizes between 64 bytes and 1518 bytes, and jitter of less than 15µs during all tests. This research explores in detail the design, development, and management of the multi-layer switch and its placement and integration in a small-scale SDN network. This includes testing of Layer 2 forwarding and Layer 3 routing, OpenFlow compliance testing, management of the switch using purpose-built SDN applications, and real-life network functionality such as forwarding, routing and VLAN networking to demonstrate its real-world applicability.
- Full Text:
- Date Issued: 2020
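As a concrete illustration of the Layer 2 forwarding this record describes, below is a minimal sketch of an OpenFlow 1.3 learning-switch application written for the Ryu controller. The thesis does not name its controller or its SDN applications; the controller choice, class name and flow priority here are assumptions for illustration only.

```python
# A minimal sketch, assuming a Ryu controller and OpenFlow 1.3; not the
# thesis's actual SDN application.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import ethernet, packet
from ryu.ofproto import ofproto_v1_3

class L2Switch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mac_to_port = {}  # dpid -> {mac: port}

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        # Learn the source MAC, then forward (or flood) toward the destination.
        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        table = self.mac_to_port.setdefault(dp.id, {})
        table[eth.src] = in_port
        out_port = table.get(eth.dst, ofp.OFPP_FLOOD)

        actions = [parser.OFPActionOutput(out_port)]
        if out_port != ofp.OFPP_FLOOD:
            # Known destination: install a flow so later frames are switched
            # in the datapath without controller round-trips.
            match = parser.OFPMatch(in_port=in_port, eth_dst=eth.dst)
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))
```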
Building an E-health system for health awareness campaigns in poor areas
- Authors: Gremu, Chikumbutso David
- Date: 2015
- Subjects: National health services -- South Africa , Medical informatics , Public health -- Information services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4708 , http://hdl.handle.net/10962/d1017930
- Description: Appropriate e-services, as well as revenue-generation capabilities, are key to the deployment and sustainability of ICT installations in poor areas, which are particularly common in developing countries. e-Health is a promising area for e-services that are both important to the population in those areas and potentially of direct interest to National Health Organizations, which already spend money on health campaigns there. This thesis focuses on the design, implementation, and full functional testing of HealthAware, an application that allows health organizations to set up targeted awareness campaigns for poor areas. The requirements for such an application are very specific, starting from the fact that the preparation of a campaign and its execution/consumption happen in two different environments from a technological and social point of view. Part of the research work done for this thesis was to make these requirements explicit and then use them in the design. This phase of the research was facilitated by the fact that the thesis work was executed within the context of the Siyakhula Living Lab (SLL; www.siyakhulaLL.org), which has accumulated multi-year experience of ICT deployment in such areas. As a result of the identified requirements, HealthAware comprises two components, both web-based Java applications that run in a peer-to-peer fashion. The first component, the Dashboard, is used to create, manage, and publish information for conducting awareness campaigns or surveys. The second component, HealthMessenger, facilitates users' access to the campaigns or surveys created using the Dashboard. The HealthMessenger was designed to be hosted on TeleWeaver, while the Dashboard is hosted independently of TeleWeaver and communicates with the HealthMessenger through web services. TeleWeaver is an application integration platform developed within the SLL to host software applications for poor areas. Using a core service of TeleWeaver, the profile service, in which all the users' defining elements are contained, campaigns and surveys can be easily and effectively targeted, for example to match specific demographics or geographic locations. Revenue generation is attained by logging the interactions of the target users in the communities with the applications in TeleWeaver, from which billing data is generated according to the specific contractual agreements with the National Health Organization. From a general point of view, HealthAware contributes to the concrete realization of a bidirectional access channel between health organizations and users in poor communities, which not only allows the communication of appropriate content in both directions but also gets 'monetized', in so doing becoming a revenue generator.
- Full Text:
- Date Issued: 2015
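The record states only that the Dashboard talks to the HealthMessenger through web services. Below is a hedged sketch of what publishing a campaign over such an interface could look like, assuming a JSON/HTTP endpoint; the URL, payload fields and response shape are invented, and the actual components are Java, so this Python client only illustrates the interaction.

```python
# A sketch under stated assumptions: hypothetical endpoint and field names,
# JSON over HTTP assumed; the thesis does not specify the wire format.
import requests

campaign = {
    "title": "Hand-washing awareness",
    "target": {"district": "Dwesa", "age_range": [12, 60]},  # profile-based targeting
    "messages": ["Wash hands with soap before meals."],
}

resp = requests.post("http://teleweaver.example/healthmessenger/campaigns",
                     json=campaign, timeout=10)
resp.raise_for_status()
print("Published campaign id:", resp.json().get("id"))
```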
Building IKhwezi, a digital platform to capture everyday Indigenous Knowledge for improving educational outcomes in marginalised communities
- Authors: Ntšekhe, Mathe V K
- Date: 2018
- Subjects: Information technology , Knowledge management , Traditional ecological knowledge , Pedagogical content knowledge , Traditional ecological knowledge -- Technological innovations , IKhwezi , ICT4D , Indigenous Technological Pedagogical Content Knowledge (I-TPACK) , Siyakhula Living Lab
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62505 , vital:28200
- Description: Aptly captured in the name, the broad mandate of Information and Communication Technologies for Development (ICT4D) is to facilitate the use of Information and Communication Technologies (ICTs) in society to support development. Education, as often stated, is the cornerstone of development, imparting knowledge for conceiving and realising development. In this thesis, we explore how everyday Indigenous Knowledge (IK) can be collected digitally to enhance the educational outcomes of learners from marginalised backgrounds, by stimulating the production of teaching and learning materials that include local imagery and so resonate with the learners. As part of the exploration, we reviewed the framework known as Technological Pedagogical Content Knowledge (TPACK), which spells out the different kinds of knowledge needed by teachers to teach effectively with ICTs. In this framework, IK is not present explicitly, but only through the concept of context(s). Using Afrocentric and Pan-African scholarship, we argue that this logic is linked to colonialism and that a critical decolonising pedagogy necessarily demands the explication of IK: to make visible the cultures of the learners in the margins (e.g. Black rural learners). On the strength of this argument, we propose that TPACK be augmented to become Indigenous Technological Pedagogical Content Knowledge (I-TPACK). Through this augmentation, I-TPACK becomes an Afrocentric framework for multicultural education in the digital era. The design of the digital platform for capturing IK relevant to formal education was done in the Siyakhula Living Lab (SLL). The core idea of a Living Lab (LL) is that users must be understood in the context of their lived everyday reality. Further, they must be involved as co-creators in the design and innovation processes. On a methodological level, the LL environment allowed the fusing together of multiple methods that can help to create a fitting solution. In this thesis, we followed an iterative user-centred methodology rooted in ethnography and phenomenology. Specifically, through long-term conversations and interactions with teachers and through ethnographic observations, we conceptualised a platform, IKhwezi, that facilitates the collection of context-sensitive content, collaboratively, and with cost and convenience in mind. We implemented this platform using MediaWiki, based on a number of considerations. From the ICT4D disciplinary point of view, a major consideration was being open to the possibility that other forms of innovation, and not just 'technovelty' (i.e. technological/technical innovation), can provide a breakthrough or ingenious solution to the problem at hand. In a sense, we were reinforcing the growing sentiment within the discipline that technology is not the goal, but the means to foregrounding the commonality of the human experience in working towards development. Testing confirmed that there is value in the platform, despite the challenges of onboarding users in pursuit of more content that could bolster the value of everyday IK in improving the educational outcomes of all learners.
- Full Text:
- Date Issued: 2018
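Since the record states that IKhwezi was implemented on MediaWiki, a single IK capture can be sketched against the standard MediaWiki API, here via the third-party mwclient library. The host, account, page title and category are invented; only the use of MediaWiki follows the record above.

```python
# A minimal sketch, assuming a MediaWiki-backed IKhwezi instance; host,
# credentials, namespace and category are hypothetical.
import mwclient

site = mwclient.Site("ikhwezi.example.org", path="/w/")
site.login("contributor", "password")  # e.g. a co-creating teacher's account

title = "IK:Weather lore/Cloud patterns"  # hypothetical page title
body = (
    "Observation shared by a community elder about reading cloud patterns "
    "before planting.\n\n[[Category:Everyday Indigenous Knowledge]]"
)
page = site.pages[title]
page.save(body, summary="Capture everyday IK item for teaching materials")
```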
Building the field component of a smart irrigation system: A detailed experience of a computer science graduate
- Authors: Pipile, Yamnkelani Yonela
- Date: 2021-10
- Subjects: Irrigation efficiency Computer-aided design South Africa , Irrigation projects Computer-aided design South Africa , Internet of things , Machine-to-machine communications , Smart water grids South Africa , Raspberry Pi (Computer) , Arduino (Programmable controller) , ZigBee , MQTT (MQ Telemetry Transport) , MQTT-SN , XBee
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/191814 , vital:45167
- Description: South Africa is a semi-arid country with an average annual rainfall of approximately 450mm, and about 60 per cent of its water use goes towards irrigation. Current irrigation systems generally apply water uniformly across a field, which is inefficient and can kill the plants. The Internet of Things (IoT), an emerging technology involving the use of sensors and actuators to build complex feedback systems, presents an opportunity to build a smart irrigation solution. This research project illustrates the development of the field components of a water monitoring system using inexpensive, off-the-shelf components, exploring at the same time how easy or difficult it would be for a general Computer Science graduate to use hardware components and associated tools within the IoT area. The problem was initially broken down through a classical top-down process, in order to identify the components, such as micro-computers, micro-controllers, sensors and network connections, that would be needed to build the solution. I then selected the Raspberry Pi 3, the Arduino Uno, the MH-Sensor-Series hygrometer, the MQTT messaging protocol, and the ZigBee communication protocol as implemented in the XBee S2C. Once the components were identified, the work followed a bottom-up approach: I studied the components in isolation and relative to each other through a structured series of experiments, with each experiment addressing a specific component and examining how easy it was to use. Each experiment allowed the author to acquire and deepen her understanding of a component, and progressively built a more sophisticated prototype, working towards the complete solution. I found the vast majority of the identified components and tools to be easy to use, well documented and, most importantly, mature for consumption by our target user, until I encountered the MQTT-SN (MQTT for Sensor Networks) implementation, which was not as mature as the rest. This led to the design and implementation of a lightweight, general ZigBee/MQTT gateway, named 'yoGa' (Yonela's Gateway) after the author. At the end of the research, I was able to build the field components of a smart irrigation system using the selected tools, including the yoGa gateway, demonstrating in practice that a Computer Science graduate from a South African university can become productive in the emerging IoT area. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-10
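One step of the pipeline this record describes can be sketched directly: a field node publishing a soil-moisture reading over MQTT toward the gateway. The broker address, topic layout and sensor stub are assumptions for illustration; the actual node read its hygrometer via an Arduino and crossed a ZigBee/MQTT-SN hop that this sketch does not reproduce.

```python
# A sketch under stated assumptions: hypothetical broker/topic names, and a
# random stub in place of the Arduino-read MH-Sensor-Series hygrometer.
import json
import random
import time

import paho.mqtt.client as mqtt

def read_hygrometer() -> int:
    """Stand-in for the soil-moisture reading the Arduino would supply."""
    return random.randint(0, 1023)  # 10-bit ADC range

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also needs a CallbackAPIVersion
client.connect("irrigation-gateway.local", 1883)  # e.g. the Raspberry Pi 3
client.loop_start()

while True:
    payload = json.dumps({"node": "field-01",
                          "moisture": read_hygrometer(),
                          "ts": time.time()})
    client.publish("farm/field-01/soil-moisture", payload, qos=1)
    time.sleep(60)  # one reading per minute
```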
Categorising Network Telescope data using big data enrichment techniques
- Authors: Davis, Michael Reginald
- Date: 2019
- Subjects: Denial of service attacks , Big data , Computer networks -- Security measures
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92941 , vital:30766
- Description: Network Telescopes, Internet backbone sampling, IDS and other forms of network-sourced Threat Intelligence provide researchers with insight into the methods and intent of remote entities by capturing network traffic and analysing the resulting data. This analysis and determination of intent is made difficult by the large amount of potentially malicious traffic, coupled with the limited knowledge that can be attributed to the source of the incoming data, as the source is known only by its IP address. Due to the lack of commonly available tooling, many researchers start this analysis from the beginning and so repeat and re-iterate previous research as the bulk of their work. As a result, new insight into methods and approaches of analysis is gained at a high cost. Our research approaches this problem by using additional knowledge about the source IP address, such as open ports, reverse and forward DNS, BGP routing tables and more, to enhance the researcher's ability to understand the traffic source. The research is a big data experiment in which large (hundreds of GB) datasets are merged with a two-month section of Network Telescope data using a set of Python scripts. The results are written to a Google BigQuery database table. Analysis of the network data is greatly simplified, with questions about the nature of the source, such as its device class (home routing device or server), potential vulnerabilities (open telnet ports or databases) and location, becoming relatively easy to answer. Using this approach, researchers can focus on the questions that need answering and address them efficiently. This research could be taken further by using additional data sources such as geolocation, WHOIS lookups, Threat Intelligence feeds and many others. Other potential areas of research include real-time categorisation of incoming packets, in order to better inform the configuration of alerting and reporting systems. In conclusion, categorising Network Telescope data in this way provides insight into the intent of the (apparent) originator and as such is a valuable tool for those seeking to understand the purpose and intent of arriving packets. In particular, the ability to remove packets categorised as non-malicious (e.g. those in the Research category) from the data eliminates a known source of 'noise', allowing the researcher to focus their efforts more productively.
- Full Text:
- Date Issued: 2019
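The enrichment pipeline described above, Python scripts merging auxiliary data with telescope records and writing to BigQuery, can be sketched for a single record as follows. The project, dataset and table names and the row schema are assumptions; reverse DNS stands in for the fuller set of enrichment sources (open ports, BGP tables, and so on).

```python
# A minimal sketch, assuming a hypothetical BigQuery table and row schema;
# reverse DNS is one of several enrichments the record names.
import socket

from google.cloud import bigquery

def reverse_dns(ip: str) -> str:
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return ""  # no PTR record, or lookup failed

client = bigquery.Client()
table_id = "my-project.telescope.enriched_packets"  # hypothetical table

row = {"src_ip": "203.0.113.7", "dst_port": 23, "proto": "tcp"}
row["rdns"] = reverse_dns(row["src_ip"])

errors = client.insert_rows_json(table_id, [row])  # streaming insert
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```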
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or to seek appropriate optimisation guidance. Through what was learned in the acceleration of a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that can assist in evaluating the difficulty of accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem-difficulty classification framework has been created that aids in evaluating the difficulty of GPU acceleration. This framework further facilitates directed guidance on suggested optimisations and required knowledge based on problem classification, as demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove a useful resource for new or novice GPGPU developers in evaluating potential problems for GPU acceleration.
- Full Text:
- Date Issued: 2014
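The seven attributes make the framework easy to picture as a rating exercise. Below is a hypothetical scoring function over those attributes; the 1-5 scale, equal weighting and the worked example are invented for illustration and are not the thesis's actual framework.

```python
# A sketch under stated assumptions: invented scale and weights; only the
# attribute list comes from the record above.
ATTRIBUTES = [
    "inherent parallelism",
    "branch divergence",
    "problem size",
    "required computational parallelism",
    "memory access pattern regularity",
    "data transfer overhead",
    "thread cooperation",
]

def difficulty_score(ratings: dict) -> float:
    """ratings: attribute -> 1 (favourable for GPUs) .. 5 (unfavourable)."""
    missing = set(ATTRIBUTES) - set(ratings)
    if missing:
        raise ValueError(f"unrated attributes: {missing}")
    return sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

# A problem with ample parallelism but heavy data transfer might look like:
example = {a: 1 for a in ATTRIBUTES}
example["data transfer overhead"] = 5
print(difficulty_score(example))  # ~1.57: still a good acceleration candidate
```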
Cloud information security : a higher education perspective
- Authors: Van der Schyff, Karl Izak
- Date: 2014
- Subjects: Cloud computing -- Security measures , Information technology -- Security measures , Data protection , Internet in higher education , Education, Higher -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4692 , http://hdl.handle.net/10962/d1011607
- Description: In recent years, higher education institutions have come under increasing financial pressure. This has prompted universities not only to investigate more cost-effective means of delivering course content and maintaining research output, but also to investigate the administrative functions that accompany them. As such, many South African universities have either adopted or are in the process of adopting some form of cloud computing, given the recent drop in bandwidth costs. However, this adoption process has raised concerns about the security of cloud-based information, and this has, in some cases, had a negative impact on the adoption process. In an effort to study these concerns, many researchers have employed a positivist approach with little, if any, focus on the operational context of these universities. Moreover, there has been very little research specifically within the South African context. This study addresses some of these concerns by investigating the threats and the security incident response life cycle within a higher education cloud. This was done by initially conducting a small-scale survey and a detailed thematic analysis of twelve interviews from three South African universities. The identified themes and their corresponding analyses and interpretation contribute on both a practical and a theoretical level, with the practical contributions comprising a set of security-driven criteria for selecting cloud providers as well as recommendations for universities that have adopted or are in the process of adopting cloud computing. Theoretically, several conceptual frameworks are offered, allowing the researcher to convey his understanding of how the aforementioned practical concepts relate to each other, as well as the concepts that constitute the research questions of this study.
- Full Text:
- Date Issued: 2014
Cogitator : a parallel, fuzzy, database-driven expert system
- Authors: Baise, Paul
- Date: 1994 , 2012-10-08
- Subjects: Expert systems (Computer science) , Artificial intelligence -- Computer programs , System design , Cogitator (Computer system)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4667 , http://hdl.handle.net/10962/d1006684
- Description: The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable problem being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst having only minor disadvantages.
- Full Text:
- Date Issued: 1994
COIN : a customisable, incentive driven video on demand framework for low-cost IPTV services
- Authors: Musvibe, Ray
- Date: 2012 , 2012-03-02
- Subjects: Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4654 , http://hdl.handle.net/10962/d1006650
- Description: There has been a significant rise in the provision of television and video services over IP (IPTV) in recent years. Increasing network capacity and falling bandwidth costs have made it both technically and economically feasible for service providers to deliver IPTV services. Several telecommunications (telco) operators worldwide are rolling out IPTV solutions and view IPTV as a major service differentiator and alternative revenue source. The main challenge that IPTV providers currently face, however, is the increasingly congested television service provider market, which also includes Internet Television. IPTV solutions therefore need strong service differentiators to succeed. IPTV solutions can doubtless sell much faster if they are more affordable. Advertising has already been used in many service sectors to help lower service costs, including traditional broadcast television. This thesis therefore explores the role that advertising can play in helping to lower the cost of IPTV services and to incentivise IPTV billing. Another approach that IPTV providers can use to help sell their product is to address the growing need for control by today's multimedia users. This thesis will therefore explore the varied approaches that can be used to achieve viewer-focused IPTV implementations. To further lower the cost of IPTV services, telcos can also turn to low-cost, open source platforms for service delivery. The adoption of low-cost infrastructure by telcos can lead to reduced Capital Expenditure (CAPEX), which in turn can lead to lower service fees, and ultimately to higher subscriptions and revenue. Therefore, in this thesis, the author proposes a CustOmisable, INcentive-driven (COIN) Video on Demand (VoD) framework, developed and deployed using the Mobicents Communication Platform, an open source service creation and execution platform. The COIN framework aims to provide a viewer-focused, economically competitive service that combines the potential cost savings of using free and open source software (FOSS) with an innovative, incentive-driven billing approach. The project also aims to evaluate whether the Mobicents Platform is a suitable service creation and execution platform for the proposed framework. Additionally, the proposed implementation aims to be interoperable with other IPTV implementations, and hence follows current IPTV standardisation architectures and trends. The service testbed and its implementation are described in detail, and only free and open source software is used; this is to enable its easy duplication and extension for future research.
- Full Text:
- Date Issued: 2012
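The incentive-driven billing idea at the heart of COIN can be reduced to simple arithmetic: adverts watched earn a discount against the subscription fee. The tariff, per-advert discount and cap below are invented for illustration; the record describes the approach, not these numbers.

```python
# A sketch under stated assumptions: all monetary values and the discount
# cap are hypothetical, not figures from the thesis.
def monthly_fee(base_fee: float, ads_watched: int,
                discount_per_ad: float = 0.10,
                max_discount_fraction: float = 0.5) -> float:
    """Viewers who opt into adverts earn a discount, bounded by a cap."""
    discount = min(ads_watched * discount_per_ad,
                   base_fee * max_discount_fraction)
    return round(base_fee - discount, 2)

print(monthly_fee(100.0, ads_watched=30))  # 97.0 with these example numbers
```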
Concurrency in modula-2
- Authors: Sewry, David Andrew
- Date: 1985 , 2013-03-13
- Subjects: Modula-2 (Computer program language) , Programming languages (Electronic computers) , Computer multitasking
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4582 , http://hdl.handle.net/10962/d1004369
- Description: A concurrent program is one in which a number of processes are considered to be active simultaneously. It is possible to think of a process as a separate sequential program executing independently of other processes, although perhaps communicating with them at desired points. The concurrent program, as a whole, can be executed in one of two ways: i) in a true concurrent manner, with each process executing on a dedicated processor; ii) in a quasi-concurrent manner, where a single processor's time is multiplexed between the processes. There are two motivations for the study of concurrency in programming languages: i) concurrent programming facilities can be exploited in systems where one has more than one processor, and as technology improves, machines having multiple processors will proliferate; ii) concurrent programming facilities may allow programs to be structured as independent, but co-operating, processes which can then be implemented on a single processor system. This structure may be more natural to the programmer than the traditional sequential structures. An example is provided by Conway's problem [Ben82]. Clearly, by their very nature, traditional sequential-type languages (Fortran, Basic, Cobol and earlier versions of Pascal) prove inadequate for the purposes of concurrent programming without considerable extension (which some manufacturers have provided, rendering their compilers non-standard-conforming). The general convenience of high-level languages provides strong motivation for their development for real-time programming. Modula-2 [Wir83] is but one of a number of such recently developed languages, designed not only to fulfil a "sequential" role but also to offer facilities for concurrent programming. Developed by Niklaus Wirth in 1979 as a successor to Pascal and Modula, it is intended to serve as a general-purpose systems-implementation language. This thesis investigates concurrency in Modula-2 and takes the following form: i) an analysis of the concurrent facilities offered; ii) problems and difficulties associated with these facilities; iii) improvements and enhancements, including the feasibility of using Modula-2 to simulate constructs found in other languages, such as the Hoare monitor [Hoa74] and the Ada rendezvous [Uni81]. Each section concludes with an appraisal of the work conducted in that section. The final section consists of a critical assessment of the Modula-2 language constructs and facilities provided for the implementation of concurrency, and a brief look at concurrency in Modula, Modula-2's predecessor.
- Full Text:
- Date Issued: 1985
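The quasi-concurrent execution the abstract defines, one processor multiplexed between processes, can be made concrete with a small scheduler. The sketch below uses Python generators in place of Modula-2 coroutines; the round-robin scheduler and process bodies are invented for illustration and do not reproduce any code from the thesis.

```python
# A minimal sketch: generators as coroutines, with yield playing the role of
# a Modula-2 TRANSFER that hands the single processor back to the scheduler.
from collections import deque

def process(name: str, steps: int):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # voluntarily relinquish the processor

ready = deque([process("producer", 3), process("consumer", 3)])
while ready:                 # multiplex the single "processor"
    proc = ready.popleft()
    try:
        next(proc)           # run the process until it yields
        ready.append(proc)   # requeue it behind the others
    except StopIteration:
        pass                 # process finished
```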
Connection management applications for high-speed audio networking
- Authors: Sibanda, Phathisile
- Date: 2008 , 2008-03-12
- Subjects: Flash (Computer file) , Computer networks , Computer networks -- Management , Digital communications , Computer sound processing , Sound -- Recording and reproducing -- Digital techniques , Broadcast data systems , C# (Computer program language) , C++ (Computer program language) , ActionScript (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4634 , http://hdl.handle.net/10962/d1006532
- Description: Traditionally, connection management applications (referred to as patchbays) for high-speed audio networking are predominantly developed using third-generation languages such as C, C# and C++. Due to the rapid increase in distributed audio/video network usage in the world today, the connection management applications that control signal routing over these networks have also grown in complexity to accommodate more functionality. As a result, high-speed audio networking application developers require a tool that enables them to develop complex connection management applications easily and within the shortest possible time. In addition, this tool should provide the reliability and flexibility required to develop applications controlling signal routing in networks carrying real-time data. High-speed audio networks are used for various purposes, including audio/video production and broadcasting. This investigation evaluates the possibility of using Adobe Flash Professional 8, with ActionScript 2.0, for developing connection management applications. Three patchbays, namely the Broadcast patchbay, the Project studio patchbay, and the Hospitality/Convention Centre patchbay, were developed and tested for connection management in three sound installation networks: the Broadcast network, the Project studio network, and the Hospitality/Convention Centre network. Findings indicate that complex connection management applications can effectively be implemented using the Adobe Flash IDE and ActionScript 2.0.
- Full Text:
- Date Issued: 2008
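The routing state that such a patchbay front-end manipulates can be modelled compactly. The sketch below is an illustrative Python model of that state (class and port names are hypothetical), not the ActionScript applications described in the record above; a real patchbay would push each change out over the audio network's control protocol.

```python
class Patchbay:
    """Toy connection-management model: route audio sources to destinations,
    with at most one source feeding any given destination."""
    def __init__(self):
        self._routes = {}   # destination -> source

    def connect(self, source: str, destination: str) -> None:
        self._routes[destination] = source   # replaces any existing feed

    def disconnect(self, destination: str) -> None:
        self._routes.pop(destination, None)

    def source_of(self, destination: str):
        return self._routes.get(destination)

bay = Patchbay()
bay.connect('studio-mic-1', 'broadcast-bus-L')
print(bay.source_of('broadcast-bus-L'))   # studio-mic-1
```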
Constructing a low-cost, open-source VoiceXML gateway
- Authors: King, Adam
- Date: 2007 , 2013-07-01
- Subjects: VoiceXML (Document markup language) , Asterisk (Computer file) , Internet telephony , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4585 , http://hdl.handle.net/10962/d1004735 , VoiceXML (Document markup language) , Asterisk (Computer file) , Internet telephony , Open source software
- Description: Voice-enabled applications, applications that interact with a user via an audio channel, are used extensively today. Their use is growing as speech related technologies improve, as speech is one of the most natural methods of interaction. They can provide customer support as IVRs, can be used as an assistive technology, or can become an aural interface to the Internet. Given that the telephone is used extensively throughout the globe, the number of potential users of voice-enabled applications is very high. VoiceXML is a popular, open, high-level, standard means of creating voice-enabled applications which was designed to bring the benefits of web based development to services. While VoiceXML is an ideal language for creating these applications, VoiceXML gateways, the hardware and software responsible for interpreting VoiceXML applications and interfacing with the PSTN, are still expensive and so there is a need for a low-cost gateway. Asterisk, an open-source TDM/VoIP telephony platform, can be used as a low-cost PSTN interface. This thesis investigates adding a VoiceXML service to Asterisk, creating a low-cost VoiceXML prototype gateway which is able to render voice-enabled applications. Following the Component-Based Software Engineering (CBSE) paradigm, the VoiceXML gateway is divided into a set of components which are sourced from the open-source community, and integrated to create the gateway. The browser requires a VoiceXML interpreter (OpenVXI), a Text-To-Speech engine (Festival) and a speech recognition engine (Sphinx 4). The integration of the components results in a low-cost, open-source VoiceXML gateway. System tests show that the integration of the components was successful, and that the system can handle concurrent calls. A fully compliant version of the gateway can be used in the real world to render voice-enabled applications at a low cost.
- Full Text:
- Date Issued: 2007
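The core of what a gateway like this renders is small. The sketch below embeds a hypothetical VoiceXML page in a Python string and extracts the prompt an interpreter would hand to the TTS engine; it is only a toy illustration of the first step of interpretation, since real interpreters such as OpenVXI handle forms, grammars, and call flow as well.

```python
import xml.etree.ElementTree as ET

# a hypothetical VoiceXML page of the kind the gateway would fetch
VXML = """<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block><prompt>Welcome to the low-cost gateway.</prompt></block>
  </form>
</vxml>"""

NS = {'v': 'http://www.w3.org/2001/vxml'}
root = ET.fromstring(VXML)
for prompt in root.findall('.//v:prompt', NS):
    # in the prototype, text like this is what a TTS engine such as
    # Festival would be asked to synthesise
    print('send to TTS:', prompt.text)
```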
Correlation and comparative analysis of traffic across five network telescopes
- Authors: Nkhumeleni, Thizwilondi Moses
- Date: 2014
- Subjects: Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4693 , http://hdl.handle.net/10962/d1011668 , Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Description: Monitoring unused IP address space by using network telescopes provides a favourable environment for researchers to study and detect malware, worms, denial of service and scanning activities. Research in the field of network telescopes has progressed over the past decade, resulting in the development of an increased number of overlapping datasets. Rhodes University's network of telescope sensors has continued to grow, with additional network telescopes being brought online. At the time of writing, Rhodes University has a distributed network of five relatively small /24 network telescopes. With five network telescope sensors, this research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. To aid summarisation and visualisation techniques, time series representing time-based traffic activity are constructed. Through an iterative experimental process on the captured traffic, the five network telescopes fall into two natural categories. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing category A and category B network telescopes' datasets. Results were significantly improved by studying TCP traffic separately. Moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analysed the correlation of network telescopes' traffic activity.
- Full Text:
- Date Issued: 2014
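At zero lag, the cross-correlation used above reduces to the Pearson coefficient of two equally-binned packet-count series. A minimal numpy sketch, with synthetic Poisson data standing in for the sensor datasets (the real study used captured telescope traffic):

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation of two equally-binned packet-count series."""
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
base = rng.poisson(50, 24)             # hypothetical hourly TCP counts
sensor_a = base + rng.poisson(5, 24)   # two sensors observing shared scans
sensor_b = base + rng.poisson(5, 24)   # plus independent local noise
print(f'r = {pearson(sensor_a, sensor_b):.2f}')   # strong correlation expected

# lagged cross-correlation: slide one series to expose a lead/lag relationship
lags = [pearson(sensor_a[:-k], sensor_b[k:]) for k in range(1, 4)]
```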
CREWS : a Component-driven, Run-time Extensible Web Service framework
- Authors: Parry, Dominic Charles
- Date: 2004
- Subjects: Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4628 , http://hdl.handle.net/10962/d1006501 , Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Description: There has been an increased focus in recent years on the development of re-usable software, in the form of objects and software components. This increase, together with pressures from enterprises conducting transactions on the Web to support all business interactions on all scales, has encouraged research towards the development of easily reconfigurable and highly adaptable Web services. This work investigates the ability of Component-Based Software Development (CBSD) to produce such systems, and proposes a more manageable use of CBSD methodologies. Component-Driven Software Development (CDSD) is introduced to enable better component manageability. Current Web service technologies are also examined to determine their ability to support extensible Web services, and a dynamic Web service architecture is proposed. The work also describes the development of two proof-of-concept systems, DREW Chat and Hamilton Bank. DREW Chat and Hamilton Bank are implementations of Web services that support extension dynamically and at run-time. DREW Chat is implemented on the client side, where the user is given the ability to change the client as required. Hamilton Bank is a server-side implementation, which is run-time customisable by both the user and the party offering the service. In each case, a generic architecture is produced to support dynamic Web services. These architectures are combined to produce CREWS, a Component-driven Runtime Extensible Web Service solution that enables Web services to support the ever-changing needs of enterprises. A discussion of similar work is presented, identifying the strengths and weaknesses of our architecture when compared to other solutions.
- Full Text:
- Date Issued: 2004
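Run-time extensibility of the kind CREWS targets can be illustrated with Python's importlib: components are located and instantiated by name while the service keeps running, so no restart is needed. This is a sketch of the general pattern under stated assumptions (module and class names below are hypothetical), not the CREWS architecture itself.

```python
import importlib

class WebService:
    """A service whose behaviour is extended by components loaded at run time."""
    def __init__(self):
        self._components = {}

    def load_component(self, module_name: str, class_name: str) -> None:
        # discover and instantiate a component by name, live
        module = importlib.import_module(module_name)
        self._components[class_name] = getattr(module, class_name)()

    def call(self, class_name: str, *args):
        return self._components[class_name].handle(*args)

# usage, assuming a module 'plugins.chat' that defines a ChatComponent
# class with a handle() method:
#   service = WebService()
#   service.load_component('plugins.chat', 'ChatComponent')
#   service.call('ChatComponent', 'hello')
```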
CSP-i : an implementation of CSP
- Authors: Wrench, Karen Lee
- Date: 1987 , 2013-03-08
- Subjects: Synchronization -- Computers , Programming languages (Electronic computers)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4579 , http://hdl.handle.net/10962/d1003124 , Synchronization -- Computers , Programming languages (Electronic computers)
- Description: CSP (Communicating Sequential Processes) is a notation proposed by Hoare, for expressing process communication and synchronization. Although this notation has been widely acclaimed, Hoare himself never implemented it as a computer language. He did however produce the necessary correctness proofs and subsequently the notation has been adopted (in various guises) by the designers of other concurrent languages such as Ada and occam. Only two attempts have been made at a direct and precise implementation of CSP. With closer scrutiny, even these implementations are found to deviate from the specifications expounded by Hoare, and in so doing restrict the original proposal. This thesis comprises two main sections. The first of these includes a brief look at the primitives of concurrent programming, followed by a comparative study of the existing adaptations of CSP and other message passing languages. The latter section is devoted to a description of the author's attempt at an original implementation of the notation. The result of this attempt is the creation of the CSP-i language and a suitable environment for executing CSP-i programs on an IBM PC. The CSP-i implementation is comparable with other concurrent systems presently available. In some aspects, the primitives featured in CSP-i provide the user with a more efficient and concise notation for expressing concurrent algorithms than several other message-based languages, notably occam.
- Full Text:
- Date Issued: 1987
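The synchronising message passing at the heart of CSP, the rendezvous that CSP-i implements, can be sketched with threads: an unbuffered channel on which send blocks until a matching receive arrives. This is a minimal Python rendering of the idea using two semaphores, not the CSP-i runtime itself.

```python
import threading

class Channel:
    """Unbuffered CSP-style channel: send and receive meet in a rendezvous."""
    def __init__(self):
        self._senders = threading.Lock()        # serialise competing senders
        self._offered = threading.Semaphore(0)  # an item is on offer
        self._taken = threading.Semaphore(0)    # the item was accepted
        self._slot = None

    def send(self, item):
        with self._senders:
            self._slot = item
            self._offered.release()
            self._taken.acquire()    # block here: this is the rendezvous

    def receive(self):
        self._offered.acquire()
        item = self._slot
        self._taken.release()
        return item

ch = Channel()
t = threading.Thread(target=lambda: ch.send('ping'))
t.start()
print(ch.receive())   # ping
t.join()
```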
Culturally-relevant augmented user interfaces for illiterate and semi-literate users
- Authors: Gavaza, Takayedzwa
- Date: 2012 , 2012-06-14
- Subjects: User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4665 , http://hdl.handle.net/10962/d1006679 , User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Description: This thesis discusses guidelines for developers of Augmented User Interfaces that can be used by illiterate and semi-literate users. To discover how illiterate and semi-literate users intuitively understand interaction with a computer, a series of Wizard of Oz experiments were conducted. In the first Wizard of Oz study, users were presented with a standard desktop computer, fitted with a number of input devices to determine how they assume interaction should occur. This study found that the users preferred the use of speech and gestures which mirrored findings from other researchers. The study also found that users struggled to understand the tab metaphor which is used frequently in applications. From these findings, a localised culturally-relevant tab interface was developed to determine the feasibility of localised Graphical User Interface components. A second study was undertaken to compare the localised tab interface with the traditional tabbed interface. This study collected both quantitative and qualitative data from the participants. It found that users could interact with a localised tabbed interface faster and more accurately than with the traditional counterparts. More importantly, users stated that they intuitively understood the localised interface component, whereas they did not understand the traditional tab metaphor. These user studies have shown that the use of self-explanatory animations, video feedback, localised tabbed interface metaphors and voice output have a positive impact on enabling illiterate and semi-literate users to access information.
- Full Text:
- Date Issued: 2012
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094 , Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and increasingly extensive use of mobile devices does not come without a degree of risk - hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and deficiencies of common theories is proposed. Two sets of qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether or not current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two sets of qualitative studies, discovered inconsistencies between the technology implementations and business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative study, which bridges the gap between the technology implementations and business requirements. Future work which could perhaps be conducted in light of the findings and the comments from this research is also considered.
- Full Text:
- Date Issued: 2014
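The data-centric principle above, protecting the data itself rather than the infrastructure it happens to reside on, is commonly realised by letting each document carry its own encryption envelope. A minimal sketch using the third-party cryptography package; the policy field and the key handling are deliberately simplified assumptions, since a real deployment would wrap the per-document key with a corporate key service rather than store it beside the data.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def protect(document: bytes, classification: str) -> dict:
    """Wrap a document so its protection travels with the data itself."""
    key = Fernet.generate_key()              # per-document data key
    token = Fernet(key).encrypt(document)    # ciphertext is safe on any device
    return {
        'ciphertext': token.decode(),
        'classification': classification,    # policy metadata rides along
        # assumption for the sketch only: in practice the data key is
        # wrapped by a key service, never stored next to the ciphertext
        'data_key': key.decode(),
    }

def unprotect(envelope: dict) -> bytes:
    return Fernet(envelope['data_key'].encode()).decrypt(
        envelope['ciphertext'].encode())

sealed = protect(b'quarterly forecast', 'confidential')
assert unprotect(sealed) == b'quarterly forecast'
```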
De-identification of personal information for use in software testing to ensure compliance with the Protection of Personal Information Act
- Authors: Mark, Stephen John
- Date: 2018
- Subjects: Data processing , Information technology -- Security measures , Computer security -- South Africa , Data protection -- Law and legislation -- South Africa , Data encryption (Computer science) , Python (Computer program language) , SQL (Computer program language) , Protection of Personal Information Act (POPI)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/63888 , vital:28503
- Description: Encryption of Personally Identifiable Information stored in a Structured Query Language Database has been difficult for a long time. This is owing to block-cipher encryption algorithms changing the length and type of the input data when encrypted, which cannot subsequently be stored in the database without altering its structure. As the enactment of the South African Protection of Personal Information Act, No 4 of 2013 (POPI), was set in motion with the appointment of the Information Regulator's Office in December 2016, South African companies are intensely focused on implementing compliance strategies and processes. The legislation, promulgated in 2013, encompasses the processing and storage of personally identifiable information (PII), ensuring that corporations act responsibly when collecting, storing and using individuals' personal data. The Act comprises eight broad conditions that will become legislation once the new Information Regulator's office is fully equipped to carry out its duties. POPI requires that individuals' data should be kept confidential from all but those who specifically have permission to access the data. This means that not all members of IT teams should have access to the data unless it has been de-identified. This study tests an implementation of the Fixed Feistel 1 (FF1) algorithm from the National Institute of Standards and Technology (NIST) "Special Publication 800-38G: Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption" using the LibFFX Python library. The Python scripting language was used for the experiments. The research shows that it is indeed possible to encrypt data in a Structured Query Language Database without changing the database schema using the new Format-Preserving Encryption technique from NIST 800-38G. Quality Assurance software testers can then run their full set of tests on the encrypted database. There is no reduction of encryption strength when using the FF1 encryption technique, compared to the underlying AES-128 encryption algorithm. It further shows that the utility of the data is not lost once it is encrypted.
- Full Text:
- Date Issued: 2018
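Reproducing NIST FF1 faithfully takes pages, so the sketch below is only a toy balanced-Feistel cipher over digit strings. It shows why format-preserving encryption leaves the length and alphabet of a value unchanged, and hence the database schema untouched; it is illustrative only, not the LibFFX implementation the thesis tests, and must not be used to protect real data.

```python
import hmac
import hashlib

def _f(key: bytes, rnd: int, half: str, modulus: int) -> int:
    """Round function: HMAC-SHA256 keyed PRF, reduced into the digit space."""
    mac = hmac.new(key, bytes([rnd]) + half.encode(), hashlib.sha256)
    return int.from_bytes(mac.digest()[:8], 'big') % modulus

def encrypt_digits(key: bytes, plaintext: str, rounds: int = 10) -> str:
    assert plaintext.isdigit() and len(plaintext) % 2 == 0  # balanced halves
    h = len(plaintext) // 2
    m = 10 ** h
    left, right = plaintext[:h], plaintext[h:]
    for r in range(rounds):
        # classic Feistel step, with modular addition instead of XOR
        left, right = right, str((int(left) + _f(key, r, right, m)) % m).zfill(h)
    return left + right

def decrypt_digits(key: bytes, ciphertext: str, rounds: int = 10) -> str:
    h = len(ciphertext) // 2
    m = 10 ** h
    left, right = ciphertext[:h], ciphertext[h:]
    for r in reversed(range(rounds)):   # undo the rounds in reverse order
        left, right = str((int(right) - _f(key, r, left, m)) % m).zfill(h), left
    return left + right

# a 10-digit identifier stays a 10-digit string after encryption:
ct = encrypt_digits(b'test-key', '8001015009')
assert len(ct) == 10 and ct.isdigit()
assert decrypt_digits(b'test-key', ct) == '8001015009'
```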
Decorating Asterisk : experiments in service creation for a multi-protocol telephony environment using open source tools
- Authors: Hitchcock, Jonathan
- Date: 2006
- Subjects: Asterisk (Computer file) , Internet telephony
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4635 , http://hdl.handle.net/10962/d1006539 , Asterisk (Computer file) , Internet telephony
- Description: As Voice over IP becomes more prevalent, value-adds to the service will become ubiquitous. Voice over IP (VoIP) is no longer a single service application, but an array of marketable services of increasing depth, which are moving into the non-desktop market. In addition, as the range of devices being generally used increases, it will become necessary for all services, including VoIP services, to be accessible from multiple platforms and through varied interfaces. With the recent introduction and growth of the open source software PBX system named Asterisk, the possibility of achieving these goals has become more concrete. In addition to Asterisk, a number of open source systems are being developed which facilitate the development of systems that interoperate over a wide variety of platforms and through multiple interfaces. This thesis investigates Asterisk in terms of its viability to provide the depth of services that will be required in a VoIP environment, as well as a number of other open source systems in terms of what they can offer such a system. In addition, it investigates whether these services can be made available on different devices. Using various systems built as a proof-of-concept, this thesis shows that Asterisk, in conjunction with various other open source projects such as the Twisted framework, provides a concrete tool which can be used to realise flexible and protocol-independent telephony solutions for a small to medium enterprise.
- Full Text:
- Date Issued: 2006
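One common way to hang a custom service off Asterisk is an AGI script: the dialplan launches a program, hands it channel variables on stdin, and accepts commands on stdout. The sketch below shows that classic pattern in Python under stated assumptions (the sound file name is hypothetical); the thesis's own Twisted-based services are considerably more elaborate.

```python
#!/usr/bin/env python3
import sys

def read_env():
    """Read the 'agi_name: value' header block Asterisk sends on startup."""
    env = {}
    while True:
        line = sys.stdin.readline().strip()
        if not line:
            break                          # a blank line ends the header
        key, _, value = line.partition(': ')
        env[key] = value
    return env

def command(cmd):
    """Send one AGI command and return Asterisk's reply, e.g. '200 result=0'."""
    sys.stdout.write(cmd + '\n')
    sys.stdout.flush()
    return sys.stdin.readline().strip()

env = read_env()
command('ANSWER')
command('STREAM FILE welcome ""')   # assumes a 'welcome' sound file exists
command('HANGUP')
```

Wired from the dialplan with, for example, exten => 100,1,AGI(hello.py).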
Deploying DNSSEC in islands of security
- Authors: Murisa, Wesley Vengayi
- Date: 2013 , 2013-03-31
- Subjects: Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4577 , http://hdl.handle.net/10962/d1003053 , Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Description: The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. When DNS was designed, security was not incorporated into its design. The DNS Security Extensions (DNSSEC) provide security to the name resolution process by using public key cryptosystems. Although DNSSEC has backward compatibility with unsecured zones, it only offers security to clients when communicating with security aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC; this inherently makes it difficult for their sub-domains to implement the security extensions to the DNS. This study analyses mechanisms that can be used by domains in islands of security to deploy DNSSEC so that the name resolution process can be secured in two specific cases where either the TLD is not signed or the domain registrar is not able to support signed domains. The DNS client side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved to be useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers. However, they do not provide a way of showing the user whether a domain name has been correctly validated or not. Based on the results of the tests conducted, it is recommended that local validators be used with browser validators for visibility and improved security. On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organizations in islands of security, like most countries in Africa where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web based services.
- Full Text:
- Date Issued: 2013
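A quick client-side check of the kind evaluated above is to ask a validating resolver for a signed name and inspect the AD (authenticated data) flag in the reply. The sketch below uses the third-party dnspython package (version 2.0 or later); the resolver address and query name are assumptions. Note the thesis's caveat: the AD flag is only trustworthy if the channel to the resolver is itself trusted, which is the argument for running a local validating resolver.

```python
import dns.resolver
import dns.flags

resolver = dns.resolver.Resolver()
resolver.nameservers = ['8.8.8.8']            # assumed DNSSEC-validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)      # set the DO bit: request DNSSEC
resolver.flags = dns.flags.RD | dns.flags.AD  # ask for authenticated answers

answer = resolver.resolve('ietf.org', 'A')    # a zone assumed to be signed
validated = bool(answer.response.flags & dns.flags.AD)
print('resolver validated this answer:', validated)
```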