NFComms: A synchronous communication framework for the CPU-NFP heterogeneous system
- Authors: Pennefather, Sean
- Date: 2020
- Subjects: Network processors , Computer programming , Parallel processing (Electronic computers) , Netronome
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/144181 , vital:38318
- Description: This work explores the viability of using a Network Flow Processor (NFP), developed by Netronome, as a coprocessor for the construction of a CPU-NFP heterogeneous platform in the domain of general processing. When considering heterogeneous platforms involving architectures like the NFP, the communication framework provided is typically represented as virtual network interfaces and is thus not suitable for generic communication. To enable a CPU-NFP heterogeneous platform for use in the domain of general computing, a suitable generic communication framework is required. A feasibility study for a suitable communication medium between the two candidate architectures showed that a generic framework conforming to the mechanisms dictated by Communicating Sequential Processes is achievable. The resulting NFComms framework, which facilitates inter- and intra-architecture communication through synchronous message passing, supports up to 16 unidirectional channels and includes queuing mechanisms for transparently supporting concurrent streams exceeding the channel count. The framework has a minimum latency of between 15.5 μs and 18 μs per synchronous transaction and can sustain a peak throughput of up to 30 Gbit/s. The framework also provides a runtime for interacting with the Go programming language, allowing user-space processes to subscribe channels to the framework for interacting with processes executing on the NFP. The viability of utilising a heterogeneous CPU-NFP system in the domain of general and network computing was explored by introducing a set of problems and applications spanning general computing and network processing. These were implemented on the heterogeneous architecture and benchmarked against equivalent CPU-only and CPU/GPU solutions. The recorded results were used to form an opinion on the viability of using an NFP for general processing. It is the author’s opinion that, beyond very specific use cases, the NFP-400 is not currently a viable solution as a coprocessor in the field of general computing. This does not mean that the proposed framework or the concept of a heterogeneous CPU-NFP system should be discarded, as such a system does have acceptable uses in the fields of network and stream processing. Additionally, when comparing the recorded limitations to those seen during the early stages of general-purpose GPU development, it is clear that general processing on the NFP is currently in a similar state.
- Full Text:
- Date Issued: 2020
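To make the synchronous message-passing model described in the abstract concrete, here is a minimal Go sketch of a synchronous channel transaction. The channel and payload names are illustrative only; the actual NFComms runtime API is not reproduced in this record.

```go
// Sketch of CSP-style synchronous message passing, of the kind a framework
// like NFComms exposes to Go user space. All names here are illustrative.
package main

import "fmt"

func main() {
	ch := make(chan []byte) // unbuffered: send and receive rendezvous
	done := make(chan struct{})

	// Stand-in for a process executing on the NFP side of the channel.
	go func() {
		msg := <-ch // completes only when the other side sends
		fmt.Printf("NFP-side process received %d bytes\n", len(msg))
		close(done)
	}()

	ch <- []byte("payload") // blocks until the other side receives
	<-done                  // wait for the receiver to finish
}
```

An unbuffered Go channel already has CSP rendezvous semantics, which is why the abstract's pairing of NFComms with a Go runtime is a natural fit.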
Transformative ICT education practices in rural secondary schools for developmental needs and realities: the Eastern Cape Province, South Africa
- Authors: Simuja, Clement
- Date: 2020
- Subjects: Education, Secondary -- South Africa -- Data processing , Information technology -- Study and teaching (Secondary) --South Africa , Educational technology -- Developing countries , Rural development -- Developing countries , Computer-assisted instruction -- South Africa -- Eastern Cape , Internet in education -- South Africa , Rural schools -- South Africa -- Eastern Cape , Community and school -- South Africa -- Eastern Cape
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/150631 , vital:38991
- Description: The perceived social development significance of Information and Communication Technology (ICT) has dramatically expanded the domains in which this cluster of ICTs is being discussed and acted upon. The drive to promote community development in rural areas in South Africa has made its way into the introduction of ICT education in secondary schools. Since rural secondary schools form part of the framework for rural communities, they are being challenged to provide ICT education that makes a difference in learners’ lives. This requires engaging education practices that inspire learners to construct knowledge of ICT that responds not only to examination purposes but also to the needs and development aspirations of the community. This research examines the experience of engaging learners and communities in socially informed ICT education in rural secondary schools. Specifically, it seeks to develop a critique of current practices involved in ICT education in rural secondary schools, and explores plausible alternatives to such practices that would make ICT education more transformative and structured towards the developmental concerns of communities. The main empirical focus for the research was five rural secondary schools in the Eastern Cape Province of South Africa. The research involved 53 participants who took part in a socially informed ICT training process. The training was designed to inspire participants to share their self-defined ICT education and ICT knowledge experiences. Critical Action Learning and Philosophical Inquiry provided the methodological framework, whilst the theoretical framework draws on Foucault’s philosophical ideas on power-knowledge relations. Through this theoretical analysis, the research examines the dynamic interplay of practices in ICT education with the values, ideals, and knowledge that form the core life experiences of learners and rural communities. The research findings of this study indicate that current ICT education practices in rural secondary schools are endowed with ideologies that affect learners’ identity, social experiences, power, and ownership of the reflective meaning of using ICTs in community development. The contribution of this thesis lies in demonstrating ways to reframe ICT education transformatively, and more specifically its practices, in light of the way power, identity, ownership and social experience construct and offer learners a transformative view of self and the world. This could enable ICT education to fulfil its potential of contributing to social development in rural communities. The thesis culminates in a theoretical framework that articulates the structural and authoritative components of ICT education practices: these relate to learners’ conscious understandings and represented thoughts, sensations and meanings embedded in the context, actions and locations of using their knowledge of ICT.
- Full Text:
- Date Issued: 2020
A development method for deriving reusable concurrent programs from verified CSP models
- Authors: Dibley, James
- Date: 2019
- Subjects: CSP (Computer program language) , Sequential processing (Computer science) , Go (Computer program language) , CSPIDER (Open source tool)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/72329 , vital:30035
- Description: This work proposes and demonstrates a novel method for software development that applies formal verification techniques to the design and implementation of concurrent programs. This method is supported by a new software tool, CSPIDER, which translates machine-readable Communicating Sequential Processes (CSP) models into encapsulated, reusable components coded in the Go programming language. In relation to existing CSP implementation techniques, this work is only the second to implement a translator, and it provides original support for some CSP language constructs and modelling approaches. The method is evaluated through three case studies: a concurrent sorting array, a trial-division prime number generator, and a component node for the Ricart-Agrawala distributed mutual exclusion algorithm. Each of these case studies presents the formal verification of safety and functional requirements through CSP model-checking, and it is shown that CSPIDER is capable of generating reusable implementations from each model. The Ricart-Agrawala case study demonstrates the application of the method to the design of a protocol component. The method maintains full compatibility with the primary CSP verification tool. Applying the CSPIDER tool requires minimal commitment to an explicitly defined modelling style and a very small set of pre-translation annotations, all of which can be instated prior to verification. The Go code that CSPIDER produces requires no intervention before it may be used as a component within a larger development. The translator provides a traceable, structured implementation of the CSP model, automatically deriving formal parameters and a channel-based client interface from its interpretation of the CSP model. Each case study demonstrates the use of the translated component within a simple test development.
- Full Text:
- Date Issued: 2019
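As an illustration of the CSP-to-Go mapping a translator like CSPIDER performs, the sketch below hand-codes the Go component one might derive from the classic CSP process COPY = in?x -> out!x -> COPY. The interface shown is hypothetical, not CSPIDER's actual generated code.

```go
// Hand-written sketch of a reusable Go component corresponding to the CSP
// process COPY = in?x -> out!x -> COPY. Illustrative only.
package main

import "fmt"

// Copy repeatedly accepts a value on in, then offers it on out,
// terminating cleanly when in is closed.
func Copy(in <-chan int, out chan<- int) {
	for x := range in {
		out <- x
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go Copy(in, out)

	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()

	for x := range out {
		fmt.Println(x)
	}
}
```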
A multi-threading software countermeasure to mitigate side channel analysis in the time domain
- Authors: Frieslaar, Ibraheem
- Date: 2019
- Subjects: Computer security , Data encryption (Computer science) , Noise generators (Electronics)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71152 , vital:29790
- Description: This research is the first of its kind to investigate the utilisation of a multi-threading software-based countermeasure to mitigate Side Channel Analysis (SCA) attacks, with a particular focus on the AES-128 cryptographic algorithm. The investigation is novel because, to our knowledge, no software-based countermeasure relying on multi-threading has previously been proposed. The research has been tested on Atmel microcontrollers, as well as a more fully featured system in the form of the popular Raspberry Pi, which utilises the ARM7 processor. The main contribution of this research is the introduction of a multi-threading software-based countermeasure used to mitigate SCA attacks on both an embedded device and a Raspberry Pi. These threads perform various mathematical operations which are utilised to generate electromagnetic (EM) noise, resulting in the obfuscation of the execution of the AES-128 algorithm. A novel EM noise generator, known as the FRIES noise generator, is implemented to obfuscate data captured in the EM field. FRIES hides the execution of the AES-128 algorithm within the EM noise generated by the SHA-512 Secure Hash Algorithm from the libcrypto++ and OpenSSL libraries. In order to evaluate the proposed countermeasure, a novel attack methodology was developed whereby the entire secret AES-128 encryption key was recovered from a Raspberry Pi, which had not been achieved before. The FRIES noise generator was pitted against this new attack vector and other known noise generators. The results showed that the FRIES noise generator withstood this attack whilst other existing techniques still leaked secret information: visual location of the AES-128 encryption algorithm in the EM spectrum and key recovery were both prevented. These results demonstrate that the proposed multi-threading software-based countermeasure is resistant to existing and new forms of attack, verifying that such a countermeasure can serve to mitigate SCA attacks.
- Full Text:
- Date Issued: 2019
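The core idea of the countermeasure can be sketched in a few lines: run noise-generating hash threads concurrently with the cipher so that its execution is buried in their EM signature. The Go sketch below uses the standard crypto packages; the worker count and workload are illustrative, and this is not the thesis' FRIES implementation.

```go
// Minimal sketch of the multi-threading countermeasure idea: concurrent
// SHA-512 workers hash continuously while AES-128 runs, so the cipher's
// side-channel signature is mixed with hashing activity. Illustrative only.
package main

import (
	"crypto/aes"
	"crypto/sha512"
	"fmt"
)

func main() {
	stop := make(chan struct{})

	// Noise workers: repeated SHA-512 hashing, analogous to hiding the
	// cipher inside hash-generated electromagnetic noise.
	for w := 0; w < 4; w++ {
		go func() {
			buf := []byte("noise seed")
			for {
				select {
				case <-stop:
					return
				default:
					sum := sha512.Sum512(buf)
					buf = sum[:]
				}
			}
		}()
	}

	key := []byte("0123456789abcdef") // 16 bytes: AES-128
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	pt := []byte("exactly 16 bytes")
	ct := make([]byte, 16)
	block.Encrypt(ct, pt) // the operation the noise is meant to mask
	close(stop)

	fmt.Printf("ciphertext: %x\n", ct)
}
```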
Bolvedere: a scalable network flow threat analysis system
- Authors: Herbert, Alan
- Date: 2019
- Subjects: Bolvedere (Computer network analysis system) , Computer networks -- Scalability , Computer networks -- Measurement , Computer networks -- Security measures , Telecommunication -- Traffic -- Measurement
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71557 , vital:29873
- Description: Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorized that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, discern, understand and (where needed) respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible, as the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics: archiving packets and processing them later. However, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly-used solutions are designed around a single host and single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network traffic analysis system named Bolvedere. Analysis performed by Bolvedere simply asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) through the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these. It is implemented using a modular approach that allows for either complete execution of the system on a single host or the horizontal scaling out of subsystems onto multiple hosts. The use of multiple hosts is achieved through the implementation of Zero Message Queue (ZMQ), whose ease of interprocess communication allows Bolvedere to scale out horizontally, increasing processing resources and thus analysis throughput. Many underlying mechanisms in Bolvedere have been automated. This is intended to make the system more user-friendly, as the user need only tell Bolvedere what information they wish to analyse, and the system will then rebuild itself in order to achieve this required task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
- Full Text:
- Date Issued: 2019
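A toy version of the flow-metadata approach helps make the contrast with full packet analysis concrete. In the hedged Go sketch below, plain channels stand in for the ZMQ messaging the real system uses between modules and hosts, and the Flow struct holds an illustrative subset of NetFlow v9/IPFIX fields.

```go
// Miniature of Bolvedere's modular idea: an analysis module consumes flow
// metadata (not full packets) from a dispatcher. Fields are illustrative.
package main

import "fmt"

type Flow struct {
	Src, Dst string
	Bytes    int
}

func main() {
	records := make(chan Flow)
	done := make(chan struct{})

	// One analysis module: bytes sent per source, from metadata alone.
	go func() {
		perSrc := map[string]int{}
		for f := range records {
			perSrc[f.Src] += f.Bytes
		}
		fmt.Println(perSrc)
		close(done)
	}()

	for _, f := range []Flow{
		{"10.0.0.1", "10.0.0.2", 1500},
		{"10.0.0.1", "10.0.0.3", 400},
		{"10.0.0.9", "10.0.0.2", 90},
	} {
		records <- f
	}
	close(records)
	<-done
}
```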
Building IKhwezi, a digital platform to capture everyday Indigenous Knowledge for improving educational outcomes in marginalised communities
- Authors: Ntšekhe, Mathe V K
- Date: 2018
- Subjects: Information technology , Knowledge management , Traditional ecological knowledge , Pedagogical content knowledge , Traditional ecological knowledge -- Technological innovations , IKhwezi , ICT4D , Indigenous Technological Pedagogical Content Knowledge (I-TPACK) , Siyakhula Living Lab
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62505 , vital:28200
- Description: Aptly captured in the name, the broad mandate of Information and Communications Technologies for Development (ICT4D) is to facilitate the use of Information and Communication Technologies (ICTs) in society to support development. Education, as often stated, is the cornerstone for development, imparting knowledge for conceiving and realising development. In this thesis, we explore how everyday Indigenous Knowledge (IK) can be collected digitally to enhance the educational outcomes of learners from marginalised backgrounds, by stimulating the production of teaching and learning materials that include local imagery and so have resonance with the learners. As part of the exploration, we reviewed a framework known as Technological Pedagogical Content Knowledge (TPACK), which spells out the different kinds of knowledge needed by teachers to teach effectively with ICTs. In this framework, IK is not present explicitly, but through the concept of context(s). Using Afrocentric and Pan-African scholarship, we argue that this logic is linked to colonialism and that a critical decolonising pedagogy necessarily demands explication of IK: to make visible the cultures of the learners in the margins (e.g. Black rural learners). On the strength of this argument, we have proposed that TPACK be augmented to become Indigenous Technological Pedagogical Content Knowledge (I-TPACK). Through this augmentation, I-TPACK becomes an Afrocentric framework for a multicultural education in the digital era. The design of the digital platform for capturing IK relevant for formal education was done in the Siyakhula Living Lab (SLL). The core idea of a Living Lab (LL) is that users must be understood in the context of their lived everyday reality. Further, they must be involved as co-creators in the design and innovation processes. On a methodological level, the LL environment allowed for the fusing together of multiple methods that can help to create a fitting solution. In this thesis, we followed an iterative user-centred methodology rooted in ethnography and phenomenology. Specifically, through long-term conversations and interaction with teachers and ethnographic observations, we conceptualised a platform, IKhwezi, that facilitates the collection of context-sensitive content, collaboratively, and with cost and convenience in mind. We implemented this platform using MediaWiki, based on a number of considerations. From the ICT4D disciplinary point of view, a major consideration was being open to the possibility that other forms of innovation, not just ‘technovelty’ (i.e. technological/technical innovation), can provide a breakthrough or ingenious solution to the problem at hand. In a sense, we were reinforcing the growing sentiment within the discipline that technology is not the goal, but the means to foregrounding the commonality of the human experience in working towards development. Testing confirmed that there is some value in the platform, despite the challenges of onboarding users in pursuit of more content that could bolster the value of everyday IK in improving the educational outcomes of all learners.
- Full Text:
- Date Issued: 2018
Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level
- Authors: Brown, Dane
- Date: 2018
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63470 , vital:28414
- Description: Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is of a complex nature, and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to effectively fuse their feature vectors for a complementary result. The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms. A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better adaptation to dynamic lighting conditions, while reducing feature dimensionality to the benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets, using four classifiers (Eigen, Fisher, Local Binary Pattern Histogram and linear Support Vector Machine) on various feature extraction methods. The proposed algorithms are shown to outperform the vast majority of related studies when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap such as fewer training samples or data with a greater number of classes. A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions that occur when differences in the image data quality of two modalities are substantial. However, when two poor quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach. Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation to better understand and construct biometric systems that are effective in a variety of applications.
- Full Text:
- Date Issued: 2018
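Feature-level fusion, as opposed to score- or decision-level fusion, concatenates per-modality feature vectors into one vector before classification. The Go sketch below shows the mechanics with generic z-score normalisation and a cosine-similarity matcher; both are stand-ins, not the feature extractors or SVM configuration evaluated in the thesis.

```go
// Illustrative feature-level fusion: normalise each modality's feature
// vector, concatenate, then match fused vectors. Stand-in maths only.
package main

import (
	"fmt"
	"math"
)

// zscore normalises a vector to zero mean and unit variance so that no
// modality dominates the fused vector purely by scale.
func zscore(v []float64) []float64 {
	var mean, sd float64
	for _, x := range v {
		mean += x
	}
	mean /= float64(len(v))
	for _, x := range v {
		sd += (x - mean) * (x - mean)
	}
	sd = math.Sqrt(sd / float64(len(v)))
	out := make([]float64, len(v))
	for i, x := range v {
		out[i] = (x - mean) / sd
	}
	return out
}

// cosine similarity between two equal-length fused vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	face := []float64{0.9, 0.1, 0.4, 0.7} // e.g. face features
	finger := []float64{12, 3, 9}         // e.g. fingerprint features
	fused := append(zscore(face), zscore(finger)...)

	probe := append(zscore([]float64{0.8, 0.2, 0.5, 0.6}),
		zscore([]float64{11, 4, 8})...)
	fmt.Printf("match score: %.3f\n", cosine(fused, probe))
}
```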
Preimages for SHA-1
- Authors: Motara, Yusuf Moosa
- Date: 2018
- Subjects: Data encryption (Computer science) , Computer security -- Software , Hashing (Computer science) , Data compression (Computer science) , Preimage , Secure Hash Algorithm 1 (SHA-1)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57885 , vital:27004
- Description: This research explores the problem of finding a preimage (an input that, when passed through a particular function, will result in a pre-specified output) for the compression function of the SHA-1 cryptographic hash. This problem is much more difficult than that of finding a collision for a hash function, and preimage attacks are known for very few popular hash functions. The research begins by introducing the field and giving an overview of the existing work in the area. A thorough analysis of the compression function is made, resulting in alternative formulations for both parts of the function, and both statistical and theoretical tools to determine the difficulty of the SHA-1 preimage problem. Different representations (And-Inverter Graph, Binary Decision Diagram, Conjunctive Normal Form, Constraint Satisfaction form, and Disjunctive Normal Form) and associated tools to manipulate and/or analyse these representations are then applied and explored, and results are collected and interpreted. In conclusion, the SHA-1 preimage problem remains unsolved and insoluble for the foreseeable future. The primary issue is one of efficient representation; despite a promising theoretical difficulty, both the diffusion characteristics and the depth of the tree stand in the way of efficient search. Despite this, the research served to confirm and quantify the difficulty of the problem both theoretically, using Schaefer's Theorem, and practically, in the context of different representations.
- Full Text:
- Date Issued: 2018
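To see why raw search is not an option, consider matching even a small prefix of a digest by brute force: matching n bits costs about 2^n attempts on average, so a full 160-bit preimage is hopelessly out of reach. The Go sketch below demonstrates this for a 16-bit partial preimage; it illustrates scale only, not the representational techniques the thesis studies.

```go
// Toy brute-force partial-preimage search: find an input whose SHA-1 digest
// matches a 16-bit target prefix (~2^16 attempts expected). Extrapolating to
// all 160 bits shows why structural approaches are needed instead.
package main

import (
	"crypto/sha1"
	"encoding/binary"
	"fmt"
)

func main() {
	target := [2]byte{0xab, 0xcd} // match only the first 16 digest bits
	var buf [8]byte
	for i := uint64(0); ; i++ {
		binary.LittleEndian.PutUint64(buf[:], i)
		d := sha1.Sum(buf[:])
		if d[0] == target[0] && d[1] == target[1] {
			fmt.Printf("16-bit partial preimage after %d attempts: %x\n", i+1, d)
			return
		}
	}
}
```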
Evolving an efficient and effective off-the-shelf computing infrastructure for schools in rural areas of South Africa
- Authors: Siebörger, Ingrid Gisélle
- Date: 2017
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/14557 , vital:21938
- Description: Upliftment of rural areas and poverty alleviation are priorities for development in South Africa. Information and knowledge are key strategic resources for social and economic development, and ICTs act as tools to support them, enabling innovative and more cost-effective approaches. In order for ICT interventions to be possible, infrastructure has to be deployed. For the deployment to be effective and sustainable, the local community needs to be involved in shaping and supporting it. This study describes the technical work done in the Siyakhula Living Lab (SLL), a long-term ICT4D experiment in the Mbashe Municipality, with a focus on the deployment of ICT infrastructure in schools, for teaching and learning but also for use by the communities surrounding the schools. As a result of this work, computing infrastructure was deployed, in various phases, in 17 schools in the area and a “broadband island” connecting them was created. The dissertation reports on the initial deployment phases, discussing theoretical underpinnings and policies for using technology in education, as well as various computing and networking technologies and associated policies available and appropriate for use in rural South African schools. This information forms the backdrop of a survey conducted with teachers from six schools in the SLL, together with experimental work towards the provision of an evolved, efficient and effective off-the-shelf computing infrastructure in selected schools, in order to address the shortcomings of the computing infrastructure deployed initially in the SLL. The result of the study is the proposal of an evolved computing infrastructure model for use in rural South African schools.
- Full Text:
- Date Issued: 2017
Transcription factor binding specificity and occupancy : elucidation, modelling and evaluation
- Authors: Kibet, Caleb Kipkurui
- Date: 2017
- Subjects: Transcription factors , Transcription factors -- Data processing , Motif Assessment and Ranking Suite
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:21185 , http://hdl.handle.net/10962/6838
- Description: The major contributions of this thesis are addressing the need for an objective quality evaluation of a transcription factor binding model, demonstrating the value of the tools developed to this end, and elucidating how in vitro and in vivo information can be utilised to improve TF binding specificity models. Accurate elucidation of TF binding specificity remains an ongoing challenge in gene regulatory research. Several in vitro and in vivo experimental techniques have been developed, followed by a proliferation of algorithms and, ultimately, binding models. This increase led to a choice problem for end users: which tools to use, and which is the most accurate model for a given TF? Therefore, the first section of this thesis investigates the motif assessment problem: how scoring functions, the choice and processing of benchmark data, and the statistics used in evaluation affect motif ranking. This analysis revealed that TF motif quality assessment requires a systematic comparative analysis, and that the scoring functions used have a TF-specific effect on motif ranking. These results informed the design of the Motif Assessment and Ranking Suite (MARS), supported by PBM and ChIP-seq benchmark data and an extensive collection of PWM motifs. MARS implements consistency, enrichment, and scoring- and classification-based motif evaluation algorithms. Transcription factor binding is also influenced and determined by contextual factors: chromatin accessibility, competition or cooperation with other TFs, cell line or condition specificity, binding locality (e.g. proximity to transcription start sites) and the shape of the binding site (DNA shape). In vitro techniques do not capture such context; therefore, this thesis also combines PBM and DNase-seq data using a comparative k-mer enrichment approach that compares open chromatin with genome-wide prevalence, achieving a modest performance improvement when benchmarked on ChIP-seq data. Finally, since statistical and probabilistic methods cannot capture all the information that determines binding, a machine learning approach (XGBoost) was implemented to investigate how these features contribute to TF specificity and occupancy. This combinatorial approach improves the predictive ability of TF specificity models, with the most predictive feature being chromatin accessibility, while the DNA-shape and conservation information both significantly improve on the baseline model of k-mer and DNase data. The results and tools introduced in this thesis are useful for systematic comparative analysis (via MARS) and for a combinatorial approach to modelling TF binding specificity, including appropriate feature engineering practices for machine learning modelling.
- Full Text:
- Date Issued: 2017
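The comparative k-mer enrichment idea can be sketched directly: count k-mer frequencies in sequences from open chromatin, count them in background sequences, and rank k-mers by the ratio. The Go sketch below uses toy sequences and a simple pseudocount; the actual scoring in the thesis is more involved.

```go
// Sketch of comparative k-mer enrichment: foreground (open chromatin)
// k-mer frequencies divided by background (genome-wide) frequencies.
package main

import "fmt"

// kmerFreq returns the normalised frequency of each k-mer in the sequences.
func kmerFreq(seqs []string, k int) map[string]float64 {
	counts := map[string]float64{}
	total := 0.0
	for _, s := range seqs {
		for i := 0; i+k <= len(s); i++ {
			counts[s[i:i+k]]++
			total++
		}
	}
	for kmer := range counts {
		counts[kmer] /= total
	}
	return counts
}

func main() {
	open := []string{"GATAAGGATA", "TGATAAGATA"}       // accessible regions
	background := []string{"ACGTACGTAC", "GATACCCGTA"} // genome-wide sample
	fg, bg := kmerFreq(open, 5), kmerFreq(background, 5)

	for kmer, f := range fg {
		b := bg[kmer]
		if b == 0 {
			b = 1e-6 // pseudocount for k-mers unseen in the background
		}
		fmt.Printf("%s enrichment %.2f\n", kmer, f/b)
	}
}
```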
GPU Accelerated protocol analysis for large and long-term traffic traces
- Authors: Nottingham, Alastair Timothy
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/910 , vital:20002
- Description: This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp-voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to speed up access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded, high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600 MB/s) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
- Full Text:
- Date Issued: 2016
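The layered, state-machine-driven classification that GPF+ performs on the GPU can be illustrated in miniature on the CPU. The sketch below is a hedged, sequential stand-in: it walks only Ethernet and IPv4 headers and evaluates a few hard-coded predicates, whereas the real system is a massively parallel CUDA classifier driven by a DSL.

```python
# Minimal sketch of layered packet classification in the spirit of GPF+:
# a small state machine walks protocol headers and evaluates filter
# predicates against extracted fields. Offsets cover only Ethernet/IPv4.
import struct

def classify(packet: bytes):
    """Return the set of filter labels matched by one raw packet."""
    matches = set()
    if len(packet) < 14:
        return matches
    ethertype = struct.unpack_from("!H", packet, 12)[0]
    if ethertype == 0x0800 and len(packet) >= 34:   # IPv4 after Ethernet
        matches.add("ipv4")
        proto = packet[23]                          # IP protocol field
        src = packet[26:30]                         # source IP address
        if proto == 6:
            matches.add("tcp")
        elif proto == 17:
            matches.add("udp")
        if src[0] == 10:                            # 10.0.0.0/8 predicate
            matches.add("rfc1918-src")
    return matches

if __name__ == "__main__":
    # Hand-built Ethernet+IPv4+TCP header stub (checksums not valid).
    eth = b"\x00" * 12 + b"\x08\x00"
    ip = (b"\x45\x00\x00\x28" + b"\x00" * 4 + b"\x40\x06\x00\x00"
          + bytes([10, 0, 0, 1]) + bytes([192, 168, 0, 2]))
    print(classify(eth + ip))   # {'ipv4', 'tcp', 'rfc1918-src'}
```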
Selecting and augmenting a FOSS development and deployment environment for personalized video-oriented services in a Telco context
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one contributing factor that led telecom companies to search for solutions to deliver innovative video services, using the different access technologies managed by them and leveraging the capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services and not just TV. When IPTV was introduced, many thought that it would recover the revenue that telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, IPTV has not shown as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one reason could be the technical challenge that IPTV poses to service developers. One of the main motivations for embarking on the research reported in this thesis was to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivates how the IPTV architecture, with some modification, can be a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. As a result, various video-oriented services were first developed and implementation-related issues were analyzed. This helped us to identify problems that service developers face, including the requirement to utilize a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface to implement them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can be used as the basis for a good service development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and either supports the various protocols needed in a consistent manner or offers an easy way to add support for them. We used Mobicents to compose the environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues with regard to support from streaming servers in general, and open source streaming servers in particular, and with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers and implemented some of the features we proposed in it.
- Full Text:
- Date Issued: 2016
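An RTSP proxy of the kind described above must, at minimum, parse incoming RTSP requests and re-target them at an upstream server. The sketch below is a hedged rendering of just that parsing and rewriting step; a real proxy also manages sessions, SETUP transports, and server-specific quirks, none of which is modelled here, and the URLs are illustrative.

```python
# Hedged sketch of request handling inside an RTSP proxy: parse an RTSP
# request line and headers, rewrite the target URL, and re-serialise.

def parse_rtsp(raw: str):
    """Split an RTSP request into (method, url, version, headers)."""
    head, _, _ = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, url, version = lines[0].split(" ", 2)
    headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
    return method, url, version, headers

def rewrite(raw: str, upstream: str) -> str:
    """Point the request at an upstream server, preserving headers."""
    method, url, version, headers = parse_rtsp(raw)
    path = url.split("/", 3)[-1] if url.count("/") >= 3 else ""
    new_url = f"rtsp://{upstream}/{path}"
    lines = [f"{method} {new_url} {version}"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

if __name__ == "__main__":
    request = ("DESCRIBE rtsp://proxy.local/movie RTSP/1.0\r\n"
               "CSeq: 2\r\nAccept: application/sdp\r\n\r\n")
    print(rewrite(request, "media.example.org:554"))
```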
The development of a discovery and control environment for networked audio devices based on a study of current audio control protocols
- Authors: Eales, Andrew Arnold
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/539 , vital:19968
- Description: This dissertation develops a standard device model for networked audio devices and introduces a novel discovery and control environment that uses the developed device model. The proposed standard device model is derived from a study of current audio control protocols. Both the functional capabilities and design principles of audio control protocols are investigated with an emphasis on Open Sound Control, SNMP and IEC-62379, AES64, CopperLan and UPnP. An abstract model of networked audio devices is developed, and the model is implemented in each of the previously mentioned control protocols. This model is also used within a novel discovery and control environment designed around a distributed associative memory termed an object space. This environment challenges the accepted notions of the functionality provided by a control protocol. The study concludes by comparing the salient features of the different control protocols encountered in this study. Different approaches to control protocol design are considered, and several design heuristics for control protocols are proposed.
- Full Text:
- Date Issued: 2016
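The "object space" at the heart of the discovery and control environment above is a distributed associative memory: controllers and devices communicate by writing, reading, and taking tuples matched by template. The in-process sketch below shows only that core matching behaviour, under the assumption that None acts as a wildcard; distribution, blocking reads, and leases are omitted.

```python
# Toy object space (associative memory): write/read/take tuples by template.
import threading

class ObjectSpace:
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    @staticmethod
    def _matches(template, candidate):
        return len(template) == len(candidate) and all(
            t is None or t == c for t, c in zip(template, candidate))

    def write(self, *tup):
        with self._lock:
            self._tuples.append(tup)

    def read(self, *template):
        """Return a matching tuple without removing it (or None)."""
        with self._lock:
            return next((t for t in self._tuples
                         if self._matches(template, t)), None)

    def take(self, *template):
        """Return and remove a matching tuple (or None)."""
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._matches(template, t):
                    return self._tuples.pop(i)
        return None

if __name__ == "__main__":
    space = ObjectSpace()
    space.write("mixer-1", "gain", -12.0)       # device publishes state
    print(space.read("mixer-1", "gain", None))  # controller discovers it
```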
Network simulation for professional audio networks
- Authors: Otten, Fred
- Date: 2015
- Subjects: Sound engineers , Ethernet (Local area network system) , Computer networks , Computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4713 , http://hdl.handle.net/10962/d1017935
- Description: Audio engineers are required to design and deploy large multi-channel sound systems which meet a set of requirements and use networking technologies such as Firewire and Ethernet AVB. Bandwidth utilisation and parameter groupings are among the factors which need to be considered in these designs. An implementation of an extensible, generic simulation framework would allow audio engineers to easily compare protocols and networking technologies and get near real-time responses with regards to bandwidth utilisation. Our hypothesis is that an application-level capability can be developed which uses a network simulation framework to enable this process and enhances the audio engineer’s experience of designing and configuring a network. This thesis presents a new, extensible simulation framework which can be utilised to simulate professional audio networks. This framework is utilised to develop an application - AudioNetSim - based on the requirements of an audio engineer. The thesis describes the AudioNetSim models and implementations for Ethernet AVB, Firewire and the AES-64 control protocol. AudioNetSim enables bandwidth usage determination for any network configuration and connection scenario and is used to compare Firewire and Ethernet AVB bandwidth utilisation. It also applies graph theory to the circular join problem and provides a solution to detect circular joins.
- Full Text:
- Date Issued: 2015
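The graph-theoretic treatment of the circular join problem mentioned above reduces to cycle detection: model parameter joins as directed edges and reject a join that would let an update loop back to its origin. The sketch below is an illustrative reduction under that assumption; AudioNetSim's actual data structures are not reproduced.

```python
# Sketch: refuse a new parameter join if it would create a cycle in the
# directed join graph (a "circular join"). Names are illustrative.

def creates_cycle(edges, new_edge):
    """True if adding new_edge (src, dst) to the join graph makes a cycle."""
    graph = {}
    for src, dst in list(edges) + [new_edge]:
        graph.setdefault(src, []).append(dst)
    stack, seen = [new_edge[1]], set()
    while stack:                      # can we walk from dst back to src?
        node = stack.pop()
        if node == new_edge[0]:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

if __name__ == "__main__":
    joins = [("fader-A", "fader-B"), ("fader-B", "fader-C")]
    print(creates_cycle(joins, ("fader-C", "fader-A")))  # True: circular join
    print(creates_cycle(joins, ("fader-A", "fader-C")))  # False: safe to add
```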
Pro-active visualization of cyber security on a National Level : a South African case study
- Authors: Swart, Ignatius Petrus
- Date: 2015
- Subjects: Internet -- Security measures -- South Africa , Computer security -- Government policy -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4718 , http://hdl.handle.net/10962/d1017940
- Description: The need for increased national cyber security situational awareness is evident from the growing number of published national cyber security strategies. Governments are progressively seen as responsible for cyber security, but at the same time increasingly constrained by legal, privacy and resource considerations. Infrastructure and services that form part of the national cyber domain are often not under the control of government, necessitating the need for information sharing between governments and commercial partners. While sharing of security information is necessary, it typically requires considerable time to be implemented effectively. In an effort to decrease the time and effort required for cyber security situational awareness, this study considered commercially available data sources relating to a national cyber domain. Open source information is typically used by attackers to gather information with great success. An understanding of the data provided by these sources can also afford decision makers the opportunity to set priorities more effectively. Through the use of an adapted Joint Directors of Laboratories (JDL) fusion model, an experimental system was implemented that visualized the potential that open source intelligence could have on cyber situational awareness. Datasets used in the validation of the model contained information obtained from eight different data sources over a two-year period with a focus on the South African .co.za subdomain. Over a million infrastructure devices were examined in this study along with information pertaining to a potential 88 million vulnerabilities on these devices. During the examination of data sources, a severe lack of information regarding the human aspect in cyber security was identified that led to the creation of a novel Personally Identifiable Information (PII) detection sensor. The resultant two million records pertaining to PII in the South African domain were incorporated into the data fusion experiment for processing. The results of this processing are discussed in the three case studies. The results offered in this study aim to highlight how data fusion and effective visualization can serve to move national cyber security from a primarily reactive undertaking to a more pro-active model.
- Full Text:
- Date Issued: 2015
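The PII detection sensor the study introduces can be pictured as a pattern scanner over harvested text. The sketch below is a deliberately simplified stand-in: the two regular expressions (e-mail addresses and 13-digit South African ID-number candidates) are illustrative only, and a production sensor would validate check digits and cover many more PII types.

```python
# Hedged sketch of a regex-based PII sensor: scan text for e-mail
# addresses and 13-digit South African ID-number candidates.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SA_ID = re.compile(r"\b\d{13}\b")     # YYMMDDSSSSCAZ layout, not validated

def scan(text: str):
    """Return a list of (kind, value) PII candidates found in text."""
    hits = [("email", m) for m in EMAIL.findall(text)]
    hits += [("sa_id", m) for m in SA_ID.findall(text)]
    return hits

if __name__ == "__main__":
    sample = "Contact jane@example.co.za, ID 8001015009087, for access."
    for kind, value in scan(sample):
        print(kind, value)
```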
A formalised ontology for network attack classification
- Authors: Van Heerden, Renier Pelser
- Date: 2014
- Subjects: Computer networks -- Security measures , Computer security , Computer crimes -- Investigation , Computer crimes -- Prevention
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4691 , http://hdl.handle.net/10962/d1011603
- Description: One of the most popular attack vectors against computers is their network connections. Attacks on computers through their networks are commonplace and have various levels of complexity. This research describes network-based computer attacks in the form of a story, formally, and within an ontology. The ontology categorises network attacks where attack scenarios are the focal class. This class consists of: Denial-of-Service, Industrial Espionage, Web Defacement, Unauthorised Data Access, Financial Theft, Industrial Sabotage, Cyber-Warfare, Resource Theft, System Compromise, and Runaway Malware. This ontology was developed by building a taxonomy and a temporal network attack model. Network attack instances (also known as individuals) are classified according to their respective attack scenarios, with the use of an automated reasoner within the ontology. The automated reasoner deductions are verified formally; and via the automated reasoner, a relaxed set of scenarios is determined, which is relevant in a near real-time environment. A prototype system (called Aeneas) was developed to classify network-based attacks. Aeneas integrates the sensors into a detection system that can classify network attacks in a near real-time environment. To verify the ontology and the prototype Aeneas, a virtual test bed was developed in which network-based attacks were generated to verify the detection system. Aeneas was able to detect incoming attacks and classify them according to their scenario. The novel part of this research is the attack scenarios that are described in the form of a story, as well as formally and in an ontology. The ontology is used in a novel way to determine to which class attack instances belong and how the network attack ontology is affected in a near real-time environment.
- Full Text:
- Date Issued: 2014
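The reasoner-driven classification of attack instances into scenario classes can be caricatured with plain predicates. The sketch below is a toy rule-based stand-in, not the thesis' OWL ontology or description-logic reasoner: the property names and rule order are invented for illustration, though the scenario labels come from the abstract above.

```python
# Toy rule-based stand-in for ontology-driven attack scenario classification.

def classify_scenario(instance: dict) -> str:
    """Map an attack instance's observed properties to a scenario class."""
    if instance.get("service_degraded") and not instance.get("data_accessed"):
        return "Denial-of-Service"
    if instance.get("website_modified"):
        return "Web Defacement"
    if instance.get("data_accessed") and instance.get("financial_gain"):
        return "Financial Theft"
    if instance.get("data_accessed"):
        return "Unauthorised Data Access"
    if instance.get("cpu_hijacked"):
        return "Resource Theft"
    return "System Compromise"

if __name__ == "__main__":
    flood = {"service_degraded": True}
    breach = {"data_accessed": True, "financial_gain": True}
    print(classify_scenario(flood))   # Denial-of-Service
    print(classify_scenario(breach))  # Financial Theft
```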
An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing antlike agents
- Authors: Chibaya, Colin
- Date: 2014
- Subjects: Ants -- Behavior -- Computer programs , Insects -- Behavior -- Computer programs , Ant communities -- Behavior , Insect societies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4698 , http://hdl.handle.net/10962/d1012965
- Description: Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of swarm members). What does each swarm member do which allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication, indirectly mediated via the environment, arises) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at individual levels. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta-information which spells out how and when primitive behaviours are used). Various permutations of XSets are possible, which define the search space for best performer XSets for particular tasks. Genetic programming principles are proposed as a search strategy for best performer XSets that allow particular emergent behaviour to occur. XSets in the search space are evolved over various genetic generations and tested for their ability to allow path finding (as proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve. Best performer XSets for the path finding task are identified and reported. We validate the results yielded when best performer XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time. In most cases, the simulation results pass these statistical tests. The last aspect we study is the application of best performer XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' ability to allow multiple target location (ant agents' ability to locate continuous regions of targets), and finds that best performer XSets are problem independent. However, both categories of XSets are sensitive to changes in agent density. We test the influence of individual primitive behaviours and the effects of sequences of primitive behaviours on the indices of merit of XSets, and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effects of pheromone dissipation on the indices of merit of stigmergic XSets are also scrutinized. Precisely, dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices. It successfully puts these primitive behaviours together into XSets which characterize a language for programming ant-like devices towards desired emergent behaviour. This XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved.
- Full Text:
- Date Issued: 2014
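The evolutionary search over XSets described above can be sketched as a small generational loop: sample XSets from a pool of primitive behaviours, rank them by an index of merit, and breed the best. Everything below is assumption: the primitive names, the XSet size, and especially the fitness function, which is a placeholder where the thesis runs real path-finding simulations.

```python
# Sketch of evolving XSets with genetic-programming-style selection.
import random

PRIMITIVES = ["move-random", "follow-gradient", "drop-pheromone",
              "turn-left", "turn-right", "pick-up", "put-down"]

def random_xset(size=4):
    return random.sample(PRIMITIVES, size)

def index_of_merit(xset):
    """Placeholder fitness: reward gradient following plus pheromone use."""
    return (2.0 * ("follow-gradient" in xset)
            + 1.0 * ("drop-pheromone" in xset)
            + random.random())          # stand-in for simulation noise

def evolve(generations=20, pop_size=10):
    population = [random_xset() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=index_of_merit, reverse=True)
        parents = population[: pop_size // 2]          # keep best half
        children = [sorted(set(random.choice(parents))  # mutate a parent
                           | {random.choice(PRIMITIVES)})[:4]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=index_of_merit)

if __name__ == "__main__":
    print(evolve())
```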
An investigation of parameter relationships in a high-speed digital multimedia environment
- Authors: Chigwamba, Nyasha
- Date: 2014
- Subjects: Multimedia communications , Digital communications , Local area networks (Computer networks) , Computer network architectures , Computer network protocols , Computer sound processing , Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4725 , http://hdl.handle.net/10962/d1021153
- Description: With the rapid adoption of multimedia network technologies, a number of companies and standards bodies are introducing technologies that enhance user experience in networked multimedia environments. These technologies focus on device discovery, connection management, control, and monitoring. This study focused on control and monitoring. Multimedia networks make it possible for devices that are part of the same network to reside in different physical locations. These devices contain parameters that are used to control particular features, such as speaker volume, bass, amplifier gain, and video resolution. It is often necessary for changes in one parameter to affect other parameters, such as a synchronised change between volume and bass parameters, or collective control of multiple parameters. Thus, relationships are required between the parameters. In addition, some devices contain parameters, such as voltage, temperature, and audio level, that require constant monitoring to enable corrective action when thresholds are exceeded. Therefore, a mechanism for monitoring networked devices is required. This thesis proposes relationships that are essential for the proper functioning of a multimedia network and that should, therefore, be incorporated in standard form into a protocol, such that all devices can depend on them. Implementation mechanisms for these relationships were created. Parameter grouping and monitoring capabilities within mixing console implementations and existing control protocols were reviewed. A number of requirements for parameter grouping and monitoring were derived from this review. These requirements include a formal classification of relationship types, the ability to create relationships between parameters with different underlying value units, the ability to create relationships between parameters residing on different devices on a network, and the use of an event-driven mechanism for parameter monitoring. These requirements were the criteria used to govern the implementation mechanisms that were created as part of this study. Parameter grouping and monitoring mechanisms were implemented for the XFN protocol. The mechanisms implemented fulfil the requirements derived from the review of capabilities of mixing consoles and existing control protocols. The formal classification of relationship types was implemented within XFN parameters using lists that keep track of the relationships between each XFN parameter and other XFN parameters that reside on the same device or on other devices on the network. A common value unit, known as the global unit, was defined for use as the value format within value update messages between XFN parameters that have relationships. Mapping tables were used to translate the global unit values to application-specific (universal) units, such as decibels (dB). A mechanism for bulk parameter retrieval within the XFN protocol was augmented to produce an event-driven mechanism for parameter monitoring. These implementation mechanisms were applied to an XFN-protocol-compliant graphical control application to demonstrate their usage within an end user context. At the time of this study, the XFN protocol was undergoing standardisation within the Audio Engineering Society. The AES-64 standard has now been approved. Most of the implementation mechanisms resulting from this study have been incorporated into this standard.
- Full Text:
- Date Issued: 2014
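The parameter grouping described above, where updates propagate through relationship lists via a common "global unit", can be illustrated compactly. The sketch below assumes a normalised 0..1 global value and simple linear range maps; the mapping tables in the actual XFN/AES-64 work are richer, and the parameter names are invented.

```python
# Sketch of parameter grouping: peers propagate updates in a shared
# normalised "global" value that each parameter maps to its own units.

class Parameter:
    def __init__(self, name, lo, hi, value):
        self.name, self.lo, self.hi = name, lo, hi
        self.value = value
        self.peers = []          # relationship list to other parameters

    def to_global(self, value):
        return (value - self.lo) / (self.hi - self.lo)

    def from_global(self, g):
        return self.lo + g * (self.hi - self.lo)

    def set(self, value, _seen=None):
        seen = _seen or set()
        seen.add(self.name)
        self.value = value
        g = self.to_global(value)                  # normalise once
        for peer in self.peers:
            if peer.name not in seen:              # avoid update loops
                peer.set(peer.from_global(g), seen)

if __name__ == "__main__":
    volume = Parameter("volume-dB", -60.0, 0.0, -20.0)
    bass = Parameter("bass-dB", -12.0, 12.0, 0.0)
    volume.peers.append(bass)
    bass.peers.append(volume)
    volume.set(-30.0)            # synchronised change propagates to bass
    print(round(bass.value, 2))  # 0.5 global maps to 0.0 dB on bass range
```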
An investigation of protocol command translation as a means to enable interoperability between networked audio devices
- Authors: Igumbor, Osedum Peter
- Date: 2014
- Subjects: Streaming audio , Data transmission systems , Computer network protocols , Computer networks -- Management , Command languages (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4689 , http://hdl.handle.net/10962/d1011128
- Description: Digital audio networks allow multiple channels of audio to be streamed between devices. This eliminates the need for many different cables to route audio between devices. An added advantage of digital audio networks is the ability to configure and control the networked devices from a common control point. Common control of networked devices enables a sound engineer to establish and destroy audio stream connections between networked devices that are some distance apart. On a digital audio network, an audio transport technology enables the exchange of data streams. Typically, an audio transport technology is capable of transporting both control messages and audio data streams. There exist a number of audio transport technologies. Some of these technologies implement data transport by exchanging OSI/ISO layer 2 data frames, while others transport data within OSI/ISO layer 3 packets. There are some approaches to achieving interoperability between devices that utilize different audio transport technologies. A digital audio device typically implements an audio control protocol, which enables it to process configuration and control messages from a remote controller. An audio control protocol also defines the structure of the messages that are exchanged between compliant devices. There is currently a wide range of audio control protocols. Some audio control protocols utilize layer 3 audio transport technology, while others utilize layer 2 audio transport technology. An audio device can only communicate with other devices that implement the same control protocol, irrespective of a common transport technology that connects the devices. The existence of different audio control protocols among devices on a network results in a situation where the devices are unable to communicate with each other. Furthermore, a single control application is unable to establish or destroy audio stream connections between the networked devices, since they implement different control protocols. When an audio engineer is designing an audio network installation, this interoperability challenge restricts the choice of devices that can be included. Even when audio transport interoperability has been achieved, common control of the devices remains a challenge. This research investigates protocol command translation as a means to enable interoperability between networked audio devices that implement different audio control protocols. It proposes the use of a command translator that is capable of receiving messages conforming to one protocol from any of the networked devices, translating the received message to conform to a different control protocol, then transmitting the translated message to the intended target, which understands the translated protocol message. In so doing, the command translator enables common control of the networked devices, since a control application is able to configure and control devices that conform to different protocols by utilizing the command translator to perform appropriate protocol translation.
- Full Text:
- Date Issued: 2014
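The command translator proposed above sits between devices, parsing a message from one protocol and emitting the equivalent message in another. The sketch below uses two hypothetical wire formats (a text command and an XML-ish command) routed through a neutral intermediate form; real translators must also map addressing schemes and value semantics, which this omits.

```python
# Minimal sketch of command translation between two hypothetical audio
# control protocols via a neutral intermediate command dictionary.

def parse_proto_a(msg: str) -> dict:
    # Hypothetical text protocol, e.g. "SET device1/out/volume 0.8"
    verb, address, value = msg.split(" ")
    return {"op": verb.lower(), "address": address, "value": float(value)}

def build_proto_b(cmd: dict) -> str:
    # Hypothetical XML-ish protocol for the target device.
    return (f"<cmd op='{cmd['op']}' target='{cmd['address']}' "
            f"val='{cmd['value']}'/>")

def translate_a_to_b(msg: str) -> str:
    """Receive a protocol-A message, emit the protocol-B equivalent."""
    return build_proto_b(parse_proto_a(msg))

if __name__ == "__main__":
    print(translate_a_to_b("SET device1/out/volume 0.8"))
```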
A structural and functional specification of a SCIM for service interaction management and personalisation in the IMS
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2012
- Subjects: Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4606 , http://hdl.handle.net/10962/d1004864
- Description: The Internet Protocol Multimedia Subsystem (IMS) is a component of the 3G mobile network that has been specified by standards development organisations such as the 3GPP (3rd Generation Partnership Project) and ETSI (European Telecommunication Standards Institute). IMS seeks to guarantee that the telecommunication network of the future provides subscribers with seamless access to services across disparate networks. In order to achieve this, it defines a service architecture that hosts application servers that provide subscribers with value added services. Typically, an application server bundles all the functionality it needs to execute the services it delivers, however this view is currently being challenged. It is now thought that services should be synthesised from simple building blocks called service capabilities. This decomposition would facilitate the re-use of service capabilities across multiple services and would support the creation of new services that could not have originally been conceived. The shift from monolithic services to those built from service capabilities poses a challenge to the current service model in IMS. To accommodate this, the 3GPP has defined an entity known as a service capability interaction manager (SCIM) that would be responsible for managing the interactions between service capabilities in order to realise complex services. Some of these interactions could potentially lead to undesirable results, which the SCIM must work to avoid. As an added requirement, it is believed that the network should allow policies to be applied to network services which the SCIM should be responsible for enforcing. At the time of writing, the functional and structural architecture of the SCIM has not yet been standardised. This thesis explores the current service architecture of the IMS in detail. Proposals that address the structure and functions of the SCIM are carefully compared and contrasted. This investigation leads to the presentation of key aspects of the SCIM, and provides solutions that explain how it should interact with service capabilities, manage undesirable interactions and factor user and network operator policies into its execution model. A modified design of the IMS service layer that embeds the SCIM is subsequently presented and described. The design uses existing IMS protocols and requires no change in the behaviour of the standard IMS entities. In order to develop a testbed for experimental verification of the design, the identification of suitable software platforms was required. This thesis presents some of the most popular platforms currently used by developers such as the Open IMS Core and OpenSER, as well as an open source, Java-based, multimedia communication platform called Mobicents. As a precursor to the development of the SCIM, a converged multimedia service is presented that describes how a video streaming application that is leveraged by a web portal was implemented for an IMS testbed using Mobicents components. The Mobicents SIP Servlets container was subsequently used to model an initial prototype of the SCIM, using a multi-component telephony service to illustrate the proposed service execution model. The design focuses on SIP-based services only, but should also work for other types of IMS application servers.
- Full Text:
- Date Issued: 2012
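The SCIM's role of brokering service capabilities under policy control can be pictured with plain callables. The sketch below is a toy model only: the capability functions, the call dictionary, and the policy lambda are invented for illustration, and none of the SIP signalling the thesis actually targets is represented.

```python
# Toy model of a SCIM execution path: capabilities are chained to form a
# composite service, and policies may veto individual steps for a call.

def record_capability(call):
    call.setdefault("log", []).append("recorded")
    return call

def forward_capability(call):
    call["forwarded_to"] = "voicemail"
    return call

def scim(call, capabilities, policies):
    """Apply each capability unless a policy rejects it for this call."""
    for cap in capabilities:
        if all(policy(call, cap) for policy in policies):
            call = cap(call)
    return call

if __name__ == "__main__":
    # Hypothetical operator policy: premium calls may not be recorded.
    no_recording_for_premium = (
        lambda call, cap: not (call.get("premium") and cap is record_capability))
    call = {"caller": "alice", "premium": True}
    result = scim(call, [record_capability, forward_capability],
                  [no_recording_for_premium])
    print(result)   # premium call is forwarded but not recorded
```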