A structural and functional specification of a SCIM for service interaction management and personalisation in the IMS
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2012
- Subjects: Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4606 , http://hdl.handle.net/10962/d1004864 , Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Description: The Internet Protocol Multimedia Subsystem (IMS) is a component of the 3G mobile network that has been specified by standards development organisations such as the 3GPP (3rd Generation Partnership Project) and ETSI (European Telecommunications Standards Institute). IMS seeks to guarantee that the telecommunication network of the future provides subscribers with seamless access to services across disparate networks. In order to achieve this, it defines a service architecture that hosts application servers that provide subscribers with value-added services. Typically, an application server bundles all the functionality it needs to execute the services it delivers; however, this view is currently being challenged. It is now thought that services should be synthesised from simple building blocks called service capabilities. This decomposition would facilitate the re-use of service capabilities across multiple services and would support the creation of new services that could not have originally been conceived. The shift from monolithic services to those built from service capabilities poses a challenge to the current service model in IMS. To accommodate this, the 3GPP has defined an entity known as a service capability interaction manager (SCIM) that would be responsible for managing the interactions between service capabilities in order to realise complex services. Some of these interactions could potentially lead to undesirable results, which the SCIM must work to avoid. As an added requirement, it is believed that the network should allow policies to be applied to network services, which the SCIM should be responsible for enforcing. At the time of writing, the functional and structural architecture of the SCIM has not yet been standardised. This thesis explores the current service architecture of the IMS in detail. Proposals that address the structure and functions of the SCIM are carefully compared and contrasted. This investigation leads to the presentation of key aspects of the SCIM, and provides solutions that explain how it should interact with service capabilities, manage undesirable interactions and factor user and network operator policies into its execution model. A modified design of the IMS service layer that embeds the SCIM is subsequently presented and described. The design uses existing IMS protocols and requires no change in the behaviour of the standard IMS entities. In order to develop a testbed for experimental verification of the design, the identification of suitable software platforms was required. This thesis presents some of the most popular platforms currently used by developers, such as the Open IMS Core and OpenSER, as well as an open source, Java-based multimedia communication platform called Mobicents. As a precursor to the development of the SCIM, a converged multimedia service is presented that describes how a video streaming application leveraged by a web portal was implemented for an IMS testbed using Mobicents components. The Mobicents SIP Servlets container was subsequently used to model an initial prototype of the SCIM, using a multi-component telephony service to illustrate the proposed service execution model. The design focuses on SIP-based services only, but should also work for other types of IMS application servers.
- Full Text:
- Date Issued: 2012
An investigation into information security practices implemented by Research and Educational Network of Uganda (RENU) member institutions
- Authors: Kisakye, Alex
- Date: 2012 , 2012-11-06
- Subjects: Research and Educational Network of Uganda , Computer security -- Education (Higher) -- Uganda , Computer networks -- Security measures -- Education (Higher) -- Uganda , Management -- Computer network resources -- Education (Higher) -- Uganda , Computer hackers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4586 , http://hdl.handle.net/10962/d1004748 , Research and Educational Network of Uganda , Computer security -- Education (Higher) -- Uganda , Computer networks -- Security measures -- Education (Higher) -- Uganda , Management -- Computer network resources -- Education (Higher) -- Uganda , Computer hackers
- Description: Educational institutions are known to be at the heart of complex computing systems in any region in which they exist, especially in Africa. The existence of high-end computing power, often connected to the Internet and to research network grids, makes educational institutions soft targets for attackers. Attackers of such networks are normally either looking to exploit the large computing resources available for use in secondary attacks or to steal Intellectual Property (IP) from the research networks to which the institutions belong. Universities also store a lot of information about their current students and staff population as well as alumni, ranging from personal to financial information. Unauthorised access to such information violates statutory requirements of the law and could grossly tarnish the institution's name, not to mention cost the institution a lot of money during post-incident activities. The purpose of this study was to investigate the information security practices that have been put in place by Research and Education Network of Uganda (RENU) member institutions to safeguard institutional data and systems from both internal and external security threats. The study was conducted on six member institutions in three phases, between the months of May and July 2011 in Uganda. Phase One involved the use of a customised quantitative questionnaire tool. The tool - originally developed by the information security governance task force of EDUCAUSE - was customised for use in Uganda. Phase Two involved the use of a qualitative interview guide in sessions between the investigator and respondents. Results show that institutions rely heavily on Information and Communication Technology (ICT) systems and services and that all institutions had already acquired more than three information systems and had acquired and implemented some of the cutting-edge equipment and systems in their data centres. Further results show that institutions have established ICT departments, although staff have not been trained in information security. All institutions interviewed have ICT policies, although only a few have carried out policy sensitisation and awareness campaigns for their staff and students.
- Full Text:
- Date Issued: 2012
An investigation into the control of audio streaming across networks having diverse quality of service mechanisms
- Authors: Foulkes, Philip James
- Date: 2012
- Subjects: Streaming audio -- Testing , Data transmission systems -- Testing , Computer networks -- Management , Computer networks -- Evaluation , Computer network protocols -- Standards
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4607 , http://hdl.handle.net/10962/d1004865
- Description: The transmission of real-time audio data across digital networks is subject to strict quality of service requirements. These networks need to be able to guarantee network resources (e.g., bandwidth), ensure timely and deterministic data delivery, and provide time synchronisation mechanisms to ensure successful transmission of this data. Two open standards-based networking technologies, namely IEEE 1394 and the recently standardised Ethernet AVB, provide distinct methods for achieving these goals. Audio devices that are compatible with IEEE 1394 networks exist, and audio devices that are compatible with Ethernet AVB networks are starting to come onto the market. There is a need for mechanisms to provide compatibility between the audio devices that reside on these disparate networks such that existing IEEE 1394 audio devices are able to communicate with Ethernet AVB audio devices, and vice versa. The audio devices that reside on these networks may be remotely controlled by a diverse set of incompatible command and control protocols. It is desirable to have a common network-neutral method of control over the various parameters of the devices that reside on these networks. As part of this study, two Ethernet AVB systems were developed. One system acts as an Ethernet AVB audio endpoint device and another system acts as an audio gateway between IEEE 1394 and Ethernet AVB networks. These systems, along with existing IEEE 1394 audio devices, were used to demonstrate the ability to transfer audio data between the networking technologies. Each of the devices is remotely controllable via a network-neutral command and control protocol, XFN. The IEEE 1394 and Ethernet AVB devices are used to demonstrate the use of the XFN protocol to allow for network-neutral connection management to take place between IEEE 1394 and Ethernet AVB networks. User control over these diverse devices is achieved via the use of a graphical patchbay application, which aims to provide a consistent user interface to a diverse range of devices.
- Full Text:
- Date Issued: 2012
Automated grid fault detection and repair
- Authors: Luyt, Leslie
- Date: 2012 , 2012-05-24
- Subjects: Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4670 , http://hdl.handle.net/10962/d1006693 , Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Description: With the rise in interest in the field of grid and cloud computing, it is becoming increasingly necessary for the grid to be easily maintainable. This maintenance of the grid and grid services can be made easier by using an automated system to monitor and repair the grid as necessary. We propose a novel system to perform automated monitoring and repair of grid systems. To the best of our knowledge, no such systems exist. The results show that certain faults can be easily detected and repaired.
- Full Text:
- Date Issued: 2012
COIN : a customisable, incentive driven video on demand framework for low-cost IPTV services
- Authors: Musvibe, Ray
- Date: 2012 , 2012-03-02
- Subjects: Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4654 , http://hdl.handle.net/10962/d1006650 , Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Description: There has been a significant rise in the provision of television and video services over IP (IPTV) in recent years. Increasing network capacity and falling bandwidth costs have made it both technically and economically feasible for service providers to deliver IPTV services. Several telecommunications (telco) operators worldwide are rolling out IPTV solutions and view IPTV as a major service differentiator and alternative revenue source. The main challenge that IPTV providers currently face, however, is the increasingly congested television service provider market, which also includes Internet Television. IPTV solutions therefore need strong service differentiators to succeed. IPTV solutions can undoubtedly sell much faster if they are more affordable or low-cost. Advertising has already been used in many service sectors to help lower service costs, including traditional broadcast television. This thesis therefore explores the role that advertising can play in helping to lower the cost of IPTV services and to incentivise IPTV billing. Another approach that IPTV providers can use to help sell their product is to address the growing need for control by today's multimedia users. This thesis will therefore explore the varied approaches that can be used to achieve viewer-focused IPTV implementations. To further lower the cost of IPTV services, telcos can also turn to low-cost, open source platforms for service delivery. The adoption of low-cost infrastructure by telcos can lead to reduced Capital Expenditure (CAPEX), which in turn can lead to lower service fees, and ultimately to higher subscriptions and revenue. Therefore, in this thesis, the author proposes a CustOmisable, INcentive (COIN) driven Video on Demand (VoD) framework to be developed and deployed using the Mobicents Communication Platform, an open source service creation and execution platform. The COIN framework aims to provide a viewer-focused, economically competitive service that combines the potential cost savings of using free and open source software (FOSS) with an innovative, incentive-driven billing approach. This project also aims to evaluate whether the Mobicents Platform is a suitable service creation and execution platform for the proposed framework. Additionally, the proposed implementation aims to be interoperable with other IPTV implementations, and hence follows current IPTV standardisation architectures and trends. The service testbed and its implementation are described in detail and only free and open source software is used; this is to enable its easy duplication and extension for future research.
- Full Text:
- Date Issued: 2012
Culturally-relevant augmented user interfaces for illiterate and semi-literate users
- Authors: Gavaza, Takayedzwa
- Date: 2012 , 2012-06-14
- Subjects: User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4665 , http://hdl.handle.net/10962/d1006679 , User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Description: This thesis discusses guidelines for developers of Augmented User Interfaces that can be used by illiterate and semi-literate users. To discover how illiterate and semi-literate users intuitively understand interaction with a computer, a series of Wizard of Oz experiments were conducted. In the first Wizard of Oz study, users were presented with a standard desktop computer, fitted with a number of input devices, to determine how they assume interaction should occur. This study found that the users preferred the use of speech and gestures, which mirrored findings from other researchers. The study also found that users struggled to understand the tab metaphor which is used frequently in applications. From these findings, a localised culturally-relevant tab interface was developed to determine the feasibility of localised Graphical User Interface components. A second study was undertaken to compare the localised tab interface with the traditional tabbed interface. This study collected both quantitative and qualitative data from the participants. It found that users could interact with a localised tabbed interface faster and more accurately than with its traditional counterpart. More importantly, users stated that they intuitively understood the localised interface component, whereas they did not understand the traditional tab metaphor. These user studies have shown that the use of self-explanatory animations, video feedback, localised tabbed interface metaphors and voice output has a positive impact on enabling illiterate and semi-literate users to access information.
- Full Text:
- Date Issued: 2012
GPF : a framework for general packet classification on GPU co-processors
- Authors: Nottingham, Alastair
- Date: 2012
- Subjects: Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4661 , http://hdl.handle.net/10962/d1006662 , Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Description: This thesis explores the design and experimental implementation of GPF, a novel protocol-independent, multi-match packet classification framework. This framework is targeted and optimised for flexible, efficient execution on NVIDIA GPU platforms through the CUDA API, but should not be difficult to port to other platforms, such as OpenCL, in the future. GPF was conceived and developed in order to accelerate classification of large packet capture files, such as those collected by Network Telescopes. It uses a multiphase SIMD classification process which exploits both the parallelism of packet sets and the redundancy in filter programs in order to classify packet captures against multiple filters at extremely high rates. The resultant framework - comprising classification, compilation and buffering components - efficiently leverages GPU resources to classify arbitrary protocols and return multiple filter results for each packet. The classification functions described were verified and evaluated by testing an experimental prototype implementation against several filter programs, of varying complexity, on devices from three GPU platform generations. In addition to the significant speedup achieved in processing results, analysis indicates that the prototype classification functions perform predictably and scale linearly with respect to both packet count and filter complexity. Furthermore, classification throughput (packets/s) remained essentially constant regardless of the underlying packet data, and thus the effective data rate when classifying a particular filter was heavily influenced by the average size of packets in the processed capture. For example, in the trivial case of classifying all IPv4 packets ranging in size from 70 bytes to 1 KB, the observed data rate achieved by the GPU classification kernels ranged from 60 Gbps to 900 Gbps on a GTX 275, and from 220 Gbps to 3.3 Tbps on a GTX 480. In the less trivial case of identifying all ARP, TCP, UDP and ICMP packets for both IPv4 and IPv6 protocols, the effective data rates ranged from 15 Gbps to 220 Gbps (GTX 275) and from 50 Gbps to 740 Gbps (GTX 480) for 70 B and 1 KB packets respectively (a brief worked example after this record illustrates the packet rate these figures imply).
- Full Text:
- Date Issued: 2012
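The throughput figures quoted in the abstract above can be sanity-checked against the stated observation that classification throughput in packets/s stays essentially constant. A brief worked example, using only the GTX 275 numbers from the abstract and assuming 1 KB = 1024 B:

```latex
% Effective data rate = classification rate x packet size, so
% rate (packets/s) = data rate (b/s) / packet size (b/packet).
\[
\frac{60\times10^{9}\ \text{b/s}}{70 \times 8\ \text{b/packet}} \approx 1.07\times10^{8}\ \text{packets/s},
\qquad
\frac{900\times10^{9}\ \text{b/s}}{1024 \times 8\ \text{b/packet}} \approx 1.10\times10^{8}\ \text{packets/s}.
\]
```

Both packet sizes work out to roughly 110 million packets per second, consistent with the claim that the effective data rate scales with the average packet size of the capture rather than with the packet contents.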
Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Authors: Tristram, Waide Barrington
- Date: 2012
- Subjects: Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4655 , http://hdl.handle.net/10962/d1006651 , Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Description: The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library is dependent on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide similar performance compared to explicit threading models (a minimal sketch after this record illustrates the difference in effort). The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
- Full Text:
- Date Issued: 2012
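To make the programming-effort comparison concrete, here is a minimal sketch of the kind of loop-level parallelism such studies benchmark; it is illustrative only and not taken from the thesis. With OpenMP, a single pragma handles thread creation, work partitioning and the reduction; an equivalent Pthreads version of the same loop would need explicit thread creation, per-thread partial sums and a final join and merge.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000  /* illustrative problem size */

int main(void) {
    static double x[N];
    double sum = 0.0;

    /* Initialise the input sequentially. */
    for (int i = 0; i < N; i++)
        x[i] = (double)i / N;

    /* The pragma below is the entire parallelisation effort:
     * OpenMP spawns the threads, splits the iteration space and
     * combines the per-thread partial sums via the reduction. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i] * x[i];

    printf("sum = %f, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}
```

Compiled with `gcc -fopenmp`, the loop runs across all available cores; the same behaviour written against Pthreads typically takes several dozen lines, which is the sense in which higher-abstraction models are found to be cheaper to use for comparable performance.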
Web-based visualisation techniques for reporting zoonotic outbreaks
- Authors: Ncube, Sinini Paul
- Date: 2012
- Subjects: Zoonoses -- Reporting , Communicable diseases -- Reporting , Communication in medicine , Medical telematics , Internet , Information visualization
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4664 , http://hdl.handle.net/10962/d1006672 , Zoonoses -- Reporting , Communicable diseases -- Reporting , Communication in medicine , Medical telematics , Internet , Information visualization
- Description: Zoonotic diseases are diseases that are transmitted from animals or vectors to humans and vice versa. The public, together with veterinary authorities, should be able to access disease information readily, as this is vital for rapidly controlling resultant zoonotic outbreak threats through improved awareness. Currently, the reporting of disease information in South Africa is predominantly limited to traditional methods of Information and Communication Technologies (ICTs) like faxes, monthly newspaper reports, radios, phones and televisions. Although these are effective ways of communication, their disadvantage is that the information that most of them offer can only be accessed at specific times during a crisis. New technologies like the Internet have become the most efficient way of distributing information in near-real-time. Many developed countries have used web-based reporting platforms to deliver timely information through temporal and geographic visualisation techniques. There have been attempts at web-based reporting in South Africa, but most of these sites are characterised by heavy text, which makes them time-consuming to use and maintain. As a result, most sites have not been updated or have ceased to exist because of the workload involved. The success of web reporting mechanisms in developed countries offers evidence that web-based reporting systems, when appropriately visualised, can make information easier to understand and data analysis more efficient. In this thesis, a web-based reporting prototype was proposed after gathering information from different sources: literature related to disease reporting and the visualisation of infectious diseases; the exploration of currently deployed web systems; and the investigation of user requirements from relevant parties. The proposed prototype system was then developed using Adobe Flash tools, Java and MySQL. A focus group then reviewed the developed system to ascertain that the relevant requirements had been incorporated and to obtain additional ideas about the system. This led to the proposal of a new prototype system that can be used by the authorities concerned as a plan to develop a fully functional disease reporting system for South Africa.
- Full Text:
- Date Issued: 2012
μCloud : a P2P cloud platform for computing service provision
- Authors: Fouodji Tasse, Ghislain
- Date: 2012 , 2012-08-22
- Subjects: Cloud computing , Peer-to-peer architecture (Computer networks) , Computer architecture , Computer service industry
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4663 , http://hdl.handle.net/10962/d1006669 , Cloud computing , Peer-to-peer architecture (Computer networks) , Computer architecture , Computer service industry
- Description: The advancements in virtualisation technologies have provided a large spectrum of computational approaches. Dedicated computations can be run in private environments (virtual machines) created within the same computer. Through capable APIs, this functionality is leveraged for the service we wish to implement: a computer power service (CPS). We target peer-to-peer systems for this service, to exploit the potential of aggregating computing resources. The concept of a P2P network is mostly known for its expanded usage in distributed networks for sharing resources like content files or real-time data. This study adds computing power to the list of shared resources by describing a suitable service composition. Taking into account the dynamic nature of the platform, this CPS provision is achieved using a self-stabilising clustering algorithm. The resulting system of our research is based on a hierarchical P2P architecture and offers end-to-end consideration of resource provisioning and reliability. We named this system μCloud and characterise it as a self-provisioning cloud service platform. It is designed, implemented and presented in this dissertation. Finally, we assessed our work by showing that μCloud succeeds in providing user-centric services using a P2P computing unit. With this, we conclude that our system would be highly beneficial in both small and massively deployed environments.
- Full Text:
- Date Issued: 2012