An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment
- Authors: Miles, Shaun Graeme
- Date: 2012-06-20
- Subjects: Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4656 , http://hdl.handle.net/10962/d1006653 , Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Description: This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling the subsequent usage of that information. To address this issue a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through the observation of the environment, is a concern for users and systems. By an attacker observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, it is possible to infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims to be used in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. The framework is developed from a Utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
- Full Text:
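The record above describes multi-factor authentication delivered over multiple paths (for example, a one-time code sent via a GSM network in addition to a password). The following is a minimal, hypothetical sketch of that general idea, not the thesis's actual framework; all class and method names, the 6-digit code format, and the simulated SMS callback are assumptions made for illustration.

```python
import hashlib
import hmac
import os
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with SHA-256; parameters are illustrative only.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class TwoFactorAuthenticator:
    """First factor: password. Second factor: a one-time code sent on an
    independent channel (here a stand-in for SMS over GSM)."""

    def __init__(self, send_out_of_band):
        self._users = {}      # username -> (salt, password_hash)
        self._pending = {}    # username -> expected one-time code
        self._send = send_out_of_band

    def register(self, username: str, password: str) -> None:
        salt = os.urandom(16)
        self._users[username] = (salt, hash_password(password, salt))

    def start_login(self, username: str, password: str) -> bool:
        if username not in self._users:
            return False
        salt, stored = self._users[username]
        if not hmac.compare_digest(stored, hash_password(password, salt)):
            return False
        code = f"{secrets.randbelow(10**6):06d}"   # 6-digit one-time code
        self._pending[username] = code
        self._send(username, code)                 # delivered over the second path
        return True

    def finish_login(self, username: str, code: str) -> bool:
        expected = self._pending.pop(username, None)
        return expected is not None and hmac.compare_digest(expected, code)

# Usage sketch: the "GSM" channel is faked with a print statement.
if __name__ == "__main__":
    auth = TwoFactorAuthenticator(lambda user, code: print(f"SMS to {user}: {code}"))
    auth.register("alice", "correct horse battery staple")
    assert auth.start_login("alice", "correct horse battery staple")
```

Compromising such a scheme requires both the password and interception of the second channel, which is the sense in which adding factors raises the attacker's workload.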
A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828 , Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. Although the video publishing tool was able to provide the necessary compression/decompression services, the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low-bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
- Full Text:
- Date Issued: 2009
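The abstract above lists the ingredients of a test scenario (network configuration, scale, videos, external traffic). Below is a small, hypothetical sketch of how such a scenario might be described as data; every field name and value is invented for illustration and is not taken from CAFSS-Net itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NetworkConfig:
    bandwidth_kbps: int
    latency_ms: int
    loss_percent: float

@dataclass
class Video:
    name: str
    resolution: str          # e.g. "640x480"
    scene_complexity: str    # e.g. "low", "high"

@dataclass
class TestScenario:
    network: NetworkConfig
    videos: List[Video]
    client_count: int
    external_traffic_kbps: int = 0

    def describe(self) -> str:
        return (f"{self.client_count} client(s), "
                f"{self.network.bandwidth_kbps} kbps link, "
                f"{len(self.videos)} video(s), "
                f"{self.external_traffic_kbps} kbps cross-traffic")

# Usage sketch
scenario = TestScenario(
    network=NetworkConfig(bandwidth_kbps=512, latency_ms=40, loss_percent=0.5),
    videos=[Video("clip1", "320x240", "low")],
    client_count=4,
    external_traffic_kbps=128,
)
print(scenario.describe())
```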
A grid based approach for the control and recall of the properties of IEEE 1394 audio devices
- Authors: Foulkes, Philip James
- Date: 2009
- Subjects: IEEE 1394 (Standard) , Computer sound processing , Digital communications , Local area networks (Computer networks) , Sound -- Recording and reproducing -- Digital techniques , Computational grids (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4594 , http://hdl.handle.net/10962/d1004836 , IEEE 1394 (Standard) , Computer sound processing , Digital communications , Local area networks (Computer networks) , Sound -- Recording and reproducing -- Digital techniques , Computational grids (Computer systems)
- Description: The control of modern audio studios is complex. Audio mixing desks have grown to the point where they contain thousands of parameters. The control surfaces of these devices do not reflect the routing and signal processing capabilities that the devices are capable of. Software audio mixing desk editors have been developed that allow for the remote control of these devices, but their graphical user interfaces retain the complexities of the audio mixing desk that they represent. In this thesis, we propose a grid approach to audio mixing. The developed grid audio mixing desk editor represents an audio mixing desk as a series of graphical routing matrices. These routing matrices expose the various signal processing points and signal flows that exist within an audio mixing desk. The routing matrices allow for audio signals to be routed within the device, and allow for the device’s parameters to be adjusted by selecting the appropriate signal processing points. With the use of the programming interfaces that are defined as part of the Studio Connections – Total Recall SDK, the audio mixing desk editor was integrated with compatible DAW applications to provide persistence of audio mixing desk parameter states. Many audio studios currently use digital networks to connect audio devices together. Audio and control signals are patched between devices through the use of software patchbays that run on computers. We propose a double grid-based FireWire patchbay aimed at simplifying the patching of signals between audio devices on a FireWire network. The FireWire patchbay was implemented in such a way that it can host software device editors that are Studio Connections compatible. This has allowed software device editors to be associated with the devices that are represented on the FireWire patchbay, thus allowing for studio-wide control from a single application. The double grid-based patchbay was implemented such that it can be hosted by compatible DAW applications. Through this, the double grid-based patchbay application is able to provide the DAW application with the state of the parameters of the devices in a studio, as well as the connections between them. The DAW application may save this state data to its native song files. This state data may be passed back to the double grid-based patchbay when the song file is reloaded at a later stage. This state data may then be used by the patchbay to restore the parameters of the patchbay and its device editors to a previous state. This restored state may then be transferred to the hardware devices being represented by the patchbay.
- Full Text:
- Date Issued: 2009
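The central idea in the record above is a routing matrix: sources on one axis, destinations on the other, with a connection made by selecting a cell. The sketch below is a hypothetical, text-only illustration of that data structure; the class, the plug names, and the rendering are all invented and do not reflect the thesis's actual editor or the Studio Connections SDK.

```python
class RoutingMatrix:
    """Grid of sources vs destinations; a patched cell is a routed signal."""

    def __init__(self, sources, destinations):
        self.sources = list(sources)
        self.destinations = list(destinations)
        self._connections = set()   # (source, destination) pairs

    def connect(self, source, destination):
        if source not in self.sources or destination not in self.destinations:
            raise ValueError("unknown signal point")
        self._connections.add((source, destination))

    def disconnect(self, source, destination):
        self._connections.discard((source, destination))

    def render(self):
        # Print the grid with 'X' marking patched cells.
        header = "         " + " ".join(f"{d:>8}" for d in self.destinations)
        rows = [header]
        for s in self.sources:
            cells = ["X".center(8) if (s, d) in self._connections else ".".center(8)
                     for d in self.destinations]
            rows.append(f"{s:>8} " + " ".join(cells))
        return "\n".join(rows)

matrix = RoutingMatrix(["Mic 1", "Mic 2"], ["Bus 1", "Bus 2", "Main"])
matrix.connect("Mic 1", "Main")
matrix.connect("Mic 2", "Bus 1")
print(matrix.render())
```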
A knowledge-oriented, context-sensitive architectural framework for service deployment in marginalized rural communities
- Authors: Thinyane, Mamello P
- Date: 2009
- Subjects: Information technology , Expert systems (Computer science) , Software architecture , User interfaces (Computer systems) , Ethnoscience , Social networks , Rural development , Technical assistance -- Developing countries , Information networks -- Developing countries
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4599 , http://hdl.handle.net/10962/d1004843
- Description: The notion of a global knowledge society is somewhat of a misnomer, because large portions of the global community are not participants in this global knowledge society, which is driven, shaped by and socio-technically biased towards a small fraction of the global population. Information and Communication Technology (ICT) is culture-sensitive, and this is a dynamic that is largely ignored in the majority of ICT for Development (ICT4D) interventions, leading to the technological determinism flaw and ultimately a failure of the undertaken projects. The deployment of ICT solutions, in particular in the context of ICT4D, must be informed by the cultural and socio-technical profile of the deployment environments, and the solutions themselves must be developed with a focus towards context-sensitivity and ethnocentricity. In this thesis, we investigate the viability of a software architectural framework for the development of ICT solutions that are context-sensitive and ethnocentric, and so aligned with the cultural and social dynamics within the environment of deployment. The conceptual framework, named PIASK, defines five tiers (presentation, interaction, access, social networking, and knowledge base) which allow for: behavioural completeness of the layer components; a modular and functionally decoupled architecture; and the flexibility to situate and contextualize the developed applications along the dimensions of the User Interface (UI), interaction modalities, usage metaphors, underlying Indigenous Knowledge (IK), and access protocols. We have developed a proof-of-concept service platform, called KnowNet, based on the PIASK architecture. KnowNet is built around the knowledge base layer, which consists of domain ontologies that encapsulate the knowledge in the platform, with an intrinsic flexibility to access secondary knowledge repositories. The domain ontologies constructed (as examples) are for the provisioning of eServices to support societal activities (e.g. commerce, health, agriculture, medicine) within a rural and marginalized area of Dwesa, in the Eastern Cape province of South Africa. The social networking layer allows for situating the platform within the local social systems. Heterogeneity of user profiles and multiplicity of end-user devices are handled through the access and the presentation components, and the service logic is implemented by the interaction components. This service platform validates the PIASK architecture for end-to-end provisioning of multi-modal, heterogeneous, ontology-based services. The development of KnowNet was informed on one hand by the latest trends within service architectures, semantic web technologies and social applications, and on the other hand by context considerations based on the profile (IK systems dynamics, infrastructure, usability requirements) of the Dwesa community. The realization of the service platform is based on the JADE Multi-Agent System (MAS), and this shows the applicability and adequacy of MASs for service deployment in a rural context, at the same time providing key advantages such as platform fault-tolerance, robustness and flexibility. While the context of conceptualization of PIASK and the implementation of KnowNet is that of rurality and of ICT4D, the applicability of the architecture extends to other similarly heterogeneous and context-sensitive domains.
KnowNet has been validated for functional and technical adequacy, and we have also undertaken an initial pre-validation for social context sensitivity. We observe that the five-tier PIASK architecture provides an adequate framework for developing context-sensitive and ethnocentric software: by functionally separating and making explicit the social networking and access tier components, while still maintaining the traditional separation of presentation, business logic and data components.
- Full Text:
- Date Issued: 2009
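The abstract above names five functionally decoupled tiers (presentation, interaction, access, social networking, knowledge base). Below is a deliberately tiny, hypothetical sketch of such a decomposition; the classes, the dictionary "knowledge base", and the example data are invented to illustrate layer separation only, and are not the PIASK or KnowNet implementation (which is ontology- and agent-based).

```python
class KnowledgeBase:
    def __init__(self, facts):
        self._facts = facts                      # stand-in for domain ontologies

    def query(self, topic):
        return self._facts.get(topic, "no entry")

class SocialNetworking:
    def __init__(self, groups):
        self._groups = groups                    # user -> community group

    def group_of(self, user):
        return self._groups.get(user, "general")

class Access:
    def authorise(self, user, topic):
        return True                              # placeholder policy

class Interaction:
    def __init__(self, kb, social, access):
        self.kb, self.social, self.access = kb, social, access

    def handle(self, user, topic):
        if not self.access.authorise(user, topic):
            return "access denied"
        group = self.social.group_of(user)
        return f"[{group}] {self.kb.query(topic)}"

class Presentation:
    def __init__(self, interaction):
        self.interaction = interaction

    def render(self, user, topic):
        # Could equally target voice, SMS, or a web UI per user profile.
        return f"<p>{self.interaction.handle(user, topic)}</p>"

ui = Presentation(Interaction(KnowledgeBase({"maize": "plant after first rains"}),
                              SocialNetworking({"thandi": "farmers"}),
                              Access()))
print(ui.render("thandi", "maize"))
```

The point of the layering is that each tier can be swapped (a different presentation modality, a different access protocol) without touching the others.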
An investigation into the design and implementation of an internet-scale network simulator
- Authors: Richter, John Peter Frank
- Date: 2009
- Subjects: Computer simulation , Computer network resources , Computer networks , Internet
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4597 , http://hdl.handle.net/10962/d1004840 , Computer simulation , Computer network resources , Computer networks , Internet
- Description: Simulation is a complex task with many research applications - chiefly as a research tool, to test and evaluate hypothetical scenarios. Though many simulations execute similar operations and utilise similar data, there are few simulation frameworks or toolkits that allow researchers to rapidly develop their concepts. Those that are available to researchers are limited in scope, or use old technology that is no longer useful to modern researchers. As a result of this, many researchers build their own simulations without a framework, wasting time and resources on a system that could already cater for the majority of their simulation's requirements. In this work, a system is proposed for the creation of a scalable, dynamic-resolution network simulation framework that provides scalable scope for researchers, using modern technologies and languages. This framework should allow researchers to rapidly develop a broad range of semantically-rich simulations, without the necessity of super- or grid-computers or clusters. Design and implementation are discussed and alternative network simulations are compared to the proposed framework. A series of simulations, focusing on malware, is run on an implementation of this framework, and the results are compared to expectations for the outcomes of those simulations. In conclusion, a critical review of the simulator is made, considering any extensions or shortcomings that need to be addressed.
- Full Text:
- Date Issued: 2009
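The record above mentions malware-focused simulations run on the framework. As a hypothetical illustration of that class of experiment (not the thesis's simulator), the sketch below runs a coarse discrete-time infection spread over a randomly generated network; the topology model, probabilities and step count are all assumptions.

```python
import random

def simulate_spread(node_count=50, edge_prob=0.08, infect_prob=0.3,
                    steps=10, seed=1):
    rng = random.Random(seed)
    # Build an undirected, Erdos-Renyi-style random graph as adjacency sets.
    adjacency = {n: set() for n in range(node_count)}
    for a in range(node_count):
        for b in range(a + 1, node_count):
            if rng.random() < edge_prob:
                adjacency[a].add(b)
                adjacency[b].add(a)

    infected = {0}                       # patient zero
    history = [len(infected)]
    for _ in range(steps):
        newly = set()
        for node in infected:
            for neighbour in adjacency[node]:
                if neighbour not in infected and rng.random() < infect_prob:
                    newly.add(neighbour)
        infected |= newly
        history.append(len(infected))
    return history

print(simulate_spread())   # infected-node count after each time step
```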
An investigation into the hardware abstraction layer of the plural node architecture for IEEE 1394 audio devices
- Authors: Chigwamba, Nyasha
- Date: 2009
- Subjects: IEEE 1394 (Standard) , Digital communications , Computer sound processing , Local area networks (Computer networks) , Computer network architectures , Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4598 , http://hdl.handle.net/10962/d1004841 , IEEE 1394 (Standard) , Digital communications , Computer sound processing , Local area networks (Computer networks) , Computer network architectures , Sound -- Recording and reproducing -- Digital techniques
- Description: Digital audio network technologies are becoming more prevalent in audio related environments. Yamaha Corporation has created a digital audio network solution, named mLAN (music Local Area Network), that uses IEEE 1394 as its underlying network technology. IEEE 1394 is a digital network technology that is specifically designed for real-time multimedia data transmission. The second generation of mLAN is based on the Plural Node Architecture, where the control of audio and MIDI routings between IEEE 1394 devices is split between two node types, namely an Enabler and a Transporter. The Transporter typically resides in an IEEE 1394 device and is solely responsible for transmission and reception of audio or MIDI data. The Enabler typically resides in a workstation and exposes an abstract representation of audio or MIDI plugs on each Transporter to routing control applications. The Enabler is responsible for configuring audio and MIDI routings between plugs on different Transporters. A Hardware Abstraction Layer (HAL) within the Enabler allows it to uniformly communicate with Transporters that are created by various vendors. A plug-in mechanism is used to provide this capability. When vendors create Transporters, they also create device-specific plug-ins for the Enabler. These plug-ins are created against a Transporter HAL Application Programming Interface (API) that defines methods to access the capabilities of Transporters. An Open Generic Transporter (OGT) guideline document which models all the capabilities of Transporters has been produced. These guidelines make it possible for manufacturers to create Transporters that make use of a common plug-in, although based on different hardware architectures. The introduction of the OGT concept has revealed additional Transporter capabilities that are not incorporated in the existing Transporter HAL API. This has led to the underutilisation of OGT capabilities. The main goals of this investigation have been to improve the Enabler’s plug-in mechanism, and to incorporate the additional capabilities that have been revealed by the OGT into the Transporter HAL API. We propose a new plug-in mechanism, and a new Transporter HAL API that fully utilises both the additional capabilities revealed by the OGT and the capabilities of existing Transporters.
- Full Text:
- Date Issued: 2009
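The record above turns on a plug-in mechanism: the Enabler addresses every Transporter through one abstract API, and vendors supply device-specific plug-ins. The sketch below is a generic, hypothetical rendering of that pattern in Python; the method names (`list_plugs`, `connect`), the registry decorator and the vendor class are invented and do not correspond to the actual Transporter HAL API.

```python
from abc import ABC, abstractmethod

class TransporterPlugin(ABC):
    """Abstract interface the 'Enabler' uses for every device."""

    @abstractmethod
    def list_plugs(self) -> list:
        """Return the names of the audio/MIDI plugs the device exposes."""

    @abstractmethod
    def connect(self, src_plug: str, dst_plug: str) -> None:
        """Route a source plug to a destination plug on the device."""

_registry = {}

def register_plugin(device_id: str):
    def decorator(cls):
        _registry[device_id] = cls
        return cls
    return decorator

@register_plugin("vendor-x-interface")
class VendorXPlugin(TransporterPlugin):
    def list_plugs(self):
        return ["analog-in-1", "analog-in-2", "isoch-out-1"]

    def connect(self, src_plug, dst_plug):
        print(f"vendor-x: routing {src_plug} -> {dst_plug}")

def load_plugin(device_id: str) -> TransporterPlugin:
    # The controlling application only ever sees the abstract interface.
    return _registry[device_id]()

plugin = load_plugin("vendor-x-interface")
print(plugin.list_plugs())
plugin.connect("analog-in-1", "isoch-out-1")
```

Extending the abstract interface (as the thesis proposes for the capabilities revealed by the Open Generic Transporter work) means adding methods here and implementing them in each registered plug-in.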
Automating the conversion of natural language fiction to multi-modal 3D animated virtual environments
- Authors: Glass, Kevin Robert
- Date: 2009
- Subjects: Virtual computer systems , Virtual storage (Computer science) , Virtual reality , Computer animation , Fiction -- Computer programs , Narration (Rhetoric) -- Computer simulation , Animation (Cinematography) , Natural language processing (Computer Science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4632 , http://hdl.handle.net/10962/d1006518
- Description: Popular fiction books describe rich visual environments that contain characters, objects, and behaviour. This research develops automated processes for converting text sourced from fiction books into animated virtual environments and multi-modal films. This involves the analysis of unrestricted natural language fiction to identify appropriate visual descriptions, and the interpretation of the identified descriptions for constructing animated 3D virtual environments. The goal of the text analysis stage is the creation of annotated fiction text, which identifies visual descriptions in a structured manner. A hierarchical rule-based learning system is created that induces patterns from example annotations provided by a human, and uses these for the creation of additional annotations. Patterns are expressed as tree structures that abstract the input text on different levels according to structural (token, sentence) and syntactic (parts-of-speech, syntactic function) categories. Patterns are generalized using pair-wise merging, where dissimilar sub-trees are replaced with wild-cards. The result is a small set of generalized patterns that are able to create correct annotations. A set of generalized patterns represents a model of an annotator's mental process regarding a particular annotation category. Annotated text is interpreted automatically for constructing detailed scene descriptions. This includes identifying which scenes to visualize, and identifying the contents and behaviour in each scene. Entity behaviour in a 3D virtual environment is formulated using time-based constraints that are automatically derived from annotations. Constraints are expressed as non-linear symbolic functions that restrict the trajectories of a pair of entities over a continuous interval of time. Solutions to these constraints specify precise behaviour. We create an innovative quantified constraint optimizer for locating sound solutions, which uses interval arithmetic for treating time and space as contiguous quantities. This optimization method uses a technique of constraint relaxation and tightening that allows solution approximations to be located where constraint systems are inconsistent (an ability not previously explored in interval-based quantified constraint solving). 3D virtual environments are populated by automatically selecting geometric models or procedural geometry-creation methods from a library. 3D models are animated according to trajectories derived from constraint solutions. The final animated film is sequenced using a range of modalities including animated 3D graphics, textual subtitles, audio narrations, and foleys. Hierarchical rule-based learning is evaluated over a range of annotation categories. Models are induced for different categories of annotation without modifying the core learning algorithms, and these models are shown to be applicable to different types of books. Models are induced automatically with accuracies ranging between 51.4% and 90.4%, depending on the category. We show that models are refined if further examples are provided, and this supports a boot-strapping process for training the learning mechanism. The task of interpreting annotated fiction text and populating 3D virtual environments is successfully automated using our described techniques. Detailed scene descriptions are created accurately, where between 83% and 96% of the automatically generated descriptions require no manual modification (depending on the type of description). 
The interval-based quantified constraint optimizer fully automates the behaviour specification process. Sample animated multi-modal 3D films are created using extracts from fiction books that are unrestricted in terms of complexity or subject matter (unlike existing text-to-graphics systems). These examples demonstrate that: behaviour is visualized that corresponds to the descriptions in the original text; appropriate geometry is selected (or created) for visualizing entities in each scene; sequences of scenes are created for a film-like presentation of the story; and that multiple modalities are combined to create a coherent multi-modal representation of the fiction text. This research demonstrates that visual descriptions in fiction text can be automatically identified, and that these descriptions can be converted into corresponding animated virtual environments. Unlike existing text-to-graphics systems, we describe techniques that function over unrestricted natural language text and perform the conversion process without the need for manually constructed repositories of world knowledge. This enables the rapid production of animated 3D virtual environments, allowing the human designer to focus on creative aspects.
- Full Text:
- Date Issued: 2009
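One concrete step described above is pattern generalisation by pair-wise merging, where dissimilar sub-trees collapse to wildcards. The following is a minimal, hypothetical sketch of that operation over toy nested-tuple "patterns"; the representation and the example sentences are invented, and the thesis's patterns abstract text over richer structural and syntactic categories.

```python
WILDCARD = "*"

def generalise(a, b):
    """Merge two tree-shaped patterns node by node."""
    # Tuples of equal length: merge child by child.
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        return tuple(generalise(x, y) for x, y in zip(a, b))
    # Identical leaves are kept; any disagreement collapses to a wildcard.
    return a if a == b else WILDCARD

# Two toy patterns: POS-tagged (tag, token) leaves for three-word phrases.
p1 = (("DT", "the"), ("NN", "door"), ("VBD", "opened"))
p2 = (("DT", "the"), ("NN", "window"), ("VBD", "opened"))

print(generalise(p1, p2))
# (('DT', 'the'), ('NN', '*'), ('VBD', 'opened'))
```

Repeating this merge across many example annotations yields a small set of generalised patterns that still match (and therefore annotate) new, unseen sentences of the same shape.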
Using semantic knowledge to improve compression on log files
- Authors: Otten, Frederick John
- Date: 2009 , 2008-11-19
- Subjects: Computer networks , Data compression (Computer science) , Semantics--Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4650 , http://hdl.handle.net/10962/d1006619 , Computer networks , Data compression (Computer science) , Semantics--Data processing
- Description: With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with their own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually use a similar format with a defined syntax. In the log files, not all the ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying the results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include: one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters. In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
- Full Text:
- Date Issued: 2009
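The abstract above describes a preprocessor that replaces textual IP addresses (and timestamps) with compact binary equivalents before a standard compressor runs. The sketch below is a hypothetical, simplified illustration of the IP-address half of that idea; the marker byte, the framing, and the sample log text are assumptions, and a real preprocessor would also have to be fully reversible and handle timestamps.

```python
import re
import socket
import zlib

MARKER = b"\x01"                                  # assumed never to occur in the log text
IPV4 = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")

def preprocess(data: bytes) -> bytes:
    """Replace each dotted-quad IPv4 address with MARKER + its 4-byte form."""
    def pack(match):
        return MARKER + socket.inet_aton(match.group().decode())
    return IPV4.sub(pack, data)

log = (b"Jan 10 12:00:01 gw sshd[231]: Failed password from 192.168.10.45\n"
       b"Jan 10 12:00:07 gw sshd[231]: Failed password from 192.168.10.45\n") * 500

plain = zlib.compress(log)
packed = zlib.compress(preprocess(log))
# raw size vs compressed size vs preprocessed-then-compressed size
print(len(log), len(plain), len(packed))
```

Shrinking each address from up to 15 text bytes to 5 bytes before compression is one way semantic knowledge of the log format can reduce both the compressor's input and its output.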