An analysis of the correlation between packet loss and network delay on the performance of congested networks and their impact: case study University of Fort Hare
- Authors: Lutshete, Sizwe
- Date: 2013
- Subjects: Computer network protocols -- South Africa -- Eastern Cape , Packet switching (Data transmission) , Cactus -- South Africa -- Eastern Cape , Network analysis (Planning) -- South Africa -- Eastern Cape , Network performance (Telecommunication) , Network Time Protocol (Computer network protocol)
- Language: English
- Type: Thesis , Masters , MSc (Computer Science)
- Identifier: vital:11393 , http://hdl.handle.net/10353/d1006843 , Computer network protocols -- South Africa -- Eastern Cape , Packet switching (Data transmission) , Cactus -- South Africa -- Eastern Cape , Network analysis (Planning) -- South Africa -- Eastern Cape , Network performance (Telecommunication) , Network Time Protocol (Computer network protocol)
- Description: In this paper we study packet delay and loss rate at the University of Fort Hare network. The focus of this paper is to evaluate the information derived from a multipoint measurement of the University of Fort Hare network, which will be collected over a duration of three months, from June 2011 to August 2011, at the TSC uplink and at Ethernet hubs outside and inside relative to the Internet firewall host. The specific value of this data set lies in the end-to-end instrumentation of all devices operating at the packet level, combined with the duration of observation. We will provide measures for the normal day-to-day operation of the University of Fort Hare network both at off-peak and during peak hours. We expect to show the impact of delay and loss rate at the University of Fort Hare network. The data set will include a number of areas where service quality (delay and packet loss) is extreme, moderate, or good, and we will examine the causes and impacts on network users.
- Full Text:
- Date Issued: 2013
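The abstract above centres on two packet-level metrics, delay and loss rate. As a minimal sketch of how such metrics might be computed from timestamps matched between two measurement points (the function, variable names and timing figures below are illustrative assumptions, not taken from the thesis):

```python
def delay_and_loss(sent, received):
    """sent: {packet_id: t_sent}, received: {packet_id: t_received}.
    Returns (mean delay in seconds over delivered packets, loss rate)."""
    # A packet is counted as delivered only if it was seen at both points.
    delays = [received[p] - sent[p] for p in sent if p in received]
    loss_rate = 1 - len(delays) / len(sent) if sent else 0.0
    mean_delay = sum(delays) / len(delays) if delays else None
    return mean_delay, loss_rate

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.18, 4: 0.42}       # packet 3 was lost
mean_delay, loss = delay_and_loss(sent, received)
print(mean_delay, loss)  # mean of 0.05, 0.08, 0.12 -> 0.0833...; loss 0.25
```

In a real multipoint capture the two dictionaries would be built by matching packet identifiers (or hashes) between the outside and inside capture points.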
A model for assessing and reporting network performance measurement in SANReN
- Authors: Draai, Kevin
- Date: 2017
- Subjects: Computer networks -- Evaluation , Network performance (Telecommunication) , Computer networks -- Management
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/16131 , vital:28326
- Description: The performance measurement of a service provider network is an important activity. It is required for the smooth operation of the network as well as for reporting and planning. SANReN is a service provider tasked with serving the research and education network of South Africa. It currently has no structure or process for determining network performance metrics to measure the performance of its network. The objective of this study is to determine, through a process or structure, which metrics are best suited to the SANReN environment. This study is conducted in three phases in order to discover and verify the solution to this problem. The phases are "Contextualisation", "Design", and "Verification". The "Contextualisation" phase includes the literature review. This provides the context for the problem area but also serves as a search function for the solution. This study adopts the design science research paradigm, which requires the creation of an artefact. The "Design" phase involves the creation of the conceptual network performance measurement model. This is the artefact and a generalised model for determining the network performance metrics for an NREN. To prove the utility of the model, it is implemented in the SANReN environment. This is done in the "Verification" phase. The network performance measurement model proposes a process to determine network performance metrics. This process includes gathering the NREN's requirements and goals, defining the NREN's network design goals from these requirements, defining network performance metrics from these goals, evaluating the NREN's monitoring capability, and measuring what is possible. This model provides a starting point for NRENs to determine network performance metrics tailored to their environments. This is done in the SANReN environment as a proof of concept.
The utility of the model is shown through the implementation in the SANReN environment, and thus it can be argued that the model is generic. Network performance data is retrieved from the tools that monitor the performance of the SANReN network. By understanding the requirements, determining network design goals and performance metrics, and determining the gap in monitoring capability, results could be retrieved. These results are analysed and finally aggregated to provide information that feeds into SANReN reporting and planning processes. A template is provided for the aggregation of metric results. This template provides the structure to enable metric-result aggregation but leaves the categories or labels for the reporting and planning sections blank, as these categories are specific to each NREN. At this point SANReN has the aggregated information to use for planning and reporting. The model is verified, and thus the study's main research objective is satisfied.
- Full Text:
- Date Issued: 2017
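The abstract above describes a template that aggregates metric results under reporting categories whose labels are left for each NREN to fill in. A hedged sketch of that idea (the category names, metric names and figures here are invented for illustration, not taken from the thesis):

```python
from statistics import mean

def aggregate(results, template):
    """results: {metric: [measured values]}; template: {category: [metrics]}.
    Returns {category: {metric: mean value}} for reporting and planning."""
    return {
        category: {m: round(mean(results[m]), 2) for m in metrics if m in results}
        for category, metrics in template.items()
    }

# Raw metric results, e.g. pulled from the monitoring tools:
results = {"latency_ms": [12.1, 13.3, 11.8], "loss_pct": [0.1, 0.3], "util_pct": [61, 75]}
# The template: categories are NREN-specific, so these labels are placeholders.
template = {"service quality": ["latency_ms", "loss_pct"], "capacity planning": ["util_pct"]}

report = aggregate(results, template)
print(report)
```

Metrics absent from the results are simply skipped, so the same template can be reused as the NREN's monitoring capability grows.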
Improving the robustness and effectiveness of rural telecommunication infrastructures in Dwesa South Africa
- Authors: Ranga, Memory Munashe
- Date: 2011
- Subjects: Information technology -- South Africa -- Eastern Cape , Rural development -- South Africa -- Eastern Cape , Community development -- South Africa -- Eastern Cape , Information networks -- South Africa -- Eastern Cape , Sustainable development -- South Africa -- Eastern Cape , Computer networks -- South Africa -- Eastern Cape
- Language: English
- Type: Thesis , Masters , MSc (Computer Science)
- Identifier: vital:11382 , http://hdl.handle.net/10353/d1001113 , Information technology -- South Africa -- Eastern Cape , Rural development -- South Africa -- Eastern Cape , Community development -- South Africa -- Eastern Cape , Information networks -- South Africa -- Eastern Cape , Sustainable development -- South Africa -- Eastern Cape , Computer networks -- South Africa -- Eastern Cape
- Description: In recent years, immense effort has been channelled towards the information and technological development of rural areas. To support this development, telecommunication networks have been deployed. The availability of these telecommunication networks is expected to improve the way people share ideas and communicate locally and globally, reducing limiting factors like distance through the use of the Internet. The major problem for these networks is that very few of them have managed to stay in operation over long periods of time. One of the major causes of this failure is the lack of proper monitoring and management as, in some cases, administrators are located far away from the network site. Other factors that contribute to the frequent failure of these networks are the lack of proper infrastructure, the lack of a constant power supply and other environmental issues. A telecommunication network was deployed for the people of Dwesa by the Siyakhula Living Lab project. During this research project, frequent visits were made to the site and network users were informally interviewed in order to gain insight into the network challenges. Based on the challenges, different network monitoring systems and other solutions were deployed on the network. This thesis analyses the problems encountered and presents possible and affordable solutions that were implemented on the network. This was done to improve the network's reliability, availability and manageability whilst exploring possible and practical ways in which the connectivity of the deployed telecommunication network can be maintained. As part of these solutions, a GPRS redundant link, Nagios and Cacti monitoring systems, as well as simple backup systems, were deployed.
- Full Text:
- Date Issued: 2011
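The abstract above mentions deploying Nagios to monitor the network. Nagios checks follow a fixed plugin convention in which the exit status encodes the state (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN). A hedged sketch of such a check for link latency (the thresholds and function name are illustrative assumptions, not from the thesis):

```python
def check_latency(rtt_ms, warn=200.0, crit=500.0):
    """Return (status, message) in the Nagios plugin convention:
    0=OK, 1=WARNING, 2=CRITICAL. rtt_ms=None means no reply at all."""
    if rtt_ms is None:
        return 2, "CRITICAL - host unreachable"
    if rtt_ms >= crit:
        return 2, f"CRITICAL - rtt {rtt_ms:.0f}ms"
    if rtt_ms >= warn:
        return 1, f"WARNING - rtt {rtt_ms:.0f}ms"
    return 0, f"OK - rtt {rtt_ms:.0f}ms"

print(check_latency(120))   # (0, 'OK - rtt 120ms')
print(check_latency(None))  # (2, 'CRITICAL - host unreachable')
```

In a deployment like the one described, the CRITICAL state on the primary uplink is exactly the event that would trigger failover to a redundant link such as the GPRS backup.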
A decentralized multi-agent based network management system for ICT4D networks
- Authors: Matebese, Sithembiso
- Date: 2014
- Subjects: Microsoft Word 2010
- Language: English
- Type: Thesis , Masters , MSc (Computer Science)
- Identifier: vital:11398 , http://hdl.handle.net/10353/d1019853
- Description: Network management is fundamental for assuring the high-quality services required by each user and for the effective utilization of network resources. In this research, we propose the use of a decentralized, flexible and scalable multi-agent based system to monitor and manage rural broadband networks adaptively and efficiently. This mechanism is not novel, as it has been used for high-speed, large-scale and distributed networks. This research investigates how software agents could collaborate in the process of managing rural broadband networks and develops an autonomous, decentralized network management mechanism. In rural networks, network management is a challenging task because of the lack of a reliable power supply, greater geographical distances, topographical barriers, and the lack of technical support as well as computer repair facilities. This renders the network monitoring function complex and difficult. Since software agents are goal-driven, this research aims at developing a distributed management system that efficiently diagnoses errors on a given network and autonomously invokes effective changes to the network based on the goals defined on system agents. To make this possible, the Siyakhula Living Lab network was used as the research case study, and the existing network management system was reviewed and used as the basis for the proposed network management system. The proposed network management system uses the JADE framework, the Hyperic-Sigar API, Java network programming and the JESS scripting language to implement reasoning software agents. JADE and Java were used to develop the system agents according to FIPA specifications. Hyperic-Sigar was used to collect device information, Jpcap was used for collecting device network information, and JESS for developing a rule engine for agents to reason about the device and network state.
Even though the system was developed with the Siyakhula Living Lab in mind, technically it can be used in any small-to-medium network because it is adaptable and scalable to various network infrastructure requirements. The proposed system consists of two types of agents, the MasterAgent and the NodeAgent. The MasterAgent resides on the device that hosts the agent platform, and a NodeAgent resides on each device connected to the network. The MasterAgent provides the network administrator with graphical and web user interfaces so that they can view network analysis and statistics. The agent platform provides agents with the executing environment, and every agent, when started, is added to this platform. The system is platform independent, as it has been tested on Linux, Mac and Windows platforms. The implemented system has been found to provide a network management function suited to rural broadband networks that is: scalable, in that more node agents can be added to the system to accommodate more devices in the network; autonomous, in the ability to reason and execute actions based on the defined rules; and fault-tolerant, through being designed as a decentralized platform, thereby reducing the single point of failure (SPOF) in the system.
- Full Text:
- Date Issued: 2014
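The agents described above reason about device and network state through a rule engine (JESS in the thesis). As a toy stand-in for that idea, each rule can be modelled as a (condition, action) pair evaluated against a node's reported facts. The rules, fact names and thresholds below are illustrative assumptions, not taken from the thesis:

```python
def evaluate(facts, rules):
    """Return the actions of every rule whose condition matches the facts."""
    return [action for condition, action in rules if condition(facts)]

# Each rule: (predicate over the fact dictionary, action to invoke).
rules = [
    (lambda f: f.get("cpu_pct", 0) > 90, "alert: cpu overload"),
    (lambda f: f.get("disk_free_mb", 1e9) < 100, "alert: low disk"),
    (lambda f: not f.get("reachable", True), "restart: network service"),
]

# Facts as a NodeAgent might report them to the MasterAgent:
actions = evaluate({"cpu_pct": 95, "disk_free_mb": 50, "reachable": True}, rules)
print(actions)  # ['alert: cpu overload', 'alert: low disk']
```

A production rule engine adds pattern matching and conflict resolution on top of this, but the core loop, facts in, matched actions out, is the same.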
Impact of integrating Teebus hydro power on the unbalanced distribution MV network
- Authors: Mthethwa, Lindani
- Date: 2018
- Subjects: Electric power systems , Renewable energy sources , Hydroelectric power plants
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/33054 , vital:32512
- Description: Small hydro power sources have been identified as one of the renewable energy technologies that the South African government is focusing on in order to generate more electricity from renewable/independent resources. Due to the low carbon output of most renewable energy technologies, and the carbon-intensive power generation technologies currently used in South Africa (e.g. coal and gas), there is increasing pressure to incorporate cleaner forms of generation. In 2002 a study focusing on hydropower potential was compiled, providing an assessment of conventional and unconventional possibilities for all the provinces. Nowadays, electricity demand is growing fast, and one of the main tasks for power engineers is to generate electricity from renewable energy sources to meet this increase in energy consumption and at the same time reduce the environmental impact of power generation. Eskom Distribution Eastern Cape Operating Unit (ECOU) was requested to investigate the feasibility of connecting a small hydro power scheme located in the Teebus area in the Eastern Cape. The Eastern Cape, in particular, was identified as potentially the most productive area for small hydroelectric development in South Africa, for both grid-connected and off-grid applications. These network conditions are in contrast to the South African electricity network, where long radial feeders with low X/R ratios and high resistance, spanning large geographic areas, give rise to low voltages on the network. Practical simulation networks have been used to test the conditions set out in the South African Grid Code/NERSA standard and to test the impact of connecting small hydro generation onto the unbalanced distribution network. These networks are representative of various real case scenarios of the South African distribution network.
Most of the findings from the simulations were consistent with what was expected when compared with the literature. From the simulation results it was seen that the performance of the variable speed generators was superior to that of the fixed speed generators during transient conditions. It was also seen that the weakness of the network had a negative effect on the stability of the system. It is also noted that stability studies are a necessity when connecting generators to a network and that each case should be reviewed individually. The fundamental cause of voltage instability is identified as the inability of the combined distribution and generation system to meet excessive load demand in either real power or reactive power form.
- Full Text:
- Date Issued: 2018
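The abstract above attributes low network voltages to long, high-resistance radial feeders. The first-order reason is the approximate per-phase feeder voltage drop, dV ≈ I(R·cos φ + X·sin φ): for a given load current, a high-R feeder drops far more voltage. A hedged sketch with purely illustrative feeder parameters (not values from the thesis):

```python
import math

def voltage_drop(i_amps, r_ohms, x_ohms, power_factor):
    """Approximate per-phase voltage drop dV = I*(R*cos(phi) + X*sin(phi))."""
    phi = math.acos(power_factor)
    return i_amps * (r_ohms * power_factor + x_ohms * math.sin(phi))

# Same load current and power factor on two feeders that differ only in R:
urban = voltage_drop(100, r_ohms=0.5, x_ohms=1.0, power_factor=0.9)
rural = voltage_drop(100, r_ohms=3.0, x_ohms=1.0, power_factor=0.9)
print(round(urban, 1), round(rural, 1))  # the high-R rural feeder drops far more
```

This is the condition under which embedded generation near the load, such as the small hydro scheme studied, can support the local voltage, subject to the stability caveats the abstract notes.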
Network management for community networks
- Authors: Wells, Daniel David
- Date: 2010 , 2010-03-26
- Subjects: Computer networks -- Management , Internet -- South Africa , Internet -- Management , Broadband communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4643 , http://hdl.handle.net/10962/d1006587
- Description: Community networks (in South Africa and Africa) are often serviced by limited-bandwidth network backhauls. Relative to the basic needs of the community, this is an expensive ongoing concern. In many cases the Internet connection is shared among multiple sites. Community networks may also lack the technical personnel to maintain a network of this nature. Hence, there is a demand for a system which will monitor and manage bandwidth use, as well as network use. The proposed solution for community networks, and the focus of this dissertation, is a system of two parts. A Community Access Point (CAP) is located at each site within the community network. This provides the hosts and servers at that site with access to services on the community network and the Internet; it is the site's router. The CAP provides a web-based interface (CAPgui) which allows configuration of the device and viewing of simple monitoring statistics. The Access Concentrator (AC) is the default router for the CAPs and the gateway to the Internet. It provides authenticated and encrypted communication between the network sites. The AC performs several monitoring functions, both for the individual sites and for the upstream Internet connection. The AC provides a means for centrally managing and effectively allocating Internet bandwidth by using the web-based interface (ACgui). Bandwidth use can be allocated per user, per host and per site. The system is maintainable, extendable and customisable for different network architectures. The system was deployed successfully to two community networks. The Centre of Excellence (CoE) testbed network is a peri-urban network deployment, whereas the Siyakhula Living Lab (SLL) network is a rural deployment. The results gathered conclude that the project was successful, as the deployed system is more robust and more manageable than the previous systems.
- Full Text:
- Date Issued: 2010
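The abstract above describes the AC allocating the shared uplink bandwidth per user, per host and per site. One simple way such an allocation can work, sketched here with invented site names, weights and capacity (the dissertation's actual policy may differ), is a weighted split of the uplink capacity:

```python
def allocate(total_kbps, weights):
    """weights: {site: weight}. Returns {site: allocated kbps},
    splitting the uplink capacity in proportion to each site's weight."""
    total_weight = sum(weights.values())
    return {site: total_kbps * w / total_weight for site, w in weights.items()}

# A 2 Mbit/s backhaul shared by three hypothetical sites:
shares = allocate(2048, {"school": 2, "clinic": 1, "admin": 1})
print(shares)  # the double-weighted school gets half of the uplink
```

The same proportional split can be applied recursively, from the uplink to sites at the AC, then from a site's share to hosts at its CAP.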
- Authors: Wells, Daniel David
- Date: 2010 , 2010-03-26
- Subjects: Computer networks -- Management , Internet -- South Africa , Internet -- Management , Broadband communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4643 , http://hdl.handle.net/10962/d1006587
- Description: Community networks (in South Africa and Africa) are often serviced by limited bandwidth network backhauls. Relative to the basic needs of the community, this is an expensive ongoing concern. In many cases the Internet connection is shared among multiple sites. Community networks may also have a lack of technical personnel to maintain a network of this nature. Hence, there is a demand for a system which will monitor and manage bandwidth use, as well as network use. The proposed solution for community networks and the focus within this dissertation, is a system of two parts. A Community Access Point (CAP) is located at each site within the community network. This provides the hosts and servers at that site with access to services on the community network and the Internet, it is the site's router. The CAP provides a web based interface (CAPgui) which allows configuration of the device and viewing of simple monitoring statistics. The Access Concentrator (AC) is the default router for the CAPs and the gateway to the Internet. It provides authenticated and encrypted communication between the network sites. The AC performs several monitoring functions, both for the individual sites and for the upstream Internet connection. The AC provides a means for centrally managing and effectively allocating Internet bandwidth by using the web based interface (ACgui). Bandwidth use can be allocated per user, per host and per site. The system is maintainable, extendable and customisable for different network architectures. The system was deployed successfully to two community networks. The Centre of Excellence (CoE) testbed network is a peri-urban network deployment whereas the Siyakhula Living Lab (SLL) network is a rural deployment. The results gathered conclude that the project was successful as the deployed system is more robust and more manageable than the previous systems.
- Full Text:
- Date Issued: 2010
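The per-user, per-host and per-site bandwidth allocation described in the abstract above can be illustrated with a token-bucket rate limiter. This is a minimal conceptual sketch: the class, rates and site labels are invented for illustration and are not the dissertation's actual implementation.

```python
import time

class TokenBucket:
    """Illustrative per-site rate limiter of the kind an Access
    Concentrator might apply when allocating Internet bandwidth."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Refill tokens for the elapsed time, then try to spend them."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# One bucket per site; rates and names are made-up examples.
site_limits = {
    "lab":   TokenBucket(512_000, 64_000),
    "admin": TokenBucket(128_000, 16_000),
}
```

A packet is forwarded only if its site's bucket can pay for it, so a site that exceeds its allocation is throttled without affecting the others.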
Novel approaches to the monitoring of computer networks
- Authors: Halse, G A
- Date: 2003
- Subjects: Computer networks , Computer networks -- Management , Computer networks -- South Africa -- Grahamstown , Rhodes University -- Information Technology Division
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4645 , http://hdl.handle.net/10962/d1006601
- Description: Traditional network monitoring techniques suffer from a number of limitations. They are usually designed to solve the most general case, and as a result often fall short of expectation. This project sets out to provide the network administrator with a set of alternative tools to solve specific, but common, problems. It uses the network at Rhodes University as a case study and addresses a number of issues that arise on this network. Four problematic areas are identified within this network: the automatic determination of network topology and layout, the tracking of network growth, the determination of the physical and logical locations of hosts on the network, and the need for intelligent fault reporting systems. These areas are chosen because other network monitoring techniques have failed to adequately address these problems, and because they present problems that are common across a large number of networks. Each area is examined separately and a solution is sought for each of the problems identified. As a result, a set of tools is developed to solve these problems using a number of novel network monitoring techniques. These tools are designed to be as portable as possible so as not to limit their use to the case study network. Their use within Rhodes, as well as their applicability to other situations is discussed. In all cases, any limitations and shortfalls in the approaches that were employed are examined.
- Full Text:
- Date Issued: 2003
A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828 , Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives.
It is noted that although the video publishing tool was able to provide the necessary compression/decompression services, proposals for the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low-bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
- Full Text:
- Date Issued: 2009
Pursuing cost-effective secure network micro-segmentation
- Authors: Fürst, Mark Richard
- Date: 2018
- Subjects: Computer networks -- Security measures , Computer networks -- Access control , Firewalls (Computer security) , IPSec (Computer network protocol) , Network micro-segmentation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/131106 , vital:36524
- Description: Traditional network segmentation allows discrete trust levels to be defined for different network segments, using physical firewalls or routers that control north-south traffic flowing between different interfaces. This technique reduces the attack surface area should an attacker breach one of the perimeter defences. However, east-west traffic flowing between endpoints within the same network segment does not pass through a firewall, and an attacker may be able to move laterally between endpoints within that segment. Network micro-segmentation was designed to address the challenge of controlling east-west traffic, and various solutions have been released with differing levels of capabilities and feature sets. These approaches range from simple network switch Access Control List based segmentation to complex hypervisor based software-defined security segments defined down to the individual workload, container or process level, and enforced via policy based security controls for each segment. Several commercial solutions for network micro-segmentation exist, but these are primarily focused on physical and cloud data centres, and are often accompanied by significant capital outlay and resource requirements. Given these constraints, this research determines whether existing tools provided with operating systems can be re-purposed to implement micro-segmentation and restrict east-west traffic within one or more network segments for a small-to-medium sized corporate network. To this end, a proof-of-concept lab environment was built with a heterogeneous mix of Windows and Linux virtual servers and workstations deployed in an Active Directory domain. The use of Group Policy Objects to deploy IPsec Server and Domain Isolation for controlling traffic between endpoints is examined, in conjunction with IPsec Authenticated Header and Encapsulating Security Payload modes as an additional layer of security. 
The outcome of the research shows that revisiting existing tools can enable organisations to implement an additional, cost-effective secure layer of defence in their network.
- Full Text:
- Date Issued: 2018
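Micro-segmentation as described in the abstract above amounts to a default-deny allow-list for east-west flows. The toy policy check below illustrates the idea only; the thesis enforces isolation with IPsec deployed via Group Policy Objects, not application code, and the roles and ports here are invented examples.

```python
# Toy east-west policy: which (source role, destination role, port)
# flows are permitted inside a network segment.
ALLOWED_FLOWS = {
    ("workstation", "file-server", 445),  # SMB to the file server
    ("workstation", "web-server", 443),   # HTTPS to the intranet server
    ("admin", "file-server", 3389),       # RDP for administrators only
}

def is_allowed(src_role, dst_role, port):
    """Default-deny check: a flow passes only if explicitly listed."""
    return (src_role, dst_role, port) in ALLOWED_FLOWS
```

Under such a policy, direct workstation-to-workstation traffic is simply absent from the allow-list, which is what blocks lateral movement within the segment.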
Development of a web-based interface for a wireless sensor network monitoring system
- Authors: Gumbo, Sibukele
- Date: 2007
- Subjects: Wireless LAN , Sensor networks , Wireless communication systems , Web sites -- Design , User interfaces (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc (Computer Science)
- Identifier: vital:11372 , http://hdl.handle.net/10353/68 , Wireless LAN , Sensor networks , Wireless communication systems , Web sites -- Design , User interfaces (Computer systems)
- Description: Wireless sensor technology has recently advanced in its autonomous data-collection capabilities, and has become an area worth investigating in relation to structural monitoring applications. The system described in this thesis aims at acquiring, storing and displaying overhead transmission line related data collected from a wireless sensor network. Open source tools were used in its development and implementation. The inherent linearly aligned topology of transmission line monitoring devices is not without shortcomings; hence analysis of linear node placement, hardware and software components was carried out to determine the feasibility of the system. The devices' limited data-processing capabilities motivated the development of a post-processing wireless sensor application in order to present any collected structural data in an understandable format.
- Full Text:
- Date Issued: 2007
A review of the Siyakhula Living Lab’s network solution for Internet in marginalized communities
- Authors: Muchatibaya, Hilbert Munashe
- Date: 2022-10-14
- Subjects: Information and communication technologies for development , Information technology South Africa , Access network , User experience , Local area networks (Computer networks) South Africa
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/364943 , vital:65664
- Description: Changes within Information and Communication Technology (ICT) over the past decade required a review of the network layer component deployed in the Siyakhula Living Lab (SLL), a long-term joint venture between the Telkom Centres of Excellence hosted at the University of Fort Hare and Rhodes University in South Africa. The SLL's overall solution for sustainable Internet access in poor communities consists of three main components – the computing infrastructure layer, the network layer, and the e-services layer. At the core of the network layer is the concept of the Broadband Island (BI), a high-speed local area network realized through easy-to-deploy wireless technologies that establish point-to-multipoint connections among schools within a limited geographical area. Schools within the Broadband Island then become Digital Access Nodes (DANs), with computing infrastructure that provides access to the network. The review, reported in this thesis, aimed at determining whether the model for the network layer was still able to meet the needs of marginalized communities in South Africa, given the recent changes in ICT. The research work used the living lab methodology – a grassroots, user-driven approach that emphasizes co-creation between the beneficiaries and external entities (researchers, industry partners and the government) – to do viability tests on the solution for the network component. The viability tests included lab and field experiments, to produce the qualitative and quantitative data needed to propose an updated blueprint. The review found that the network topology used in the SLL's network, the BI, is still viable, while WiMAX is now outdated. Also, the in-network web cache, Squid, is no longer effective, given the switch to HTTPS and the pervasive presence of advertising. The solution to the first issue is outdoor Wi-Fi, a proven solution easily deployable in grassroots fashion.
The second issue can be mitigated by leveraging Squid's 'bumping' and splicing features; deploying a browser extension to make picture download optional; and using Pi-hole, a DNS sinkhole. Hopefully, the revised solution could become a component of the South African Government's broadband plan, "SA Connect". , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
- Date Issued: 2022-10-14
Correlation and comparative analysis of traffic across five network telescopes
- Authors: Nkhumeleni, Thizwilondi Moses
- Date: 2014
- Subjects: Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4693 , http://hdl.handle.net/10962/d1011668 , Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Description: Monitoring unused IP address space by using network telescopes provides a favourable environment for researchers to study and detect malware, worms, denial of service and scanning activities. Research in the field of network telescopes has progressed over the past decade, resulting in the development of an increased number of overlapping datasets. Rhodes University's network of telescope sensors has continued to grow with additional network telescopes being brought online. At the time of writing, Rhodes University has a distributed network of five relatively small /24 network telescopes. With five network telescope sensors, this research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. To aid summarisation and visualisation techniques, time series representing time-based traffic activity are constructed. By employing an iterative experimental process on the captured traffic, two natural categories of the five network telescopes are presented. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was observed between telescope sensors in each category. Weak to moderate correlation was calculated when comparing category A and category B network telescopes' datasets. Results were significantly improved by studying TCP traffic separately. Moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors, however the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analysed the correlation of network telescopes' traffic activity.
- Full Text:
- Date Issued: 2014
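The cross-correlation of sensor time series described in the abstract above can be sketched as follows. The Poisson traffic and the two-sample offset are fabricated for illustration and are not the study's data; the point is only that the peak of the normalised cross-correlation recovers the offset between two sensors' activity.

```python
import numpy as np

def cross_correlation(x, y):
    """Normalised cross-correlation of two equal-length traffic time series."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.correlate(x, y, mode="full")

# Hypothetical hourly packet counts at two telescope sensors observing the
# same background activity; sensor_b records each event two samples before
# sensor_a does.
rng = np.random.default_rng(0)
base = rng.poisson(100, 200).astype(float)
sensor_a = base[:-2]
sensor_b = base[2:]

cc = cross_correlation(sensor_a, sensor_b)
lag = np.argmax(cc) - (len(sensor_a) - 1)  # offset at the strongest correlation
```

With NumPy's `mode="full"` convention, index `len(x) - 1` of the result corresponds to zero lag, so subtracting it converts the argmax into a signed offset between the two series.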
Ant colony optimisation-based algorithms for optical burst switching networks
- Authors: Gravett, Andrew Scott , Gibbon, Timothy B
- Date: 2017
- Subjects: Distributed algorithms , Ants -- Behavior -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/18939 , vital:28757
- Description: This research developed two novel distributed algorithms inspired by Ant Colony Optimisation (ACO) for a solution to the problem of dynamic Routing and Wavelength Assignment (RWA) with wavelength continuity constraint in Optical Burst Switching (OBS) networks utilising both the traditional International Telecommunication Union (ITU) Fixed Grid Wavelength Division Multiplexing (WDM) and Flexible Spectrum scenarios. The growing demand for more bandwidth in optical networks requires more efficient utilisation of available optical resources. OBS is a promising optical switching technique for the improved utilisation of optical network resources over the current optical circuit switching technique. The development of newer technologies has introduced higher rate transmissions and various modulation formats; however, introducing these technologies into the traditional ITU Fixed Grid does not efficiently utilise the available bandwidth. Flexible Spectrum is a promising approach offering a solution to the problem of improving bandwidth utilisation, which comes with a potential cost. Transmissions have the potential for impairment with respect to the increased traffic and lack of large channel spacing. Proposed routing algorithms should be aware of the linear and non-linear Physical Layer Impairments (PLIs) in order to operate closer to optimum performance. The OBS resource reservation protocol does not cater for the loss of transmissions, Burst Control Packets (BCPs) included, due to physical layer impairments. The protocol was adapted for use in Flexible Spectrum. An investigation was conducted into the use of a route and wavelength combination, from source to destination node pair, for the RWA process in ACO-based approaches, to enforce the establishment and use of complete paths for greedy exploitation in Flexible Spectrum.
The routing tuple for the RWA process is the tight coupling of a route and wavelength in combination intended to promote the greedy exploitation of successful paths for transmission requests. The application of the routing tuples differs from traditional ACO-based approaches and prompted the investigation of new pheromone calculation equations. The two novel proposed approaches were tested and experiments conducted comparing with and against existing algorithms (a simple greedy and an ACO-based algorithm) in a traditional ITU Fixed Grid and Flexible Spectrum scenario on three different network topologies. The proposed Flexible Spectrum Ant Colony (FSAC) approach had a markedly improved performance over the existing algorithms in the ITU Fixed Grid WDM and Flexible Spectrum scenarios, while the Upper Confidence Bound Routing and Wavelength Assignment (UCBRWA) algorithm was able to perform well in the traditional ITU Fixed Grid WDM scenario, but underperformed in the Flexible Spectrum scenario. The results show that the distributed ACO-based FSAC algorithm significantly improved the burst transmission success probability, providing a good solution in the Flexible Spectrum network environment undergoing transmission impairments.
- Full Text:
- Date Issued: 2017
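The coupled route-wavelength tuple idea in the abstract above can be illustrated with a generic ACO-style pheromone table. This is a minimal sketch of the general technique, not the thesis's FSAC algorithm or its pheromone equations; the routes, wavelengths and constants are placeholders.

```python
import random

# Pheromone kept per (route, wavelength) tuple, so a successful burst
# reinforces the complete path-plus-wavelength choice as a unit.
routes = ["A-B-D", "A-C-D"]
wavelengths = [0, 1, 2]
pheromone = {(r, w): 1.0 for r in routes for w in wavelengths}

RHO = 0.1      # evaporation rate (illustrative value)
DEPOSIT = 1.0  # reward for a successful burst transmission

def select_tuple():
    """Pick a (route, wavelength) tuple with probability
    proportional to its pheromone level."""
    tuples = list(pheromone)
    weights = [pheromone[t] for t in tuples]
    return random.choices(tuples, weights=weights, k=1)[0]

def update(chosen, success):
    """Evaporate all tuples, then deposit on the chosen one
    if the burst transmission succeeded."""
    for t in pheromone:
        pheromone[t] *= (1.0 - RHO)
    if success:
        pheromone[chosen] += DEPOSIT
```

The greedy exploitation the abstract mentions emerges from this loop: tuples that keep succeeding accumulate pheromone and are selected ever more often, while unsuccessful combinations decay through evaporation.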
- Authors: Gravett, Andrew Scott , Gibbon, Timothy B
- Date: 2017
- Subjects: Distributed algorithms , Ants -- Behavior -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/18939 , vital:28757
- Description: This research developed two novel distributed algorithms inspired by Ant Colony Optimisation (ACO) for a solution to the problem of dynamic Routing and Wavelength Assignment (RWA) with wavelength continuity constraint in Optical Burst Switching (OBS) networks utilising both the traditional International Telecommunication Union (ITU) Fixed Grid Wavelength Division Multiplexing (WDM) and Flexible Spectrum scenarios. The growing demand for more bandwidth in optical networks require more efficient utilisation of available optical resources. OBS is a promising optical switching technique for the improved utilisation of optical network resources over the current optical circuit switching technique. The development of newer technologies has introduced higher rate transmissions and various modulation formats, however, introducing these technologies into the traditional ITU Fixed Grid does not efficiently utilise the available bandwidth. Flexible Spectrum is a promising approach offering a solution to the problem of improving bandwidth utilisation, which comes with a potential cost. Transmissions have the potential for impairment with respect to the increased traffic and lack of large channel spacing. Proposed routing algorithms should be aware of the linear and non-linear Physical Layer Impairments (PLIs) in order to operate closer to optimum performance. The OBS resource reservation protocol does not cater for the loss of transmissions, Burst Control Packets (BCPs) included, due to physical layer impairments. The protocol was adapted for use in Flexible Spectrum. Investigation of the use of a route and wavelength combination, from source to destination node pair, for the RWA process was proposed for ACO-based approaches to enforce the establishment and use of complete paths for greedy exploitation in Flexible Spectrum was conducted. 
The routing tuple for the RWA process is the tight coupling of a route and wavelength in combination, intended to promote the greedy exploitation of successful paths for transmission requests. The application of the routing tuples differs from traditional ACO-based approaches and prompted the investigation of new pheromone calculation equations. The two novel proposed approaches were tested in experiments comparing them against existing algorithms (a simple greedy and an ACO-based algorithm) in traditional ITU Fixed Grid and Flexible Spectrum scenarios on three different network topologies. The proposed Flexible Spectrum Ant Colony (FSAC) approach performed markedly better than the existing algorithms in both the ITU Fixed Grid WDM and Flexible Spectrum scenarios, while the Upper Confidence Bound Routing and Wavelength Assignment (UCBRWA) algorithm performed well in the traditional ITU Fixed Grid WDM scenario but underperformed in the Flexible Spectrum scenario. The results show that the distributed ACO-based FSAC algorithm significantly improved the burst transmission success probability, providing a good solution in a Flexible Spectrum network environment undergoing transmission impairments.
- Full Text:
- Date Issued: 2017
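The abstract above describes greedy exploitation of routing tuples via pheromone levels. As a hedged illustration only (the thesis's actual pheromone calculation equations are not given here), a minimal ACO-style sketch with hypothetical helper names `select_tuple` and `update_pheromone`:

```python
import random

def select_tuple(candidates, pheromone, q0=0.9):
    """Choose a (route, wavelength) tuple: greedy exploitation of the
    strongest trail with probability q0, otherwise pheromone-proportional
    exploration."""
    if random.random() < q0:
        return max(candidates, key=lambda c: pheromone[c])
    weights = [pheromone[c] for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def update_pheromone(pheromone, chosen, success, rho=0.1, deposit=1.0):
    """Evaporate all trails, then reinforce the chosen tuple if the
    burst was delivered successfully."""
    for c in pheromone:
        pheromone[c] *= (1.0 - rho)
    if success:
        pheromone[chosen] += deposit
```

Coupling route and wavelength into a single tuple, as the abstract notes, means successful complete paths (not individual hops) accumulate pheromone and are preferentially reused.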
Peer-to-peer energy trading system using IoT and a low-computation blockchain network
- Authors: Ncube, Tyron
- Date: 2021-10-29
- Subjects: Blockchains (Databases) , Internet of things , Renewable energy sources , Smart power grids , Peer-to-peer architecture (Computer networks) , Energy trading system
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/192119 , vital:45197
- Description: The use of renewable energy is increasing every year as it is seen as a viable and sustainable long-term alternative to fossil-based sources of power. Emerging technologies are being merged with existing renewable energy systems to address some of the challenges associated with renewable energy, such as reliability and limited storage facilities for the generated energy. The Internet of Things (IoT) has made it possible for consumers to make money by selling excess energy back to the utility company through smart grids that allow bi-directional communication between the consumer and the utility company. The major drawback of this is that the utility company still plays a central role in this setup, as they are the only buyer of this excess energy generated from renewable energy sources. This research intends to use blockchain technology by leveraging its decentralized architecture to enable other individuals to purchase this excess energy. Blockchain technology is first explained in detail, and its main features, such as consensus mechanisms, are examined. This evaluation of blockchain technology gives rise to some design questions that are taken into consideration to create a low-energy, low-computation Ethereum-based blockchain network that is the foundation for a peer-to-peer energy trading system. The peer-to-peer energy trading system makes use of smart meters to collect data about energy usage and gives users a web-based interface where they can transact with each other. A smart contract is also designed to facilitate payments for transactions. Lastly, the system is tested by carrying out transactions and transferring energy from one node in the system to another. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-10-29
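The abstract above rests on the core blockchain property that each block's hash covers the previous block's hash, so tampering with a recorded trade invalidates the chain. A minimal sketch of that linking, assuming an illustrative transaction format (this is not the system's actual Ethereum contract):

```python
import hashlib
import json

def make_block(index, transactions, prev_hash):
    """Create a block whose hash covers its contents and the previous
    block's hash, linking the chain."""
    block = {
        "index": index,
        "transactions": transactions,  # e.g. {"from": "meterA", "to": "meterB", "kwh": 3}
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute each block's hash and check the prev_hash links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Altering a recorded kWh amount after the fact changes the recomputed hash, so `chain_is_valid` fails, which is what makes the decentralized ledger trustworthy without the utility as central party.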
Guidelines to address the human factor in the South African National Research and Education Network beneficiary institutions
- Authors: Mjikeliso, Yolanda
- Date: 2014
- Subjects: National Research and Education Network (Computer network) , Information networks -- South Africa , Computer networks -- Security measures -- South Africa
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/9946 , vital:26635
- Description: Even if all the technical security solutions appropriate for an organisation’s network are implemented, for example, firewalls, antivirus programs and encryption, if the human factor is neglected then these technical security solutions will serve no purpose. The greatest challenge to network security is probably not the technological solutions that organisations invest in, but the human factor (non-technical solutions), which most organisations neglect. The human factor is often ignored even though humans are the most important resources of organisations and perform all the physical tasks, configure and manage equipment, enter data, manage people and operate the systems and networks. The same people that manage and operate networks and systems have vulnerabilities. They are not perfect and there will always be an element of mistake-making or error. In other words, humans make mistakes that could result in security vulnerabilities, and the exploitation of these vulnerabilities could in turn result in network security breaches. Human vulnerabilities are driven by many factors including insufficient security education, training and awareness, a lack of security policies and procedures in the organisation, a limited attention span and negligence. Network security may thus be compromised by this human vulnerability. In the context of this dissertation, both physical and technological controls should be implemented to ensure the security of the SANReN network. However, if the human factors are not adequately addressed, the network would become vulnerable to risks posed by the human factor which could threaten the security of the network. Accordingly, the primary research objective of this study is to formulate guidelines that address the information security related human factors in the rolling out and continued management of the SANReN network. 
An analysis of existing policies and procedures governing the SANReN network was conducted, and it was determined that there are currently no guidelines addressing the human factor in the SANReN beneficiary institutions. Therefore, the aim of this study is to provide guidelines for addressing the human factor threats in the SANReN beneficiary institutions.
- Full Text:
- Date Issued: 2014
Analysis of the reliability for the 132/66/22 KV distribution network within ESKOM’s Eastern Cape operating unit
- Authors: Pantshwa, Athini
- Date: 2017
- Subjects: Electric power distribution , Electricity -- Supply -- Engineering , Smart power grids
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/19750 , vital:28953
- Description: A stable and reliable electrical power supply system is an essential prerequisite for the technological and economic growth of any nation. Utilities must therefore strive to ensure that customers' reliability requirements are met and that regulators' requirements are satisfied at the lowest possible cost. It is widely accepted that about 90% of customer service interruptions are caused by failures in the distribution system. Reliability assessments are therefore worth conducting, as they provide an opportunity to incorporate the costs or losses incurred by the utility's customers as a result of power failures. This must be considered in planning and operating practices. The system modelling and simulation study is carried out on one of the district distribution systems, consisting of the 132 kV, 66 kV and 22 kV network in the Aliwal North Sector, ECOU. The reliability assessment is done on the 22, 66 and 132 kV systems to assess the performance of the present system, together with a predictive reliability analysis for the future system considering load growth and system expansion. The alternative that gives the lowest SAIDI, SAIFI and break-even costs is assessed and considered. The reliability of the 132 kV system could be further improved by constructing a new 132 kV line from a different source of supply and connecting it with a line coming from another district (reserve) at a reasonable break-even cost. The decision base could be further improved by having interruption costs specific to the Aliwal North Sector context; historical data that may be used in the Aliwal North Sector to acquire interruption costs from customers is proposed. The focus should be on improving the power quality on constrained networks first, then the reliability. Therefore, for the Aliwal North power system network, it is imperative that Eskom invest in the reliability of this network.
This dissertation also analysed the economic benefit reflected by load against performance expectations, which should be optimised by achieving a balance between network performance (SAIDI) improvement and total life-cycle cost (to Eskom as well as the economy). The reliability analysis conducted in this dissertation used the Aliwal North power system network as a case study; the results showed that the system is vulnerable to faults and to planned and unplanned outages. Reliability evaluation studies were conducted on the system using DigSilent software in conjunction with FME. These two models gave accurate results with acceptable variance in most indices, except for the ENS, where the variance was quite significant. It can be concluded that the DigSilent results are the most accurate across all three reliability evaluation scenarios for the Aliwal North power system.
- Full Text:
- Date Issued: 2017
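The SAIDI and SAIFI indices central to the abstract above are the standard customer-weighted reliability measures (IEEE 1366): SAIFI is total customer interruptions divided by customers served, and SAIDI is total customer interruption duration divided by customers served. A small sketch computing both; the outage-event format is an assumption for illustration:

```python
def reliability_indices(interruptions, total_customers):
    """Compute SAIFI and SAIDI from a list of outage events.
    Each event is (customers_affected, duration_hours)."""
    total_interrupted = sum(c for c, _ in interruptions)
    customer_hours = sum(c * d for c, d in interruptions)
    saifi = total_interrupted / total_customers  # interruptions per customer served
    saidi = customer_hours / total_customers     # interruption hours per customer served
    return saifi, saidi
```

For example, two outages affecting 100 customers for 2 h and 50 customers for 4 h on a 1000-customer feeder give SAIFI = 0.15 and SAIDI = 0.4 h, the kind of figures a planner would compare across reinforcement alternatives.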
Classifying network attack scenarios using an ontology
- Authors: Van Heerden, Renier , Irwin, Barry V W , Burke, I D
- Date: 2012
- Language: English
- Type: Conference paper
- Identifier: vital:6606 , http://hdl.handle.net/10962/d1009326
- Description: This paper presents a methodology using network attack ontology to classify computer-based attacks. Computer network attacks differ in motivation, execution and end result. Because attacks are diverse, no standard classification exists. If an attack could be classified, it could be mitigated accordingly. A taxonomy of computer network attacks forms the basis of the ontology. Most published taxonomies present an attack from either the attacker's or defender's point of view. This taxonomy presents both views. The main taxonomy classes are: Actor, Actor Location, Aggressor, Attack Goal, Attack Mechanism, Attack Scenario, Automation Level, Effects, Motivation, Phase, Scope and Target. The "Actor" class is the entity executing the attack. The "Actor Location" class is the Actor's country of origin. The "Aggressor" class is the group instigating an attack. The "Attack Goal" class specifies the attacker's goal. The "Attack Mechanism" class defines the attack methodology. The "Automation Level" class indicates the level of human interaction. The "Effects" class describes the consequences of an attack. The "Motivation" class specifies incentives for an attack. The "Scope" class describes the size and utility of the target. The "Target" class is the physical device or entity targeted by an attack. The "Vulnerability" class describes a target vulnerability used by the attacker. The "Phase" class represents an attack model that subdivides an attack into different phases. The ontology was developed using an "Attack Scenario" class, which draws from other classes and can be used to characterize and classify computer network attacks. An "Attack Scenario" consists of phases, has a scope and is attributed to an actor and aggressor which have a goal. The "Attack Scenario" thus represents different classes of attacks. High-profile computer network attacks such as Stuxnet and the Estonia attacks can now be classified through the "Attack Scenario" class.
- Full Text:
- Date Issued: 2012
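The taxonomy classes in the abstract above can be pictured as attributes of a single "Attack Scenario" record that draws from the other classes. The sketch below uses the class names from the abstract, but the `classify` rules are invented illustrations of how attribute combinations might map to a label, not the paper's actual ontology logic:

```python
from dataclasses import dataclass, field

@dataclass
class AttackScenario:
    """An Attack Scenario draws from the other taxonomy classes:
    it has phases, a scope, and an attributed actor/aggressor with a goal."""
    actor: str
    actor_location: str
    aggressor: str
    goal: str
    mechanism: str
    automation_level: str
    scope: str
    phases: list = field(default_factory=list)

    def classify(self) -> str:
        # Illustrative rules only: combine goal and scope into a label.
        if self.goal == "disruption" and self.scope == "national":
            return "cyber-warfare-like attack"
        if self.goal == "financial gain":
            return "criminal attack"
        return "unclassified"
```

Instantiating such a record for a high-profile incident is what "classifying through the Attack Scenario class" amounts to in practice.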
An analysis of the risk exposure of adopting IPV6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: The IPv6 increased address pool presents changes in resource impact to the Enterprise that, if not adequately addressed, can change risks that are locally significant in IPv4 to risks that can impact the Enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes in the Enterprise environment, which may negatively impact organisational security if the IPv6 nuances are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks with the introduction of IPv6. The global trends are discussed to provide insight and background to the IPv6 research space. Analysing the current state of readiness in enterprise networks quantifies the value of developing this thesis. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling, and the protection of the enterprise access layer, are discussed. A series of case studies is presented which identifies and analyses the impact of certain changes in the IPv6 protocol on enterprise networks. The case studies also identify mitigation techniques to reduce risk.
- Full Text:
- Date Issued: 2015
An Analysis of Internet Background Radiation within an African IPv4 netblock
- Authors: Hendricks, Wadeegh
- Date: 2020
- Subjects: Computer networks -- Monitoring -- South Africa , Dark Web , Computer networks -- Security measures -- South Africa , Universities and Colleges -- Computer networks -- Security measures , Malware (Computer software) , TCP/IP (Computer network protocol)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103791 , vital:32298
- Description: The use of passive network sensors has in the past proven to be quite effective in monitoring and analysing the current state of traffic on a network. Internet traffic destined to a routable, yet unused address block is often referred to as Internet Background Radiation (IBR) and characterised as unsolicited. This unsolicited traffic is, however, quite valuable to researchers in that it allows them to study traffic patterns in a covert manner. IBR is largely composed of network and port scanning traffic, backscatter packets from virus and malware activity and, to a lesser extent, misconfiguration of network devices. This research answers the following two questions: (1) What is the current state of IBR within the context of a South African IP address space, and (2) Can any anomalies be detected in the traffic, with specific reference to current global malware attacks such as Mirai and similar? Rhodes University operates five IPv4 passive network sensors, commonly known as network telescopes, each monitoring its own /24 IP address block. The oldest of these network telescopes has been collecting traffic for over a decade, with the newest being established in 2011. This research focuses on the in-depth analysis of the traffic captured by one telescope in the 155/8 range over a 12-month period, from January to December 2017. The traffic was analysed and classified according to protocol, TCP flag, source IP address, destination port, packet count and payload size. Apart from the normal network traffic graphs and tables, a geographic heatmap of source traffic was also created, based on the source IP address. Spikes and noticeable variances in traffic patterns were further investigated, and evidence of Mirai-like malware activity was observed. Network and port scanning were found to comprise the largest amount of traffic, accounting for over 90% of the total IBR. Various scanning techniques were identified, including low-level passive scanning and much higher-level active scanning.
- Full Text:
- Date Issued: 2020
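The classification described in the abstract above (by protocol, destination port and packet count) is, at its core, a tallying exercise over captured packets. A minimal sketch of that kind of summary; the packet field names are assumptions, not the thesis's actual capture schema:

```python
from collections import Counter

def summarise_ibr(packets):
    """Tally captured packets by protocol and TCP destination port,
    two of the dimensions used to characterise IBR."""
    by_proto = Counter(p["proto"] for p in packets)
    tcp_ports = Counter(p["dport"] for p in packets if p["proto"] == "tcp")
    tcp_fraction = by_proto["tcp"] / len(packets) if packets else 0.0
    return by_proto, tcp_ports.most_common(3), tcp_fraction
```

Concentrations on ports such as 23/tcp (telnet) in such tallies are the kind of signal that pointed the analysis toward Mirai-like scanning activity.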
Actor/actant-network theory as emerging methodology for environmental education research in southern Africa
- Authors: Nhamo, Godwell
- Date: 2006
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/373537 , vital:66702 , xlink:href="https://www.ajol.info/index.php/sajee/article/view/122722"
- Description: This paper deliberates on actor/actant-network theory (AANT) as methodology for policy research in environmental education (EE). Insights are drawn from work that applied AANT to research environmental policy processes surrounding the formulation and implementation of South Africa’s Plastic Bags Regulations of 2003. The paper reveals that the application of AANT methodology made it possible to trace relationships, actors, actants and actor/actant-networks surrounding the Plastic Bags Regulations as quasi-object (token). The methodology also enabled a focus on understanding and investigating tensions, debates and responses emerging from the policy process. The findings were that after the promulgation of the first draft of the Plastic Bags Regulations in May 2000, tensions emerged around the nature of regulation (whether to use the command and control approach – preferred by Organised Government – or self regulation – preferred by Organised Business and Organised Labour). From these findings, a series of conceptual frameworks were drawn up as identified around key actors and actor/actant-networks. The conceptual frameworks included among them, Organised Government, Organised Business and Organised Labour.
- Full Text:
- Date Issued: 2006