Network simulation for professional audio networks
- Authors: Otten, Fred
- Date: 2015
- Subjects: Sound engineers , Ethernet (Local area network system) , Computer networks , Computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4713 , http://hdl.handle.net/10962/d1017935
- Description: Audio engineers are required to design and deploy large multi-channel sound systems which meet a set of requirements and use networking technologies such as Firewire and Ethernet AVB. Bandwidth utilisation and parameter groupings are among the factors which need to be considered in these designs. An implementation of an extensible, generic simulation framework would allow audio engineers to easily compare protocols and networking technologies and get near real-time responses with regard to bandwidth utilisation. Our hypothesis is that an application-level capability can be developed which uses a network simulation framework to enable this process and enhances the audio engineer’s experience of designing and configuring a network. This thesis presents a new, extensible simulation framework which can be utilised to simulate professional audio networks. This framework is utilised to develop an application, AudioNetSim, based on the requirements of an audio engineer. The thesis describes the AudioNetSim models and implementations for Ethernet AVB, Firewire and the AES-64 control protocol. AudioNetSim enables bandwidth usage determination for any network configuration and connection scenario and is used to compare Firewire and Ethernet AVB bandwidth utilisation. It also applies graph theory to the circular join problem and provides a solution to detect circular joins.
- Full Text:
- Date Issued: 2015
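The thesis abstract describes circular join detection only at the level of graph theory. One standard approach is to model joins as a directed graph and search for a back edge with depth-first search; the sketch below is illustrative only (the function name and the plain-dictionary data layout are assumptions, not the thesis's AES-64 API).

```python
# Hypothetical sketch: detect a circular join by modelling parameter joins
# as a directed graph and running DFS three-colour cycle detection.
def has_circular_join(joins):
    """joins: dict mapping a parameter to the parameters it is joined to."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    colour = {node: WHITE for node in joins}

    def dfs(node):
        colour[node] = GREY
        for nxt in joins.get(node, ()):
            if colour.get(nxt, WHITE) == GREY:   # back edge: cycle found
                return True
            if colour.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in list(joins))

print(has_circular_join({"a": ["b"], "b": ["c"], "c": ["a"]}))  # True
print(has_circular_join({"a": ["b"], "b": ["c"]}))              # False
```

Running the detector on each proposed join before committing it is enough to reject a connection that would close a loop.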
Evaluating text preprocessing to improve compression on maillogs
- Authors: Otten, Fred , Irwin, Barry V W , Thinyane, Hannah
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430138 , vital:72668 , https://doi.org/10.1145/1632149.1632157
- Description: Maillogs contain important information about mail which has been sent or received. This information can be used for statistical purposes, to help prevent viruses or to help prevent spam. In order to satisfy regulations and follow good security practices, maillogs need to be monitored and archived. Since there is a large quantity of data, some form of data reduction is necessary. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data. Text preprocessing can be used to aid the compression of English text files. This paper evaluates whether text preprocessing, particularly word replacement, can be used to improve the compression of maillogs. It presents an algorithm for constructing a dictionary for word replacement and provides the results of experiments conducted using the ppmd, gzip, bzip2 and 7zip programs. These tests show that text preprocessing improves data compression on maillogs. Improvements of up to 56 percent in compression time and up to 32 percent in compression ratio are achieved. It also shows that a dictionary may be generated and used on other maillogs to yield reductions within half a percent of the results achieved for the maillog used to generate the dictionary.
- Full Text:
- Date Issued: 2009
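The paper's dictionary-construction algorithm is not reproduced in the abstract; the sketch below only illustrates the general idea of word replacement before compression. It uses Python's zlib in place of the gzip/bzip2/ppmd/7zip programs the paper tested, and a simple frequency-based dictionary with a two-byte token scheme; all of these choices are assumptions, not the authors' method.

```python
# Illustrative word-replacement preprocessing before compression.
# The dictionary and token scheme here are hypothetical, not the paper's.
import zlib
from collections import Counter

def build_dictionary(text, size=64):
    """Pick the most frequent words as replacement candidates."""
    counts = Counter(text.split())
    return {w: "\x01%c" % (i + 32)        # short 2-byte token per word
            for i, (w, _) in enumerate(counts.most_common(size))}

def preprocess(text, dictionary):
    """Replace dictionary words line by line, leaving other words intact."""
    return "\n".join(
        " ".join(dictionary.get(w, w) for w in line.split())
        for line in text.splitlines())

log = ("postfix/smtpd connect from unknown\n" * 50 +
       "postfix/smtpd disconnect from unknown\n" * 50)
dic = build_dictionary(log)
raw = zlib.compress(log.encode())
pre = zlib.compress(preprocess(log, dic).encode())
print(len(raw), len(pre))                 # compare compressed sizes
```

On real maillogs the preprocessing pays off because field names and hostnames repeat heavily; the same dictionary can then be reused on other logs, as the paper's cross-log experiments show.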
Evaluating compression as an enabler for centralised monitoring in a Next Generation Network
- Authors: Otten, Fred , Irwin, Barry V W , Slay, Hannah
- Date: 2007
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428226 , vital:72495 , https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=f9ed69db7da44c168082934cd4ea5a413b2bf7f5
- Description: With the emergence of Next Generation Networks and a large number of next generation services, the volume and diversity of information is on the rise. These networks are often large, distributed and consist of heterogeneous devices. In order to provide effective centralised monitoring and control we need to be able to assemble the relevant data at a central point. This becomes difficult because of the large quantity of data. We would also like to achieve this using the least amount of bandwidth, and minimise the latency. This paper investigates using compression to enable centralised monitoring and control. It presents the results of experiments showing that compression is an effective method of data reduction, resulting in up to 93.3 percent reduction in bandwidth usage for point-to-point transmission. This paper also describes an architecture that incorporates compression and provides centralised monitoring and control.
- Full Text:
- Date Issued: 2007
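The 93.3 percent figure in the abstract came from the paper's experiments on real monitoring data. A minimal sketch of how such a bandwidth saving is measured, using synthetic log lines and Python's zlib as a stand-in compressor (both assumptions), might look like this:

```python
# Illustrative only: measure the bandwidth saving from compressing a
# monitoring feed before point-to-point transmission. Synthetic data;
# the paper's 93.3 percent result was obtained on real feeds.
import zlib

def bandwidth_saving(payload: bytes) -> float:
    """Percentage reduction in bytes sent if the payload is compressed."""
    compressed = zlib.compress(payload, 9)
    return 100.0 * (1 - len(compressed) / len(payload))

feed = b"2007-01-01T00:00:00 node17 status=OK load=0.13\n" * 1000
print("%.1f%% reduction" % bandwidth_saving(feed))
```

Repetitive status lines compress extremely well, which is why centralising them after compression costs far less bandwidth than shipping the raw feed.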
The Need for Centralised, Cross Platform Information Aggregation
- Authors: Otten, Fred , Irwin, Barry V W , Slay, Hannah
- Date: 2006
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428773 , vital:72535
- Description: With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Network security involves the creation of large amounts of information in the form of logs and messages from a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. This makes monitoring and control difficult, and hence poses security problems. The aggregation of information is necessary in information audits, intrusion detection, network monitoring and management. The use of different platforms and devices complicates the problem, and makes aggregation more difficult. Network security administrators and security researchers require aggregation to simplify the analysis and comprehension of activity across the entire network. Centralised information aggregation will help deal with redundancy, analysis, monitoring and control. This aids the detection of widespread attacks on global organisational networks, improving intrusion detection and mitigation. This paper discusses and motivates the need for centralised, cross-platform information aggregation in greater detail. It also suggests methods which may be used, discusses the security issues, and gives the advantages and disadvantages of aggregation.
- Full Text:
- Date Issued: 2006