- Title
- Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Creator
- Tristram, Waide Barrington
- Subject
- Multiprocessors
- Subject
- Multiprogramming (Electronic computers)
- Subject
- Parallel programming (Computer science)
- Subject
- Linux
- Subject
- Abstract data types (Computer science)
- Subject
- Threads (Computer programs)
- Subject
- Computer programming
- Date Issued
- 2012
- Date
- 2012
- Type
- Thesis
- Type
- Masters
- Type
- MSc
- Identifier
- vital:4655
- Identifier
- http://hdl.handle.net/10962/d1006651
- Description
- The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from home computers to workstation and server environments. Unfortunately, parallel programming is harder and requires more expertise than traditional sequential programming. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library depends on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide performance similar to that of explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
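As an illustration of the abstraction difference described in the abstract, the following minimal sketch (not taken from the thesis itself) shows a parallel reduction in C using OpenMP, one of the higher-abstraction models compared in the work. A single pragma parallelises the loop, whereas an equivalent explicit Pthreads version would need thread creation, work partitioning, and synchronisation code.

```c
/* Minimal OpenMP reduction sketch (illustrative only, not from the thesis).
 * Compile with GCC:   gcc -O2 -fopenmp sum.c -o sum
 * Compile with Intel:  icc -O2 -qopenmp sum.c -o sum
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const size_t n = 10000000;              /* 10 million elements */
    double *a = malloc(n * sizeof *a);
    if (!a)
        return EXIT_FAILURE;

    for (size_t i = 0; i < n; i++)
        a[i] = 1.0;

    double sum = 0.0;

    /* One directive distributes the iterations across threads and
     * combines the per-thread partial sums; no explicit thread
     * management or locking is written by the programmer. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %.1f (max threads: %d)\n", sum, omp_get_max_threads());
    free(a);
    return EXIT_SUCCESS;
}
```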
- Format
- 240 leaves
- Publisher
- Rhodes University
- Publisher
- Faculty of Science, Computer Science
- Language
- English
- Rights
- Tristram, Waide Barrington
File | Size | Format
---|---|---
Source PDF | 1 MB | Adobe Acrobat PDF