A comparison of implementation platforms for the visualisation of animal family trees
- Authors: Kanotangudza, Priviledge
- Date: 2024-04
- Subjects: Business intelligence -- Computer programs , Human-computer interaction , Computer science
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64105 , vital:73653
- Description: Genealogy is the study of family history. Family trees are used to show ancestry and visualise family history. Animal family trees differ from human family trees, as animals have more offspring to represent in a family tree visualisation. Auctioneering organisations, such as Boere Korporasie Beperk (BKB), provide livestock auction catalogues containing pictures of the animal on sale, the animal’s family tree and its breeding and selection data. Modern-day farming has become data-driven, and livestock farmers use various online devices and platforms to obtain information, such as real-time milk production and animal health monitoring, and to manage farming operations. This study investigated and compared two Business Intelligence (BI) platforms, namely Microsoft Power BI and Tableau (Salesforce), and the Python programming language for the implementation of cattle family tree charts. Animal family tree visualisation requirements were identified by analysing data collected from 23 agriculture users and auction attendees who responded to an online questionnaire. The results of the online survey showed that agriculture users preferred an animal family tree that resembled a human one, which is not currently used in livestock auction catalogues. A conference paper was published based on the survey results. The Design Science Research Methodology (DSRM) was used to aid in creating animal family tree charts using Power BI, Tableau and Python. The author compared the visualisation tools against selected criteria, such as learnability, portability, interoperability and security. Usability evaluations using eye tracking were conducted with agriculture users in a usability lab to compare the artefacts developed using Power BI and Python.
Tableau was discarded during the implementation process as it did not produce the required family tree visualisation. The Technology Acceptance Model (TAM), a theory that seeks to predict the acceptance and use of technology based on users' perception of its usefulness and ease of use, was used to guide the research study in evaluating the artefacts. According to TAM, the adoption of the proposed technology to solve the problem of a static animal family tree in livestock auction catalogues depended on the agriculture users’ belief that the technology would help them make better buying decisions at livestock auctions effortlessly. The other theory used in this study was Task Technology Fit (TTF), which was used mainly to create the task list for the usability test. The results showed that the author of this work and the agriculture users preferred the artefact produced by Power BI: its learning and development time was shorter, and the User Interface (UI) created was more intuitive. The findings of this study indicated that the present auction catalogue could be supplemented with interactive online animal family tree visualisations created using Power BI. This study recommended that livestock auctioneering organisations should, in addition to providing paper catalogues, provide farmers with an online platform to view the family trees of cattle on auction to enhance purchasing decisions. , Thesis (MCom) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
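The abstract describes the visualisation goal but not its code. As a minimal sketch of the underlying idea, a cattle pedigree can be modelled as a mapping from each animal to its sire and dam and rendered recursively in the indented, human-style ancestor layout the survey respondents preferred. All animal names and the data structure below are hypothetical illustrations, not taken from the thesis.

```python
# Hypothetical pedigree: each animal maps to its (sire, dam) pair.
PEDIGREE = {
    "Calf A1": ("Bull B1", "Cow C1"),
    "Bull B1": ("Bull B2", "Cow C2"),
    "Cow C1": ("Bull B3", "Cow C3"),
}

def render_tree(animal, pedigree, depth=0, lines=None):
    """Render an ancestor tree as indented text, one generation per level,
    mimicking the human-family-tree style preferred in the survey."""
    if lines is None:
        lines = []
    lines.append("    " * depth + animal)
    for parent in pedigree.get(animal, ()):  # recurse into sire, then dam
        render_tree(parent, pedigree, depth + 1, lines)
    return lines

if __name__ == "__main__":
    print("\n".join(render_tree("Calf A1", PEDIGREE)))
```

A production version would feed the same recursive structure into an interactive charting layer (Power BI or a Python plotting library) rather than plain text.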
- Full Text:
- Date Issued: 2024-04
A methodology for modernising legacy web applications
- Authors: Malgraff, Maxine
- Date: 2024-04
- Subjects: Management information systems , Information technology , Application software -- Development
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64148 , vital:73657
- Description: One problem faced in the Information Systems domain is that of poorly maintained, poorly documented, and/or unmanageable systems, known as Legacy Information Systems (LISs). As a result of the ever-changing web development landscape, web applications have also become susceptible to the challenges of keeping up with technological advances, and older applications are starting to display the characteristics of Legacy Web Applications (LWAs). As retaining business process support and meeting business requirements is often necessary, one method of recovering vital LWAs is to modernise them. System modernisation aims to recover business knowledge and provide an enhanced system that overcomes the problems that plague LISs. When planning to modernise an LWA, guidance and support are essential to ensure that the modernisation exercise is performed efficiently and effectively. Modernisation methodologies can provide this guidance and support, as they provide models, tools and techniques that serve as guiding principles for the modernisation process. Although many modernisation methodologies exist, very few offer a comprehensive approach that provides guidelines for each modernisation phase, tools to assist in the modernisation and techniques that can be used throughout. Existing methodologies also do not cater for cases that include both an LWA and migration to modernised web-specific environments. This research study aimed to investigate modernisation methodologies and identify which methodologies, or parts thereof, could be adapted for modernising LWAs. Existing methodologies were analysed and compared using the definition of a methodology, as well as other factors that improve the modernisation process. Modernisation case studies were reviewed to identify lessons learned so that these could be considered when planning an LWA modernisation.
The ARTIST methodology was the most comprehensive modernisation methodology identified from those researched and was selected as the most appropriate for modernising an LWA. ARTIST was modified into the mARTIST methodology to cater for web-based environments. mARTIST was used to modernise an existing LWA, called OldMax, at an automotive manufacturer, anonymously referred to as AutoCo, to determine its ability to support the modernisation of LWAs. Additional tools and evaluation methods were also investigated and used in place of those recommended by ARTIST, where deemed appropriate for the modernisation of OldMax. Limitations set by AutoCo on the hosting and technical environments for the modernised application also required ARTIST to be adapted to better suit the use case. The steps taken during this modernisation were documented and reported on to highlight the effectiveness of mARTIST and the tools used. The resulting modernised web application, ModMax, was evaluated to determine the success of the modernisation. The modernisation of OldMax to ModMax, using the mARTIST methodology, was found to be successful based on the criteria set by the ARTIST methodology. Based on this, mARTIST can successfully be used for the modernisation of LWAs. To support future modernisations, an evaluation method for determining technical feasibility was developed for LWAs, and alternative tools that could be used throughout modernisation exercises were recommended. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
A model for measuring and predicting stress for software developers using vital signs and activities
- Authors: Hibbers, Ilze
- Date: 2024-04
- Subjects: Machine learning , Neural networks (Computer science) , Computer software developers
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63799 , vital:73614
- Description: Occupational stress is a well-recognised issue that affects individuals in various professions and industries. Reducing occupational stress has multiple benefits, such as improving employees' health and performance. This study proposes a model to measure and predict occupational stress using data collected in a real IT office environment. Different data sources, such as questionnaires, application software (RescueTime) and Fitbit smartwatches, were used for collecting heart rate (HR), facial emotions, computer interactions and application usage. The results of the Demand Control Support and Effort and Reward questionnaires indicated that the participants experienced high social support and an average level of workload. Participants also reported their daily perceived stress and workload level using a 5-point scale. The perceived stress of the participants was overall neutral. No correlation was found between HR, interactions, fear and meetings. K-means and Bernoulli algorithms were applied to the dataset, and two well-separated clusters were formed. The centroids indicated that higher heart rates were grouped either with meetings or with a higher difference in the centre point values for interactions. Silhouette scores and 5-fold validation were used to measure the accuracy of the clusters. However, these clusters were unable to predict the daily reported stress levels. Calculations were done on the computer usage data to measure interaction speeds and time spent working, in meetings, or away from the computer. These calculations were used, together with the reported daily stress levels, as input into a decision tree. The results of the tree helped to identify which patterns lead to stressful days and indicated that days with high time pressure led to more reported stress. A new, more general tree was developed, which was able to predict 82 per cent of the reported daily stress levels.
The main discovery of the research was that stress does not have a straightforward connection with computer interactions, facial emotions or meetings. High interaction levels sometimes lead to stress and other times do not. Predicting stress therefore involves finding patterns in how data from different sources interact with each other. Future work will revolve around validating the model in more office environments around South Africa. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
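The abstract reports that a decision tree over daily activity features, with high time pressure as the dominant split, predicted reported stress. The study's actual tree and features are not given here; the sketch below is a toy single-split rule on invented daily records, illustrating only the shape of the finding that time pressure, not interaction volume alone, drives the prediction.

```python
# Hypothetical daily records (invented values, not the study's data):
# computer interaction speed, minutes in meetings, and a time-pressure flag.
days = [
    {"interactions_per_min": 32, "meeting_mins": 120, "high_time_pressure": True},
    {"interactions_per_min": 18, "meeting_mins": 30,  "high_time_pressure": False},
    {"interactions_per_min": 41, "meeting_mins": 15,  "high_time_pressure": True},
    {"interactions_per_min": 25, "meeting_mins": 60,  "high_time_pressure": False},
]

def predict_stress(day):
    """Toy decision rule mirroring the reported finding: the time-pressure
    feature is the dominant split; interactions alone do not decide it."""
    return "stressful" if day["high_time_pressure"] else "not stressful"

labels = [predict_stress(d) for d in days]
```

A full decision tree (for example, scikit-learn's `DecisionTreeClassifier`) would learn such splits from labelled daily data rather than hard-coding them.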
- Full Text:
- Date Issued: 2024-04
A process for integrated fitness and menstrual cycle data visualisations
- Authors: Taljaard, Isabelle
- Date: 2024-04
- Subjects: Human-computer interaction , Personal information management , Medical informatics -- Standards
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64379 , vital:73689
- Description: The increase in female participation in sport has led to an increase in research reporting on the relationship between fitness and menstrual cycle (F&M) data. Fitness variables such as VO2 max and heart rate are influenced by menstrual hormones and change with the different phases of a cycle. People frequently track both their F&M data to understand their long-term activity and their body’s changes during the different cycle phases. Both data sets are tracked and visualised separately to help people understand their data; however, little work has been done to visualise the relationship between the two. A process that guides the creation of an integrated F&M visualisation does not exist. This research aimed to develop and adopt a process that could be used to successfully guide the creation of an integrated F&M visualisation. The study followed the Design Science Research Methodology (DSRM) to create a primary and a secondary artefact – the process and an instantiation thereof. The DSRM was applied in iterative cycles in which the process was developed and instantiations were created and evaluated by participants. To develop the process, existing data processing and visualisation processes were reviewed from the literature to assess their successes and shortcomings. The review of existing processes revealed what steps, and factors related to those steps, would need to be considered, and highlighted the importance of five process steps: planning, collection, access, integration and visualisation. Once the conceptual process was designed, it was adapted for the goal of creating an integrated F&M data visualisation. Prior to implementation, the process was first tested in a pilot study to ensure its validity before involving participants in data collection. After the pilot study, the final implementation of the process took place and participants were recruited.
In the first step of the process, the different fitness data types that are influenced by the menstrual cycle, and vice versa, were identified through a literature review. In the second step, devices to be used for data collection were evaluated and tested through exploratory testing and review of user manuals available online. The third and fourth steps, access and integration, were informed by further exploratory testing and review of relevant literature. The fifth step, data visualisation, was guided by relevant studies, Hick’s law and Schema Theory. Two iterations of DSR were conducted in two phases. Phase 1 (P1) was the instantiation of the planning, collection, access and processing steps. Participants wore smartwatches while going about their daily lives and working out, and tracked their menstrual cycle to collect data. P1 data was used to create several instantiations of the process. The second phase (P2) was the instantiation of the visualisation step. The final visualisations, resulting from the instantiations, were evaluated by participants in P2, and the review notes were used to improve both the process and the final visualisations. Both P1 and P2 were repeated (iterated) twice. The recommended process can be used by anyone who wants to create an integrated F&M visualisation and was designed to be modular, so that users could choose to follow the whole process or only specific steps. The findings of this research can provide guidance to users, developers and smartwatch manufacturers on people’s preferences for these integrated visualisations. It also provides guidance for those who wish to create their own visualisations without needing prior programming experience or knowledge, since easy-to-use online visualisation tools are recommended. The process instantiations will assist people, especially women, to better understand their menstrual cycle and how it affects their physical well-being.
, Thesis (MCom) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
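The integration step described above joins two separately tracked data sets on a shared key before visualisation. The thesis does not specify its data format; as a hedged sketch, the join can be expressed as a date-keyed merge of fitness samples and cycle-phase records. All field names and values below are hypothetical.

```python
from datetime import date

# Hypothetical per-day samples; field names are illustrative only.
fitness = {
    date(2024, 3, 1): {"resting_hr": 61, "vo2_max": 44.0},
    date(2024, 3, 2): {"resting_hr": 64, "vo2_max": 43.6},
}
cycle = {
    date(2024, 3, 1): {"phase": "follicular"},
    date(2024, 3, 2): {"phase": "ovulatory"},
}

def integrate(fitness, cycle):
    """Join the two data sets on date (inner join), producing one record
    per day that an integrated F&M chart can be drawn from."""
    merged = {}
    for day in sorted(set(fitness) & set(cycle)):
        merged[day] = {**fitness[day], **cycle[day]}
    return merged

combined = integrate(fitness, cycle)
```

The merged records can then be fed to any of the easy-to-use online visualisation tools the process recommends, e.g. plotted with cycle phase as a colour band behind the fitness series.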
- Full Text:
- Date Issued: 2024-04
A toolkit for successful workplace learning analytics at software vendors
- Authors: Whale, Alyssa Morgan
- Date: 2024-04
- Subjects: Computer-assisted instruction , Intelligent tutoring systems , Information visualisation
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/64448 , vital:73713
- Description: Software vendors commonly provide digital software training to their stakeholders and are therefore faced with an influx of data collected from these training/learning initiatives. Every second of every day, data is collected based on online learning activities and learner behaviour. Online platforms are struggling to cope with the volumes of data collected, and companies are finding it difficult to analyse and manage this data in a way that benefits all stakeholders. The majority of studies investigating learning analytics have been conducted in educational settings. This research aimed to develop and evaluate a toolkit that can be used for successful Workplace Learning Analytics (WLA) at software vendors. The study followed the Design Science Research (DSR) methodology, which was applied in iterative cycles in which various components of the toolkit were designed, developed and evaluated by participants. The real-world context was a software vendor, ERPCo, which has been struggling to implement WLA successfully with its current Learning Experience Platform (LXP), as well as with its previous platform. Qualitative data was collected using document analysis of key company documents and Focus Group Discussions (FGDs) with employees from ERPCo to explore and confirm different topics and themes. These methods were used to iteratively analyse the As-Is and To-Be situations at ERPCo and to develop and evaluate the proposed WLA Toolkit. The data collected from the FGDs was analysed using the Qualitative Content Analysis (QCA) method. To develop the first component of the toolkit, the Organisation component, the organisational success factors that influence the success of WLA were identified using a Systematic Literature Review (SLR).
These factors were discussed and validated in two exploratory FGDs held with employees from ERPCo, one with operational stakeholders and the other with strategic decision makers. The DeLone and McLean Information Systems (D&M IS) Success Model was used to undergird the research as a theory to guide the understanding of the factors influencing the success of WLA. Many of the factors identified in theory were found to be prevalent in the real-world context, with some additional ones identified in the FGDs. The most frequent challenges highlighted by participants related to visibility; readily available high-quality data; flexibility of reporting; complexity of reporting; and effective decision making and the insights obtained. Many of these related to usability issues for both the system and the information, which correspond to System Quality and Information Quality in the D&M IS Success Model. The second and third components of the toolkit are the Technology and Applications component and the Information component, respectively. Architecture and data management challenges and requirements for these components were therefore analysed. An appropriate WLA architecture was selected and then further customised for use at ERPCo. A third FGD was conducted with employees in more technical roles at ERPCo to provide input on the architecture, technologies, and data management challenges and requirements. In the Technology and Applications component of the WLA Toolkit, factors influencing WLA success related to applications and visualisations were considered. An instantiation of this component was demonstrated in the fourth FGD, where learning data from the LXP at ERPCo was collected and a dashboard incorporating recommended visualisation techniques was developed as a proof of concept. In this FGD, participants gave feedback on both the dashboard and the toolkit.
The artefact of this research is the WLA Toolkit that can be used by practitioners to guide the planning and implementation of WLA in large organisations that use LXP and WLA platforms. Researchers can use the WLA Toolkit to gain a deeper understanding of the required components and factors for successful WLA in software vendors. The research also contributes to the D&M IS Success Model theory in the information economy. In support of this PhD dissertation, the following paper has been published: Whale, A. & Scholtz, B. 2022. A Theoretical Classification of Organizational Success Factors for Workplace Learning Analytics. NEXTCOMP 2022. Mauritius. A draft manuscript for a journal paper was in progress at the time of submitting this thesis. , Thesis (PhD) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics , 2024
- Full Text:
- Date Issued: 2024-04
An in vitro evaluation of the anti-breast cancer activity of Nigella sativa extracts and its bioactive compound in combination with curcumin
- Authors: Botha, Susanna Gertruida
- Date: 2024-04
- Subjects: Herbs -- Therapeutic use , Radiation-protective agents , Breast -- Cancer -- Treatment
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63639 , vital:73571
- Description: Breast cancer constitutes 23% of all cancers in South African females. Curcumin and Nigella sativa have anti-cancer, anti-metastatic and antioxidant properties and may be effective against breast cancer. This study focused on the effect of N. sativa extracts or thymoquinone and curcumin, individually and in combination, on breast cancer cells. An MTT assay showed that curcumin reduced cell viability by 50% (IC50) at 18 ± 2.63 μg/mL and thymoquinone (TQ) at 5 ± 0.95 μg/mL against the MDA-MB-231 cells. The IC50 values for curcumin and TQ were 35 ± 6.98 μg/mL and 4 ± 0.96 μg/mL against the MCF-7 cells, respectively. The IC50 value for the NSBE was determined to be 350 ± 55 μg/mL. The IC50 value of NSAE did not fall within the selected concentration range. Synergism was noted for combinations of NSBE with curcumin, and combinations of TQ with curcumin, against both MCF-7 and MDA-MB-231 cells. Two synergistic combinations per treatment per cell line, as determined by the combination index analysis, were chosen for further investigation. The combinations and individual treatments tested against the MCF-10A cells were not significant, except for the NSBE80:CURC20 combination. Curcumin had the most significant anti-oxidant activity; however, no link was noted between the anti-oxidant activity and the cytotoxicity of the combinations. The combination treatments induced apoptosis more effectively than the individual treatments. Caspase-3 dependent apoptosis was noted for the NSBE10:CURC90 and TQ80:CURC20 combinations against the MDA-MB-231 cells, and the TQ60:CURC40 combination against the MCF-7 cells. The individual and combined treatments effectively reduced MDA-MB-231 cell adhesion to fibronectin, but not all reduced the cell adhesion to laminin. Based on these results, the combinations of curcumin with TQ or NSBE have promising anticancer benefits against breast cancer. , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
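The combination index analysis mentioned above is commonly computed with the Chou-Talalay formulation, where CI < 1 indicates synergism. A minimal sketch follows; the function name and all dose values are hypothetical illustrations, not the study's measured data:

```python
def combination_index(d1, d2, Dx1, Dx2):
    """Chou-Talalay combination index for a two-drug mixture.

    d1, d2   -- doses of each drug that, used together, produce a given
                effect level (e.g. 50% growth inhibition)
    Dx1, Dx2 -- doses of each drug alone producing that same effect
    CI < 1 indicates synergism, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / Dx1 + d2 / Dx2

# Hypothetical doses only (not values from the thesis): a TQ + curcumin
# mixture reaching IC50 at lower doses than either drug alone.
ci = combination_index(d1=1.5, d2=6.0, Dx1=4.0, Dx2=18.0)  # CI < 1, synergistic
```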
- Full Text:
- Date Issued: 2024-04
Augmenting encoder-decoder networks for first-order logic formula parsing using attention pointer mechanisms
- Authors: Tissink, Kade
- Date: 2024-04
- Subjects: Translators (Computer programs) , Computational linguistics , Computer science
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64390 , vital:73692
- Description: Semantic parsing is the task of extracting a structured machine-interpretable representation from a natural language utterance. This representation can be used for various applications such as question answering, information extraction, and dialogue systems. However, semantic parsing is a challenging problem that requires dealing with the ambiguity, variability, and complexity of natural language. This dissertation investigates neural parsing of natural language (NL) sentences to first-order logic (FOL) formulas. FOL is a widely used formal language for expressing logical statements and reasoning. FOL formulas can capture the meaning and structure of natural language sentences in a precise and unambiguous way. The problem is initially approached as a sequence-to-sequence mapping task using both LSTM-based and transformer encoder-decoder architectures for character-, subword-, and word-level text tokenisation. These models are trained on NL-FOL datasets using supervised learning and evaluated on various metrics such as exact match accuracy, syntactic validity, formula structure accuracy, and predicate/constant similarity. A novel augmented model is then introduced that decomposes the task of neural FOL parsing into four inter-dependent subtasks: template decoding, predicate and constant recognition, predicate set pointing, and object set pointing. The components for the four subtasks are jointly trained using multi-task learning and evaluated using the same metrics as the sequence-to-sequence models. The results indicate improved performance over the sequence-to-sequence models, and the modular design allows for more interpretability and flexibility. Additionally, to compensate for the scarcity of open-source, labelled NL-FOL datasets, a new benchmark is constructed from publicly accessible data. The data consists of NL sentences paired with corresponding FOL formulas in a standardised notation. The data is split into training, validation, and test sets. 
The main contributions of this dissertation are: an in-depth literature review covering decades of research, presented with a consistent notation; a complex NL-FOL benchmark that includes algorithmically generated and human-annotated FOL formulas; a novel transformer encoder-decoder architecture that is shown to train successfully at significant depths; an evaluation of twenty sequence-to-sequence models on the task of neural FOL parsing for different text representations and encoder-decoder architectures; a novel augmented FOL parsing architecture; and an in-depth analysis of the strengths and weaknesses of these models. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics , 2024
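Two of the evaluation metrics named above (exact match accuracy and syntactic validity) can be sketched minimally. The helper names are illustrative, and the bracket-balance check is a deliberate simplification of real FOL grammar checking, not the dissertation's implementation:

```python
def exact_match_accuracy(preds, golds):
    """Fraction of predicted FOL strings identical to the reference."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def is_syntactically_valid(formula):
    """Crude validity proxy: non-empty and balanced parentheses.
    A real checker would parse the full FOL grammar; sketch only."""
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0 and bool(formula)

golds = ["forall x (Dog(x) -> Animal(x))", "exists y (Cat(y))"]
preds = ["forall x (Dog(x) -> Animal(x))", "exists y (Cat(y)"]  # 2nd is malformed
em = exact_match_accuracy(preds, golds)             # 0.5
valid = [is_syntactically_valid(p) for p in preds]  # [True, False]
```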
- Full Text:
- Date Issued: 2024-04
Augmenting the Moore-Penrose generalised Inverse to train neural networks
- Authors: Fang, Bobby
- Date: 2024-04
- Subjects: Neural networks (Computer science) , Machine learning , Mathematical optimization -- Computer programs
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63755 , vital:73595
- Description: An Extreme Learning Machine (ELM) is a non-iterative and fast feedforward neural network training algorithm which uses the Moore-Penrose (MP) generalised inverse of a matrix to compute the weights of the output layer of the neural network, using a random initialisation for the hidden layer. While ELM has been used to train feedforward neural networks, the effectiveness of the MP generalised inverse for training recurrent neural networks is yet to be investigated. The primary aim of this research was to investigate how biases in the output layer and the MP generalised inverse can be used to train recurrent neural networks. To accomplish this, the Bias Augmented ELM (BA-ELM), which concatenated the hidden layer output matrix with a ones-column vector to simulate the biases in the output layer, was proposed. A variety of datasets generated from optimisation test functions, as well as real-world regression and classification datasets, were used to validate BA-ELM. The results showed that, in specific circumstances, BA-ELM performed better than ELM. Following this, Recurrent ELM (R-ELM) was proposed, which uses a recurrent hidden layer instead of a feedforward hidden layer. Recurrent neural networks also rely on having functional feedback connections in the recurrent layer. A hybrid training algorithm, Recurrent Hybrid ELM (R-HELM), was proposed, which uses a gradient-based algorithm to optimise the recurrent layer and the MP generalised inverse to compute the output weights. The evaluation of the R-ELM and R-HELM algorithms was carried out using three different recurrent architectures on two recurrent tasks derived from the Susceptible-Exposed-Infected-Removed (SEIR) epidemiology model. Various training hyperparameters were investigated to evaluate their effect on the hybrid training algorithm. 
With optimal hyperparameters, the hybrid training algorithm was able to achieve better performance than the conventional gradient-based algorithm. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
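The ELM and BA-ELM ideas described above can be sketched in a few lines of NumPy: fix random hidden weights, then solve the output weights in closed form with the MP pseudoinverse, optionally appending a ones column so output biases are fitted too. All names and the toy target are illustrative assumptions, not the thesis's code:

```python
import numpy as np

def elm_train(X, Y, n_hidden=40, augment_bias=False, seed=0):
    """Non-iterative ELM-style training: random fixed hidden layer,
    output weights via the Moore-Penrose generalised inverse.
    augment_bias=True appends a ones column to the hidden output
    matrix (the BA-ELM idea) so output-layer biases are also fitted."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden layer output matrix
    if augment_bias:
        H = np.hstack([H, np.ones((H.shape[0], 1))])  # BA-ELM ones column
    beta = np.linalg.pinv(H) @ Y                      # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta, augment_bias=False):
    H = np.tanh(X @ W + b)
    if augment_bias:
        H = np.hstack([H, np.ones((H.shape[0], 1))])
    return H @ beta

# Fit a toy regression target (hypothetical data, sketch only)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
Y = np.sin(3 * X)
W, b, beta = elm_train(X, Y, n_hidden=40, augment_bias=True)
Y_hat = elm_predict(X, W, b, beta, augment_bias=True)
mse = float(np.mean((Y - Y_hat) ** 2))
```

The appeal is that no gradient iterations are needed for the output layer; `pinv` gives the minimum-norm least-squares solution directly.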
- Full Text:
- Date Issued: 2024-04
Comparative study of the effect of iloprost on neuroinflammatory changes in c8-b4 microglial cells and murine model of trypanosomiasis
- Authors: Jacobs, Ashleigh
- Date: 2024-04
- Subjects: Trypanosomiasis -- South Africa , DNA -- Methylation -- Research -- Methodology , Central nervous system -- Diseases , Nervous system -- Degeneration
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64077 , vital:73651
- Description: Neurodegenerative conditions significantly impact well-being and quality of life, with major symptoms including mood disorders, cognitive decline, and psychiatric disturbances, often resulting from neuroinflammation triggered by immune responses to bacterial or parasitic infections such as gram-negative bacteria or Human African Trypanosomiasis. Microglia play a crucial role in both neurotoxicity and the cellular processes involved in restoring neural health. Exploring the therapeutic potential of prostacyclin and its analogues in regulating microglial responses to inflammatory insult and treating Trypanosoma brucei (T. b.) infection remains an unexplored area. The aim of this study was to assess the potential neuroprotective effects of Iloprost through comparative analysis of neuroinflammatory responses in both microglial cells exposed to lipopolysaccharide (LPS) and mouse brains infected with T. b. brucei. In phase I of this study, both resting and LPS-treated C8-B4 microglial cells were exposed to varying concentrations of Iloprost. The effects of Iloprost on LPS-induced inflammation were analysed using immunofluorescence to detect microglial activation and differentiate between pro- and anti-inflammatory phenotypes. Furthermore, pro- and anti-inflammatory cytokine secretion was determined using an ELISA; in addition, gene expression analysis was carried out using the quantitative polymerase chain reaction (qPCR). The DNA methylation status of C8-B4 cells exposed to the LPS challenge, alone or in combination with various concentrations of Iloprost, was determined using the bisulfite sequencing technique followed by qPCR. In phase II of the study, a total of twenty-four Albino Swiss male mice (8-10 weeks old) were divided into four treatment groups with 6 mice in each group. All treatment groups except the non-infected control were inoculated with the T. b. brucei parasite. 
One group received a single intraperitoneal injection of Diminazene aceturate (4 mg kg⁻¹) while the remaining group received repeated intraperitoneal injections of Iloprost (200 μg kg⁻¹). On day ten of the study, mouse brains were removed on ice using forceps. The hippocampal tissues were dissected out and processed for quantification of gene expression changes in pro- and anti-inflammatory cytokines. Overall, the findings of this study indicate that LPS-induced pro-inflammatory cytokine (TNF-α and IL-1β) secretion and gene expression is downregulated in C8-B4 microglial cells treated with Iloprost. Furthermore, there was a significant upregulation in the expression of anti-inflammatory genes, particularly ARG-1, CD206, BDNF and CREB, in response to Iloprost treatment following LPS-induced inflammation. This study is also the first to confirm M2 microglial polarization with Iloprost treatment in both resting and LPS-treated cells. However, hypermethylation at the CREB and BDNF promoter regions was observed 24 hours after Iloprost treatment. Additionally, Iloprost reversed hypomethylation at the BDNF promoter region that had been induced by LPS treatment. The rodent model also indicated downregulation of pro-inflammatory cytokine (IL-1β) expression and upregulation of BDNF transcription in T. b. brucei infected mice treated with repeated doses of Iloprost. In conclusion, determining the immunomodulatory roles of Iloprost in both in vitro and in vivo models of neuroinflammation could assist in the development of alternative therapies for neurodegenerative disease. , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-04
Comparing stable isotope ratios and metal concentrations between components of the benthic food web: a case study of the Swartkops Estuary South Africa
- Authors: Ndoto, Asiphe
- Date: 2024-04
- Subjects: Swartkops River Estuary (South Africa) , Estuarine ecology -- South Africa -- Swartkops River Estuary , Fishes -- Ecology -- South Africa -- Swartkops River Estuary
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64256 , vital:73669
- Description: Estuarine systems are highly productive ecosystems; however, they are subjected to high anthropogenic pressure such as metal contamination and increased nutrient loads. The contamination sources of metals and nutrients in urban estuaries are derived from industrial waste and from agricultural and urban runoff that flows into estuaries. An example of such a system is the Swartkops Estuary: industry and three wastewater treatment plants within the Swartkops River catchment are major sources of metal and nutrient pollution, respectively. The metals accumulate in the environment, are biomagnified up the food web, and are transferred from one trophic level to another. At lethal concentrations, metals pose a threat to organisms using the estuary by affecting their physiological and biochemical processes. Stable isotope analysis has proven to be an effective tool for investigating trophic linkages in the food chain in a variety of environments. Assessing both metals and stable isotopes in the estuary can provide a more robust understanding of the pathways by which metals accumulate, are biomagnified, and are transferred from the environment through the estuarine food web. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2022
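The trophic linkages mentioned above are usually quantified from nitrogen isotope ratios via the standard baseline-corrected relation (Post's formulation). A minimal sketch; the δ15N values below are hypothetical, not measurements from the Swartkops Estuary:

```python
def trophic_position(d15N_consumer, d15N_base, enrichment=3.4, lambda_base=2.0):
    """Trophic position from nitrogen stable isotopes:
    TP = lambda + (d15N_consumer - d15N_base) / enrichment.

    lambda_base -- trophic position of the baseline organism
                   (2 for a primary consumer such as a filter feeder)
    enrichment  -- mean per-trophic-level d15N enrichment, ~3.4 permil
    """
    return lambda_base + (d15N_consumer - d15N_base) / enrichment

# Hypothetical d15N values (permil): a benthic fish relative to a
# primary-consumer baseline.
tp_fish = trophic_position(d15N_consumer=14.2, d15N_base=7.4)  # -> 4.0
```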
- Full Text:
- Date Issued: 2024-04
Development of a numerical geohydrological model for a fractured rock aquifer in the Karoo, near Sutherland, South Africa
- Authors: Maqhubela, Akhona
- Date: 2024-04
- Subjects: Hydrogeology -- South Africa -- Northern Cape , Groundwater -- South Africa -- North Cape -- Management , Evapotranspiration
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64164 , vital:73658
- Description: The regional scale method of groundwater storage observation introduces uncertainties that hinder the evaluation of the remaining lifespan of depleted aquifers. The scarcity of precipitation data presents a significant global challenge, especially in semi-arid regions. This study constructs a regional numerical hydrogeological model that identifies the potential impacts of climate change on the water balance for the South African Gravimetric Observation Station in Sutherland. The purpose of this study is to understand the mechanisms controlling groundwater in the fractured rock aquifer. Climate data for the last ten years were collected from the South African Weather Service and, together with groundwater level data, were used to assess the potential impacts of climate change on water balance components, especially precipitation and evapotranspiration. Precipitation is the primary recharge parameter in this study; the highest levels were recorded in winter, with May having the highest precipitation rate of 24.62 mm. The instrument conducted two profile investigations in a single day to detect geological abnormalities at various depths, achieving an accuracy of up to 0.001 mV. The fact that groundwater flows from regions of higher hydraulic heads to areas of lower hydraulic heads confirms that riverbeds in Sutherland act as preferential conduits for subsurface recharge. The profiles and processed geophysical maps show a low chance of finding groundwater in the observed area owing to the great depth, approximately 150-210 m. The river package of the MODFLOW model shows little inflow at the well locations near the study area. The model results showed a negative difference between water flowing into and out of the system of about -7 m³ between 2002 and 2020. Groundwater flows faster at borehole five, where the hydraulic conductivity is higher.
The resulting regional hydrogeological model offered valuable insights into how climate change might influence the distribution and accessibility of groundwater resources. In the context of Sutherland, a negative groundwater budget value signalled that groundwater extraction or consumption surpassed the natural replenishment or recharge of the aquifer. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
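The head-gradient-driven flow this abstract describes (water moving from higher to lower hydraulic head, faster where hydraulic conductivity is larger) follows Darcy's law. A minimal sketch with hypothetical borehole values, not parameters from the thesis model:

```python
# Darcy's law: specific discharge q = K * dh/dl, where K is hydraulic
# conductivity (m/day) and dh/dl the hydraulic gradient (dimensionless).
# Flow runs from higher to lower hydraulic head and is faster for larger K.
def darcy_flux(k, head_upstream, head_downstream, distance):
    """Specific discharge (m/day) between two points; positive = downgradient flow."""
    gradient = (head_upstream - head_downstream) / distance
    return k * gradient

# Hypothetical boreholes: the same 2 m head drop over 500 m, differing only in K.
q_low_k = darcy_flux(0.5, 102.0, 100.0, 500.0)   # tight, low-conductivity rock
q_high_k = darcy_flux(5.0, 102.0, 100.0, 500.0)  # fractured, high-K zone
print(q_low_k, q_high_k)  # the high-K zone carries ten times the flux
```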
- Full Text:
- Date Issued: 2024-04
Development of a numerical geohydrological model for a fractured rock aquifer in the Karoo, near Sutherland, South Africa
- Authors: Maqhubela, Akhona
- Date: 2024-04
- Subjects: Groundwater -- South Africa -- Northern Cape , Hydrogeology -- South Africa -- Northern Cape , Remote sensing , Geographic information systems
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64163 , vital:73659
- Description: The regional scale method of groundwater storage observation introduces uncertainties that hinder the evaluation of the remaining lifespan of depleted aquifers. The scarcity of precipitation data presents a significant global challenge, especially in semi-arid regions. This study constructs a regional numerical hydrogeological model that identifies the potential impacts of climate change on the water balance for the South African Gravimetric Observation Station in Sutherland. The purpose of this study is to understand the mechanisms controlling groundwater in the fractured rock aquifer. Climate data for the last ten years were collected from the South African Weather Service and, together with groundwater level data, were used to assess the potential impacts of climate change on water balance components, especially precipitation and evapotranspiration. Precipitation is the primary recharge parameter in this study; the highest levels were recorded in winter, with May having the highest precipitation rate of 24.62 mm. The instrument conducted two profile investigations in a single day to detect geological abnormalities at various depths, achieving an accuracy of up to 0.001 mV. The fact that groundwater flows from regions of higher hydraulic heads to areas of lower hydraulic heads confirms that riverbeds in Sutherland act as preferential conduits for subsurface recharge. The profiles and processed geophysical maps show a low chance of finding groundwater in the observed area owing to the great depth, approximately 150-210 m. The river package of the MODFLOW model shows little inflow at the well locations near the study area. The model results showed a negative difference between water flowing into and out of the system of about -7 m³ between 2002 and 2020. Groundwater flows faster at borehole five, where the hydraulic conductivity is higher.
The resulting regional hydrogeological model offered valuable insights into how climate change might influence the distribution and accessibility of groundwater resources. In the context of Sutherland, a negative groundwater budget value signalled that groundwater extraction or consumption surpassed the natural replenishment or recharge of the aquifer. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2022
- Full Text:
- Date Issued: 2024-04
Development of the zirconium-based metal-organic framework UiO-66 for adsorption-mediated electrochemical sensing of organonitrogen compounds in fuels
- Authors: Mokgohloa, Mathule Collen
- Date: 2024-04
- Subjects: Electrochemical sensors , Quinoline -- synthesis , Pyridine -- Synthesis
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64193 , vital:73663
- Description: The combustion of fuels containing organonitrogen compounds has led to an increase in atmospheric and environmental levels of nitrogen oxides, which are responsible for several environmental, ecological and human health problems. Given increasingly strict environmental regulations and the deleterious effects of nitrogen-containing compounds in fuels, there is a strong need for the removal and detection of nitrogen-containing compounds to produce fuels with lower nitrogen levels. The Environmental Protection Agency (EPA) mandates the nitrogen content of fossil fuels to be less than about 1 wt%. The existing analytical techniques used for the quantification of nitrogen-containing compounds in fuels include GC-MS, GC-AED and spectrophotometry. Despite being sensitive and specific, these methods require expensive equipment, highly trained personnel and time-consuming pre-treatment steps to avoid interferences from similar compounds, and they suffer from analyte loss and inadequate results. Thus, they can only be carried out in off-site laboratories, which precludes rapid on-site screening. The metal-organic framework (MOF) UiO-66-NH2 and its composites UiO-66-NH2/GA and UiO-66-NH2/GO-NH2 (GA = graphene aerogel, GO = graphene oxide) have shown great potential in the adsorption of organonitrogen compounds like quinoline. However, research on the electrochemical application of these MOFs and their derivatives is limited, despite their high surface area, abundant porosity and increased conductivity. To demonstrate their electrochemical sensing potential, modification of the glassy carbon electrode (GCE) was proposed, which would show a higher degree of association for pyridine and quinoline on the modified UiO-66-NH2/GA and UiO-66-NH2/GO-NH2 surfaces, thereby creating a more favourable route for adsorption. This would result in enhanced sensing of pyridine and quinoline in model fuel.
Thus, unlike the bare GCE, the fabricated/modified electrode can selectively detect high levels of organonitrogen compounds. In this study (Chapter 3), UiO-66-NH2/GA and UiO-66-NH2/GO-NH2 are prepared via the solvothermal method and then characterised using various spectroscopic and imaging techniques such as Scanning Electron Microscopy (SEM), X-ray Photoelectron Spectroscopy (XPS), Ultraviolet-Visible Spectroscopy (UV-VIS), Thermogravimetric Analysis (TGA) and X-ray Diffraction (XRD). , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-04
Development of TiO2 nanostructures with a modified energy band gap for hydrogen extraction
- Authors: Mutubuki, Arnold
- Date: 2024-04
- Subjects: Nanostructures , Nanoscience , Nanochemistry
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64226 , vital:73666
- Description: Accelerating fossil fuel depletion has motivated research into alternative, cost-effective and clean processes for energy production from renewable sources. The scientific community is currently engaged in extensive research to develop viable, sustainable methods for generating green hydrogen. Titania (TiO2) is historically the most studied photoactive semiconductor material, with great potential in photoelectrochemical water splitting (PECWS) following the discovery by Fujishima and Honda in 1972. TiO2 possesses superior physicochemical characteristics and band gap edges, which enable the semiconductor to effectively facilitate the PECWS process. Efforts are ongoing to explore alternatives for narrowing the optical band gap energy of TiO2 to obtain an efficient photoelectrode. In this research work, open-ended and well-ordered TiO2 nanotubular arrays were synthesised by a three-step anodization process. The third anodization was crucial to detach the TiO2 thin film from the opaque Ti metal substrate. The free-standing thin films were transferred and pasted onto conductive FTO-coated glass substrates transparent to visible light and annealed at 400 ℃ for crystallisation. The multi-step anodization showed an improved top tube morphology by eliminating the initiation TiO2 mesh formed when a conventional single-step anodization process is used under similar conditions. To widen the absorption range of the samples, CuO nanosheets were deposited onto the nanotubular TiO2/FTO films through successive ionic layer adsorption and reaction (SILAR), a wet chemical method. The formation of a CuO/TiO2 nanostructure enhances the transfer of photogenerated carriers, suppressing charge recombination. This research focused on investigating the influence of selected SILAR parameters on the formation of CuO nanostructures.
The first was the effect of precursor concentration on the structural, morphological and optical properties of the CuO/TiO2/FTO nanostructured photoelectrode. The effect of precursor concentration on structure and morphology was evident in the X-ray diffraction (XRD) patterns and scanning electron microscopy (SEM) micrographs. Crystallite sizes of the deposited CuO increased from 10.6 nm to 15.7 nm when the precursor concentration was varied from 0.02 M to 0.10 M. The UV-visible absorbance results show that an increase in precursor concentration leads to a red shift of both the peak absorbance and the edge wavelength of the CuO/TiO2/FTO absorbance spectra. This phenomenon is believed to be caused by the presence of CuO, which absorbs actively in the visible spectrum. As the study shows, a continued increase in precursor concentration does not result in further widening of the absorption band, as demonstrated by a CuO/TiO2/FTO sample decorated with a 0.2 M precursor. The second was the effect of the number of SILAR immersion cycles on the properties of the CuO/TiO2/FTO nanostructure. Increasing the number of immersion cycles led to a notable progression in the adsorption of cupric oxide on the TiO2/FTO samples. A red shift in the absorbance peak and edge wavelength is observed in the UV-visible spectra of the CuO/TiO2/FTO photoelectrodes. The efficacy of the SILAR technique in modifying the absorption band of nanotubular TiO2 thin films has been demonstrated through comprehensive analysis and correlation of the relationships between structure and optical properties, as evidenced by the XRD patterns, Raman spectra, SEM and TEM micrographs, and UV-visible absorbance spectra. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
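The red shift of the absorbance edge reported in this abstract corresponds to a narrower optical band gap via the standard photon-energy relation E (eV) ≈ 1240 / λ (nm). A minimal sketch with hypothetical edge wavelengths, not measured values from this work:

```python
# Convert an optical absorption-edge wavelength to an approximate band-gap
# energy: E (eV) ~= 1239.84 / wavelength (nm).  A red shift (longer edge
# wavelength) therefore implies a smaller effective band gap.
def band_gap_ev(edge_wavelength_nm):
    """Approximate band-gap energy (eV) from the absorption-edge wavelength (nm)."""
    return 1239.84 / edge_wavelength_nm

# Hypothetical edges: bare anatase TiO2 (~387 nm, UV) versus a CuO-decorated
# film whose edge has red-shifted into the visible (~450 nm).
print(round(band_gap_ev(387.0), 2))  # ~3.2 eV, typical of anatase
print(round(band_gap_ev(450.0), 2))  # smaller gap after the red shift
```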
- Full Text:
- Date Issued: 2024-04
Dislocation imaging of AISI316L stainless steels using electron channeling contrast imaging (ECCI)
- Authors: Pullen, Luchian Charton Morne
- Date: 2024-04
- Subjects: Electron microscopy , Microscopy -- Technique
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64301 , vital:73674
- Description: This study investigates the use of electron microscopy to image dislocations in high-temperature steels used in the electrical power generation industry. Dislocations play an important role in the mechanical properties of steels, which continuously evolve during component manufacturing and subsequent in-service exposure due to creep and/or fatigue. The dislocation density of the steels can potentially be used as a fingerprint to identify at-risk components that have either reached end-of-life or were incorrectly manufactured due to forming or heat treatments. Traditionally, dislocation measurements are performed using transmission electron microscopy (TEM) on thin foil samples. However, accurate and precise measurement of the dislocation density in steels using TEM remains a challenge due to its time-consuming nature, small sampling volumes, and the effects of sample preparation on the quantitative results. The aim of this study is to evaluate and establish electron channeling contrast imaging (ECCI) as a scanning electron microscopy method for quantifying the dislocation densities of power plant steels. This method can be applied to conventionally polished bulk samples, allowing large areas to be sampled. Samples of AISI316L stainless steel were used as a model alloy (large grain size, ~100 μm) to compare dislocation imaging using annular dark field (ADF) scanning TEM (STEM) and ECCI. Three material states were investigated: a cold-drawn rod (high dislocation density), an annealed rod (low dislocation density), and an annealed sample subjected to cyclic fatigue testing (medium dislocation density).
Systematic investigation of the data acquisition parameters showed that an incident beam energy of 20 kV, a beam current of ~4 nA, a pixel size of 5 nm, and a working distance of 4-5 mm on a JEOL7001F SEM fitted with a retractable BSE detector could successfully image the dislocation structures for the material states used in this study. The ECCI technique was successfully used to determine the dislocation density in the three material states, and the quantitative results showed similar trends to the ADF-STEM quantification results, but with less effort. Future studies using electron backscattered diffraction (EBSD) orientation mapping combined with electron channeling pattern (ECP) calibrations using a single-crystal Si sample will allow ECCI imaging under controlled grain orientations. Furthermore, accurate image segmentation of dislocations from a micrograph remains a key limitation, which can be improved with the use of advanced image analysis based on deep learning approaches. The quantitative dislocation density techniques demonstrated in this study can be adapted not only to studies of other power plant steels (e.g. 9-12% Cr Creep Strength Enhanced Ferritic) but also to other materials systems, such as aluminium, to study recrystallization processes during annealing. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2025
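Dislocation densities from micrographs such as the ECCI images described here are often estimated by counting the dislocations visible in a known imaged area. A simplified surface-count sketch follows; the counts and field-of-view values are hypothetical, and the thesis's actual quantification workflow may differ:

```python
# Simplified areal estimate of dislocation density from a micrograph:
# rho ~= N / A, with N the number of dislocations counted and A the imaged
# area.  Result is in m^-2.  All values below are hypothetical.
def dislocation_density(count, width_um, height_um):
    """Dislocation density (m^-2) from a count over a width x height field (um)."""
    area_m2 = (width_um * 1e-6) * (height_um * 1e-6)
    return count / area_m2

# e.g. 120 dislocations counted in a 10 um x 10 um ECCI field of view
rho = dislocation_density(120, 10.0, 10.0)
print(f"{rho:.2e} per m^2")  # ~1.2e12 m^-2, typical of an annealed state
```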
- Full Text:
- Date Issued: 2024-04
Elephant impacts on plant diversity and structure in the Shamwari Private Game Reserve
- Authors: Halvey, Andrew Lloyd
- Date: 2024-04
- Subjects: Elephants -- Nutrition -- South Africa -- Eastern Cape , Elephants -- Habitat -- South Africa -- Eastern Cape , Shamwari Game Reserve (South Africa)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63777 , vital:73597
- Description: Many African landscapes rely on processes such as fire, tree-fall and drought, in addition to herbivores, to initiate change across the landscape. In the Eastern Cape, elephants have a significant impact on the community structure and diversity of the vegetation they live in. This is most likely the case for the Albany Valley Thicket and azonal riparian vegetation of Shamwari Private Game Reserve, where browsing animals, particularly megaherbivores such as the black rhinoceros and elephant, are the main cause of defoliation. The presence of large herbivores creates challenges for the long-term sustainability and biodiversity of the vegetation in Shamwari. Vegetation monitoring provides essential information for effective management of megaherbivores, not only in Shamwari but in many other similar reserves. The aim of this study was to design a monitoring plan for the Albany Valley Thicket and riparian vegetation in Shamwari using available vegetation metrics. The vegetation was measured in permanent plots (90 m line intercept analysis per plot) in the Albany Valley Thicket and riparian vegetation of Shamwari. Plot selection was based on thicket structural integrity, using NDVI score as a proxy. In all plots, thicket structure was assessed using canopy heights measured every 50 cm along the line. Detrended correspondence analysis of the species abundance data suggested three distinct structural and compositional vegetation states for thicket and riparian vegetation: dense, intermediate and open. Significant relationships between NDVI and vegetation structural metrics across the condition states indicated that NDVI could be used as a proxy for vegetation condition. Vegetation compositional metrics, however, were not always correlated with NDVI, and determining species diversity for the vegetation provides additional information useful for monitoring. 
The monitoring recommended for the reserve is to evaluate vegetation structural integrity annually in summer using NDVI. Areas of change could then be measured for diversity as well as for change in the abundance of selected plant indicator species. This information should be used to initiate management actions if unwanted change has occurred. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
- Full Text:
- Date Issued: 2024-04
Estimation of a generalist meso-carnivore (black-backed jackal) population from a fenced protected area
- Authors: Davidson-Phillips, Samuel Ralph
- Date: 2024-04
- Subjects: Wildlife conservation , Carnivorous animals -- Conservation , Carnivorous animals -- Ecology
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63698 , vital:73589
- Description: Since 2017, landowners, field guides, and management staff have reported large groups of black-backed jackals (Lupulella mesomelas) (hereafter jackal) in the Welgevonden Game Reserve, Limpopo, South Africa. This is linked with several observations of jackals predating on various ungulate species, potentially leading to unintended consequences for prey populations. These observations, combined with an apparent poor survival rate of impala (Aepyceros melampus) and continuous declines in their numbers, led to the perception that jackals could be partly responsible. Several studies have attempted to describe the ecological role of jackals within multiple environments, most of which have proven to be variable and context dependent. Human-modified landscapes, along with the fencing of protected areas, may have altered the role of jackals within these scenarios. Jackals are wide-ranging and generally not confined by fencing; their population trends may therefore fluctuate within these anthropogenic landscapes. Re-introduced apex predators have been shown to facilitate food (provision of carrion) and simultaneously suppress jackals (active killing); this, however, remains difficult to predict. Jackals are classified as facultative cooperative hunters, a term describing how they hunt in groups opportunistically when suitable prey resources are available. The indication by several studies that jackals do actively predate rather than only scavenge illustrates that the species has the potential to cause declines in an ungulate population. It therefore appears erroneous to exclude the species from predator-prey considerations, particularly for land managers of fenced protected areas. The first step in any ecological management is an understanding of population size and trends over time. Unfortunately, few reliable methods exist to assess or monitor jackal populations. 
A popular tool for cryptic and wide-ranging terrestrial carnivores is Spatial Capture-Recapture (SCR) modelling, typically applied through a camera-trap array. These models often rely on individual identities and an imperfect detection process to derive a statistical estimate for a given area. Jackals have been assumed to be individually unidentifiable, and these methods have therefore largely been excluded. To address this, a pilot targeted camera-trap survey was conducted to improve capture and image quality. Following this procedure, semi-automated software was applied to test the feasibility of individual identification from captured images. This resulted in a subset of 58 right and left identifiable flank images, compiled from the highest graded images (n = 220) using the open-source Interactive Individual Identification System Beta Contour 3.0 (I3S Contour). I3S Contour assists users by distinguishing between unique contours on independent flanks without omitting observer effort and ranking. The effectiveness of the identification procedure was evaluated using three software tool trials, namely Computer-aided Annotation, Manual Contour Annotation, and Manual Contour Annotation (MA-2), where MA-2 added user-defined metadata to the images. Results showed that jackals could be individually identified from camera trap images, opening up the use of previously excluded SCR methodologies. From the jackal database derived from the identification procedures described, a total of 28 complete identifications (both flanks matched), 32 left-sided captures and 36 right-sided captures were used. These were derived from two independent survey periods split by season (winter and spring). Two SCR methods were compared, namely the Spatially Explicit Capture-Recapture (SECR) and the newly developed Spatial Presence-Absence (SPA) modelling approaches. 
SECR relies on full individual identification linked to spatial locations to derive the spatial parameters used to estimate population density. The SECR methodology has been considered the most precise and was thus used as the benchmark. SPA relies on detections only (i.e., without individual identities), along with informative or uninformative priors. It requires a spatial array with detectors close enough together to allow for simultaneous detections during each occasion (< 24 hours). Comparisons between these model outputs indicated a high degree of confidence interval overlap; however, SPA had a consistently higher posterior mode density estimate (63-64% higher), and the coefficient of variation between outputs indicated that SPA had closer relative precision. The targeted survey results for both model outputs for 2021 did not appear unusually high when compared to other studies. To assess the WGR population size over the long term, opportunistic by-catch data from a nine-year leopard (Panthera pardus) camera survey (Panthera organisation) were utilised. Model outputs for each of the years indicated that population estimates remained relatively stable. This was an unexpected result, as the SPA densities did not follow the detection observations. This could be attributed to M not being set high enough (200) and the model reaching this limit, resulting in similar outputs between years. An alternative explanation is that the station spacing is larger than the diameter of the home range, which may reduce spatial correlation. , Thesis (MSc) -- Faculty of Science, School of Natural Resource Science & Management, 2024
- Full Text:
- Date Issued: 2024-04
Evaluating elephant, Loxodonta africana, space-use and elephant-linked vegetation change in Liwonde National Park, Malawi
- Authors: Evers, Emma Else Maria
- Date: 2024-04
- Subjects: Elephants -- Nutrition -- Malawi , Ecological heterogeneity , Vegetation and climate
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63744 , vital:73594
- Description: Heterogeneity, the spatio-temporal variation of abiotic and biotic factors, is a key concept that underpins many ecological phenomena and promotes biodiversity. Ecosystem engineers, such as African savanna elephants (hereafter elephant), Loxodonta africana, are organisms capable of affecting heterogeneity through the creation or modification of habitats. Thus, their impacts can have important consequences for ecosystem biodiversity, both positive and negative. Caughley’s “elephant problem” cautions that confined or compressed, growing elephant populations will inevitably lead to a loss of biodiversity. However, a shift in our understanding of elephants suggests that not all elephant impacts lead to negative biodiversity consequences, as long as there is a heterogeneous spread of elephant impacts that allows for spatio-temporal refuges promoting the persistence of both impact-tolerant and impact-intolerant species. To date, little empirical evidence is available in support of managing elephants under this paradigm and few studies are available that infer the consequences of the distribution of elephant impacts on biodiversity. In addition, most studies use parametric statistics that do not account for scale, spatial autocorrelation, or non-stationarity, leading to a misrepresentation of the underlying processes and patterns of drivers of elephant space-use and the consequences of their impacts on biodiversity. Here, I evaluate spatio-temporal patterns and drivers of elephant space-use, and how the distribution of their impacts affects biodiversity through vegetation changes, using a multi-scaled spatial approach, in Liwonde National Park, Malawi. My study demonstrates that elephant space-use in Liwonde is heterogeneous, leading to spatio-temporal variation in the distribution of their impacts, even in a small, fenced reserve. 
The importance of the drivers of this heterogeneous space-use varied with the scale of analysis: water was generally important at larger scales, while vegetation quality (indexed by NDVI) was more important at smaller scales. When examined using local models, my results suggest that relationships exhibit non-stationarity: what is important in one area of the park is not necessarily important in other areas. The spatio-temporal variation of the inferred impacts of elephants in Liwonde still allowed spatio-temporal refuges to be created; no clear linear relationship was found between elephant return intervals and woody species structural and functional diversity (indexed by changes in tree cover and changes in annual regrowth, respectively, using the Normalized Difference Vegetation Index as a measure) throughout the park. My study provides support for adopting the heterogeneity paradigm for managing elephants and demonstrates that not all elephant impacts result in negative vegetation change. I also demonstrate the crucial implications of accounting for scale, non-stationarity, and spatial autocorrelation when evaluating how animals both respond to, and contribute to, environmental heterogeneity. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
- Full Text:
- Date Issued: 2024-04
Evaluation of road surface distresses using GPS and GIS techniques: a case study of the City of Johannesburg, South Africa
- Authors: Tsedu, Rinae
- Date: 2024-04
- Subjects: Global Positioning System -- South Africa -- Johannesburg , Navigation -- Technological innovations , Geographic information systems -- South Africa -- Johannesburg
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64401 , vital:73695
- Description: Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
- Full Text:
- Date Issued: 2024-04
Exploring the role of herbivory in Albany Subtropical Thicket restoration
- Authors: Hunt, Kristen Louise
- Date: 2024-04
- Subjects: Shrubs -- South Africa , Portulacaria afra -- South Africa , Grasslands -- South Africa , Plant communities -- South Africa
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64060 , vital:73647
- Description: This dissertation investigated the influence of herbivory on the success of thicket restoration, addressing a critical gap in the current knowledge within the restoration initiative. Despite two decades of thicket restoration practice, the role of herbivory in influencing restoration success has been assumed rather than quantified. This research aimed to observe and identify herbivore species and the interactions that may affect the survival of Portulacaria afra Jacq. material planted in thicket restoration contexts. The research took place on three game farms serving as case studies within the Albany Subtropical Thicket (Eastern Cape, South Africa). Multiple experiments were conducted to assess how different “natural refugia” might affect herbivore interactions with planted material, incorporating factors such as planting timed around rainfall, planting within open and semi-intact vegetation patches, and proximity to water sources. Trail cameras were used for real-time monitoring of herbivore interactions within planted sites to understand and quantify herbivore interactions with P. afra cuttings and how they may impact plant survival. Results from the trail camera monitoring (Chapter 2) indicate varied herbivore interactions with planted material, with the primary herbivore responsible for these interactions varying among farms. Species interactions were not consistent across farms, and herbivore interactions exhibited spatial and temporal variability. Notably, interactions declined soon after the start of the wet phase, when surrounding vegetation could recover, indicating the influence of alternative forage availability on herbivore foraging choices. Different herbivore interactions were identified and quantified through trail camera images, ranging from minor biomass removal (estimated at <5 cm of stem and leaf material) to more detrimental actions such as uprooting and leaf stripping. Consistently, planted P. afra survival rates (Chapter 3) were significantly higher for protected material than for material exposed to herbivores, regardless of whether planting took place in a dry or wet phase. Moreover, when exposed to herbivores, rooted material had significantly higher survival rates than unrooted material, indicating the potential advantage of a well-developed root system in faster recovery after a herbivory event. This research explored the influence of various factors, including rainfall, rooting state, protection, surrounding vegetation, and proximity to water, on P. afra survival, and how some of these factors may affect P. afra survival in relation to herbivore interactions (Chapter 3). Significant differences in cutting survival were observed between dry and wet phases, rooted and unrooted material, and material protected vs exposed to herbivores. While survival was not significantly different in experiments involving surrounding vegetation and proximity to water, potential patterns were identified, warranting further investigation. A clipping and defoliation experiment under simulated seasonal conditions emphasised the significance of , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
- Full Text:
- Date Issued: 2024-04