A comprehensive evaluation framework for system modernization: a case study using data services
- Authors: Barnes, Meredith Anne
- Date: 2011
- Subjects: Computer architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10460 , http://hdl.handle.net/10948/1499 , Computer architecture
- Description: Modernization is a means of migrating cumbersome existing systems to a new architecture to improve the longevity of business processes. Three modernization approaches exist: white-box and black-box modernization, which are distinct from one another, and grey-box modernization, a hybrid of the two. Modernization can be utilised to create data services for a Service Oriented Architecture. Since it is unclear which approach is more suitable for the development of data services, a comprehensive evaluation framework is proposed to determine whether the white-box or the black-box approach is more suitable. The framework consists of three evaluation components. Firstly, the developer effort to modernize existing code is measured using acknowledged software metrics. Secondly, the quality of the data services is measured against Quality of Service criteria identified specifically for data services. Thirdly, the effectiveness of the modernized data services is measured through usability evaluations. By considering the results of all three evaluation components together, a recommended approach for the modernization of data services is identified. The framework was successfully employed to compare the white-box and black-box modernization approaches applied to a case study. The results indicated that, had only a single evaluation component been used, the findings on the more suitable approach may have been inconclusive. This research contributes a comprehensive evaluation framework that can be applied to compare modernization approaches and to measure modernization success.
- Full Text:
- Date Issued: 2011
An intelligent multimodal interface for in-car communication systems
- Authors: Sielinou, Patrick Tchankue
- Date: 2011
- Subjects: Automotive telematics , Automobiles -- Electronic equipment
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10461 , http://hdl.handle.net/10948/1495 , Automotive telematics , Automobiles -- Electronic equipment
- Description: In-car communication systems (ICCS) are increasingly used by drivers to minimise the distraction caused by using a mobile phone while driving. Several usability studies of ICCS utilising speech user interfaces (SUIs) have identified usability issues that can affect the workload, performance, satisfaction and user experience of the driver. This is because current speech technologies can be a source of errors that frustrate the driver and negatively affect the user experience. The aim of this research was to design a new multimodal interface to manage the interaction between an ICCS and the driver. Unlike current ICCS, it should make more voice input available, so as to support tasks (e.g. sending text messages or browsing the phone book) that still impose a cognitive workload on the driver. An adaptive multimodal interface was proposed to address these issues. The multimodal interface accepted both speech and manual input, but used only the speech channel as output, in order to minimise the visual distraction that graphical user interfaces or haptic devices can cause in current ICCS. The adaptive interface was designed to minimise the cognitive distraction of the driver: whenever the driver's distraction level is high, any information communication is postponed. After the design and implementation of the first version of the prototype interface, called MIMI, a usability evaluation was conducted to identify possible usability issues. Although voice dialling was found to be problematic, the results were encouraging in terms of performance, workload and user satisfaction. The participants' suggestions for improving the system's usability were incorporated into the next implementation of MIMI.
The adaptive module was then implemented to reduce driver distraction based on the driver's current context. The proposed architecture showed encouraging results in terms of usability and safety. The adaptive behaviour of MIMI contributed significantly to the reduction of cognitive distraction, because drivers received less information during difficult driving situations.
- Full Text:
- Date Issued: 2011
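The postponement behaviour described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not MIMI's actual architecture: the class name, the 0.7 threshold and the 0-to-1 distraction scale are all invented for the example.

```python
from collections import deque

class AdaptiveNotifier:
    """Hypothetical sketch of the adaptive behaviour: spoken messages are
    postponed while the driver's distraction level is high and delivered
    once it drops below a threshold. All names and values are assumed."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold   # assumed distraction cut-off (0..1)
        self.queue = deque()         # postponed messages
        self.spoken = []             # stands in for the speech-only output channel

    def notify(self, message, distraction_level):
        # Postpone any information communication while distraction is high.
        if distraction_level >= self.threshold:
            self.queue.append(message)
        else:
            self._speak(message)

    def update_context(self, distraction_level):
        # When the driving situation eases, deliver postponed messages.
        while self.queue and distraction_level < self.threshold:
            self._speak(self.queue.popleft())

    def _speak(self, message):
        self.spoken.append(message)


n = AdaptiveNotifier()
n.notify("New text message from Alice", distraction_level=0.9)  # postponed
n.notify("Low fuel", distraction_level=0.2)                     # spoken now
n.update_context(distraction_level=0.2)                         # queue flushed
```

The design choice mirrored here is that postponement is driven purely by the current context estimate, so no extra user interaction is needed to defer or resume messages.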
An investigation into the effect of carbon-type additives on the negative electrode during the partial state of charge capacity cycling of lead-acid batteries
- Authors: Snyders, Charmelle
- Date: 2011
- Subjects: Lead-acid batteries
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10379 , http://hdl.handle.net/10948/1494 , Lead-acid batteries
- Description: It is well known that a conventional lead-acid cell exposed to partial state of charge capacity cycling (PSoCCC) experiences a build-up of irreversible PbSO4 on the negative electrode. This results in a damaged negative electrode through excessive PbSO4 formation, seen as the typical visual "Venetian blinds" effect in the active material. This reflects the loss of adhesion between the active material and the electrode's grids, making large sections of the material ineffective and reducing the cell's useful capacity during high-current applications. The addition of certain graphites to the negative paste mix has proven successful in reducing this effect. In the first part of the study, the physical and chemical properties of the various additives added to the negative electrode paste mix were comparatively studied, to investigate any significant differences between suppliers that could influence the electrochemical characteristics of Pb-acid battery performance. The comparison used the following analytical techniques: BET surface area, laser diffraction particle sizing, PXRD, TGA-MS and SEM. The study showed no significant differences between the additives from different suppliers, apart from some anomalies in the usefulness of techniques such as N2 adsorption for studying the BET surface area of BaSO4. To reduce the sulphation effect within the Pb-acid battery, a number of adjustments are made to the electrode active material. For example, Pb-acid battery manufacturers use an inert polymer-based material, known as Polymat, to cover the electrode surfaces as part of their continuous electrode pasting process. It is made from a non-woven polyester fibre that is applied to the pasted electrodes during the continuous pasting process.
In this study, the Polymat-pasted electrodes demonstrated better physical adhesion of the active material to the grid support, thereby maintaining the active material's physical integrity. This did not reduce the sulphation caused by the high rate partial state of charge capacity cycling (HRPSoCCC) test, but it did reduce the physical damage caused by the irreversible blistering of the active material. The study investigated the effect of the Polymat on the battery's Cold Cranking Ability (CCA) at -18 °C, its HRPSoCCC cycling and its active material utilization. It showed little or no difference in the CCA and HRPSoCCC capabilities of cells made with the Polymat compared to cells without it, but a significant improvement in the adhesion and integrity of the active material on the grid wire. This was confirmed by PXRD and SEM analysis. Negative electrodes were made with four types of graphite (natural, flake, expanded and nano-fibre) added to the negative paste mixture in order to reduce sulphation. The study used statistical design of experiments (DoE) principles to investigate the variables (additives), namely the different graphites, BaSO4 and Vanisperse, added to the negative electrode paste mixture. By measuring the responses (electrochemical tests) in a set of controlled experiments, the extent of the variables' interaction, dependence and independence with respect to the cells' electrochemical properties was studied, particularly in relation to improving the battery's ability to work under HRPSoCCC. The statistical analysis showed a notable, significant influence of the amounts of Vanisperse and BaSO4, and their interaction, on a number of electrochemical responses, such as the Peukert constant (n), CCA discharge time, material utilization at different discharge rates and the ability to capacity cycle under the simulated HRPSoCCC test.
The study did not suggest an optimized concentration of the additives, but it did indicate a statistically significant trend in certain electrochemical responses arising from an interaction between the amounts of BaSO4 and Vanisperse. The study also showed that adding a small amount of nano-carbon can significantly change the observed crystal morphology of the negative active material, and that an improvement in the number of capacity cycles can be achieved during the HRPSoCCC test compared to the other types of graphite additives.
- Full Text:
- Date Issued: 2011
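The Peukert constant (n) named among the measured responses relates discharge current to usable discharge time via Peukert's law, I^n · t = constant. As a rough illustration (the currents and times below are invented, not the study's data), the exponent can be estimated from two constant-current discharge tests:

```python
import math

def peukert_exponent(I1, t1, I2, t2):
    """Estimate Peukert's constant n from two constant-current discharges.

    Peukert's law states I**n * t = constant, so for two tests:
        n = log(t1 / t2) / log(I2 / I1)
    """
    return math.log(t1 / t2) / math.log(I2 / I1)

# Illustrative (not measured) values: doubling the discharge current
# cuts the discharge time from 10 h to 4 h.
n = peukert_exponent(1.0, 10.0, 2.0, 4.0)
print(round(n, 3))  # → 1.322
```

An ideal cell (capacity independent of current) gives n = 1; values above 1 indicate that capacity falls off at higher discharge rates.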
Assessing the statistical methodologies of business research in the South African context
- Authors: Ndou, Aifheli Amos
- Date: 2011
- Subjects: Statistics -- South Africa , Commercial statistics -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:8631 , http://hdl.handle.net/10948/1484 , Statistics -- South Africa , Commercial statistics -- South Africa
- Description: The aim of the study is to establish an acceptable classification scheme for the statistical methods used in business research. The approach compares the statistical component of published research and evaluates how it has changed over time and across different journals. If, as expected, the statistical expertise required has changed, the change would be identified with a view to recommending curriculum changes for the Statistics Departments of South African tertiary institutions.
- Full Text:
- Date Issued: 2011
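The kind of comparison described, tallying which statistical methods appear in which journals over which periods, amounts to a simple cross-tabulation. The journal abbreviations, periods and article codings below are invented for illustration and are not the study's sample:

```python
from collections import Counter

# Hypothetical coding of sampled articles: (journal, period, method used).
articles = [
    ("SAJBM",  "2000-2005", "t-test"),
    ("SAJBM",  "2000-2005", "ANOVA"),
    ("SAJBM",  "2006-2010", "regression"),
    ("SAJEMS", "2000-2005", "descriptive"),
    ("SAJEMS", "2006-2010", "regression"),
    ("SAJEMS", "2006-2010", "factor analysis"),
]

# Full cross-tabulation: method counts per journal per period.
usage = Counter(articles)

# Marginal view: method counts per period, pooled across journals,
# which is what a trend-over-time comparison would inspect.
by_period = Counter((period, method) for _, period, method in articles)
```

Comparing `by_period` across periods (e.g. regression appearing twice in 2006-2010 but not earlier) is the shape of evidence a curriculum recommendation would rest on.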
Beneficiation of glycerol from algae and vegetable oil
- Authors: Mafu, Lubabalo Rowan
- Date: 2011
- Subjects: Glycerin -- Biotechnology , Biodiesel fuels , Renewable natural resources
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10409 , http://hdl.handle.net/10948/d1011503 , Glycerin -- Biotechnology , Biodiesel fuels , Renewable natural resources
- Description: This research is directed at furthering the utilization of the oversupply of crude glycerol formed as a by-product of the biodiesel manufacturing process. Phosphorylation of hydroxyl groups is the synthetic route investigated for converting glycerol into a glycerol-phosphate ester (GPE) mixture. The process investigated for synthesizing a GPE product was based on phosphorylation reaction procedures previously reported in the literature. The reaction converting glycerol into a GPE mixture was thoroughly investigated and optimized with respect to the hydrogen chloride gas formed as a reaction by-product. The chemical properties of the GPE are studied and discussed together with a mass balance of the overall glycerol phosphorylation process. The phosphate groups in polyhydric phosphate molecules have a potential chelating effect on cations. Several cations may be chelated by the phosphate ester group of such molecules, including ammonium (NH4+), potassium (K+) and calcium (Ca2+), which are essential nutrients in plant fertilizer formulations. This research investigated the use of a GPE synthesized from glycerol in the laboratory as the phosphorus-containing base in the formulation and evaluation of a nitrogen-phosphorus-potassium (NPK) fertilizer solution: an Ammonium-Potassium-Glycerol-Phosphate (APGP) fertilizer solution. The APGP fertilizer solution was further evaluated by growing two-week-old tomato seedlings under controlled conditions. Its performance was evaluated, using design of experiments, by comparison with a traditionally used liquid Ammonium-Potassium-Phosphate inorganic fertilizer. This reference solution was prepared in a similar manner to the APGP formulation, the difference between them being the source of phosphorus.
The results were evaluated using statistical analysis, which found a significant difference between the evaluated fertilizer formulations. The comparative study of these formulations was monitored through the observed plant weights, with a blank treatment used as a control to determine whether a significant difference among the formulations could be observed. Single-factor ANOVA and two-sample t-tests (assuming equal variances) were the statistical models applied to interpret the experimental data with respect to the wet and dry weighed masses of the tomato seedlings. These methods confirmed a significant difference between the APPO4 solution and the APGP solution: the observed data showed that the APPO4 solution provided significantly better fertigation performance than the APGP solution. Consequently, further investigation was conducted to determine the cause of the poorer performance of the APGP solution. This included nutrient stability testing, biological analysis and other observed physical changes of the APGP solution over time. The biological results revealed the presence of a Fusarium fungus species growing suspended in the APGP fertilizer solution. This microbe was observed to play a major role in consuming the fertilizer nutrients. In addition, the observed abnormal plant growth and nutrient decomposition of the APGP formulation are proposed to be largely a result of the pathogenicity of the Fusarium species suspended in the APGP solution. Further work is proposed in which the effect of such biological contamination is eliminated through adequate sterilization procedures and the APGP formulation is re-evaluated.
- Full Text:
- Date Issued: 2011
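The two-sample t-test assuming equal variances, mentioned in the analysis, can be computed directly from the pooled variance. The seedling masses below are invented for illustration; they are not the study's measurements:

```python
import math

def pooled_t(sample_a, sample_b):
    """Two-sample t statistic assuming equal variances (pooled variance)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    # Pooled variance, weighting each sample by its degrees of freedom.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative (invented) seedling wet masses in grams:
appo4 = [5.1, 4.8, 5.4, 5.0, 5.2]
apgp  = [3.9, 4.1, 3.7, 4.0, 4.2]
t = pooled_t(appo4, apgp)
```

With n_a + n_b - 2 = 8 degrees of freedom, |t| above the critical value 2.306 would indicate a significant difference at the 5% level; the invented data above give a t well beyond that, mirroring the abstract's conclusion in shape only.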
Evaluation of model systems for the study of protein association/incorporation of β-methylamino-L-alanine (BMAA)
- Authors: Visser, Claire
- Date: 2011
- Subjects: Neurotoxic agents , Nervous system -- Diseases
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10314 , http://hdl.handle.net/10948/1451 , Neurotoxic agents , Nervous system -- Diseases
- Description: β-methylamino-L-alanine (BMAA) is thought to be a contributing factor in Amyotrophic Lateral Sclerosis-Parkinsonism Dementia Complex (ALS/PDC). It has been shown that the levels of toxin ingested by humans are too low to cause disease; however, it has recently been theorized that the toxin is bioaccumulated within cells and, through a process of slow release from this reservoir, is able to bring about neurotoxicity. Mechanisms of uptake and bioaccumulation of BMAA have been proposed in several publications, but the mechanism of protein incorporation of BMAA has not yet been identified. Identifying suitable model systems is a prerequisite for future studies of BMAA protein incorporation. Three models were therefore chosen for investigation: mammalian cell lines (C2C12 and HT29), a prokaryotic (E. coli) expression system, and yeast cells. The cytotoxicity of BMAA was established for the mammalian cell lines, and BMAA incorporation into cellular proteins was further investigated in all three models. Samples were run on HPLC-MS to determine whether BMAA was taken up into the cells. The results indicate negligible cytotoxicity as measured by MTT and CellTiter-Blue assays, limited uptake and protein incorporation of BMAA in the prokaryotic model, and insignificant uptake of BMAA by yeast cells. Although uptake in the prokaryotic model was not extensive, it did occur: BMAA was not only taken up into the cells but was also observed in inclusion body protein samples after hydrolysis. With further investigation and use, this model could provide researchers with information about the mechanism by which BMAA associates with proteins. Although the other models gave negative results, the work was valuable in that it narrows down the number of possible model systems available.
Moreover, in seeking models for studying protein association/incorporation, using the final target cell is neither relevant nor necessary, as the purpose of the research was to identify a model system in which the mechanism of protein association/incorporation can, in future, be studied.
- Full Text:
- Date Issued: 2011
Finite element analysis of a composite sandwich beam subjected to a four point bend
- Authors: Hove, Darlington
- Date: 2011
- Subjects: Sandwich construction -- Mathematical models , Composite materials -- Research
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10503 , http://hdl.handle.net/10948/1465 , Sandwich construction -- Mathematical models , Composite materials -- Research
- Description: The work in this dissertation deals with the global structural response and local damage effects of a simply supported natural fibre composite sandwich beam subjected to a four-point bend. For the global structural response, we investigate the flexural behaviour of the composite sandwich beam. We begin by using the principle of virtual work to derive the linear and nonlinear Timoshenko beam theories. Based on these theories, we then develop the respective finite element models and implement the numerical algorithm in MATLAB. Comparing the numerical results with experimental results from the CSIR, the numerical model correctly recovers the underlying mechanics, at least qualitatively, with some noted deviations that are explained at the end. The local damage effect of interest is delamination, and we begin by reviewing delamination theory with emphasis on the cohesive zone model. The cohesive zone model relates the traction at the interface to the relative displacement of the interface, thereby creating a material model of the interface. We then carry out a cohesive zone model delamination case study in the MSC.Marc and MSC.Mentat software packages. The delamination modelling is carried out purely as a numerical study, as there are no experimental results against which to validate the numerical results.
- Full Text:
- Date Issued: 2011
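The four-point-bend response summarised above can be illustrated with the closed-form Timoshenko result for a simply supported beam, where a shear term is added to the Euler-Bernoulli bending deflection. A minimal sketch; all material and geometry values below are hypothetical placeholders, not the thesis data:

```python
# Midspan deflection of a simply supported beam in four-point bending:
# two equal loads P applied at distance a from each support, span L.
# Timoshenko theory = Euler-Bernoulli bending term + shear term.

def midspan_deflection(P, L, a, E, I, kappa, G, A):
    """Return (bending, shear, total) midspan deflections in metres."""
    delta_b = P * a * (3 * L**2 - 4 * a**2) / (24 * E * I)  # bending term
    delta_s = P * a / (kappa * G * A)                        # shear term (Timoshenko)
    return delta_b, delta_s, delta_b + delta_s

# Hypothetical sandwich-like section: the low core shear stiffness makes the
# shear term non-negligible, which is why Timoshenko theory is appropriate.
P, L, a = 500.0, 1.0, 0.3           # N, m, m
E, I = 10e9, 3.3e-8                 # Pa, m^4 (equivalent flexural rigidity)
kappa, G, A = 5.0 / 6.0, 50e6, 1.0e-3  # shear factor, core shear modulus (Pa), m^2

db, ds, dt = midspan_deflection(P, L, a, E, I, kappa, G, A)
```

As the shear modulus grows large the shear term vanishes and the Euler-Bernoulli result is recovered, which is a convenient sanity check on an FE implementation.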
Heterothermy and seasonal patterns of metabolic rate in the southern African hedgehog (Atelerix frontalis)
- Authors: Hallam, Stacey Leigh
- Date: 2011
- Subjects: Atelerix , Metabolism -- Measurement
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10683 , http://hdl.handle.net/10948/1497 , Atelerix , Metabolism -- Measurement
- Description:
Animals that inhabit unfavourable habitats and experience seasons in which the cost of maintenance exceeds the available energy resources have, over time, developed behavioural and physiological mechanisms to survive. These adaptations include changes in activity, improvement of cold tolerance by using nonshivering thermogenesis (NST), improvement of thermal conductance, reduction of body mass, or acclimation to colder temperatures (reduction of metabolic requirement). In addition, some species exhibit heterothermy, in the form of either daily torpor or longer-term hibernation. The southern African hedgehog (Atelerix frontalis) is an excellent candidate for investigating the phenomenon of heterothermy because it is a small insectivore (summer body mass ca. 300 to 400 g), burrows, inhabits harsh habitats and is not easy to find during the winter months. In this study I aimed to investigate whether A. frontalis exhibits seasonal differences in metabolic rate and, furthermore, whether this species exhibits heterothermy. The study was carried out in the Northern Cape Province, South Africa. Hedgehogs were hand-captured and their metabolic rates were measured using indirect calorimetry. Individuals were implanted with temperature dataloggers for a summer period (November 2009-January 2010) and a winter period (May-August 2009). The summer BMR of adult A. frontalis (0.448 ±0.035 mlO2/g/h, n=4) was significantly lower than their winter BMR (0.811 ±0.073 mlO2/g/h, n=4), and statistical analyses revealed that this was an effect of seasonal changes in the ambient environment. Individuals spent up to 84 percent of the time during the measurement period torpid (-8°C
- Full Text:
- Date Issued: 2011
Modelling of the crystallisation process of highly concentrated ammonium nitrate emulsions
- Authors: Simpson, Brenton
- Date: 2011
- Subjects: Explosives , Blasting , Chemical explosives , Ammonium nitrate
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10415 , http://hdl.handle.net/10948/d1012622 , Explosives , Blasting , Chemical explosives , Ammonium nitrate
- Description: Highly concentrated ammonium nitrate emulsions are extensively used as explosives in the mining industry. The emulsion is made from a supercooled aqueous salt solution with various stabilisers and an organic hydrocarbon phase, under vigorous stirring, to room temperature. The resulting emulsion is thermodynamically unstable and tends to crystallise over time, which is problematic for the transportation and pumping of the emulsion in its application. This study showed that the crystallisation process of highly concentrated ammonium nitrate emulsions can be influenced by varying the emulsion droplet size as well as the types and ratios of surfactants used during the preparation stage. The results showed that there were significant differences in the rheological properties of the freshly prepared emulsion, based on both the emulsion droplet size and the type and ratio of surfactants used. A decrease in the emulsion droplet size resulted in an increase in the elastic character, which can be explained by a more compact network organisation of droplets. In terms of the different surfactants, it was shown that the Pibsa-Imide stabilised emulsions had the highest storage modulus over the entire strain amplitude region as well as the highest shear stresses over the whole shear rate region. The study showed that the relatively slow emulsion crystallisation process can be studied using powder X-ray diffraction (PXRD). The amounts of amorphous and crystalline phases present in the sample can be effectively quantified using the Partial Or No Known Crystal Structures (PONKCS) method, which can accurately model the contribution of the amorphous halo. An external standard calibration method, which used a different amorphous material with the crystalline material to obtain a suitable calibration constant, was employed.
The results showed that the method quantified the amount of the fully crystallised emulsion to be between 80 and 90 percent, which was in agreement with the solid content added during sample preparation and confirmed by Thermal Gravimetric Analysis (TGA). The simultaneous TGA/DSC results showed the number of solid/solid peak transitions as well as a total moisture content of around 20 percent by mass in the various emulsion samples studied. The study modelled the crystallisation using the Avrami and Tobin kinetic relationships, which are commonly used for the crystallisation processes of polymers. The Avrami relationship proved useful in describing the type of crystallisation that occurred, based on literature in which values of the exponent parameter (n) between 1 and 4 relate to different types of crystallisation models. The results of this study showed that the crystallisation mechanism changed for the samples that exhibited a longer crystallisation process. The results indicated that the samples prepared with the lower Pibsa-Urea ratio showed a more sporadic crystallisation process, whereas the samples with the higher ratio of Pibsa-Urea showed a more controlled crystallisation process. The study also considered the rheological properties of the fresh emulsion, which showed that droplet size also had an influence on the stress-strain relationship of the emulsion droplets.
- Full Text:
- Date Issued: 2011
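The Avrami kinetics referred to above take the form X(t) = 1 - exp(-k t^n), and the exponent n is conventionally recovered by a straight-line fit in double-log coordinates. A minimal sketch with synthetic data; the k and n values are illustrative, not the thesis results:

```python
import numpy as np

# Avrami relation: crystallised fraction X(t) = 1 - exp(-k * t**n).
# Linearisation: ln(-ln(1 - X)) = ln(k) + n * ln(t), so a linear fit
# against ln(t) recovers the exponent n (slope) and ln(k) (intercept).

def avrami(t, k, n):
    return 1.0 - np.exp(-k * t**n)

k_true, n_true = 0.05, 2.0            # illustrative kinetic parameters
t = np.linspace(0.5, 10.0, 50)        # avoid t = 0, where log is undefined
X = avrami(t, k_true, n_true)

y = np.log(-np.log(1.0 - X))          # double-log (Avrami) transform
slope, intercept = np.polyfit(np.log(t), y, 1)
n_fit, k_fit = slope, np.exp(intercept)
```

With an exponent recovered this way, values of n between 1 and 4 are then interpreted against the standard nucleation-and-growth models, as the abstract describes.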
Modelling trends in evapotranspiration using the MODIS LAI for selected Eastern Cape catchments
- Authors: Finca, Andiswa
- Date: 2011
- Subjects: Evapotranspiration , Evapotranspiration -- South Africa -- Eastern Cape
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10651 , http://hdl.handle.net/10948/d1009517 , Evapotranspiration , Evapotranspiration -- South Africa -- Eastern Cape
- Description: Grassland is the dominant vegetation cover of many of the 19 Water Catchment Areas within South Africa. The inappropriate management of some of these grassland catchments by the communities that depend on them for their livelihoods often results in overgrazed lands with low biomass or invasive alien species. The short grass maintained by the grazing policies of many communities results in high storm flows that have an adverse effect on the quantity and quality of runoff and recharge. Catchment-scale water balances depend on accurate estimates of runoff, recharge and evapotranspiration (ET). This study focuses on the ET component of the catchment-scale water balance and explores the effect of two different grazing strategies on ET. To achieve this, two contrasting but adjacent quaternary catchments, namely P10A (a high biomass site) and Q91C (a low biomass site), were selected within the Bushman’s River Primary catchment as primary study sites. Within each catchment, a relatively homogeneous 1 km pixel was selected, representing contrasting examples of high- and low-intensity grazing. From an eleven-year MODIS leaf area index (LAI) data stack (March 2000 – 2010), 8-day LAI values were extracted for each pixel in each catchment. Using the Penman-Monteith equation, potential evapotranspiration (ET0) was calculated using data from a nearby automatic weather station. Actual evapotranspiration was estimated by adjusting ET0 using the values extracted from the MODIS LAI product. The MODIS LAI ET (ETMODIS) obtained for the eleven-year period for both 1 km pixels decreased consistently, reflecting a general trend of declining LAI throughout the Eastern Cape. The highest ETMODIS obtained from P10A was 610.3 mm (2001) and the lowest was 333.1 mm (2009); from Q91C, the highest ET obtained was 534.7 mm (2006) and the lowest was 266.2 mm (2009).
The ETMODIS results were validated for each catchment using the Open Top Chamber (OTC), which sums the water lost from vegetation and soil within the chamber. This validation was conducted during the growing season of 2010–11. Wind speed, relative humidity and temperature were measured at both the inlet and the outlet of the chamber on five clear sunny days for each 1 km pixel. ETa for the same period was compared to the OTC ET (ETOTC) using regression analysis, and a good relationship was observed (r2 of 0.7065). The relationship observed confirmed that ETOTC closely approximates ETMODIS and that the OTC can be used as a tool to validate MODIS LAI ET on clear, sunny days with low wind. In order to demonstrate proof-of-concept for the use of this modelling of ETMODIS within a Payment for Ecosystem Services framework, the approach was applied to two other quaternary catchments under communal tenure. Three land use scenarios were created for each catchment to reflect potential changes in the standing aboveground biomass. For Scenario 1, the status quo was maintained; for Scenario 2, MODIS pixels representing 28 km in each catchment were selected and the LAI of these pixels was doubled; and for Scenario 3, the LAI was halved. ETMODIS was calculated for each scenario by adjusting the ET0 data from a nearby automatic weather station with the MODIS LAI product. The results showed that the estimated annual ETMODIS obtained from the high biomass catchment was 111 mm greater than that obtained from the low biomass catchment. Comparing the scenarios, the annual ETMODIS obtained from Scenario 2 was the highest of the three scenarios for both sites. These results confirm that increased leaf area results in higher annual ETMODIS. This has a positive long-term impact on stream flow, as high grass biomass allows the rainfall to infiltrate the soil and be gradually released to the dams with a reduced magnitude of storm flows.
This approach has the potential to quantify the benefits to downstream water users of improving above-ground biomass in catchments.
- Full Text:
- Date Issued: 2011
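The ET0 step above is conventionally computed with the FAO-56 form of the Penman-Monteith equation. A minimal sketch assuming sea-level pressure and a hypothetical weather-station day (not the study's data); adjusting ET0 with the MODIS LAI product, as the study does, is a separate step not shown:

```python
import math

# FAO-56 Penman-Monteith reference evapotranspiration ET0 (mm/day).
# T: mean air temperature (deg C); Rn: net radiation (MJ/m2/day);
# G: soil heat flux (MJ/m2/day); u2: wind speed at 2 m (m/s);
# rh: relative humidity (fraction). gamma is the psychrometric constant
# at sea level (kPa/degC); it should be adjusted for station pressure.

def et0_penman_monteith(T, Rn, G, u2, rh, gamma=0.0665):
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))  # saturation vapour pressure (kPa)
    ea = rh * es                                     # actual vapour pressure (kPa)
    delta = 4098.0 * es / (T + 237.3) ** 2           # slope of es curve (kPa/degC)
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Hypothetical mild summer day:
et0 = et0_penman_monteith(T=20.0, Rn=15.0, G=0.0, u2=2.0, rh=0.60)
```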
Nearshore subtidal soft-bottom macrozoobenthic community structure in the western sector of Algoa Bay, South Africa
- Authors: Masikane, Ntuthuko Fortune
- Date: 2011
- Subjects: Benthic animals -- South Africa -- Algoa Bay , Benthos
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10684 , http://hdl.handle.net/10948/1466 , Benthic animals -- South Africa -- Algoa Bay , Benthos
- Description: The objectives of this study were to characterise the macrozoobenthic community structure of the western sector of Algoa Bay, to identify the drivers of community structure and to develop a long-term monitoring framework. Data were collected from six study sites stratified along-shore. Each site comprised three stations; most sites were located in areas directly influenced by anthropogenic activities, such as inflow from storm water drains and areas where dredged spoil was dumped. Other sites included areas in close proximity to estuary mouths. Physico-chemical parameters of the water column were measured with a YSI instrument, sediment for faunal and physico-chemical analyses was sampled with a Van Veen grab, and collected macrofauna were sedated and preserved pending analysis. In the laboratory, macrofauna were identified to the finest taxonomic resolution possible under dissecting and compound microscopes, and enumerated. Sediment samples for physico-chemical analyses were kept frozen pending analysis. Up to 187 species belonging to 137 genera and 105 families were identified. Univariate community parameters such as abundance and number of species varied significantly along-shore, generally increasing towards less wave-exposed sites. Multivariate analyses revealed that community assemblages were heterogeneously distributed along-shore, corresponding to areas where anthropogenic influences such as effluent discharge and commercial harbour activities prevailed. During the 2008 survey, species assemblages separated into six groups corresponding to the six sites, but during the 2009 survey, species assemblages separated into four groups, probably due to changes in environmental parameters such as the hydrodynamic regime. In both surveys the assemblage opposite a drainage canal (Papenkuils outfall) was distinct, being dissimilar to all other assemblages. This site was also heterogeneous over relatively small spatial scales.
Important physico-chemical variables influencing community structure during the 2008 survey included bottom measurements of temperature, salinity, dissolved oxygen, coarse sand and mud. During the 2009 survey, only bottom temperature and mud content were identified as important physico-chemical variables structuring community assemblages. The principal variable was probably the hydrodynamic regime, driving community structure at a larger scale in Algoa Bay. On a localised scale, communities were probably structured by other factors such as effluent discharges, the influence of estuary mouths and activities associated with the harbour. Given the lack of information on keystone species (regarded as good monitoring species) in Algoa Bay, it was proposed that groups that cumulatively comprise 50–75 percent of total abundance within communities be monitored annually. These include amphipods, polychaetes, cumaceans, ostracods, tanaids and bivalves. It was also proposed that areas opposite estuary mouths, effluent outfalls and the dredged spoil dumpsite be monitored. This routine monitoring programme should be accompanied by periodic hypothesis-driven research to assess the importance of stochastic events (e.g., upwelling) on macrozoobenthic community dynamics. Keywords: macrozoobenthos, soft-bottom, community assemblages, spatial distribution patterns, environmental drivers, long-term monitoring framework.
- Full Text:
- Date Issued: 2011
On the design and monitoring of photovoltaic systems for rural homes
- Authors: Williams, Nathaniel John
- Date: 2011
- Subjects: Photovoltaic cells , Dwellings -- Power supply
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10516 , http://hdl.handle.net/10948/1308 , Photovoltaic cells , Dwellings -- Power supply
- Description: It is estimated that 1.6 billion people today live without access to electricity. Most of these people live in remote rural areas in developing countries. One economic solution to this problem is the deployment of small domestic photovoltaic (PV) systems called solar home systems (SHS). In order to improve the performance and reduce the life cycle cost of these systems, accurate monitoring data from real SHSs is required. To this end, two SHSs typical of those found in the field were designed and installed, one in a rural area of the Eastern Cape of South Africa and the other in the laboratory. Monitoring systems were designed to record energy flows in the system and important environmental parameters. A novel technique was developed to correct for measurement errors occurring during the utilization of pulse width modulation charge control techniques. These errors were found to be as large as 47.6 percent. Simulations show that correction techniques produce measurement errors that are up to 20 times smaller than uncorrected values, depending upon the operating conditions. As a tool to aid in the analysis of monitoring data, a PV performance model was developed. The model, used to predict the maximum power point (MPP) power of a PV array, was able to predict MPP energy production to within 0.2 percent over the course of three days. Monitoring data from the laboratory system shows that the largest sources of energy loss are charge control, module underperformance relative to manufacturer specifications and operation of the PV array away from the MPP. These accounted for losses of approximately 18-27 percent, 15 percent and 8-11 percent of rated PV energy under standard test conditions, respectively. Energy consumed by loads on the systems was less than 50 percent of rated PV energy for both the remote and laboratory systems. Performance ratios (PR) for the laboratory system ranged from 0.38 to 0.49 for the three monitoring periods.
The remote system produced a PR of 0.46. In both systems the PV arrays appear to have been oversized. This was due to overestimation of the energy requirements of the loads on the systems. In the laboratory system, the loads, consisting of three compact fluorescent lamps and one incandescent lamp, were used to simulate a typical SHS load profile and collectively consumed only 85 percent of their rated power. The predicted load profile for the remote system proved to be significantly overestimated. The results of the monitoring project demonstrate the importance of acquiring an accurate estimate of the energy demand from loads on the system. Overestimates result in oversized arrays and energy lost to charge control, while undersized systems risk damaging system batteries and load shedding. The significant underperformance of the PV module used in the laboratory system underlines the importance of measuring module IV curves and verifying manufacturer specifications before system deployment. It was also found that significant PV array performance gains could be obtained by the use of maximum power point tracking charge controllers. Increased PV array performance leads to smaller arrays and reduced system cost.
- Full Text:
- Date Issued: 2011
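The performance ratio (PR) figures quoted in the abstract follow the standard definition of final yield over reference yield. A minimal sketch of that generic metric (per the IEC 61724 formulation), with invented numbers that are not taken from the thesis:

```python
def performance_ratio(e_out_kwh, p_rated_kwp, h_poa_kwh_per_m2, g_stc_kw_per_m2=1.0):
    """PR = final yield (kWh delivered per kWp installed) divided by
    reference yield (equivalent full-sun hours of in-plane irradiation)."""
    final_yield = e_out_kwh / p_rated_kwp
    reference_yield = h_poa_kwh_per_m2 / g_stc_kw_per_m2
    return final_yield / reference_yield

# Hypothetical period: a 50 Wp array delivering 3 kWh under 150 kWh/m^2
# of in-plane irradiation gives PR = (3 / 0.05) / 150 = 0.40, comparable
# to the 0.38-0.49 range reported for the laboratory system.
pr = performance_ratio(3.0, 0.05, 150.0)
```

A PR well below 1 captures exactly the loss mechanisms the abstract enumerates (charge control, module under-performance, operation away from MPP).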
Quality issues related to apparel merchandising in South Africa
- Authors: Das, Sweta
- Date: 2011
- Subjects: Fashion merchandising -- South Africa , Quality of products -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10447 , http://hdl.handle.net/10948/1585 , Fashion merchandising -- South Africa , Quality of products -- South Africa
- Description: The objectives of this study are to develop an understanding of the quality related issues and gaps relevant to apparel merchandising within the South African context, with a specific focus on Fabric Objective Measurement, a relatively new technology which could fruitfully be applied in South Africa, but which appears to have been largely neglected to date. Fabric Objective Measurement (FOM) represents a new generation of instrumentally measured parameters which provide a more complete picture of fabric quality, tailorability and clothing performance. The two main FOM systems, FAST and Kawabata, are discussed under FOM in terms of their applications, control charts and their worldwide utilisation. A literature review has been done on the global clothing sector as well as the South African clothing industry. The research involved a questionnaire survey of, and interviews with, major clothing and retail companies in South Africa with a specific focus on the gap in the South African clothing industry in terms of FOM and other quality related issues. The data and information so captured are presented graphically, statistically analyzed and interpreted, to arrive at the main conclusions and recommendations. Trubok, Newcastle, the only company in South Africa utilizing FOM, was visited in order to obtain hands-on experience with the FAST system as operated in a mill. Two different fabrics were tested and the control charts obtained were interpreted. According to the analysis of the questionnaires and interviews, various conclusions could be drawn. When benchmarking a product, quality emerged as the first criterion; 100 percent of retailers and manufacturers agreed on this. Most respondents stated that their fabric and garment testing is mostly done in-house, while other respondents stated that it is mostly done by their respective suppliers. The most commonly used outside laboratories are SGS and ITS. 
Merchandising and quality complement each other: with proper quality assessment the merchandising workflow becomes smooth and easy, enabling timely delivery of products. All of the respondents (100 percent) supported this fact. Retailers and manufacturers agreed that quality and merchandising are related to each other and hence help them achieve product benchmarking (statistically significant at the 95 percent confidence level). Retailers and manufacturers conduct fabric and garment tests on a regular/routine basis and mostly use knitted and woven fabrics in garment making. In addition to the above, the worldwide manufacturers and suppliers of the FAST and Kawabata systems were approached to obtain data and information about the number of such systems sold worldwide and their fields of application. This information was considered important in promoting FOM in South Africa. Only one manufacturer is presently using FAST for quality control purposes. Of the manufacturers and retailers covered, most were either unfamiliar with or totally unaware of FOM and its application. This indicates that there is considerable scope for introducing this highly advanced technology into the textile and clothing manufacturing and retail pipeline in South Africa. Half of the manufacturers and retailers (50 percent) intend to introduce certain new tests in future. The tests they plan to introduce may include FAST, which is fairly simple, reliable and productive, and enhances the quality of the garment. If used, FOM can improve quality and international competitiveness, which are currently lacking in the South African clothing sector.
- Full Text:
- Date Issued: 2011
Radiation damage in GaAs and SiC
- Authors: Janse van Vuuren, Arno
- Date: 2011
- Subjects: Gallium arsenide semiconductors
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10515 , http://hdl.handle.net/10948/1477 , Gallium arsenide semiconductors
- Description: In this dissertation the microstructure and hardness of phosphorus-implanted SiC and neutron-irradiated SiC and GaAs have been investigated. SiC is important due to its application as a barrier coating layer in coated particle fuel used in high-temperature gas-cooled reactors. The characterisation of neutron-irradiated GaAs has been included in this study in order to compare the radiation damage produced by protons and neutrons, since proton bombardment of SiC could in principle be used for out-of-reactor simulations of the neutron irradiation damage created in SiC during reactor operation. The following SiC and GaAs compounds were investigated: As-implanted and annealed single crystal 6H-SiC wafers and polycrystalline 3C-SiC bulk material implanted with phosphorus ions. As-irradiated and annealed polycrystalline 3C-SiC bulk material irradiated with fast neutrons. As-irradiated and annealed single crystal GaAs wafers irradiated with fast neutrons. The main techniques used for the analyses were transmission electron microscopy (TEM) and nano-indentation hardness testing. The following results were obtained for the investigation of implanted and irradiated SiC and GaAs: Phosphorus Implanted 6H-SiC and 3C-SiC The depth of the P+ ion damage was found to be in good agreement with predictions by TRIM 2010. Micro-diffraction of the damage region in P+ implanted 6H-SiC (dose 5×10^16 ions/cm^2) indicates that amorphization occurred and that recrystallisation of this layer occurred during annealing at 1200°C. TEM analysis revealed that the layer recrystallised in the 3C phase of SiC and twin defects also formed within the layer. Micro-diffraction of the damage region in P+ implanted 3C-SiC (dose 1×10^15 ions/cm^2) indicates that amorphization also occurred for this sample and that recrystallisation of this layer occurred during annealing at 800°C. 
Nano-hardness testing of the P+ implanted 6H-SiC indicated that the hardness of the implanted SiC was initially much lower than unimplanted SiC due to the formation of an amorphous layer during ion implantation. After annealing the implanted SiC at 800°C and 1200°C, the hardness increased due to re-crystallisation and point defect hardening. Neutron Irradiated 3C-SiC TEM investigations of neutron irradiated 3C-SiC revealed the presence of dark spot defects for SiC samples irradiated to a dose of 5.9×10^21 n/cm^2 and 9.6×10^21 n/cm^2. Neutron Irradiated GaAs TEM investigation revealed a high density of dislocation loops in the unannealed neutron irradiated GaAs. The loop diameters increased after post-irradiation annealing in the range 600 to 800 °C. The dislocation loops were found to be of interstitial type lying on the {110} cleavage planes of GaAs. This finding is in agreement with earlier studies on 300 keV proton bombarded and 1 MeV electron irradiated GaAs, where interstitial loops on {110} planes became visible after annealing at temperatures exceeding 500 °C. The small dislocation loops on the {110} planes of the neutron irradiated GaAs transformed to large loops and dislocations after annealing at 1000 °C.
- Full Text:
- Date Issued: 2011
Restoring the biodiversity of canopy species within degraded spekboom thicket
- Authors: Van der Vyver, Marius Lodewyk
- Date: 2011
- Subjects: Plant diversity , Biodiversity
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10602 , http://hdl.handle.net/10948/1564 , Plant diversity , Biodiversity
- Description: I investigated the return of plant canopy diversity to degraded spekboom thicket landscapes under restoration treatment. I attempted the reintroduction of five nursery propagated and naturally-occurring plant species in severely degraded Portulacaria afra Jacq. (spekboom) dominated thickets that have been subjected to a restoration method involving the planting of dense rows of P. afra truncheons for various time periods, and also in degraded and intact thickets. I also planted nursery propagated P. afra cuttings. An average of 30 propagules of each species was planted in each of the chosen areas in two distinct seasons that exhibited distinct rainfall peaks. Sixteen propagules of P. afra were also planted in each treatment only once. Propagules of the two thicket woody canopy species (S. longispina and P. capensis) showed a total survival of 1% and 9%, respectively. Survival of L. ferocissimum and R. obovatum was 19% and 70%, and all propagules of P. afra survived. Analyses showed that survival is primarily tied to a species effect, with R. obovatum and P. afra showing significantly better survival than the other species. Among the few other surviving species, a significant preference for overhanging canopy cover was observed. The results show little significance of restoration treatment for propagule survival, suggesting that a range of conditions is needed for the successful establishment of canopy species that likely involves a microclimate and suitable substrate created by canopy cover and litter fall, combined with an exceptional series of rainfall events. I found that the high costs involved with a biodiversity planting endeavour, and the low survival of propagules of thicket canopy plant species (P. afra excepted), render the proposed biodiversity planting restoration protocol both ecologically and economically inefficient. 
Restoration success involves the autogenic regeneration of key species or functional groups within the degraded ecosystem. Heavily degraded spekboom-dominated thicket does not spontaneously regenerate its former canopy species composition, and this state of affairs was interpreted in terms of a state-and-transition conceptual model. Floristic analyses of degraded, intact and a range of stands under restoration treatment for varying time periods at two locations in Sundays Spekboomveld revealed that the stands under restoration are progressively regenerating canopy species biodiversity with increasing restoration age, and that intact sites are still the most diverse. The high total carbon content (TCC) measured within the older restored stands at Rhinosterhoek (241 t C ha^-1 after 50 years at a depth of 50 cm) rivals that recorded for intact spekboom thickets, and the number of recruits found within older restored sites rivals intact sites sampled. The changes recorded in the above- and belowground environments potentially identify P. afra as an ecosystem engineer within spekboom dominated thickets that facilitates the build-up of carbon above- and belowground and the accompanying changes in soil quality and the unique microclimate aboveground, which enables the hypothetical threshold of the degraded state to be transcended. This restoration methodology is accordingly considered efficient, and autogenic canopy species return was found to be prominent after a period of 35-50 years of restoration treatment.
- Full Text:
- Date Issued: 2011
Statistical comparison of international size-based equity index using a mixture distribution
- Authors: Ngundze, Unathi
- Date: 2011
- Subjects: Mixture distributions (Probability theory) , Finance -- Statistics , Investment analysis , Portfolio management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10576 , http://hdl.handle.net/10948/d1012367 , Mixture distributions (Probability theory) , Finance -- Statistics , Investment analysis , Portfolio management
- Description: Investors and financial analysts spend an inordinate amount of time, resources and effort in an attempt to perfect the science of maximising the level of financial returns. To this end, the field of distribution modelling and analysis of firm size effect is important as an investment analysis and appraisal tool. Numerous studies have been conducted to determine which distribution best fits stock returns (Mandelbrot, 1963; Fama, 1965 and Akgiray and Booth, 1988). Analysis and review of earlier research has revealed that researchers claim that the returns follow a normal distribution. However, the findings have not been without their own limitations in terms of the empirical results, in that many also say that the research done does not account for the fat tails and skewness of the data. Some research studies dealing with the anomaly of firm size effect have led to the conclusion that smaller firms tend to command higher returns relative to their larger counterparts with a similar risk profile (Banz, 1981). Recently, Janse van Rensburg et al. (2009a) conducted a study in which both non-normality of stock returns and firm size effect were addressed simultaneously. They used a scale mixture of two normal distributions to compare the stock returns of large capitalisation and small capitalisation shares portfolios. The study concluded that in periods of high volatility, the small capitalisation portfolio is far more risky than the large capitalisation portfolio. In periods of low volatility they are equally risky. Janse van Rensburg et al. (2009a) identified a number of limitations to the study. These included data problems, survivorship bias, exclusion of dividends, and the use of standard statistical tests in the presence of non-normality. They concluded that it was difficult to generalise findings because of the use of only two (limited) portfolios. 
In the extension of the research, Janse van Rensburg (2009b) concluded that a scale mixture of two normal distributions provided a superior fit to any other mixture. The scope of this research is an extension of the work by Janse van Rensburg et al. (2009a) and Janse van Rensburg (2009b), with a view to addressing several of the limitations and findings of the earlier studies. The Janse van Rensburg (2009b) study was based on data from the Johannesburg Stock Exchange (JSE); this study seeks to extend their research by looking at the New York Stock Exchange (NYSE) to determine if similar results occur in developed markets. For analysis purposes, this study used the statistical software package R (R Development Core Team 2008) and its package mixtools (Young, Benaglia, Chauveau, Elmore, Hettmansperger, Hunter, Thomas, Xuan 2008). Some computation was also done using Microsoft Excel. This dissertation is arranged as follows: Chapter 2 is a literature review of some of the baseline studies and research that supports the conclusion that earlier research findings had serious limitations. Chapter 3 describes the data used in the study and gives a breakdown of portfolio formation and the methodology used in the study. Chapter 4 provides the statistical background of the methods used in this study. Chapter 5 presents the statistical analysis and distribution fitting of the data. Finally, Chapter 6 gives conclusions drawn from the results obtained in the analysis of data as well as recommendations for future work.
- Full Text:
- Date Issued: 2011
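The scale mixture of two normal distributions referred to above shares one mean across components and lets only the variances and the mixing weight differ. A sketch of fitting that parameterisation by an EM-style iteration; this is generic illustration code under that assumption, not the procedure or software of the studies cited (the abstract's analyses used R's mixtools):

```python
import numpy as np

def fit_scale_mixture(x, iters=200):
    """EM-style fit of a two-component scale mixture of normals:
    shared mean mu, variances s1 and s2, mixing weight p for component 1."""
    mu = x.mean()
    s1, s2 = 0.5 * x.var(), 2.0 * x.var()   # start one component tighter
    p = 0.5
    for _ in range(iters):
        # E-step: responsibility of the low-variance component for each point
        d1 = np.exp(-0.5 * (x - mu) ** 2 / s1) / np.sqrt(2 * np.pi * s1)
        d2 = np.exp(-0.5 * (x - mu) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        r = p * d1 / (p * d1 + (1 - p) * d2)
        # M-step: weight, precision-weighted shared mean, then variances
        p = r.mean()
        w = r / s1 + (1 - r) / s2
        mu = np.sum(w * x) / np.sum(w)
        s1 = np.sum(r * (x - mu) ** 2) / np.sum(r)
        s2 = np.sum((1 - r) * (x - mu) ** 2) / np.sum(1 - r)
    return p, mu, s1, s2
```

The fitted weight p can be read as the fraction of time the return series spends in the low-volatility regime, which is how the mixture supports the high- versus low-volatility comparison of the two portfolios.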
- Authors: Ngundze, Unathi
- Date: 2011
- Subjects: Mixture distributions (Probability theory) , Finance -- Statistics , Investment analysis , Portfolio management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10576 , http://hdl.handle.net/10948/d1012367 , Mixture distributions (Probability theory) , Finance -- Statistics , Investment analysis , Portfolio management
- Description: Investors and financial analysts spend an inordinate amount of time, resources and effort in an attempt to perfect the science of maximising the level of financial returns. To this end, the field of distribution modelling and analysis of firm size effect is important as an investment analysis and appraisal tool. Numerous studies have been conducted to determine which distribution best fits stock returns (Mandelbrot, 1963; Fama, 1965 and Akgiray and Booth, 1988). Analysis and review of earlier research has revealed that researchers claim that the returns follow a normal distribution. However, the findings have not been without their own limitations in terms of the empirical results in that many also say that the research done does not account for the fat tails and skewness of the data. Some research studies dealing with the anomaly of firm size effect have led to the conclusion that smaller firms tend to command higher returns relative to their larger counterparts with a similar risk profile (Banz, 1981). Recently, Janse van Rensburg et al. (2009a) conducted a study in which both non- normality of stock returns and firm size effect were addressed simultaneously. They used a scale mixture of two normal distributions to compare the stock returns of large capitalisation and small capitalisation shares portfolios. The study concluded that in periods of high volatility, the small capitalisation portfolio is far more risky than the large capitalisation portfolio. In periods of low volatility they are equally risky. Janse van Rensburg et al. (2009a) identified a number of limitations to the study. These included data problems, survivorship bias, exclusion of dividends, and the use of standard statistical tests in the presence of non-normality. They concluded that it was difficult to generalise findings because of the use of only two (limited) portfolios. 
In an extension of that research, Janse van Rensburg (2009b) concluded that a scale mixture of two normal distributions provided a superior fit to any other mixture considered. The scope of this research is an extension of the work by Janse van Rensburg et al. (2009a) and Janse van Rensburg (2009b), with a view to addressing several of the limitations and findings of the earlier studies. The Janse van Rensburg (2009b) study was based on data from the Johannesburg Stock Exchange (JSE); this study extends the comparison to the New York Stock Exchange (NYSE) to determine whether similar results occur in a developed market. For analysis purposes, this study used the statistical software package R (R Development Core Team, 2008) and its package mixtools (Young, Benaglia, Chauveau, Elmore, Hettmansperger, Hunter, Thomas and Xuan, 2008). Some computation was also done using Microsoft Excel. This dissertation is arranged as follows: Chapter 2 is a literature review of some of the baseline studies and of the research supporting the conclusion that earlier findings had serious limitations. Chapter 3 describes the data used in the study and gives a breakdown of portfolio formation and the methodology used. Chapter 4 provides the statistical background of the methods used in this study. Chapter 5 presents the statistical analysis and distribution fitting of the data. Finally, Chapter 6 gives conclusions drawn from the results obtained in the analysis of the data, as well as recommendations for future work.
- Full Text:
- Date Issued: 2011
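The scale mixture of two normals described above (a shared mean with a low-volatility and a high-volatility variance component) is typically fitted with the EM algorithm; the thesis used R's mixtools for this. The following is a minimal NumPy sketch of that fitting procedure, not the study's code — the function name, starting values and iteration count are illustrative assumptions:

```python
import numpy as np

def fit_scale_mixture(x, iters=200):
    """EM for a two-component scale mixture of normals:
    f(x) = p*N(x; mu, s1^2) + (1-p)*N(x; mu, s2^2), with a shared mean mu."""
    mu, p = x.mean(), 0.5
    s1, s2 = x.std() * 0.5, x.std() * 2.0   # low- and high-volatility starts
    for _ in range(iters):
        # E-step: responsibility of the low-volatility component
        d1 = p * np.exp(-0.5 * ((x - mu) / s1) ** 2) / s1
        d2 = (1 - p) * np.exp(-0.5 * ((x - mu) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: mixing weight, precision-weighted shared mean, two scales
        p = r.mean()
        w = r / s1**2 + (1 - r) / s2**2
        mu = np.sum(w * x) / np.sum(w)
        s1 = np.sqrt(np.sum(r * (x - mu) ** 2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * (x - mu) ** 2) / np.sum(1 - r))
    return p, mu, s1, s2
```

On simulated daily-return-like data, the recovered s1 and s2 separate the calm and volatile regimes, mirroring the large- versus small-capitalisation comparison in the study.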
Synthesis of bromochloromethane using phase transfer catalysis
- Authors: Brooks, Lancelot L
- Date: 2011
- Subjects: Chemistry, Analytic , Fire extinguishing agents , Chemical systems , Physical science
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10382 , http://hdl.handle.net/10948/d1008162 , Chemistry, Analytic , Fire extinguishing agents , Chemical systems , Physical science
- Description: The synthesis of bromochloromethane (BCM) in a batch reactor, using phase transfer catalysis, was investigated. During the synthetic procedure, sodium bromide (100.0 g, 0.97 mol) along with an excess amount of dichloromethane (265.0 g, 3.12 mol) was charged to a reactor containing benzyl triethylammonium chloride (13 mmol) dissolved in 50 ml of water. The bench scale reactions were all carried out in a Parr 4520 bench top pressure reactor coupled to a Parr 4841 temperature controller. The method produced a 50.0 percent yield of the product BCM after a reaction time of 12 to 13 hours. The main objective of this investigation was to optimise the abovementioned reaction with respect to yield and reactor throughput. Quantitative analysis of BCM was performed on a Focus Gas Chromatograph, fitted with a flame ionization detector and a BP20 column (30 m × 0.32 mm ID × 0.25 mm). Delta software, version 5.0, was applied for data collection and processing. The injector and detector ports were set at 250°C and 280°C, respectively. The oven temperature was set and held at 40°C for a period of 2 minutes, then gradually increased at a rate of 10°C/min to 130°C, with the final hold time set for 1 minute. An analytical method for the quantitative analysis of BCM was developed, optimised and validated. Validation of the analytical method took place over a period of three days and focused on the following validation parameters: accuracy, precision, and ruggedness. Statistical evaluation of the results obtained for precision showed that the error between individual injections is less than 2 percent for each component. However, ANOVA showed a significant difference between the mean response factors obtained over the three-day period (p-value < 0.05). It was therefore concluded that the response factors had to be determined on each day before quantitatively analysing samples. The accuracy of the analytical method was assessed by using the percent recovery method.
Results obtained showed that a mean percent recovery of 100.18 percent was obtained for BCM, with the absolute bias = 0.0004 and the percent bias = 0.18 percent. Hence the 95 percent confidence intervals for the percent recovery and the percent bias are given by (Lz, Uz) = (100.56 percent, 102.15 percent) and (LPB, UPB) = (0.56 percent, 2.15 percent), respectively. Since the 95 percent confidence interval for the percent recovery contains 100, or equivalently, the 95 percent confidence interval for the percent bias contains 0, the assay method is considered accurate and validated for BCM. In the same manner the accuracy and percent recovery for DCM and DBM were evaluated. The method was found to be accurate and validated for DBM, but slightly biased in determining the recovered amount of DCM. With the analytical method validated, the batch production process could be evaluated. A total of six process variables, namely reaction time, water amount, temperature, volume of the two phases, stirring rate, and catalyst concentration, were selected for the study. The effects of the individual variables were determined in the classical manner, by varying only the one of interest while keeping all others constant. The experimental data generated were fitted to a quadratic response surface model. The profile plots obtained from this model allowed a visual representation of the effects of the six variables. The experimental results showed that the reaction follows pseudo zero-order kinetics and that the rate of the reaction is directly proportional to the concentration of the catalyst. The reaction obeys the Arrhenius equation, and the relatively high activation energy of 87 kJ/mol signifies that the rate constant is strongly dependent on the temperature of the reaction. The results also showed that the formation of BCM is favoured by an increase in the reaction temperature, an increase in the catalyst concentration, and a high organic:aqueous phase ratio.
Thus the synthesis of BCM using phase transfer catalysis could be optimised to obtain a 100 percent yield of BCM by increasing the reaction temperature to 105°C and the concentration of the phase transfer catalyst, benzyl triethylammonium chloride, to 5.36 mol percent. The reaction time was also reduced to 6 hours.
- Full Text:
- Date Issued: 2011
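The strong temperature dependence implied by the reported activation energy of 87 kJ/mol can be illustrated directly from the Arrhenius equation, k = A·exp(−Ea/(RT)): taking the ratio of rate constants at two temperatures cancels the pre-exponential factor A. This sketch is not from the thesis; the two temperatures compared are illustrative assumptions:

```python
import math

R = 8.314          # J/(mol*K), universal gas constant
EA = 87_000.0      # J/mol, activation energy reported in the study

def rate_ratio(T1, T2, Ea=EA):
    """Ratio k(T2)/k(T1) from the Arrhenius equation k = A*exp(-Ea/(R*T));
    the pre-exponential factor A cancels in the ratio."""
    return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

# e.g. raising the reaction temperature from 85 C to 105 C (illustrative)
print(rate_ratio(273.15 + 85, 273.15 + 105))
```

With Ea = 87 kJ/mol, a 20°C increase in this range multiplies the rate constant several-fold, consistent with the study's finding that temperature strongly favours BCM formation.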
The effectiveness of livestock guarding dogs for livestock production and conservation in Namibia
- Authors: Potgieter, Gail Christine
- Date: 2011
- Subjects: Livestock protection dogs -- Namibia , Herding dogs -- Namibia , Livestock -- Predators of -- Control -- Namibia , Livestock -- Losses -- Namibia
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10680 , http://hdl.handle.net/10948/1666 , Livestock protection dogs -- Namibia , Herding dogs -- Namibia , Livestock -- Predators of -- Control -- Namibia , Livestock -- Losses -- Namibia
- Description: The use of livestock guarding dogs (LGDs) to mitigate farmer-predator conflict in Namibia was evaluated. As farmer-predator conflict has two sides, LGDs were evaluated in terms of livestock production and conservation. The main objectives in terms of livestock production were to document: 1) the perceived ability of LGDs to reduce livestock losses in a cost-effective manner; 2) the farmers’ satisfaction with LGD performance; and 3) factors influencing LGD behaviour. The main objectives in terms of conservation were to record: 1) predator killing by farmers relative to LGD introduction; 2) direct impacts of LGDs on target (damage-causing) species; and 3) the impact of LGDs on non-target species. This evaluation was conducted on LGDs bred by the Cheetah Conservation Fund (CCF) and placed on farms in Namibia. The data were collected during face-to-face interviews with farmers using LGDs. Historical data from the CCF programme were used in conjunction with a complete survey of the farmers in the CCF LGD programme during 2009-2010. In terms of livestock production, 91 percent of the LGDs (n = 65) eliminated or reduced livestock losses. Consequently, 73 percent of the farmers perceived their LGDs as economically beneficial, although a cost-benefit analysis showed that only 59 percent of the LGDs were cost-effective. Farmers were generally satisfied with the performance of their LGDs. However, farmer satisfaction was more closely linked to good LGD behaviour than to the perceived reduction in livestock losses. The most commonly reported LGD behavioural problems (n = 195) were staying at home rather than accompanying the livestock (21 percent) and chasing wildlife (19 percent). Staying-home behaviour was linked to a lack of care on subsistence farms, as high quality dog food was not consistently provided. Care for LGDs declined with LGD age on subsistence, but not commercial, farms.
In terms of conservation, predator-killing farmers killed fewer individuals in the year since LGD introduction than previously; this result was only significant for black-backed jackal Canis mesomelas. However, 37 LGDs killed jackals, nine killed baboons Papio ursinus, three killed caracals Caracal caracal and one killed a cheetah Acinonyx jubatus (n = 83). Farmers and LGDs combined killed significantly more jackals in the survey year than the same farmers (n = 36) killed before LGD introduction. Conversely, five farmers killed 3.2 ± 2.01 cheetahs each in the year before LGD introduction, whereas LGDs and these farmers combined killed only 0.2 ± 0.2 cheetahs per farm in the survey year. Only 16 LGDs (n = 83) killed non-target species. The high LGD success rate in terms of livestock production was facilitated by livestock husbandry practices in the study area. In terms of conservation, LGDs were more beneficial for apex predators than for mesopredators and had a minor impact on non-target species.
- Full Text:
- Date Issued: 2011
The influence of fire and plantation management on wetlands on the Tsitsikamma Plateau
- Authors: Hugo, Christine Denise
- Date: 2011
- Subjects: Forest management -- South Africa -- Tsitsikama Plateau , Dragonflies -- Effect of habitat modification on -- South Africa -- Tsitsikama Plateau
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10747 , http://hdl.handle.net/10948/1464 , Forest management -- South Africa -- Tsitsikama Plateau , Dragonflies -- Effect of habitat modification on -- South Africa -- Tsitsikama Plateau
- Description: Wetlands on the extensively afforested Tsitsikamma Plateau are prone to fire and, according to foresters, behave as fire channels that under bergwind conditions rapidly carry fire into plantations. The destruction of plantations causes great economic loss, and MTO would therefore prefer to afforest some smaller wetlands to limit the fire hazard. This study was carried out in the middle of a drought period and sought to determine the influence of fire, plantation management and the environment on wetlands and their component species. This study of palustrine wetlands on the Tsitsikamma Plateau identified five wetland vegetation communities, in which plant species richness was relatively low. The plant compositional structure of wetlands is influenced by wetland location, the height of the adjacent plantation and fire frequency. This study found a pronounced plant species turnover from west to east, and soil coarseness increased along the same gradient. Re-sprouters dominated the wetland communities in the Tsitsikamma, but a few populations of the obligate re-seeding ‘Near threatened’ Leucadendron conicum rely on fire for rejuvenation. Regarding dragonflies in wetlands, abundance was found to be low, while species richness was relatively high considering the absence of surface water. The study found that fire indirectly influenced dragonfly abundance and species composition by altering vegetation structure. Dragonfly abundance and species richness were generally higher in wetlands with older vegetation (≥ 9 years). Further, most dragonflies frequenting the palustrine wetland habitats were females. Since female dragonflies spend most of their time away from prime breeding habitats to escape male harassment, the study identified these wetlands as important refuge habitats for them. Dragonfly abundance is expected to increase once the drought ends; however, the overall patterns observed are likely to remain unchanged under wetter conditions.
Narrow wetlands (< 10 m) are few on the plateau and it is not advisable to sacrifice wider wetlands in the Tsitsikamma. Further, with regard to ecological processes and the wetlands’ influence on the surrounding Tsitsikamma matrix, more research is needed before wetlands may be sacrificed. To deal with the fire risk the Tsitsikamma environment poses to plantations, it is strongly recommended to establish and maintain a cleared buffer area between plantations and wetlands. Further, for vegetation rejuvenation purposes, it is important to burn wetlands at irregular intervals, but not more frequently than every nine years and not less frequently than every 25-30 years.
- Full Text:
- Date Issued: 2011
The photodecomposition of different polymorphic forms of 1,4-dihydropyridine channel blockers
- Authors: Francis, Farzaana
- Date: 2011
- Subjects: Photodegradation , Nifedipine , Nimodipine , Dihydropyridine
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10378 , http://hdl.handle.net/10948/1496 , Photodegradation , Nifedipine , Nimodipine , Dihydropyridine
- Description: 1,4-Dihydropyridines (DHPs) are a class of compounds used as calcium channel blockers in the treatment of various conditions. These compounds readily undergo photodegradation. The degradants produced have no pharmaceutical activity and render the drugs ineffective. DHPs also exhibit polymorphism. Nifedipine and Nimodipine are two such drugs. This study aimed to monitor the photodegradation of these two drugs and to establish the effect of particle size, polymorphism and β-Cyclodextrin (β-CD) on the rate of photodegradation. Different polymorphs (namely the amorphous and stable crystalline forms) of the two drugs were prepared for use in the study. Mixtures of each drug with β-CD, in a 1:1 molar ratio, were also prepared for photostability studies. The rate of photodegradation was studied with a 500 W metal halide lamp in accordance with ICH guidelines. The study employed samples on a small scale, where degradation was analysed with High Performance Liquid Chromatography, and samples on a larger scale, where degradation was monitored with Powder X-ray Diffraction. The two sets of results from the two analytical techniques were compared in terms of their quantification methods. The extent of photodegradation was suitably modelled and fitted using the Avrami-Erofeyev kinetic equation. Smaller particle size showed increased photodegradation for Nimodipine; however, the effect was insignificant for Nifedipine. For both drugs it was found that the amorphous polymorph underwent faster photodegradation. The study showed that β-CD caused an increase in photodegradation for both drugs under these experimental conditions.
- Full Text:
- Date Issued: 2011
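The Avrami-Erofeyev model mentioned above, α(t) = 1 − exp(−(kt)ⁿ), is commonly fitted after linearisation: ln(−ln(1 − α)) = n·ln(t) + n·ln(k), which is linear in ln(t). The thesis data are not reproduced here, so this NumPy sketch uses synthetic data; the function name and parameter values are illustrative assumptions, not the study's:

```python
import numpy as np

def fit_avrami(t, alpha):
    """Fit alpha = 1 - exp(-(k*t)**n) by linearising to
    ln(-ln(1 - alpha)) = n*ln(t) + n*ln(k), then ordinary least squares."""
    y = np.log(-np.log(1.0 - alpha))
    slope, intercept = np.polyfit(np.log(t), y, 1)
    n = slope                       # Avrami exponent
    k = np.exp(intercept / n)       # rate constant
    return n, k

# illustrative conversion data generated from n = 2, k = 0.1 (not thesis data)
t = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
alpha = 1.0 - np.exp(-(0.1 * t) ** 2)
n, k = fit_avrami(t, alpha)
```

The linearised fit recovers n and k exactly on noise-free data; with experimental conversion fractions the same regression yields least-squares estimates of both parameters.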