A case study : exploring a DVD driven approach for teaching and learning mathematics, at secondary school level, with a framework of blended learning
- Authors: Padayachee, Pragashni
- Date: 2010
- Subjects: Mathematics -- Study and Teaching (Secondary) -- Audio-visual aids , Mathematics -- Study and Teaching (Secondary) -- South Africa -- Port Elizabeth
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:10504 , http://hdl.handle.net/10948/1384 , Mathematics -- Study and Teaching (Secondary) -- Audio-visual aids , Mathematics -- Study and Teaching (Secondary) -- South Africa -- Port Elizabeth
- Description: Post-apartheid South Africa is witnessing an education crisis of significant proportions. The new outcomes-based education system has failed to deliver, and universities are suffering the consequences of the under-preparation of learners for tertiary studies, especially in mathematics. The educator corps is lacking, and it has become common practice for universities to deploy augmented programmes in mathematics for secondary school learners in the surrounding areas. This thesis describes a particular approach to blended learning, devised for the Incubator School Project (ISP), an initiative of the Nelson Mandela Metropolitan University (NMMU) in the Eastern Cape of South Africa. The defining feature of this blended approach is that it incorporates DVD technology, which offers an affordable and accessible option for the particular group of learners and the schools they attend. The thesis poses the research question: How did the use of the DVD approach within a blended learning environment support the learning of mathematics? This case study explores the blended approach and reports on it in six ways: firstly, qualitatively, on a questionnaire completed by learners; secondly, on interviews with learners; thirdly, on the facilitators' reports; and fourthly, quantitatively, on learner performance before and after the intervention. Fifthly, six schools are used as a case study in which the mathematics performance of learners who participated in the ISP is compared with that of those who did not. Finally, the scope of blending of this model is evaluated by means of a radar chart, adapted from an existing radar measure. This research revealed that using the DVD approach within a blended learning environment led to an improvement in learners' perceptions about mathematics, an improvement in the manner in which they learned mathematics and an extension of their mathematics knowledge, and provided learners with a supportive environment in which to learn mathematics. The elements which supported learning in this approach are presented. The findings of the study suggest that this approach impacted favourably on mathematics learning and enhanced the mathematics learning and performance of these learners. Recommendations are offered for practice, for teachers and schools, and for further research.
- Full Text:
- Date Issued: 2010
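The abstract's final evaluation step uses a radar chart adapted from an existing radar measure. As a purely illustrative sketch (the axis names and scores below are invented placeholders, not the thesis's actual blend dimensions), a radar chart of this kind can be drawn with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical blend dimensions and scores (0-5); the thesis's actual
# radar measure and its axes are not reproduced here.
dimensions = ["Face-to-face", "DVD lessons", "Printed notes",
              "Facilitator support", "Peer learning", "Assessment"]
scores = [4, 5, 3, 4, 2, 3]

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False)
# Close the polygon by repeating the first point.
angles = np.concatenate([angles, angles[:1]])
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions, fontsize=8)
ax.set_title("Scope of blending (illustrative radar chart)")
plt.show()
```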
A characterization of landslide occurrence in the Kigezi Highlands of South Western Uganda
- Authors: Nseka, Denis
- Date: 2018
- Subjects: Landslides -- Uganda , Land degradation -- Uganda , Earth movements
- Language: English
- Type: Thesis , Doctoral , DPhil
- Identifier: http://hdl.handle.net/10948/33791 , vital:33029
- Description: The frequency and magnitude of landslide occurrence in the Kigezi highlands of South Western Uganda have increased, but the key underpinnings of the occurrences are yet to be understood. The overall aim of this study was to characterize the parameters underpinning landslide occurrence in the Kigezi highlands. This information is important for predicting or identifying actual and potential landslide sites, and should inform policy, particularly in terms of developing early warning systems for landslide hazards in these highlands. The study analysed the area's topography, soil properties, and land use and cover changes underpinning the spatial-temporal distribution of landslide occurrence in the region. It focussed on selected topographic parameters including slope gradient, profile curvature, Topographic Wetness Index (TWI), Stream Power Index (SPI) and Topographic Position Index (TPI). These factors were parameterized in the field and in a GIS environment using a 10 m Digital Elevation Model. Sixty-five landslide features were surveyed and mapped. Soil properties were characterised in relation to slope position. Onsite soil property analysis was conducted within the landslide scars, auger holes and full-profile representative sites. Furthermore, soil infiltration and strength tests, as well as clay mineralogy analyses, were also conducted. An analysis of the spatial-temporal land use and cover changes was undertaken using satellite imagery spanning the period between 1985 and 2015. Landslides were noted to concentrate along topographic hollows in the landscape. Occurrence is dominant where slope gradient is between 25˚ and 35˚, profile curvature between 0.1 and 5, TWI between 8 and 18, SPI >10 and TPI between -1 and 1. Landslides are less pronounced on slope zones where slope gradient is <15˚ or >45˚, profile curvature <0, TWI <8 or >18, SPI <10 and TPI >1. Deep soil profiles ranging between 2.5 and 7 metres are a major characteristic of the study area. Soils are characterized by clay pans at depths ranging between 0.75 and 3 metres within the profiles. The study area is dominated by clay textures, except for the uppermost surface horizons, which are loamy sand. In all surface horizons analysed, the percentages of sand, silt and clay ranged from 33 to 55%, 22 to 40% and 10 to 30% respectively. In the deeper horizons, sand reduces drastically to less than 23%, while clay increases to greater than 50%; such clay contents, exceeding 35%, are very high. By implication, soils with such high clay content and plasticity index are considered Vertisols, with a profound influence on the occurrence of landslides. The topsoil predominantly contains quartz, while subsurface horizons have considerable amounts of illite/muscovite as the dominant clay minerals, ranging from 43% to 47%. The liquid limit, plasticity index, computed weighted plasticity index (PIw), expansiveness (ɛex) and dispersion, ranging from 50, 22, 17, 10 and 23 to 66, 44, 34, 54 and 64 respectively, also have strong implications for landslide occurrence. Landslides are not normally experienced during or immediately after extreme rainfall events but occur later in the rainfall season. By implication, this time lag between rainfall distribution and landslide occurrence is due to the initial infiltration through quartz-dominated upper soil layers before the illite/muscovite clays in the lower soil horizons become saturated.
Whereas forest cover reduced from 40% in 1985 to 8% in 2015, cultivated land and settlements increased from 16% and 11% to 52% and 25% respectively over the same period. The distribution of cultivated land decreased by 59% in lower slope sections within the gradient group <15˚, but increased by over 85% in upper sections within the gradient cluster 25˚ to 35˚ during the study period. There has been a shift of cultivated land to the steeper, sensitive upper slope elements associated with landslides in the study area. More than 50% of the landslides occur on cultivated land and 20% on settlements, while less than 15% and 10% occur on grassland and on forests with degraded areas respectively. Landslides in the Kigezi highlands are triggered by a complex interaction of multiple factors, including dynamic triggers and ground condition variables. Topographic hollows are convergence zones within the landscape where all the parameters interact to cause landslides; they are therefore both potential and actual landslide sites in the study area. Characterized by deep soil horizons with high clay content dominated by illite/muscovite minerals in the subsoils, and by profile concave forms with moderately steep slopes, topographic hollows are the slope elements most vulnerable to landslide occurrence. The spatial-temporal patterns of landslide occurrence in the study area have changed due to increased cultivation of steep middle and upper slopes. A close spatial and temporal correlation between land use/cover changes and landslide occurrence is discernible. An understanding of these topographic, pedological and land use/cover parameters and their influence on landslide occurrence is important in land management. It is now possible to identify and predict actual and potential landslide zones, and also to demarcate safer zones for community activities. The information generated about the area's topographic, pedological and land cover characteristics should help in vulnerability mitigation and enhance community resilience to landslide hazards in this fragile highland ecosystem. This can be done by designating zones for community activities while avoiding potential landslide zones. It is also recommended that tree cover be restored in the highlands and that farmers be encouraged to re-establish terrace farming while avoiding cultivation of sensitive, steep middle and upper slope sections.
- Full Text:
- Date Issued: 2018
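The parameter ranges reported above lend themselves to a simple grid classifier. The sketch below is a hypothetical illustration rather than the study's own method: it flags DEM cells whose precomputed slope, curvature, TWI, SPI and TPI values fall inside the reported landslide-dominant ranges.

```python
import numpy as np

def flag_landslide_prone(slope_deg, curvature, twi, spi, tpi):
    """Flag grid cells inside the parameter ranges the study reports
    as dominant for landslide occurrence. All inputs are NumPy arrays
    of identical shape, assumed precomputed from a 10 m DEM in a GIS.
    Illustrative only; not the study's actual workflow."""
    return ((slope_deg >= 25) & (slope_deg <= 35) &
            (curvature >= 0.1) & (curvature <= 5) &
            (twi >= 8) & (twi <= 18) &
            (spi > 10) &
            (tpi >= -1) & (tpi <= 1))

# Toy example on random grids, to demonstrate the Boolean logic only.
rng = np.random.default_rng(0)
shape = (100, 100)
mask = flag_landslide_prone(rng.uniform(0, 50, shape),
                            rng.uniform(-2, 6, shape),
                            rng.uniform(4, 20, shape),
                            rng.uniform(0, 20, shape),
                            rng.uniform(-3, 3, shape))
print(f"{mask.mean():.1%} of cells flagged as landslide-prone")
```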
A chemo-enzymatic process for the production of beta-thymidine, a key intermediate in antiretroviral manufacture
- Authors: Gordon, Gregory Ernest Robert
- Date: 2010
- Subjects: HIV infections -- Treatment -- South Africa , HIV infections -- South Africa -- Prevention , Antiretroviral agents
- Language: English
- Type: Thesis , Doctoral , DTech
- Identifier: vital:10423 , http://hdl.handle.net/10948/d1016217
- Description: The socio-economic impact of HIV/AIDS on South Africa has resulted in lower gross domestic product, loss of skills in key sectors such as education, and increased health-care costs in providing access to treatment. Currently, active pharmaceutical ingredients (APIs) such as stavudine (d4T) and azidothymidine (AZT) are imported from India and China, while formulation is conducted locally. A strategy was initiated between CSIR Biosciences and LIFElab, under the auspices of Arvir Technologies, to investigate the feasibility of local antiretroviral manufacture (d4T and AZT) or the manufacture of a key intermediate such as β-thymidine (dT). Several advantages are associated with successful implementation of this strategy, including ensuring a local supply of APIs, thus reducing reliance on procurement from foreign sources and reducing the effect of foreign exchange rate fluctuations on providing cost-effective access to treatment. A local supply source would also reduce imports and thus aid the balance-of-payments deficit, and in addition provide stimulus to the local pharmaceutical manufacturing industry (which has been in decline for several decades), resulting in increased skills and employment opportunities. This thesis describes the development of a superior chemo-enzymatic process for the production of β-thymidine (72 percent yield, prior to isolation), a key intermediate in the preparation of antiretrovirals. Alternative processes based purely on chemical or bioprocess transformations to prepare either 5-methyluridine (5-MU) or dT suffer from several disadvantages: lengthy transformations due to protection/deprotection strategies, low selectivities and product yields (30 percent in the chemical process), and isolation of the product from dilute process streams requiring the use of large, uneconomical reactors (bioprocess). This contributes significantly to the cost of d4T and AZT manufacture. Our novel chemo-enzymatic process comprises a biocatalytic reaction for the production of 5-MU, with subsequent chemical transformation into dT (3 steps), circumventing the limitations of the chemical and bioprocess routes. During the course of this project's development, the β-thymidine selling price declined from 175 $/kg (2005) to 100 $/kg (2008). However, the process described in this work is still competitive at the current β-thymidine selling price of 100 $/kg. The process economics show that, with further optimization and an increase in the isolated dT yield from 70 percent to 90 percent, the variable cost decreases from 136 $/kg to 110 $/kg. The increase in isolated yield is highly probable, based on the solubility data of β-thymidine. The decrease in β-thymidine selling price and the technological improvement in dT manufacture should translate into lower API manufacturing costs and more cost-effective access to treatment. Our novel biocatalytic process producing 5-MU uses a coupled enzyme system employing purine nucleoside phosphorylase (PNP) and pyrimidine nucleoside phosphorylase (PyNP). The overall transglycosylation reaction may be decoupled into a phosphorolysis reaction (PNP) and a synthesis reaction (PyNP). During the phosphorolysis reaction, guanosine is converted into guanine and ribose-1-phosphate (R-1-P) in the presence of the PNP enzyme. The reaction intermediate R-1-P is then coupled to thymine in the presence of the PyNP enzyme during the synthesis reaction, producing 5-MU.
The process was scaled up from lab scale to bench scale (10–20 L) and demonstrated to be robust and reproducible. This is evident from the average guanosine conversion (94.7 percent ± 2.03), 5-MU yield (88.2 percent ± 6.21) and mole balance (104 percent ± 7.61) obtained at bench scale (3 replicates, 10 L). The reaction was carried out at reactor productivities of between 7 and 11 g·L⁻¹·h⁻¹. The integration of the biocatalytic and chemical processes was successfully carried out, showing that 5-MU produced using our novel biocatalytic process behaved similarly to commercially available 5-MU (ex Dayang Chemicals, China). A PCT patent application (Ref. No. P44422PC01) on this chemo-enzymatic process has been filed, and public-private partnerships are currently being explored through Arvir Technologies to evaluate and validate this technology at one-ton scale.
- Full Text:
- Date Issued: 2010
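The reported economics (variable cost falling from 136 $/kg at 70 percent isolated yield to 110 $/kg at 90 percent) can be approximated with a first-order assumption that cost per kilogram scales inversely with yield. This is an illustrative sketch, not the thesis's cost model; the fact that pure inverse scaling slightly over-predicts the saving suggests the actual model includes yield-independent terms.

```python
def variable_cost_at_yield(base_cost, base_yield, new_yield):
    """First-order estimate: cost per kg of product scales inversely
    with isolated yield (the same raw-material and processing inputs
    are spread over more recovered product). Illustrative assumption,
    not the thesis's cost model."""
    return base_cost * base_yield / new_yield

cost_90 = variable_cost_at_yield(136.0, 0.70, 0.90)
print(f"Estimated variable cost at 90% yield: {cost_90:.0f} $/kg")
# Prints ~106 $/kg, close to the 110 $/kg reported in the abstract;
# the gap suggests some costs in the actual model do not scale with yield.
```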
A combination of platinum anticancer drugs and mangiferin causes increased efficacy in cancer cell lines
- Authors: Du Plessis-Stoman, Debbie
- Date: 2010
- Subjects: Cancer -- Chemotherapy , Antineoplastic agents , Platinum compounds -- Therapeutic use , Cancer cells
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:10338 , http://hdl.handle.net/10948/d1016160
- Description: This thesis deals mainly with some biochemical aspects of the efficacy of novel platinum anticancer compounds, alone and in combination with mangiferin, as part of a broader study involving both chemistry and biochemistry. Various novel diamine and N-S donor chelate compounds of platinum(II) and platinum(IV) were developed, in which factors such as stereochemistry, ligand exchange rate and biocompatibility were considered as additional parameters. In first-order testing, each of these compounds was assessed for its "killing" potential by comparing its rate of killing over a period of 48 hours with those of cisplatin and oxaliplatin. Numerous novel compounds were tested in this way, using the MTT cell viability assay and the three cancer cell lines MCF7, HT29 and HeLa. Although only a few could be regarded as equal to or better than cisplatin, CPA7 and oxaliplatin, the testing of these compounds on cancer cells provided useful knowledge for the further development of novel compounds. Three of the better compounds, namely Yol 25, Yol 29.1 and Mar 4.1.4, were selected for further studies, together with oxaliplatin and CPA7 as positive controls, to obtain more detailed knowledge of their anticancer action, both alone and in combination with mangiferin. In addition, resistant cells were produced for each of the three cell lines tested and for all the selected compounds, both in the presence and in the absence of mangiferin. The effects of these treatments on the activation of NFĸB in normal and resistant cell lines were also investigated. All the compounds induced apoptosis in the cell lines tested and altered the cell cycle at one or more phases. Additionally, combining these compounds with mangiferin enhanced the above-mentioned effects. Mangiferin decreases the IC50 values of the platinum drugs by up to 3.4 times and, although mangiferin alone did not induce cell cycle arrest, mangiferin in combination with oxaliplatin and Yol 25 showed an earlier and greatly enhanced delay in the S-phase, while cells treated with CPA7, Yol 29.1 and Mar 4.1.4 in combination with mangiferin showed a later, but greatly enhanced, delay in the S-phase. It was also found that mangiferin acts as an NFĸB inhibitor when applied in combination with these drugs, which in turn reduces the occurrence of resistance in the cell lines. Resistance to oxaliplatin was counteracted by the combination with mangiferin in HeLa and HT29, but not in MCF7 cells, while resistance to CPA7 was counteracted only in the MCF7 cell line. Yol 25 and Mar 4.1.4 did not seem to induce resistance in HeLa and MCF7 cells, but did in HT29 cells, whereas Yol 29.1 caused resistance in HeLa and HT29 cells, but not in MCF7 cells. Finally, an effort was made to evaluate the different compounds by comparing their properties relating to anticancer action with and without the addition of mangiferin.
- Full Text:
- Date Issued: 2010
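The reported fold-change in IC50 (up to 3.4 times lower in combination with mangiferin) is the kind of quantity obtained by fitting dose-response curves to MTT viability data. The sketch below uses invented viability numbers and a generic two-parameter Hill model, not the thesis's data or fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ic50, hill_slope):
    """Two-parameter dose-response curve: viability falls from 1 to 0."""
    return 1.0 / (1.0 + (dose / ic50) ** hill_slope)

doses = np.array([0.1, 0.3, 1, 3, 10, 30, 100])  # µM, hypothetical
viab_alone = np.array([0.98, 0.95, 0.85, 0.62, 0.35, 0.15, 0.05])
viab_combo = np.array([0.95, 0.85, 0.60, 0.30, 0.12, 0.05, 0.02])

# Fit each curve and compare the fitted IC50 values.
popt_alone, _ = curve_fit(hill, doses, viab_alone, p0=[5.0, 1.0])
popt_combo, _ = curve_fit(hill, doses, viab_combo, p0=[5.0, 1.0])
fold_change = popt_alone[0] / popt_combo[0]
print(f"IC50 alone: {popt_alone[0]:.2f} µM, "
      f"with mangiferin: {popt_combo[0]:.2f} µM, "
      f"fold change: {fold_change:.1f}x")
```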
A commercial process development for plant food formulation using polyprotic acids from natural extracts as chelating agents
- Authors: Ndibewu, Peter Papoh
- Date: 2005
- Subjects: Chelates , Lemon juice , Liquid fertilizers
- Language: English
- Type: Thesis , Doctoral , DTech (Chemistry)
- Identifier: vital:10368 , http://hdl.handle.net/10948/153 , Chelates , Lemon juice , Liquid fertilizers
- Description: The citrus industry is one of South Africa's largest agricultural sectors in terms of export earnings, with lemon fruit and juice as a trendsetter owing to their high-grade quality. According to growers, the Eastern Cape Province of South Africa alone produces an excess of about 10,000-14,000 tons of lemon juice, which at present has no economic value due to its sour taste and "bitterness". As a result of this excess, and in order to make use of the polyprotic acids naturally occurring in lemon juice, four fertilizer nutrient mixtures were formulated using lemon juice as a base. Following a conceptual scientific approach, characterization (determination of physico-chemical and functional properties) of Eureka lemon fruit juices was undertaken, followed by smaller-scale batch formulation experiments. Since these lemon juice-based fertilizer mixtures are prepared following standard liquid fertilizer formulation guidelines, a field test was conducted to evaluate their potential effectiveness in influencing plant growth. Growth chamber testing on tomato plants revealed high growth response potential (> 99.9 % certainty) in two of the semi-organic mixtures formulated, while the organic mixture showed a relatively good growth rate compared with the control (pure tap water). According to a statistical (ANOVA) comparison, two of the semi-organic mixtures performed considerably better than the two commercial samples evaluated. A potential benefit of these nutrient mixtures over similar liquid fertilizer products on the market is that most nutrients are chelated and dissolved in solution. The mixtures also contain all the necessary nutrients, including plant growth substances, required for healthier plant growth. The most important socio-economic impact is the value added to the technology chain in the citrus industry. The use of fluid fertilizers in significant quantities is less than twenty years old. Nevertheless, growth has been so rapid that in South Africa demand for mixed liquid fertilizer has increased greatly, from 90 000 tons of NPK and blended micronutrients in 1955 to more than 600 000 tons per annum today (Report 41/2003, Department of Minerals and Energy). The liquid fertilizer market is sparsely specialized, with major competitors such as Omnia, Kynoch and Foskor supplying more than 50 % of market demand. Among the nutrient mixtures formulated, mixture one is an NPK (1-1-2) based nutrient mixture containing both secondary nutrients (0.5 % Mg and 1.0 % Ca) and seven micronutrients (0.1 % Fe, 0.05 % Cu, 0.05 % Zn, 0.05 % Mn, 0.02 % B, 0.0005 % Mo and 0.0005 % Co). Its balanced composition (containing all essential plant nutrients) gives this formula the potential to be used as a general-purpose fertilization mixture for all stages of plant growth. Mixture two contains essentially the micronutrients, in higher concentrations than mixture one (0.3 % Fe, 0.3 % Cu, 0.1 % Zn, 0.2 % Mn, 0.02 % B, 0.0005 % Mo and 0.0005 % Co), except for boron, molybdenum and cobalt. The micronutrient concentrations in this mixture are high enough for it to be used to supplement nutrition in plants with critical micronutrient-deficiency symptoms.
Mixture three is very similar to mixture two (1.0 % Fe, 0.05 % Cu, 0.05 % Zn, 0.05 % Mn, 0.05 % B, 0.0005 % Mo and 0.0005 % Co), except that the concentrations of all seven micronutrients are considerably lower than those in mixture two; the iron concentration, however, is as high as 1.0 %. This mixture has the potential to be used in highly iron-deficient situations. Mixture four is an organic formula with relatively low nutrient concentrations (NPK 0.02-0.02-1, 0.27 % Mg, 0.02 % Ca, 0.008 % Fe, 0.26 % Cu, 0.012 % Zn, 0.009 % Mn). Nevertheless, this mixture is appealing for organically grown crops, where the use of chemicals is prohibited by standards. These lemon juice-based nutrient mixtures were further characterized and tested for stability and storability over a period of eight weeks. This study revealed no major change in physical quality (colour, pH and "salt out" effect). The basic formulation methodology is a two-step procedure involving filtration of the lemon juice to remove membranous material, mixing at ambient temperature, and stabilization of the nutrient mixtures. For the organic nutrient formula, however, filtration follows the extraction of nutrients from plant materials using the lemon juice.
- Full Text:
- Date Issued: 2005
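The growth-chamber comparison above relies on a one-way ANOVA across treatment groups. A minimal sketch with scipy follows, using invented growth figures rather than the thesis's data.

```python
from scipy.stats import f_oneway

# Hypothetical tomato growth increments (cm) per treatment group;
# the thesis's raw measurements are not reproduced here.
control      = [4.1, 3.8, 4.5, 4.0, 3.9]
semi_organic = [6.2, 6.8, 6.5, 7.0, 6.4]
organic      = [4.8, 5.1, 4.6, 5.0, 4.9]

# One-way ANOVA: does mean growth differ between treatments?
f_stat, p_value = f_oneway(control, semi_organic, organic)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.001 would correspond to the > 99.9 % certainty
# of a treatment effect reported in the abstract.
```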
A comparative study of artificial neural networks and physics models as simulators in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2019
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/30789 , vital:31131
- Description: The Evolutionary Robotics (ER) process is a technique that applies evolutionary optimization algorithms to the task of automatically developing, or evolving, robotic control programs. These control programs, or simply controllers, are evolved in order to allow a robot to perform a required task. During the ER process, use is often made of robotic simulators to evaluate the performance of candidate controllers that are produced in the course of the controller evolution process. Such simulators accelerate and otherwise simplify the controller evolution process, as opposed to the more arduous process of evaluating controllers in the real world without use of simulation. To date, the vast majority of simulators that have been applied in ER are physics-based models which are constructed by taking into account the underlying physics governing the operation of the robotic system in question. An alternative approach to simulator implementation in ER is the usage of Artificial Neural Networks (ANNs) as simulators in the ER process. Such simulators are referred to as Simulator Neural Networks (SNNs). Previous studies have indicated that SNNs can successfully be used as an alternative to physics-based simulators in the ER process on various robotic platforms. At the commencement of the current study it was not, however, known how this relatively new method of simulation would compare to traditional physics-based simulation approaches in ER. The study presented in this thesis thus endeavoured to quantitatively compare SNNs and physics-based models as simulators in the ER process. In order to conduct this comparative study, both SNNs and physics simulators were constructed for the modelling of three different robotic platforms: a differentially-steered robot, a wheeled inverted pendulum robot and a hexapod robot. Each of these two types of simulation was then used in simulation-based evolution processes to evolve controllers for each robotic platform. During these controller evolution processes, the SNNs and physics models were compared in terms of their accuracy in making predictions of robotic behaviour, their computational efficiency in arriving at these predictions, the human effort required to construct each simulator and, most importantly, the real-world performance of controllers evolved by making use of each simulator. The results obtained in this study illustrated experimentally that SNNs were, in the majority of cases, able to make more accurate predictions than the physics-based models and these SNNs were arguably simpler to construct than the physics simulators. Additionally, SNNs were also shown to be a computationally efficient alternative to physics-based simulators in ER and, again in the majority of cases, these SNNs were able to produce controllers which outperformed those evolved in the physics-based simulators, when these controllers were uploaded to the real-world robots. The results of this thesis thus suggest that SNNs are a viable alternative to more commonly-used physics simulators in ER and further investigation of the potential of this simulation technique appears warranted.
- Full Text:
- Date Issued: 2019
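The core idea of an SNN, a network trained on recorded robot behaviour that then stands in for the physical robot (or a physics model) during fitness evaluation, can be sketched as follows. The robot kinematics, network size and training data here are invented stand-ins, not the thesis's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical differentially-steered robot: a command is a pair of
# wheel speeds, and the simulator network learns the resulting change
# in pose (dx, dy, dheading) over one time step.
rng = np.random.default_rng(1)

def kinematics(cmd, dt=0.1, wheelbase=0.2):
    """Toy ground-truth model standing in for real-robot recordings."""
    vl, vr = cmd[:, 0], cmd[:, 1]
    v, w = (vl + vr) / 2.0, (vr - vl) / wheelbase
    return np.stack([v * dt, np.zeros_like(v), w * dt], axis=1)

commands = rng.uniform(-1.0, 1.0, size=(2000, 2))
deltas = kinematics(commands)  # in practice: measured on the robot

# Train the simulator network on (command -> pose change) pairs.
snn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(commands, deltas)

# During controller evolution, fitness would be evaluated by rolling
# the SNN forward instead of running the physical robot.
print(snn.predict([[0.5, 0.8]]))  # predicted (dx, dy, dheading)
```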
A comparison of programming notations for a tertiary level introductory programming course
- Authors: Cilliers, Charmain Barbara
- Date: 2004
- Subjects: Computer programming -- Study and teaching (Higher) -- South Africa , Computer programmers -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:11093 , http://hdl.handle.net/10948/d1019679
- Description: Increasing pressure from national government to improve throughput at South African tertiary education institutions presents challenges to educators of introductory programming courses. In response, educators must adopt effective methods and strategies that encourage novice programmers to be successful in such courses. One approach that seeks to increase and maintain satisfactory throughput is the modification of the teaching model in these courses by adjusting presentation techniques. This thesis investigates the effect of integrating an experimental iconic programming notation and its associated development environment with existing conventional textual technological support in the teaching model of a tertiary level introductory programming course. The investigation compares the performance achievement of novice programmers using only conventional textual technological support with that of novice programmers using the integrated iconic and conventional textual technological support. In preparation for the investigation, interpretation of existing knowledge on the behaviour of novice programmers while learning to program resulted in a novel framework of eight novice programmer requirements for technological support in an introductory programming course. This framework is applied in the examination of existing categories of technological support as well as in the design of new technological support for novice programmers learning to program. It thus provides information for the selection of existing, and the design of new, introductory programming technological support. The findings of the investigation provide strong evidence that the performance achievement of novice programmers in a tertiary level introductory programming course improves significantly with the inclusion of iconic technological support in the teaching model. The benefits are particularly evident in the portion of the novice programmer population identified as being at risk of not succeeding in the course. Novice programmers identified as at risk perform substantially better when using iconic technological support concurrently with conventional textual technological support than their peers who use only the latter form. Considerably more at-risk novice programmers using the integrated form of technological support are in fact successful in the introductory programming course when compared with their counterparts who use conventional textual technological support only. The contributions of this thesis address deficiencies in current documented research. These contributions are primarily apparent in a number of distinct areas, namely:
• formalisation of a novel framework of novice programmer requirements for technological support in an introductory programming course;
• application of the framework as a formal evaluation technique;
• application of the framework in the design of a visual iconic programming notation and development environment;
• enhancement of existing empirical evidence and experimental research methodology typically applied to studies in programming; as well as
• a proposal for a modified introductory programming course teaching model.
The thesis has effectively applied substantial existing research on the cognitive model of the novice programmer as well as that on experimental technological support.
The increase in throughput to the recommended rate of 75 percent in the tertiary level introductory programming course at the University of Port Elizabeth is attributed solely to the incorporation of iconic technological support in the teaching model of the course.
- Full Text:
- Date Issued: 2004
A context-aware model to improve usability of information presented on mobile devices
- Authors: Ntawanga, Felix Fred
- Date: 2014
- Subjects: Context-aware computing , Online information services
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:10495 , http://hdl.handle.net/10948/d1020768
- Description: Online information access on mobile devices is increasing as a result of the growth in the use of Internet-enabled handheld (or pocket-size) devices. The combined influence of recent enabling technologies such as Web 2.0, mobile app stores and improved wireless networks has driven the increase in online applications that allow users to access various types of information on mobile devices regardless of time and location. Examples of such applications (usually shortened to apps) include social media apps such as the Facebook™ and Twitter™ apps, banking apps such as the Standard Bank (South Africa)™ Mobile Banking app and the First National Bank (FNB) Banking™ app, and news apps such as the News24™ and BBC™ News apps. Online businesses involved in buying, selling and business transaction processing activities via the Internet have exploited the opportunity to extend electronic commerce (e-commerce) initiatives into mobile commerce (m-commerce). Online businesses that interact with end-user customers implement business-to-consumer (B2C) m-commerce applications that enable customers to access and browse product catalogue information on mobile devices, anytime, anywhere. Customers accessing electronic product catalogue information on a mobile device face a number of challenges, such as long lists of products presented on a small screen and long information download times. These challenges originate mainly from the limiting and dynamic nature of the environment in which mobile apps operate, for example changing location, bandwidth fluctuations, and diverse and limited device features, collectively referred to as context. The goal of this research was to design and implement a context-aware model that can be incorporated into an m-commerce application in order to improve the presentation of product catalogue information on m-commerce storefronts. The motivation for selecting product catalogues is prompted by literature indicating that improved presentation of information in m-commerce (and e-commerce) applications has a positive impact on the usability of the websites. Usable m-commerce (and e-commerce) websites improve efficiency in consumer behaviour, which impacts sales, profits and business growth. The context-aware model aimed to collect context information within the user environment and utilise it to determine the optimal retrieval and presentation of product catalogues in m-commerce. An integrated logical context sensor and mathematical algorithms were implemented in the context-aware model. The integrated logical context sensor was responsible for collecting different types of predetermined context information, such as device specifications or capabilities, connection bandwidth, location and time of day, as well as the user profile. The algorithms transformed the collected context information into usable formats and enabled optimal retrieval and presentation of product catalogue data on a specific mobile device. Open-source tools were utilised to implement the components of the model, including HTML5, PHP, JavaScript and a MySQL database. The context-aware model was incorporated into an existing m-commerce application. Two user evaluation studies were conducted during the course of the research. The first study evaluated the accuracy of the information collected by the context sensor component of the model; this survey was conducted with a sample of 30 users from different countries across the world. Between the context sensor survey and the main evaluation, a pilot study was conducted with a sample of 19 users, highly experienced in mobile application development and use, from SAP Next Business and Technology, Africa. Finally, an overall user evaluation study was conducted with a sample of 30 users from Kgautswane, a remote area in Limpopo Province, South Africa. The results obtained indicate that the context-aware model was able to determine accurate context information in real time and to effectively determine how much product information should be retrieved and how the information should be presented on a mobile device interface. Two main contributions emerged from the research. First, the research contributed to the field of mobile human-computer interaction: techniques for evaluating and improving the usability of mobile applications were demonstrated. Second, the research made a significant contribution to the emerging field of context-aware computing, bringing a clarity that is lacking in current research despite the field's proven impact on improving the usability of applications. Researchers can utilise the contributions made in this research to develop further techniques and usable context-aware solutions.
- Full Text:
- Date Issued: 2014
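To make the retrieval-and-presentation step concrete, the following minimal Python sketch illustrates the kind of logic the abstract describes; the field names, thresholds and the 80-pixel row estimate are illustrative assumptions, not the thesis's actual algorithms.

from dataclasses import dataclass

@dataclass
class Context:
    bandwidth_kbps: float   # measured connection bandwidth
    screen_height_px: int   # device capability from the logical context sensor
    peak_hours: bool        # derived from time of day

def presentation_plan(ctx: Context) -> dict:
    """Decide how much catalogue data to retrieve and how to present it."""
    # Fewer, lighter items on slow connections or at peak times.
    if ctx.bandwidth_kbps < 256 or ctx.peak_hours:
        page_size, with_images = 5, False
    elif ctx.bandwidth_kbps < 1024:
        page_size, with_images = 10, True
    else:
        page_size, with_images = 20, True
    # Never request more rows than fit on screen, at roughly 80 px per row.
    page_size = min(page_size, max(1, ctx.screen_height_px // 80))
    return {"page_size": page_size, "with_images": with_images}

print(presentation_plan(Context(bandwidth_kbps=180, screen_height_px=480, peak_hours=False)))
# -> {'page_size': 5, 'with_images': False}

The same shape of rule extends naturally to the other context types the abstract lists, such as location and the user profile.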
A contribution to the theory of prime modules
- Authors: Ssevviiri, David
- Date: 2013
- Subjects: Modules (Algebra) , Radical theory , Rings (Algebra)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:10510 , http://hdl.handle.net/10948/d1019923
- Description: This thesis is aimed at generalizing notions of rings to modules. In particular, notions of completely prime ideals, s-prime ideals, 2-primal rings and nilpotency of elements of rings are respectively generalized to completely prime submodules and classical completely prime submodules, s-prime submodules, 2-primal modules and nilpotency of elements of modules. Properties and radicals that arise from each of these notions are studied.
- Full Text:
- Date Issued: 2013
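For readers unfamiliar with these generalisations, the following LaTeX fragment states one standard definition of a completely prime submodule as it appears in the wider literature on this topic; the exact formulations adopted in the thesis may differ in detail.

% One standard definition from the literature; the thesis's own
% formulation may differ in detail.
Let $R$ be a ring and $M$ a left $R$-module. A proper submodule
$P \subsetneq M$ is \emph{completely prime} if for all $a \in R$ and
$m \in M$,
\[
  am \in P \;\Longrightarrow\; m \in P \quad\text{or}\quad aM \subseteq P,
\]
which generalises the defining condition of a completely prime ideal
$P$ of $R$, namely $ab \in P \Rightarrow a \in P$ or $b \in P$.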
A framework for designing ambient assisted living services for disabled individuals
- Authors: Kyazze, Michael
- Date: 2018
- Subjects: Assistive computer technology , Computers and people with disabilities Self-help devices for people with disabilities People with disabilities -- Means of communication -- Technological innovations Communication devices for people with disabilities
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/31240 , vital:31347
- Description: Physically disabled individuals face a number of challenges when carrying out their everyday activities, such as moving around, communicating with others and personal care. One way of overcoming these challenges is by using personal assistants. An alternative is to enable independence through assistive technology. This research aimed to investigate how physically disabled individuals experience these challenges, and how assistive technology can enable them to be more independent. In order to achieve the goal of this research, existing literature was reviewed on disability, assisted living and interaction techniques. The literature study on disability identified some of the challenges faced by disabled individuals in their daily lives. In order to contextualise these challenges, interview studies with eighteen disabled individuals and twelve personal assistants were carried out in Kampala, Uganda and Port Elizabeth, South Africa. The participants from both Uganda and South Africa were limited to those living in urban areas. The Ugandan participants noted that, whereas technology may assist their daily lives, their most essential needs are basic disability support aids such as wheelchairs and better long canes. This was in contrast with the South African participants, who have access to basic disability support aids. The South African participants identified their key needs as controlling an electronic environment (e.g. house lights) without assistance, using a mobile phone, and using a computer without assistance. The interviews narrowed the scope down to individuals with quadriplegia, specifically individuals who have limited hand use but can comfortably speak, move their heads, and make gestures such as a head shake and nod. Literature on assisted living technologies and frameworks provided the technical foundation for the research. The literature review of interaction techniques identified a number of possible ways in which individuals with quadriplegia can interact with technology. An appropriate set of interaction techniques, namely head shake and nod, voice, and facial feature tracking, was identified. Evaluations of the interaction techniques excluded head shake and nod because of inconsistency in detecting an individual’s head pose in different lighting conditions when using a Microsoft Kinect. Voice and facial feature tracking using a standard computer camera were identified as the most suitable interaction techniques for this study. A framework for designing assisted living software services was developed. The framework allows disability researchers and solution developers to understand the needs of a given disability group and design relevant solutions. To demonstrate that the proposed framework addresses the main aim of this research, a prototype was developed that enables users to control smart lights (Philips Hue) and a smart TV (Samsung), and to carry out basic navigation and web browsing on a computer. Users could interact with the software using voice and facial feature commands. A usability study was carried out with fifteen physically disabled individuals in Port Elizabeth, South Africa. The results of the evaluation study were highly positive. The successful evaluation of the prototype provided empirical evidence that the proposed framework does assist in the design of relevant and useful software services to meet the unique needs of physically disabled individuals.
- Full Text:
- Date Issued: 2018
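As an illustration of how such a prototype's input layer might be organised, the following Python sketch maps recognised voice and facial-feature events to device actions; the command names and device stubs are hypothetical, not the actual Philips Hue or Samsung SDK calls used in the thesis.

def lights_on():       print("lights: on")
def lights_off():      print("lights: off")
def tv_volume_up():    print("tv: volume up")
def browser_scroll():  print("browser: scroll down")

# One table serves both modalities, so a user can mix voice commands
# with facial gestures.
COMMANDS = {
    ("voice", "lights on"):      lights_on,
    ("voice", "lights off"):     lights_off,
    ("voice", "volume up"):      tv_volume_up,
    ("face", "raise eyebrows"):  browser_scroll,
}

def dispatch(modality: str, event: str) -> None:
    action = COMMANDS.get((modality, event.lower()))
    if action is not None:
        action()
    else:
        print(f"unrecognised {modality} command: {event!r}")

dispatch("voice", "Lights On")      # -> lights: on
dispatch("face", "raise eyebrows")  # -> browser: scroll down

A table-driven dispatcher like this keeps the recognition layer (voice or facial feature tracking) decoupled from the growing set of controllable devices.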
A framework for grain commodity trading decision support in South Africa
- Authors: Ayankoya, Kayode Anthony
- Date: 2016
- Subjects: Grain trade -- South Africa Commodity exchanges -- South Africa Food industry and trade -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/11437 , vital:26925
- Description: In several countries around the world, grain commodities are traded as assets on stock exchanges. This indicates that the market, and effectively the prices of grain commodities in such countries, is controlled by several rapidly changing local and international economic, political and social factors. As a result, the prices of some grain commodities are volatile and trading in such commodities is prone to price-related risks. There are different trading strategies for minimising price-related risks and maximising profits, but empirical research suggests that making the right decision for effective grain commodity trading has been a difficult task for stakeholders, owing to the high volatility of grain commodity prices. Studies have shown that this is even more challenging for grain farmers, because they lack the skills and the time to sift through and make sense of the datasets on the plethora of factors that influence the grain commodities market. This thesis focused on the main research problem that grain farmers in South Africa do not take full advantage of all the available strategies for trading their grain commodities because of the complexities associated with monitoring the large datasets that influence the grain commodities market. The main objective set by this study was to design a framework that can be followed to collect, integrate and analyse the datasets that influence the trading decisions of South African grain farmers. This study takes advantage of developments in Big Data and Data Science to achieve the set objective, using the Design Science Research (DSR) methodology. The prediction of future grain commodity prices for the different trading strategies was identified as an important factor in making better trading decisions, and the key factors that influence the prices were identified. This was followed by a critical review of the literature to determine how the concepts of Big Data and Data Science can be leveraged for effective grain commodity trading decision support. The result was a proposed framework for grain commodity trading. The proposed framework suggested an investigation of the factors that influence the prices of grain commodities as the basis for acquiring the relevant datasets, and the adoption of a Big Data approach to acquiring, preparing and integrating relevant datasets from several sources. Furthermore, it was suggested that algorithmic models for predicting grain commodity prices can be developed on top of the data layer of the proposed framework to provide real-time decision support. The proposed framework also calls for a carefully designed visualisation of the results and the collected data that promotes good user experience. Lastly, the proposed framework includes a technology consideration component to support its Big Data and Data Science approach. To demonstrate that the proposed framework addresses the main problem of this research, datasets from several sources on trading white maize in South Africa, and on the factors that influence the market, were streamed, integrated and analysed. A backpropagation neural network algorithm was used to model and predict the prices of white maize for spot and futures trading strategies. Other modelling techniques exist, such as the Box-Jenkins statistical time series analysis methodology, but neural networks were identified as more suitable for time series data with complex patterns and relationships. A demonstration system was set up to provide effective decision support by using near real-time data to drive dynamic predictive analytics for the spot and December futures contract prices of white maize in South Africa. Comparative analysis of predictions made using the model from the proposed framework against actual data indicated a significant degree of accuracy. A further evaluation was carried out by asking experienced traders to make predictions for the spot and December futures contract prices of white maize; the predictions from the developed model were much closer to the actual prices. This indicates that the proposed framework is technically capable and generally useful, and that it can be used to provide decision support on grain commodity trading to stakeholders with less skill, experience and fewer resources. The practical contribution of this thesis is that relevant datasets from several sources can be streamed into an integrated data source in real time and used as input to a real-time learning algorithmic model for predicting grain commodity prices. This makes possible predictive analytics that respond to market volatility, thereby providing effective decision support for grain commodity trading. Another practical contribution is the proposed framework itself, which can be followed in developing a decision support system for grain commodity trading. The thesis makes theoretical contributions by building on information processing theory and decision-making theory; these consist of the identification of Big Data approaches, tools and techniques for eradicating uncertainty and equivocality in the grain commodity trading decision-making process.
- Full Text:
- Date Issued: 2016
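As a concrete, simplified illustration of the price-modelling step, the sketch below trains a backpropagation neural network on a sliding window of past prices using scikit-learn; the price series is synthetic and the window length and layer sizes are illustrative assumptions, whereas the thesis used real South African white-maize market data and additional influencing factors.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic daily price series standing in for white-maize spot prices.
prices = 2000 + np.cumsum(rng.normal(0, 15, size=400))

WINDOW = 10  # predict tomorrow's price from the last 10 days
X = np.array([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])
y = prices[WINDOW:]

# A multilayer perceptron trained with backpropagation, with input scaling.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X[:-50], y[:-50])  # hold out the most recent 50 days

pred = model.predict(X[-50:])
mae = np.abs(pred - y[-50:]).mean()
print(f"held-out mean absolute error: {mae:.1f} (synthetic price units)")

In the framework described above, the same model would be retrained continuously from the streamed, integrated data layer so that its predictions track market movements in near real time.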
A framework to measure human behaviour whilst reading
- Authors: Salehzadeh, Seyed Amirsaleh , Greyling, Jean
- Date: 2019
- Subjects: Computational intelligence , Machine learning Artificial intelligence Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , DPhil
- Identifier: http://hdl.handle.net/10948/43578 , vital:36921
- Description: The brain is the most complex object in the known universe; it gives humans their sense of being and characterises human behaviour. Building models of brain functions is perhaps the most fascinating scientific challenge of the 21st century. Reading is a significant cognitive process in the human brain that plays a critical role in the vital process of learning and in performing some daily activities. The study of human behaviour during reading has been an area of interest for researchers in different fields of science. This thesis provides a novel framework, called ARSAT (Assisting Researchers in the Selection of Appropriate Technologies), for measuring the behaviour of humans when reading text. The ARSAT framework aims to assist researchers in the selection and application of appropriate technologies to measure the behaviour of a person who is reading text. It will assist researchers who investigate the reading process and find it difficult to select appropriate theories, metrics, data collection methods and data analytics techniques. The ARSAT framework enhances the ability of its users to select appropriate metrics indicating the factors that affect the characterisation of different aspects of human behaviour during the reading process. As will be shown in this research study, human behaviour is characterised by a complicated interplay of action, cognition and emotion. The ARSAT framework also facilitates the selection of appropriate sensory technologies that can be used to monitor and collect data for the metrics; only technologies that are commercially available are recommended. Moreover, this research study introduces BehaveNet, a novel Deep Learning modelling approach that can be used to train Deep Learning models of human behaviour from the sensory data collected. In this thesis, a comprehensive literature study is presented that was conducted to acquire adequate knowledge for designing the ARSAT framework. In order to identify the contributing factors that affect the reading process, an overview of some existing theories of the reading process is provided. Furthermore, a number of sensory technologies and techniques that can be applied to monitoring changes in the metrics indicating these factors are demonstrated. A variety of Machine Learning techniques were also investigated when designing BehaveNet, which takes advantage of the complementarity of Convolutional Neural Networks, Long Short-Term Memory networks and Deep Neural Networks. The design of a Human Behaviour Monitoring System (HBMS), utilising the ARSAT framework to recognise three attention-related activities, is also presented in this research study. Reading printed text, speaking out loud and watching a programme on TV were proposed as the activities, the latter two being distractions to which a person may unintentionally shift attention away from reading. Among the sensory devices recommended by the ARSAT framework, the Muse headband, a wearable electroencephalography (EEG) and head motion-sensing device, was selected to track forehead EEG and head movements. EEG and three-axis accelerometer data were recorded from eight participants while they read printed text and while they performed the two other activities. An imbalanced dataset consisting of over 1.2 million rows of noisy data was created and used to build a model of the activities (60% training and 20% validation data) and to evaluate the model (the remaining 20% of the data). The efficiency of the framework is demonstrated by comparing the performance of the models built utilising BehaveNet with models built utilising a number of competing Deep Learning approaches for raw EEG and accelerometer data that have attained state-of-the-art performance. The classification results are evaluated with several metrics, including classification accuracy, F1 score, confusion matrix, Receiver Operating Characteristic curve and Area Under the Curve (AUC) score. The results show that BehaveNet contributes to the body of knowledge as an approach for measuring human behaviour using sensory devices. In comparison with the other models, the models built utilising BehaveNet attained better performance when classifying data of two EEG channels (accuracy = 95%; AUC = 0.99; F1 = 0.95), data of a single EEG channel (accuracy = 85%; AUC = 0.96; F1 = 0.83), accelerometer data (accuracy = 81%; AUC = 0.9; F1 = 0.76) and all of the data in the dataset (accuracy = 97%; AUC = 0.99; F1 = 0.96). The dataset and the source code of this project are also published on the Internet to help the science community. The Muse headband is also shown to be an economical, standard wearable device that can be successfully used in behavioural research.
- Full Text:
- Date Issued: 2019
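To show what a hybrid of the three network types might look like, the following Keras sketch stacks a convolutional front end, an LSTM layer and a dense classification head over windows of the five recorded channels (two EEG plus three accelerometer axes); the window length and layer sizes are assumptions for illustration, and the thesis's actual BehaveNet architecture may differ.

import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS, CLASSES = 256, 5, 3  # 3 activities: read, speak, watch TV

model = models.Sequential([
    # Convolutional front end learns local waveform features.
    layers.Conv1D(32, kernel_size=7, activation="relu",
                  input_shape=(WINDOW, CHANNELS)),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    # Recurrent layer captures longer-range temporal structure.
    layers.LSTM(64),
    # Dense head performs the final activity classification.
    layers.Dense(32, activation="relu"),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()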
A gatherer’s paradise? Early humans and plant foraging on the Cape south coast, South Africa
- Authors: Botha, Maria Susan
- Date: 2019
- Subjects: Plant remains (Archaeology) -- South Africa , Plant physiology Plant ecology Botany -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/37139 , vital:34124
- Description: Humans were driven to refugia during the cold, dry glacial Marine Isotope Stage 6 (MIS6) (~195–125 ka); only a few places, including the Cape south coast, show archaeological evidence of continuous human occupation. It has been hypothesised that the Cape south coast provided the requisite shelter to ensure early humans' survival because it provided all the right ingredients: the shores offer abundant shellfish, the land a diverse array of plants, and the climate was ameliorated by the proximity of the then-exposed Palaeo-Agulhas Plain to the warm Agulhas current. The aim of this study is to determine whether the indigenous flora of this region could have provided a sufficient edible resource for early humans. Residents of the Cape south coast have genetic ancestry linking them to the Khoe-San, the original inhabitants of the area, and still have an extensive knowledge of the local edible plants. With their help, I set out to determine a) whether humans have been utilising the same plant species over time, b) what the foraging potential of the edible plants in the region is, and c) how resilient these plants are to human foraging. If the plant species known and used today are the same as those utilised by past humans, then contemporary knowledge can be used to make predictions about past utilisation. To answer this question, I collated two databases covering the Greater Cape Floristic Region (GCFR): an archaeological database (all plant species found in archaeological sites dating 0 to 80,000 BP) and a contemporary database (all plant species in the modern-day ethnographic literature of the last 400 years). I found a significant number of plant species shared between the two databases, which suggests that at least some plant species have been used by humans over a long period of time. To determine the indigenous plant foraging potential of the region, I foraged for food on the Cape south coast (451 bouts) monthly over a two-year period with the help of local inhabitants. The findings show that edible plant resources are patchily distributed and that focusing on specific vegetation types would greatly enhance the chances of harvesting 2,000 kcal per day, the daily calorific requirement considered typical for a hunter-gatherer. I then sought to understand how resilient these plants, with an emphasis on plants with an underground storage organ (USO), would be to human foraging. To do this, I set out plots and harvested all edible foods for three consecutive years with the help of foragers. Results indicate a significant reduction in edible weight only in the third year of consecutive harvesting. In conclusion, using various approaches, this study investigates the plant food potential of the Cape south coast from the perspective of early human consumers. The findings suggest that knowledge regarding useful plants dates back to at least 80,000 BP. Food resources are patchily distributed across the main vegetation types of the Cape south coast and occur in hotspots, i.e. concentrated areas hosting high densities of edible plant foods, surrounded by areas with very low plant food densities. Foragers could have harvested their daily calorific quota more easily by focusing their harvesting efforts on specific vegetation types found on the Cape south coast. Furthermore, many USOs circumvent climatic fluctuations, herbivory or both by staggering their emergence over multiple years, which implies they have some resilience to human foraging.
- Full Text:
- Date Issued: 2019
A method for the evaluation of similarity measures on graphs and network-structured data
- Authors: Naude, Kevin Alexander
- Date: 2014
- Subjects: Graph theory , Network analysis (Planning)
- Language: English
- Type: Thesis , Doctoral , DPhil
- Identifier: vital:10494
- Description: Measures of similarity play a subtle but important role in a large number of disciplines. For example, a researcher in bioinformatics may devise a new computed measure of similarity between biological structures, and use its scores to infer biological association. Other academics may use related approaches in structured text search, or for object recognition in computer vision. These are diverse and practical applications of similarity. A critical question is this: to what extent can a given similarity measure be trusted? This is a difficult problem, at the heart of which lies the broader issue: what exactly constitutes good similarity judgement? This research presents the view that similarity measures have properties of judgement that are intrinsic to their formulation, and that such properties are measurable. The problem of comparing similarity measures is one of identifying ground-truths for similarity. The approach taken in this work is to examine the relative ordering of graph pairs, when compared with respect to a common reference graph. Ground-truth outcomes are obtained from a novel theory: the theory of irreducible change in graphs. This theory supports stronger claims than those made for edit distances. Whereas edit distances are sensitive to a configuration of costs, irreducible change under the new theory is independent of such parameters. Ground-truth data is obtained by isolating test cases for which a common outcome is assured for all possible least measures of change that can be formulated within a chosen change descriptor space. By isolating these specific cases, and excluding others, the research introduces a framework for evaluating similarity measures on mathematically defensible grounds. The evaluation method is demonstrated in a series of case studies which evaluate the similarity performance of known graph similarity measures. The findings of these experiments provide the first general characterisation of common similarity measures over a wide range of graph properties. The similarity computed from the maximum common induced subgraph (Dice-MCIS) is shown to provide good general similarity judgement. However, it is shown that Blondel's similarity measure can exceed the judgement sensitivity of Dice-MCIS, provided the graphs have both sufficient attribute label diversity and edge density. The final contribution is the introduction of a new similarity measure for graphs, which is shown to have statistically greater judgement sensitivity than all other measures examined. All of these findings are made possible through the theory of irreducible change in graphs. The research provides the first mathematical basis for reasoning about the quality of similarity judgements. This enables researchers to analyse similarity measures directly, making similarity measures first-class objects of scientific inquiry.
- Full Text:
- Date Issued: 2014
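For concreteness, the following Python sketch computes the Dice-MCIS similarity mentioned above by brute force: it finds the size of a maximum common node-induced subgraph and normalises it with the Dice coefficient. The search is exponential in graph size, so this is only a didactic illustration, not the evaluation machinery of the thesis.

from itertools import combinations
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def mcis_size(g1: nx.Graph, g2: nx.Graph) -> int:
    """Size of a maximum common node-induced subgraph (brute force)."""
    small, large = sorted((g1, g2), key=lambda g: g.number_of_nodes())
    for k in range(small.number_of_nodes(), 0, -1):
        for nodes in combinations(small.nodes, k):
            h = small.subgraph(nodes)
            # Is h isomorphic to some node-induced subgraph of `large`?
            if GraphMatcher(large, h).subgraph_is_isomorphic():
                return k
    return 0

def dice_mcis(g1: nx.Graph, g2: nx.Graph) -> float:
    return 2 * mcis_size(g1, g2) / (g1.number_of_nodes() + g2.number_of_nodes())

path = nx.path_graph(4)    # 0-1-2-3
cycle = nx.cycle_graph(4)  # 0-1-2-3-0
print(dice_mcis(path, cycle))  # 0.75: a 3-node path is the largest common part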
A methodology to institutionalise user experience in a South African provincial government
- Authors: Pretorius, Marco Cobus
- Date: 2012
- Subjects: Human-computer interaction , User interfaces (Computer systems) , Government Web sites -- South Africa , Web site development , Electronic government information
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:10489 , http://hdl.handle.net/10948/d1019961
- Description: The number of citizens who access e-Government websites is growing significantly, and their expectations for additional services are increasing. The Internet has become an essential instrument for distributing information to citizens. Poorly designed websites, however, can divide governments and their citizens. The consensus amongst researchers is that user experience (UX) is an important factor in designing websites, specifically e-Government websites. Problems experienced with website usability prevent people from accessing, and eventually adopting, technology such as e-Government. Countries such as the United States, the United Kingdom and Canada have shown increased support for UX in e-Government websites. At present, a number of guidelines and design principles exist for e-Government website UX design; however, the effectiveness of their implementation depends on the profiles of the individuals on a website development team and on an organisation’s understanding of UX. Despite the highlighted importance of UX, guidelines and principles are rarely adopted in South African e-Government websites. Usability and UX guidelines cannot be implemented if there is no executive support, inadequately trained staff, no routine UX practice, an insufficient budget, or inefficient use of usability methodologies and user-centred design (UCD) processes. The present challenge in the UX design field is the institutionalisation of UX, specifically at government level. The goal of this research was to propose and evaluate a methodology to institutionalise UX in South African Provincial Governments (PGs), named the Institutionalise UX in Government (IUXG) methodology. The research used the Western Cape Government (WCG) in South Africa as a case study to evaluate the proposed methodology. The IUXG methodology (1.0) was derived from five UX methodologies, as well as from best practices found in the literature. The IUXG methodology (1.1) was updated based on the results of a survey of South African PGs, a survey of WCG employees, and literature from the WCG. The IUXG methodology (2.0) was updated a final time, based on the case study results and on a confirmation survey of WCG employees after the implementation of the case study. Three surveys were used during this research. The first survey, incorporating UX maturity models, confirmed that understanding and buy-in of UX are limited and that UX maturity levels are low at South African PG level. The second and third surveys were administered to WCG e-Government website officials before and after the implementation of the IUXG methodology. These surveys measured the UX maturity level of the WCG in the component responsible for the WCG e-Government website, e-Government for Citizens (e-G4C). The final survey results demonstrated that, after the implementation of the IUXG methodology, the WCG had improved its level of UX maturity on the identified UX maturity models. Implementation of the IUXG methodology institutionalised UX in the WCG. UX activities became standard practice in the e-Government website environment after the systems development lifecycle (SDLC) incorporated UCD. UX policy, strategy and guidelines were documented for the WCG e-Government website. The WCG constructed the first usability testing facility for a South African PG, and improvements to the WCG e-Government website were implemented. The proposed IUXG methodology institutionalised UX in the WCG e-Government website environment. This research is a major contribution to addressing the current lack of UX practices in South African PGs. South African PGs can use the proposed IUXG methodology to institutionalise UX, and it will assist PG officials in developing increased UX maturity levels. The advantage of the IUXG methodology is that it provides PG officials with a step-by-step method for institutionalising UX in a PG by following its six phases: startup, setup, organisation, method, standards and long-term. The IUXG methodology will assist South African PGs in establishing UX practice as a norm, and will provide them with the resources, methods and tools to implement UX guidelines, resulting in an improved, more usable and more user-centric PG e-Government website.
- Full Text:
- Date Issued: 2012
A Model for Crime Management in Smart Cities
- Authors: Westraadt, Lindsay
- Date: 2019
- Subjects: Smart cities , Computer networks -- Security measures
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/45635 , vital:38922
- Description: The main research problem addressed in this study is that South African cities are not effectively integrating and utilising available, and rapidly emerging, smart city data sources for planning and management. To this end, it was proposed that a predictive model that assimilates data from traditionally isolated management silos could be developed for prediction and simulation at the system-of-systems level. As proof of concept, the study focused on only one aspect of smart cities, namely crime management. Accordingly, the main objective of this study was to develop and evaluate a predictive model for crime management in smart cities that effectively integrates data from traditionally isolated management silos. The Design Science Research process was followed to develop and evaluate a prototype model. The practical contributions of this study were the development of a prototype model for integrated decision-making in smart cities and the associated guidelines for implementing the developed modelling approach within the South African Integrated Development Planning (IDP) context. Theoretically, this work contributed towards the development of a modelling paradigm for effective integrated decision-making in smart cities. It also contributed towards developing strategic-level predictive policing tools aimed at proactively meeting community needs, and added to the body of knowledge on complex systems modelling.
- Full Text:
- Date Issued: 2019
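The abstract's central idea, assimilating traditionally siloed city data into one predictive model at the system-of-systems level, can be illustrated with a minimal sketch. This is not the thesis prototype: the silo names, column schema, synthetic data and the gradient-boosting learner are all illustrative assumptions.

```python
# Illustrative sketch only: fusing hypothetical city data silos (policing,
# transport, street lighting) into one feature table and fitting a single
# predictive model. All names and data are assumptions, not the thesis model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
key = pd.MultiIndex.from_product([range(40), range(12)],
                                 names=["zone", "month"]).to_frame(index=False)

# Each silo holds one indicator, keyed by city zone and month (hypothetical schema).
crime = key.assign(incident_count=rng.poisson(20, len(key)))
transport = key.assign(passenger_volume=rng.uniform(1e3, 5e4, len(key)))
lighting = key.assign(outage_hours=rng.uniform(0, 60, len(key)))

# System-of-systems view: join the silos into a single feature table.
data = crime.merge(transport, on=["zone", "month"]).merge(lighting, on=["zone", "month"])
data["hotspot"] = (data["incident_count"] > data["incident_count"].median()).astype(int)

X, y = data[["passenger_volume", "outage_hours"]], data["hotspot"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
# The synthetic silos are independent random draws, so the hold-out score
# will sit near chance; with real, correlated silo data it would not.
print("hold-out accuracy:", round(model.score(X_te, y_te), 2))
```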
A model for mobile, context-aware in-car communication systems to reduce driver distractions
- Authors: Tchankue-Sielinou, Patrick
- Date: 2015
- Subjects: Automobiles -- Safety measures , Context-aware computing , Mobile communication systems
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/4144 , vital:20556
- Description: Driver distraction remains a matter of concern throughout the world, as the number of car accidents caused by distracted driving is still unacceptably high. Industry and academia are working intensively to design new techniques to address all types of driver distraction, including visual, manual, auditory and cognitive distraction. This research focuses on an existing technology, namely in-car communication systems (ICCS). ICCS allow drivers to interact with their mobile phones without touching or looking at them, and previous research suggests that ICCS have reduced visual and manual distraction. Two problems were identified in this research. First, existing ICCS are still expensive and available only in a limited range of car models; as a result, only a small number of drivers, especially in developing countries, can obtain a car equipped with an ICCS. Second, existing ICCS are not aware of the driving context, which plays a role in distracting drivers: as the driving context is dynamic, drivers may have to deal with critical safety-related tasks while they are using an existing ICCS. This research project was based on the following thesis statement: a mobile, context-aware model can be designed to reduce driver distraction caused by the use of ICCS. A mobile ICCS is portable and can be used in any car, addressing the first problem. Context-awareness is used to detect situations that contribute to distracting drivers, and the interaction with the mobile ICCS is adapted so as to avert calls and text messages, addressing the second problem. The following steps were taken to validate the thesis statement. An investigation was conducted into the causes and consequences of driver distraction. A literature review was conducted on context-aware techniques that could potentially be used. A model was proposed, called the Multimodal Interface for Mobile Info-communication with Context (MIMIC), and a preliminary usability evaluation was conducted to assess the feasibility of a speech-based, mobile ICCS. Despite some problems with the speech recognition, the results were satisfactory and showed that the proposed model for a mobile ICCS was feasible. Experiments were then conducted to collect data for supervised learning of the driving context, with the aim of selecting the most effective machine learning techniques. Decision tree and instance-based algorithms were found to be the best-performing algorithms, and an analysis of the decision tree showed speed, acceleration and linear acceleration to be the most important variables. The initial MIMIC model was updated to include several adaptation effects, and the resulting model was implemented as a prototype mobile application, called MIMIC-Prototype.
- Full Text:
- Date Issued: 2015
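The supervised-learning step above can be made concrete with a short sketch: a decision tree and an instance-based (k-nearest-neighbour) learner trained to label the driving context from speed, acceleration and linear acceleration, the variables the study found most important. The synthetic data, the labelling rule and the two-class "calm"/"demanding" framing are assumptions for illustration, not the thesis dataset.

```python
# Illustrative sketch: classifying driving context from speed, acceleration
# and linear acceleration. Synthetic data and labels are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Columns: speed (km/h), acceleration (m/s^2), linear acceleration (m/s^2).
X = np.column_stack([rng.uniform(0, 120, n),
                     rng.normal(0, 1.5, n),
                     rng.normal(0, 1.0, n)])
# Hypothetical labelling rule: fast driving with hard acceleration is a
# "demanding" context (1) in which calls and texts should be averted.
y = ((X[:, 0] > 80) & (np.abs(X[:, 1]) > 1.0)).astype(int)

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=4)),
                  ("instance-based (k-NN)", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```

A tree of this kind also exposes feature importances, which is one way an analysis could rank speed and acceleration as the most informative variables.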
A multi-factor model for range estimation in electric vehicles
- Authors: Smuts, Martin Bradley
- Date: 2019
- Subjects: Electric vehicles , Hybrid electric vehicles , Energy consumption , Machine learning , Information technology -- Management
- Language: English
- Type: Thesis , Doctoral , DPhil
- Identifier: http://hdl.handle.net/10948/43589 , vital:36926
- Description: Electric vehicles (EVs) are well known for their challenges related to trip planning and energy consumption estimation. Range anxiety is currently a barrier to the adoption of EVs, and one of the issues influencing it is the inaccuracy of the remaining driving range (RDR) estimate in on-board displays. RDR displays are important as they can help drivers with trip planning. The RDR is a parameter that changes under environmental and behavioural conditions. Several factors (for example, weather and traffic) that are not considered during RDR estimation in traditional on-board computers, or in third-party applications such as navigation or mapping applications, can influence the energy consumption of an EV. The need for accurate RDR estimation is growing, since accuracy can reduce drivers' range anxiety. One way of overcoming range anxiety is to provide trip planning applications that give accurate RDR estimates based on various factors and that adapt to the user's driving behaviour. Existing models used for estimating the RDR are often simplified and do not consider all the factors that can influence it. Collecting data for each factor also presents several challenges: powerful computing resources are required to collect, transform and analyse the disparate datasets required for each factor. The aim of this research was to design a Multi-factor Model for range estimation in EVs. Five main factors that influence the energy consumption of EVs were identified from the literature, namely Route and Terrain, Driving Behaviour, Weather and Environment, Vehicle Modelling, and Battery Modelling. These factors were used throughout this research to guide the data collection and analysis processes. A Multi-factor Model was proposed, based on four main components that collect, process, analyse and visualise data from available data sources to produce estimates relating to trip planning. A proof-of-concept RDR system was developed and evaluated in field experiments to demonstrate that the Multi-factor Model addresses the main aim of this research. The experiments were performed to collect data for each of the five factors and to analyse their impact on energy consumption. Several machine learning techniques were used, and evaluated for accuracy, in estimating the energy consumption from which the RDR can be derived for a specified trip. A case study was conducted with an electric mobility programme (uYilo) in Port Elizabeth, South Africa, to investigate whether the available resources at uYilo were sufficient to provide data for each of the five factors. Several challenges were noted during the data collection: a shortage of suitable software applications, a lack of quality data, and technical interoperability and data access problems between the data collection instruments and systems. Data access was a problem in some cases, since proprietary systems restrict access by external developers. The theoretical contribution of this research is a list of factors that influence the RDR and a classification of machine learning techniques that can be used to estimate it. The practical contributions include a database of EV trips, a proof-of-concept RDR estimation system, and a deployed machine learning model that can be accessed by researchers and EV practitioners. Four research papers were published and presented at local and international conferences.
In addition, one paper was published in an accredited journal. The publications are: NextComp 2017 (Appendix C), conference paper, Pointe aux Piments (Mauritius); SATNAC 2017 (Appendix F), conference paper, Barcelona (Spain); GITMA 2018 (Appendix B), conference paper, Mexico City (Mexico); SATNAC 2018 (Appendix G), conference paper, George (South Africa); and IFIP World Computer Congress 2018 (Appendix E), journal article.
- Full Text:
- Date Issued: 2019
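The link the abstract draws between estimated energy consumption and the RDR can be shown in a small sketch: a regression model predicts consumption per kilometre from trip factors, and the RDR follows by dividing the remaining battery energy by that rate. The feature set, the synthetic data and the random-forest regressor are illustrative assumptions, not the deployed model.

```python
# Illustrative sketch: estimating energy consumption (kWh/km) from a few
# trip factors, then deriving the remaining driving range (RDR).
# Features, data and the choice of regressor are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
# Columns: mean speed (km/h), elevation gain (m/km), ambient temperature (C).
X = np.column_stack([rng.uniform(20, 120, n),
                     rng.uniform(-10, 10, n),
                     rng.uniform(0, 35, n)])
# Hypothetical ground-truth consumption with noise (kWh per km).
y = 0.12 + 0.0008 * X[:, 0] + 0.004 * X[:, 1] - 0.001 * X[:, 2] \
    + rng.normal(0, 0.01, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

remaining_kwh = 18.0                   # usable energy left in the battery
trip = np.array([[90.0, 2.0, 25.0]])   # planned trip conditions
rate = model.predict(trip)[0]          # predicted kWh/km for this trip
print(f"RDR estimate: {remaining_kwh / rate:.0f} km")
```

The division at the end is the key step: any improvement in the consumption estimate feeds directly into a more trustworthy RDR figure on the display.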
A multiscale remote sensing assessment of subtropical indigenous forests along the wild coast, South Africa
- Authors: Blessing, Sithole Vhusomuzi
- Date: 2015
- Subjects: Forests and forestry -- South Africa -- Remote sensing , Forest conservation , Remote sensing , Geographic information systems
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:10677 , http://hdl.handle.net/10948/d1021169
- Description: The subtropical forests located along South Africa’s Wild Coast region, declared one of the biodiversity hotspots, provide benefits to the local and national economy. However, there is evidence of increased pressure exerted on the forests by a growing population and reduced income from activities not related to forest products. This study assessed the ability of remote sensing to quantify subtropical forest changes over time, to perform species discrimination (using field spectroscopy) and to integrate field spectral and multispectral data. Investigations were conducted at pixel, leaf and sub-pixel levels, and both per-pixel and sub-pixel classification methods were used for improved forest characterisation. Using SPOT 6 imagery for 2013, the study determined the best classification algorithm for mapping subtropical forest and other land cover types to be the maximum likelihood classifier (MLC). Maximum likelihood outperformed the minimum distance, spectral angle mapper and spectral information divergence algorithms, based on overall accuracy and Kappa coefficient values. Forest change analysis was based on spectral measurements at top-of-atmosphere (TOA) level. Applied to the 2005 and 2009 SPOT 5 images, this quantified subtropical forest changes for 2005-2009 and 2009-2013. A temporal analysis of forest cover trends over these periods identified decreases of 3648.42 ha and 946.98 ha respectively, which translate to decreases of 7.81 percent and 2.20 percent. Although there is evidence of a trend towards decreased rates of forest loss, more conservation efforts are required to protect the Wild Coast ecosystem. Using field spectral measurement data, a hierarchical method (comprising one-way ANOVA with Bonferroni correction, Classification and Regression Trees (CART) and the Jeffries-Matusita method) successfully selected optimal wavelengths for species discrimination at leaf level. Only 17 out of 2150 wavelengths were identified, thereby reducing the complexities related to data dimensionality. The optimal 17 wavelength bands were in the visible (438, 442, 512 and 695 nm), near-infrared (724, 729, 750, 758, 856, 936, 1179, 1507 and 1673 nm) and mid-infrared (2220, 2465, 2469 and 2482 nm) portions of the electromagnetic spectrum. The Jeffries-Matusita (JM) distance method confirmed the separability of the selected wavelength bands. Using these 17 wavelengths, linear discriminant analysis (LDA) classified subtropical species at leaf level more accurately than partial least squares discriminant analysis (PLSDA) and random forest (RF). In addition, the study integrated field-collected canopy spectral and multispectral data to discriminate proportions of semi-deciduous and evergreen subtropical forests at sub-pixel level. By using the 2013 land cover (from the MLC) to mask non-forested portions before sub-pixel classification (using MTMF), the proportional maps were the product of two classifiers. The proportional maps show higher proportions of evergreen forests along the coast, while semi-deciduous subtropical forest species occur mainly in the inland parts of the Wild Coast. These maps had high accuracy, demonstrating the ability of integrated field spectral and multispectral data to map semi-deciduous and evergreen forest species.
Overall, the study demonstrated the value of the MLC and LDA, and served to integrate field spectral and multispectral data in subtropical forest characterisation at both leaf and top-of-atmosphere levels. The success of both the MLC and LDA further highlighted how useful parametric classifiers are in remote sensing forestry applications. The main subtropical forest characteristics addressed in this study were species discrimination at leaf level, quantification of forest change at pixel level, and discrimination of semi-deciduous and evergreen forests at sub-pixel level.
- Full Text:
- Date Issued: 2015
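The Jeffries-Matusita (JM) confirmation step mentioned above lends itself to a short worked sketch. The standard form is JM = 2 * (1 - exp(-B)), where B is the Bhattacharyya distance between two class distributions; the band statistics below are invented for illustration and are not the thesis measurements.

```python
# Illustrative sketch: JM separability between two species classes modelled
# as multivariate Gaussians over a few selected bands. Statistics are invented.
import numpy as np

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """JM distance: 2 * (1 - exp(-B)), with B the Bhattacharyya distance."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-(term1 + term2)))

# Hypothetical mean reflectances of two species at three of the selected
# wavelengths (e.g. 695, 750 and 2220 nm), with per-class covariances.
mu_a = np.array([0.08, 0.45, 0.20])
mu_b = np.array([0.10, 0.38, 0.24])
cov_a = np.diag([0.0004, 0.0025, 0.0009])
cov_b = np.diag([0.0005, 0.0020, 0.0010])

# JM ranges from 0 to 2; values approaching 2 indicate well-separated classes.
print(f"JM distance = {jeffries_matusita(mu_a, cov_a, mu_b, cov_b):.3f}")
```

In a band-selection workflow of the kind the abstract describes, this measure would be evaluated for every class pair over the candidate bands, and a subset is retained only if the pairwise JM values are high enough.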
A new synthetic approach for preparation of Efavirenz
- Authors: Chada, Sravanthi
- Date: 2017
- Subjects: Antiretroviral agents , Asymmetric synthesis , Enzyme inhibitors , HIV (Viruses) -- Enzymes
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/15512 , vital:28265
- Description: Efavirenz, a drug that is still inaccessible to millions of people worldwide, is a potent non-nucleoside reverse transcriptase inhibitor (NNRTI) and one of the preferred agents used in combination therapy for first-line treatment of the human immunodeficiency virus (HIV). NNRTIs attach to and block an HIV enzyme called reverse transcriptase; by blocking reverse transcriptase, NNRTIs prevent HIV from multiplying and can reduce the amount of HIV in the body. Efavirenz cannot cure HIV/AIDS, but, taken every day in combination with other HIV medicines (called an HIV regimen), it helps people with HIV live longer, healthier lives. Efavirenz also reduces the risk of HIV transmission and can be used by children who suffer from HIV/AIDS. These therapeutic uses of efavirenz prompted us to identify a novel and, hopefully, cost-efficient synthetic methodology for its preparation. This thesis describes a new method for the asymmetric synthesis of efavirenz. The route starts from commercially available starting materials; it was first established in traditional batch chemistry, and the parameters were then transferred to a semi-continuous flow protocol for optimisation.
- Full Text:
- Date Issued: 2017