The application of Classification Trees in the Banking Sector

**Authors:** Mtwa, Sithayanda **Date:** 2021-04 **Subjects:** To be added **Language:** English **Type:** thesis, text, Masters, MSc **Identifier:** http://hdl.handle.net/10962/178514, vital:42946 **Description:** Access restricted until April 2026. Thesis (MSc) -- Faculty of Science, Statistics, 2021 **Full Text:** **Date Issued:** 2021-04


Bayesian accelerated life tests for the Weibull distribution under non-informative priors

**Authors:** Mostert, Philip **Date:** 2020 **Subjects:** Accelerated life testing -- Statistical methods, Accelerated life testing -- Mathematical models, Failure time data analysis, Bayesian statistical decision theory, Monte Carlo method, Weibull distribution **Language:** English **Type:** text, Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/172181, vital:42173 **Description:** In a competitive world where products are designed to last for long periods of time, obtaining time-to-failure data is both difficult and costly. Hence for products with high reliability, accelerated life testing is required to obtain relevant life-data quickly. This is done by placing the products under higher-than-use stress levels, thereby causing the products to fail prematurely. Part of the analysis of accelerated life-data requires a life distribution that describes the lifetime of a product at a given stress level, and a life-stress relationship, some function that describes the way in which the life distribution changes across different stress levels. In this thesis it is assumed that the underlying life distribution is the well-known Weibull distribution, with the shape parameter constant over all stress levels and the scale parameter a log-linear function of stress. The primary objective of this thesis is to obtain estimates from Bayesian analysis, and this thesis considers five types of non-informative prior distributions: Jeffreys' prior, reference priors, the maximal data information prior, the uniform prior and probability matching priors. Since the associated posterior distributions under all the derived non-informative priors are of an unknown form, the propriety of the posterior distributions is assessed to ensure admissible results. For comparison purposes, estimates obtained via the method of maximum likelihood are also considered. Finding these estimates requires solving non-linear equations, hence the Newton-Raphson algorithm is used to obtain estimates. A simulation study based on accelerated time-to-failure data is conducted to compare results between maximum likelihood and Bayesian estimates. As the Bayesian posterior distributions are analytically intractable, two methods to obtain Bayesian estimates are considered: Markov chain Monte Carlo methods and Lindley's approximation technique. In the simulation study the posterior means and the root mean squared error values of the estimates are considered under the symmetric squared error loss function and two asymmetric loss functions: the LINEX loss function and the general entropy loss function. Furthermore, the coverage rates for the Bayesian Markov chain Monte Carlo and maximum likelihood estimates are found and compared by their average interval lengths. A case study using a dataset based on accelerated time-to-failure of an insulating fluid is considered. The fit of the Weibull distribution to these data is studied and compared to that of other popular life distributions. A full simulation study is conducted to illustrate convergence of the proper posterior distributions. Both maximum likelihood and Bayesian estimates are found for these data. The deviance information criterion is used to compare Bayesian estimates between the prior distributions. The case study is concluded by finding reliability estimates of the data at use-stress levels. **Full Text:** **Date Issued:** 2020

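The data-generating model described in this abstract (a Weibull lifetime whose scale parameter is log-linear in stress) is easy to sketch. Below is a minimal, illustrative Python simulation and maximum likelihood fit. The parameter values, stress levels and use-stress are invented, and scipy's Nelder-Mead optimiser stands in for the Newton-Raphson solver used in the thesis.

```python
# Sketch: Weibull accelerated life test with a log-linear life-stress
# relationship -- shape k constant, scale exp(b0 + b1*s) at stress s.
# Illustrative only; all parameter values and names are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
k_true, b0, b1 = 1.5, 6.0, -0.04               # shape, log-scale intercept/slope
stresses = np.repeat([30.0, 40.0, 50.0], 20)   # three elevated stress levels
scales = np.exp(b0 + b1 * stresses)
times = scales * rng.weibull(k_true, size=stresses.size)

def negloglik(theta):
    k, c0, c1 = np.exp(theta[0]), theta[1], theta[2]   # log-param keeps k > 0
    lam = np.exp(c0 + c1 * stresses)
    z = times / lam
    # Weibull log-density: log(k/lam) + (k-1)*log(t/lam) - (t/lam)^k
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z**k)

fit = minimize(negloglik, x0=np.array([0.0, 5.0, 0.0]), method="Nelder-Mead")
k_hat, b0_hat, b1_hat = np.exp(fit.x[0]), fit.x[1], fit.x[2]
print("estimates:", k_hat, b0_hat, b1_hat)

# Extrapolate reliability at a (hypothetical) use stress below the test range
s_use, t0 = 20.0, 400.0
R = np.exp(-(t0 / np.exp(b0_hat + b1_hat * s_use))**k_hat)
print("R(t0) at use stress:", R)
```

The final extrapolation step is exactly the role the life-stress (time transformation) relationship plays in an accelerated life test: the model is fitted at elevated stresses and read off at the use stress.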

Default in payment, an application of statistical learning techniques

**Authors:** Gcakasi, Lulama **Date:** 2020 **Subjects:** Credit -- South Africa -- Risk assessment, Risk management -- Statistical methods -- South Africa, Credit -- Management -- Statistical methods, Commercial statistics **Language:** English **Type:** text, Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/141547, vital:37984 **Description:** The ability of financial institutions to detect whether a customer will default on their credit card payment is essential for their profitability. To that end, financial institutions have credit scoring systems in place to estimate the credit risk associated with a customer. Various classification models are used to develop credit scoring systems, such as k-nearest neighbours, logistic regression and classification trees. This study aims to assess the performance of different classification models on the prediction of credit card payment default. Credit data is usually of high dimension, and as a result dimension reduction techniques, namely principal component analysis and linear discriminant analysis, are used in this study as a means to improve model performance. Two classification models are used, namely neural networks and support vector machines. Model performance is evaluated using accuracy and area under the curve (AUC). The neural network classifier performed better than the support vector machine classifier, as it produced higher accuracy rates and AUC values. Dimension reduction techniques were not effective in improving model performance but did result in less computationally expensive models. **Full Text:** **Date Issued:** 2020

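As a rough illustration of the comparison described above, the sketch below trains a small neural network and an SVM on synthetic imbalanced "credit" data, with and without PCA, and reports accuracy and AUC. The data, architectures and settings are invented and scikit-learn is assumed; the study's actual data and tuning will differ.

```python
# Sketch: neural network vs SVM, with and without PCA, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=23, n_informative=8,
                           weights=[0.78], random_state=0)  # imbalanced default flag
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "nnet": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    "svm": SVC(probability=True, random_state=0),
}
for reduce in (None, PCA(n_components=10)):
    for name, clf in models.items():
        steps = [StandardScaler()] + ([reduce] if reduce else []) + [clf]
        pipe = make_pipeline(*steps)
        pipe.fit(Xtr, ytr)
        p = pipe.predict_proba(Xte)[:, 1]
        print(name, "PCA" if reduce else "raw",
              "acc", round(accuracy_score(yte, p > 0.5), 3),
              "AUC", round(roc_auc_score(yte, p), 3))
```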

Bayesian hierarchical modelling with application in spatial epidemiology

**Authors:** Southey, Richard **Date:** 2018 **Subjects:** Spatial analysis (Statistics), Bayesian statistical decision theory, Medical mapping, Mouth -- Cancer **Language:** English **Type:** text, Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/59489, vital:27617 **Description:** Disease mapping and spatial statistics have become an important part of modern-day statistics and have increased in popularity as the methods and techniques have evolved. The application of disease mapping is not confined to the analysis of diseases, as other applications of disease mapping can be found in econometric and financial disciplines. This thesis will consider two data sets: the Georgia oral cancer 2004 data set and the South African acute pericarditis 2014 data set. The Georgia data set will be used to assess the hyperprior sensitivity of the precision for the uncorrelated heterogeneity and correlated heterogeneity components in a convolution model. The correlated heterogeneity will be modelled by a conditional autoregressive prior distribution and the uncorrelated heterogeneity will be modelled with a zero-mean Gaussian prior distribution. The sensitivity analysis will be performed using three models, with a conjugate, a Jeffreys' and a fixed-parameter prior for the hyperprior distribution of the precision for the uncorrelated heterogeneity component. A simulation study will be done to compare four prior distributions, namely the conjugate, Jeffreys', probability matching and divergence priors. The three models will be fitted in WinBUGS® using a Bayesian approach. The results of the three models will be in the form of disease maps, figures and tables. The results show that the hyperpriors of the precision for the uncorrelated heterogeneity and correlated heterogeneity components are sensitive to changes and will produce different results depending on the specification of the hyperprior distribution of the precision for the two components in the model. The South African data set will be used to examine whether there is a difference between the proper conditional autoregressive prior and the intrinsic conditional autoregressive prior for the correlated heterogeneity component in a convolution model. Two models will be fitted in WinBUGS® for this comparison. The hyperpriors of the precision for both the uncorrelated heterogeneity and correlated heterogeneity components will be modelled using a Jeffreys' prior distribution. The results show that there is no significant difference between the results of the model with a proper conditional autoregressive prior and the intrinsic conditional autoregressive prior for the South African data, although there are a few disadvantages of using a proper conditional autoregressive prior for the correlated heterogeneity, which will be stated in the conclusion. **Full Text:** **Date Issued:** 2018

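The convolution model referred to throughout this abstract is commonly written as below. The notation here is assumed: O_i and E_i are the observed and expected counts in area i, v_i is the uncorrelated heterogeneity, u_i the correlated heterogeneity with an intrinsic conditional autoregressive (ICAR) prior, n_i the number of neighbours of area i, and j ~ i denotes the neighbours of i.

```latex
\begin{align*}
O_i \mid \theta_i &\sim \text{Poisson}(E_i \theta_i), \\
\log \theta_i &= \beta_0 + u_i + v_i, \\
v_i &\sim N(0, \tau_v^{-1}) \quad \text{(uncorrelated heterogeneity)}, \\
u_i \mid u_{j \ne i} &\sim N\!\Big(\tfrac{1}{n_i}\textstyle\sum_{j \sim i} u_j,\; \tfrac{1}{n_i \tau_u}\Big) \quad \text{(ICAR, correlated heterogeneity)}.
\end{align*}
```

The hyperprior sensitivity studied in the thesis concerns the precisions tau_u and tau_v; the proper CAR variant adds a spatial dependence parameter to the conditional mean of u_i.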

Generalized linear models, with applications in fisheries research

**Authors:** Sidumo, Bonelwa **Date:** 2018 **Language:** English **Type:** text, Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/61102, vital:27975 **Description:** Gambusia affinis (G. affinis) is an invasive fish species found in the Sundays River Valley of the Eastern Cape, South Africa. The relative abundance and population dynamics of G. affinis were quantified in five interconnected impoundments within the Sundays River Valley. This study utilised a G. affinis data set to demonstrate various classical ANOVA models. Generalized linear models were used to standardize catch per unit effort (CPUE) estimates and to determine environmental variables which influenced the CPUE. Based on the generalized linear model results, dam age, mean temperature, Oreochromis mossambicus abundance and Glossogobius callidus abundance had a significant effect on the G. affinis CPUE. The Albany Angling Association collected data during fishing tag-and-release events. These data were utilized to demonstrate repeated measures designs. Mixed-effects models provide a powerful and flexible tool for analyzing clustered data such as repeated measures data and nested data, hence they have become tremendously popular as a framework for the analysis of bio-behavioral experiments. The results show that the mixed-effects methods proposed in this study are more efficient than those based on generalized linear models. These data were better modeled with mixed-effects models due to their flexibility in handling missing data. **Full Text:** **Date Issued:** 2018

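A minimal sketch of CPUE standardisation with a GLM, in the spirit of the analysis above: a Poisson model with a log link is fitted to synthetic catch counts and then predicted at fixed covariate values. The variable names (dam_age, temp) and the data are invented; statsmodels is assumed.

```python
# Sketch: GLM-based CPUE standardisation on synthetic catch data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"dam_age": rng.uniform(5, 60, n),
                   "temp": rng.uniform(14, 28, n)})
mu = np.exp(0.5 + 0.02 * df["dam_age"] + 0.05 * df["temp"])
df["catch"] = rng.poisson(mu)

# Poisson GLM with a log link, a common choice for standardising catch counts
fit = smf.glm("catch ~ dam_age + temp", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary())

# Standardised CPUE: predicted catch at fixed reference covariate values
grid = pd.DataFrame({"dam_age": [30.0], "temp": [21.0]})
print(fit.predict(grid))
```

The repeated-measures extension mentioned in the abstract would replace this with a mixed-effects model (for example, a random intercept per angler or per impoundment) so that within-cluster correlation is modelled rather than ignored.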

Missing values: a closer look

**Authors:** Thorpe, Kerri **Date:** 2017 **Language:** English **Type:** Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/d1017827, vital:20798 **Description:** Problem: In today's world, missing values are more present than ever. Due to the ever-changing and fast-paced global society in which we live, most business and research data produced around the world contain missing data. This means that locating data which is meticulously precise can be a hard task in itself, but at times may prove essential, as the consequences of making use of incomplete data could be disastrous. The reasons for missing data cropping up in almost all forms of work are numerous and shall be discussed in this dissertation. For example, those being interviewed or polled may choose to simply ignore questions which are posed to them, recording equipment may malfunction or be misplaced, or organisers may not be able to locate the respondent in order to rectify the missing data. Whatever the reasons for data being incomplete, it is necessary to avoid having to use inefficient and incomplete data as a result of the above problems. Therefore, various strategies or methods have been developed in order to handle these missing values. It is important, however, that these strategies or methods are utilised effectively, as missing data treatment can introduce bias into the analysis. This dissertation shall look at these and other problems in more detail by using a data set which consists of records for 581 children who were interviewed in 1990 as part of the National Longitudinal Survey of Youth (NLSY). Approach: As mentioned above, many strategies or methods have been developed in order to deal with missing values. More specifically, traditional methods such as complete case analysis, available case analysis or single imputation are widely used by researchers and shall be discussed herein. Although these methods are simple and easy to implement, they require assumptions about the data that are not often satisfied in practice. Over the years, more up-to-date and relevant methods, such as multiple imputation and maximum likelihood, have been developed. These methods rely on weaker assumptions and have superior statistical properties when compared to the traditional techniques. In this dissertation, these traditional methods shall be reviewed and assessed in SAS and shall be compared to the more modern techniques. Results: The ad hoc techniques for handling missing data such as the complete case and available case methods produce biased parameter estimates when the data is not missing completely at random (MCAR). Single imputation techniques likewise produce biased estimates and result in the underestimation of standard errors. Although the expectation maximisation (EM) algorithm yields unbiased parameter estimates, the lack of convenient standard errors suggests that using this algorithm for hypothesis testing is not a good idea. Multiple imputation, however, yields unbiased parameter estimates and correctly estimates standard errors. Conclusion: Ignoring missing data in any analysis produces biased parameter estimates. Using single imputation to handle missing values is not recommended, as using a single value to replace missing values does not account for the variation that would have been present if the variables were observed. As a result, the variance will be greatly underestimated. The more modern missing data methods such as the EM algorithm and multiple imputation are preferred over the traditional techniques as they require less stringent assumptions and mitigate the downsides of the older methods. **Full Text:** **Date Issued:** 2017

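The central point of the Results paragraph, that single imputation understates variability while multiple imputation restores it, can be seen in a few lines of toy code. This is a deliberately simplified sketch: proper multiple imputation would also draw the imputation-model parameters from their posterior and pool results by Rubin's rules.

```python
# Sketch: mean imputation shrinks the variance; multiple imputation keeps
# the between-imputation spread. Toy normal data, 30% missing completely
# at random (MCAR); everything here is illustrative.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(50, 10, 1000)
miss = rng.random(1000) < 0.3          # MCAR missingness indicator
obs = x[~miss]

# Single (mean) imputation: every missing value becomes the same number,
# so the completed sample's standard deviation is pulled toward zero.
x_single = x.copy()
x_single[miss] = obs.mean()
print("true sd ~10; sd after mean imputation:", round(x_single.std(ddof=1), 2))

# Multiple imputation (simplified): draw each missing value from the
# observed distribution, analyse each completed data set, pool estimates.
m = 20
means = []
for _ in range(m):
    x_mi = x.copy()
    x_mi[miss] = rng.normal(obs.mean(), obs.std(ddof=1), miss.sum())
    means.append(x_mi.mean())
print("pooled mean:", round(np.mean(means), 2),
      "between-imputation variance:", round(np.var(means, ddof=1), 4))
```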

Reliability analysis: assessment of hardware and human reliability

**Authors:** Mafu, Masakheke **Date:** 2017 **Subjects:** Bayesian statistical decision theory, Reliability (Engineering), Human machine systems, Probabilities, Markov processes **Language:** English **Type:** Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/6280, vital:21077 **Description:** Most reliability analyses involve the analysis of binary data. Practitioners in the field of reliability place great emphasis on analysing the time periods over which items or systems function (failure time analyses), which make use of different statistical models. This study intends to introduce, review and investigate four statistical models for modelling failure times of non-repairable items, and to utilise a Bayesian methodology to achieve this. The exponential, Rayleigh, gamma and Weibull distributions will be considered. The performance of two non-informative priors will be investigated, and an application of two failure time distributions will be carried out. To meet these objectives, the failure rate and reliability functions of the failure time distributions are calculated. Two non-informative priors, the Jeffreys prior and the general divergence prior, and the corresponding posteriors are derived for each distribution. Simulation studies for each distribution are carried out, in which the coverage rates and credible interval lengths are calculated and the results discussed. The gamma distribution and the Weibull distribution are applied to failure time data. The Jeffreys prior is found to have a better coverage rate than the general divergence prior. The general divergence prior shows undercoverage when used with the Rayleigh distribution, while the Jeffreys prior produces coverage rates that are conservative when used with the exponential distribution. Both priors give, on average, the same interval lengths, which increase as the value of the parameter increases, and both perform similarly when used with the gamma distribution and the Weibull distribution. A thorough discussion and review of human reliability analysis (HRA) techniques will also be presented. Twenty HRA techniques are discussed, providing a background, a description, and the advantages and disadvantages of each. Case studies in the nuclear, railway and aviation industries are presented to show the importance and applications of HRA. Human error has been shown to be the major contributor to system failure. **Full Text:** **Date Issued:** 2017

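For the simplest of the four models above, the exponential distribution, the quantities the study works with take a closed form. A standard derivation, with notation assumed here, gives:

```latex
% Exponential failure times t_1, ..., t_n with rate \lambda:
% hazard and reliability functions, and the posterior under the Jeffreys prior.
\begin{align*}
h(t) &= \lambda, \qquad R(t) = e^{-\lambda t}, \\
\pi(\lambda) &\propto \lambda^{-1} \quad \text{(Jeffreys prior)}, \\
\lambda \mid t_{1:n} &\sim \text{Gamma}\!\Big(n,\; \textstyle\sum_{i=1}^{n} t_i\Big).
\end{align*}
```

The Rayleigh, gamma and Weibull cases are handled analogously, though their posteriors are generally not of such a convenient form, which is why the simulation studies above assess coverage numerically.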

Stochastic models in finance

**Authors:** Mazengera, Hassan **Date:** 2017 **Subjects:** Finance -- Mathematical models, C++ (Computer program language), GARCH model, Lebesgue-Radon-Nikodym theorems, Radon measures, Stochastic models, Stochastic processes, Stochastic processes -- Computer programs, Martingales (Mathematics), Pricing -- Mathematical models **Language:** English **Type:** text, Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/162724, vital:40976 **Description:** Stochastic models for pricing financial securities are developed. First, we consider the Black-Scholes model, which is a classic example of a complete market model, and finally focus on Lévy-driven models. Jumps may render the market incomplete and are induced in a model by the inclusion of a Poisson process. Lévy-driven models are more realistic in modelling asset price dynamics than the Black-Scholes model. Martingales are central to pricing, especially of derivatives, and we give them the desired attention in the context of pricing. There is an increasing number of important pricing models for which analytical solutions are not available, hence computational methods come in handy; see Broadie and Glasserman (1997). It is also important to note that computational methods are applicable to models with analytical solutions as well. We computationally value selected stochastic financial models using C++. Computational methods are also used to value or price complex financial instruments such as path-dependent derivatives. This pricing procedure is applied in the computational valuation of a stochastic (revenue-based) loan contract. Derivatives with simple payoff functions and models with analytical solutions are considered for illustrative purposes. The Black-Scholes PDE is difficult to solve analytically, so finite difference methods are widely used; an explicit finite difference scheme is considered in this thesis for the computational valuation of derivatives modelled by the Black-Scholes PDE. Stochastic modelling of asset prices is important for the valuation of derivatives: Gaussian, exponential and gamma variates are simulated for valuation purposes. **Full Text:** **Date Issued:** 2017

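As a small illustration of the computational valuation described above, the sketch below prices a European call by Monte Carlo under Black-Scholes dynamics and checks it against the closed form. The thesis works in C++; this Python version, with invented market inputs, is a sketch of the same idea rather than the thesis's implementation.

```python
# Sketch: Monte Carlo valuation of a European call under Black-Scholes
# dynamics, checked against the closed-form price.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0   # illustrative inputs

# Closed-form Black-Scholes price
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Monte Carlo under the risk-neutral measure: simulate terminal prices,
# average the discounted payoff
rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
mc = np.exp(-r * T) * np.maximum(ST - K, 0).mean()

print(f"closed form {bs:.4f}  Monte Carlo {mc:.4f}")
```

For path-dependent payoffs such as those mentioned above, the same Monte Carlo machinery applies but whole paths must be simulated, since the payoff depends on more than the terminal price.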

A review of generalized linear models for count data with emphasis on current geospatial procedures

**Authors:** Michell, Justin Walter **Date:** 2016 **Subjects:** Spatial analysis (Statistics), Bayesian statistical decision theory, Geospatial data, Malaria -- Botswana -- Statistics, Malaria -- Botswana -- Research -- Statistical methods **Language:** English **Type:** Thesis, Masters, MCom **Identifier:** vital:5582, http://hdl.handle.net/10962/d1019989 **Description:** Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana, using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details of how a spatial database is compiled for a statistical analysis to proceed are provided. The automation of the entire process is also explored. The final Bayesian spatial model derived from the non-spatial variable selection procedure using Markov chain Monte Carlo simulation was fitted to the data. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could be achieved only in parts of the process. The non-spatial selection procedure proved practical and was able to identify stable explanatory variables and provide an objective means for selecting one variable over another; however, ultimately it was not entirely successful, because a unique set of spatial variables could not be selected. **Full Text:** **Date Issued:** 2016

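A toy version of the non-spatial variable selection step might look like the following: candidate binomial GLMs compared by AIC on synthetic prevalence data. The covariate names are invented stand-ins for the MARA-style variables, statsmodels is assumed, and the thesis's actual procedure (and its Bayesian spatial stage) is considerably richer.

```python
# Sketch: compare candidate binomial GLMs by AIC on synthetic prevalence data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 150
df = pd.DataFrame({
    "winter_temp": rng.uniform(5, 20, n),
    "summer_rain": rng.uniform(50, 500, n),
    "altitude": rng.uniform(500, 1500, n),
    "tested": np.full(n, 100),
})
p = 1 / (1 + np.exp(-(-4 + 0.25 * df["winter_temp"] + 0.004 * df["summer_rain"])))
df["pos"] = rng.binomial(df["tested"], p)   # positives out of those tested
df["neg"] = df["tested"] - df["pos"]

# Binomial GLM with a (successes + failures) response, nested model comparison
for formula in ["pos + neg ~ winter_temp",
                "pos + neg ~ winter_temp + summer_rain",
                "pos + neg ~ winter_temp + summer_rain + altitude"]:
    fit = smf.glm(formula, data=df, family=sm.families.Binomial()).fit()
    print(f"AIC {fit.aic:8.1f}  {formula}")
```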

Bayesian accelerated life tests: exponential and Weibull models

**Authors:** Izally, Sharkay Ruwade **Date:** 2016 **Language:** English **Type:** Thesis, Masters, MSc **Identifier:** http://hdl.handle.net/10962/3003, vital:20351 **Description:** Reliability life testing is used for life data analysis in which samples are tested under normal conditions to obtain failure time data for reliability assessment. It can be costly and time consuming to obtain failure time data under normal operating conditions if the mean time to failure of a product is long. An alternative is to use failure time data from an accelerated life test (ALT) to extrapolate the reliability under normal conditions. In accelerated life testing, the units are placed under a higher-than-normal stress condition such as voltage, current, pressure or temperature, to make the items fail in a shorter period of time. The failure information is then transformed through an acceleration model, commonly known as the time transformation function, to predict the reliability under normal operating conditions. The power law will be used as the time transformation function in this thesis. We will first consider a Bayesian inference model under the assumption that the underlying life distribution in the accelerated life test is exponentially distributed. The maximal data information (MDI) prior, the Ghosh, Mergel and Liu (GML) prior and the Jeffreys prior will be derived for the exponential distribution. The propriety of the posterior distributions will be investigated. Results will be compared when using these non-informative priors in a simulation study by looking at the posterior variances. The Weibull distribution as the underlying life distribution in the accelerated life test will also be investigated. The maximal data information prior will be derived for the Weibull distribution using the power law. The uniform prior and a mixture of gamma and uniform priors will be considered, and the propriety of these posteriors will also be investigated. The predictive reliability at the use-stress will be computed for these models. The deviance information criterion will be used to compare these priors. As a result of using a time transformation function, Bayesian inference becomes analytically intractable, and Markov chain Monte Carlo (MCMC) methods will be used to alleviate this problem. The Metropolis-Hastings algorithm will be used to sample from the posteriors for the exponential model in the accelerated life test, and the adaptive rejection sampling method will be used to sample from the posterior distributions when the Weibull model is considered. **Full Text:** **Date Issued:** 2016

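Since the abstract leans on the Metropolis-Hastings algorithm, here is a minimal random-walk sampler for an exponential ALT with a power-law time transformation, parameterised as lambda(V) = exp(a) V^b, under a flat prior. The data, tuning constants and prior are all illustrative assumptions, not the thesis's choices.

```python
# Sketch: random-walk Metropolis-Hastings for an exponential ALT with a
# power-law life-stress relationship, lambda(V) = exp(a) * V**b.
import numpy as np

rng = np.random.default_rng(7)
a_true, b_true = -8.0, 3.0
V = np.repeat([20.0, 26.0, 32.0], 15)        # elevated (e.g. voltage) levels
lam = np.exp(a_true) * V**b_true
t = rng.exponential(1 / lam)                 # numpy parameterises by scale

def logpost(a, b):
    rate = np.exp(a) * V**b
    # flat prior, so the log-posterior is the exponential log-likelihood
    return np.sum(np.log(rate) - rate * t)

draws, cur = [], np.array([-8.0, 3.0])
lp_cur = logpost(*cur)
for _ in range(20000):
    prop = cur + rng.normal(0, [0.8, 0.25])  # random-walk proposal
    lp_prop = logpost(*prop)
    if np.log(rng.random()) < lp_prop - lp_cur:   # accept/reject step
        cur, lp_cur = prop, lp_prop
    draws.append(cur)
draws = np.array(draws[5000:])               # discard burn-in
print("posterior means (a, b):", draws.mean(axis=0).round(2))
```

The posterior for (a, b) is strongly correlated because the stress levels span a narrow range, which is exactly why tuning (or an adaptive sampler, as used for the Weibull case in the thesis) matters in practice.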

Prediction of protein secondary structure using binary classification trees, naive Bayes classifiers and the logistic regression classifier


**Authors:** Eldud Omer, Ahmed Abdelkarim **Date:** 2016 **Subjects:** Bayesian statistical decision theory, Logistic regression analysis, Biostatistics, Proteins -- Structure **Language:** English **Type:** Thesis, Masters, MSc **Identifier:** vital:5581, http://hdl.handle.net/10962/d1019985 **Description:** The secondary structure of proteins is predicted using various binary classifiers. The data are adopted from the RS126 database. The original data consist of protein primary and secondary structure sequences, encoded using alphabetic letters. These data are encoded into unary vectors comprising ones and zeros only. Different binary classifiers, namely naive Bayes, logistic regression and classification trees, using hold-out and 5-fold cross-validation, are trained using the encoded data. For each of the classifiers three classification tasks are considered, namely helix against not helix (H/∼H), sheet against not sheet (S/∼S) and coil against not coil (C/∼C). The performance of these binary classifiers is compared using the overall accuracy in predicting the protein secondary structure for various window sizes. Our results indicate that hold-out cross-validation achieved higher accuracy than 5-fold cross-validation. The naive Bayes classifier, using 5-fold cross-validation, achieved the lowest accuracy for predicting helix against not helix. The classification tree classifiers, using 5-fold cross-validation, achieved the lowest accuracies for both coil against not coil and sheet against not sheet classifications. The logistic regression classifier's accuracy depends on the window size; there is a positive relationship between accuracy and window size. The logistic regression classifier achieved the highest accuracy of the three for each classification task: helix against not helix with accuracy 77.74 percent, sheet against not sheet with accuracy 81.22 percent and coil against not coil with accuracy 73.39 percent. It is noted that it would be easier to compare classifiers if the classification process could be facilitated entirely in R. Alternatively, it would be easier to assess these logistic regression classifiers if SPSS had a function to determine the accuracy of the logistic regression classifier. **Full Text:** **Date Issued:** 2016

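The encoding scheme described above (unary vectors over a sliding window, one binary task at a time) can be sketched as follows with toy sequences. Because the labels here are random, the reported accuracy will sit near chance, unlike the RS126 results quoted in the abstract; scikit-learn's logistic regression is assumed.

```python
# Sketch: one-hot ("unary") encoding of sliding windows over a protein
# sequence, then a binary helix/not-helix logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

AA = "ACDEFGHIKLMNPQRSTVWY"                     # 20 amino-acid letters
rng = np.random.default_rng(0)
seq = "".join(rng.choice(list(AA), 400))         # toy primary sequence
ss = rng.choice(list("HEC"), 400)                # toy secondary structure

w = 13                                           # odd window size
half = w // 2
X, y = [], []
for i in range(half, len(seq) - half):
    window = seq[i - half: i + half + 1]
    onehot = np.zeros((w, len(AA)))              # one indicator per position
    for j, aa in enumerate(window):
        onehot[j, AA.index(aa)] = 1.0
    X.append(onehot.ravel())
    y.append(1 if ss[i] == "H" else 0)           # helix vs not-helix task
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print("hold-out accuracy:", round(clf.score(Xte, yte), 3))
```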

Eliciting and combining expert opinion : an overview and comparison of methods

**Authors:** Chinyamakobvu, Mutsa Carole **Date:** 2015 **Subjects:** Decision making -- Statistical methods, Expertise, Bayesian statistical decision theory, Statistical decision, Delphi method, Paired comparisons (Statistics) **Language:** English **Type:** Thesis, Masters, MSc **Identifier:** vital:5579, http://hdl.handle.net/10962/d1017827 **Description:** Decision makers have long relied on experts to inform their decision making. Expert judgment analysis is a way to elicit and combine the opinions of a group of experts to facilitate decision making. The use of expert judgment is most appropriate when there is a lack of data for obtaining reasonable statistical results. The experts are asked for advice by one or more decision makers who face a specific real decision problem. The decision makers are outside the group of experts and are jointly responsible and accountable for the decision, and committed to finding solutions that everyone can live with. The emphasis is on the decision makers learning from the experts. The focus of this thesis is an overview and comparison of the various elicitation and combination methods available. These include the traditional committee method, the Delphi method, the paired comparisons method, the negative exponential model, Cooke's classical model, the histogram technique, the use of the Dirichlet distribution in the case of a set of uncertain proportions which must sum to one, and the employment of overfitting. The supra-Bayesian approach, the determination of weights for the experts, and the combination of expert opinions where each opinion is associated with a confidence level representing the expert's conviction of his own judgment are also considered. **Full Text:** **Date Issued:** 2015

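Of the combination rules surveyed, the weighted linear opinion pool is the simplest to state. Here is a sketch with invented experts, weights and distributions:

```python
# Sketch: combining expert densities with a weighted linear opinion pool.
# Since the weights sum to one, the pooled density also integrates to one.
import numpy as np
from scipy.stats import norm

x = np.linspace(0, 20, 401)
experts = [norm(8, 1.5), norm(10, 2.0), norm(12, 3.0)]   # elicited beliefs
weights = np.array([0.5, 0.3, 0.2])                      # combination weights

pooled = sum(w * e.pdf(x) for w, e in zip(weights, experts))
dx = x[1] - x[0]
print("pooled mean:", round((x * pooled).sum() * dx, 2))  # ~ weighted mean
```

Other schemes discussed in the thesis, such as Cooke's classical model, differ mainly in how the weights are derived (from calibration and information scores on seed questions rather than being chosen ad hoc).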

An analysis of the Libor and Swap market models for pricing interest-rate derivatives

**Authors:** Mutengwa, Tafadzwa Isaac **Date:** 2012 **Subjects:** LIBOR market model, Monte Carlo method, Interest rates -- Mathematical models, Derivative securities **Language:** English **Type:** Thesis, Masters, MSc **Identifier:** vital:5573, http://hdl.handle.net/10962/d1005535 **Description:** This thesis focuses on the arbitrage-free (fair) pricing of interest rate derivatives, in particular caplets and swaptions, using the LIBOR market model (LMM) developed by Brace, Gatarek, and Musiela (1997) and the swap market model (SMM) developed by Jamshidian (1997), respectively. Today, in most financial markets, interest rate derivatives are priced using the renowned Black-Scholes formula developed by Black and Scholes (1973). We present new pricing models for caplets and swaptions that can be implemented in the financial market other than the Black-Scholes model. We theoretically construct these "new market models" and then test their practical aspects. We show that the dynamics of the LMM imply a pricing formula for caplets that has the same structure as the Black-Scholes pricing formula for a caplet used by market practitioners. For the SMM we likewise theoretically construct an arbitrage-free interest rate model that implies a pricing formula for swaptions with the same structure as the Black-Scholes pricing formula for swaptions. We empirically compare the pricing performance of the LMM against the Black-Scholes model for pricing caplets using Monte Carlo methods. **Full Text:** **Date Issued:** 2012

Bayesian logistic regression models for credit scoring

**Authors:**Webster, Gregg**Date:**2011**Subjects:**Bayesian statistical decision theory , Credit scoring systems , Regression analysis , Logistic regression analysis , Monte Carlo method , Markov processes , Financial institutions**Language:**English**Type:**Thesis , Masters , MCom**Identifier:**vital:5574 , http://hdl.handle.net/10962/d1005538**Description:**The Bayesian approach to logistic regression modelling for credit scoring is useful when there are data quantity issues. Data quantity issues might occur when a bank is opening in a new location or there is a change in the scoring procedure. Making use of prior information (available from the coefficients estimated on other data sets, or expert knowledge about the coefficients), a Bayesian approach is proposed to improve the credit scoring models. To achieve this, a data set is split into two sets, “old” data and “new” data. Priors are obtained from a model fitted on the “old” data. This model is assumed to be a scoring model used by a financial institution in the current location. The financial institution is then assumed to expand into a new economic location where there is limited data. The priors from the model on the “old” data are then combined in a Bayesian model with the “new” data to obtain a model which represents all the available information. The predictive performance of this Bayesian model is compared to that of a model which does not make use of any prior information. It is found that the use of relevant prior information improves the predictive performance when the size of the “new” data is small; as the size of the “new” data increases, the importance of including prior information decreases.**Full Text:****Date Issued:**2011
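
A minimal sketch of the idea, under simulated data: fit a logistic model on plentiful "old" data, centre a Gaussian prior at those coefficients, and find the posterior mode on a small "new" data set. The data-generating coefficients and prior variance are illustrative assumptions, and the MAP point estimate stands in for the full MCMC analysis a Bayesian treatment would use.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate(n, beta):
    """Simulate logistic-regression data with an intercept column."""
    X = np.column_stack([np.ones(n), rng.standard_normal((n, len(beta) - 1))])
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X, rng.binomial(1, p)

def neg_log_posterior(beta, X, y, prior_mean, prior_var):
    """Negative log posterior: Bernoulli likelihood + Gaussian prior."""
    eta = X @ beta
    log_lik = np.sum(y * eta - np.log1p(np.exp(eta)))
    log_prior = -0.5 * np.sum((beta - prior_mean) ** 2) / prior_var
    return -(log_lik + log_prior)

true_beta = np.array([-1.0, 0.8, -0.5])     # hypothetical scorecard coefficients
X_old, y_old = simulate(5000, true_beta)    # plentiful "old" data
X_new, y_new = simulate(50, true_beta)      # scarce "new" data

# MLE on old data (a flat prior approximated by a huge variance).
old_fit = minimize(neg_log_posterior, np.zeros(3),
                   args=(X_old, y_old, np.zeros(3), 1e10)).x
# MAP on new data, with the prior centred at the old coefficients.
map_fit = minimize(neg_log_posterior, old_fit,
                   args=(X_new, y_new, old_fit, 0.5)).x
# New-data-only MLE, for comparison with the prior-informed fit.
mle_new = minimize(neg_log_posterior, np.zeros(3),
                   args=(X_new, y_new, np.zeros(3), 1e10)).x
print("prior-informed:", map_fit, " new-data MLE:", mle_new)
```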

Cointegration in equity markets: a comparison between South African and major developed and emerging markets

**Authors:**Petrov, Pavel**Date:**2011**Subjects:**Cointegration , Stock exchanges -- South Africa , Stock exchanges -- Developing countries , Stock exchanges -- Developed countries , South Africa -- Economic conditions , Portfolio management -- South Africa , Econometrics , Autoregression (Statistics)**Language:**English**Type:**Thesis , Masters , MCom**Identifier:**vital:5575 , http://hdl.handle.net/10962/d1005539**Description:**Cointegration has important implications for portfolio diversification. One of these is that in order to spread risk it is advisable to invest in markets that are not cointegrated. Over the last several decades communication technology has made the world a smaller place, and hence cointegration in equity markets has become more prevalent. The bulk of research into cointegration focuses on developed and Asian markets, with little research having been done on African markets. This study compares the Engle-Granger and Johansen tests for cointegration and uses them to calculate the level of cointegration between South African and other global equity markets. Each market is compared pair-wise with South Africa, and the results show that, in general, South Africa is cointegrated with other emerging markets but not with African or developed markets. Short-run analysis with the error correction model was carried out and showed that, in general, markets respond slowly to any disequilibrium. Innovation accounting methods showed that the country placed first in the Cholesky ordering dominates the other. Multivariate cointegration was carried out using three selections of 4, 6 and 8 market portfolios. One of the markets was SA and the others were all chosen on the criterion that they are not pair-wise cointegrated with SA. The level of cointegration varied depending on the portfolios, as did the error correction rates, impulse responses and variance decompositions. The one constant was that the USA dominated any portfolio into which it was introduced. Finally, recommendations were made about which market portfolio an investor should consider most favourable.**Full Text:****Date Issued:**2011
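
The pair-wise testing step might look like the following sketch, which runs an Engle-Granger cointegration test from statsmodels on two simulated index series that share a common stochastic trend by construction; real market data would replace the simulated series.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 500
common_trend = np.cumsum(rng.standard_normal(n))        # shared random walk
market_a = common_trend + rng.standard_normal(n)        # stand-in for one index
market_b = 0.8 * common_trend + rng.standard_normal(n)  # stand-in for another

t_stat, p_value, crit = coint(market_a, market_b)
print(f"Engle-Granger t-stat {t_stat:.2f}, p-value {p_value:.3f}")
# A small p-value rejects "no cointegration", as expected for these series.
```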

Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers

**Authors:**Dastile, Xolani Collen**Date:**2011**Subjects:**Mathematical statistics , Analysis of variance , Nearest neighbor analysis (Statistics) , Trees--Classification**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5567 , http://hdl.handle.net/10962/d1002807 , Mathematical statistics , Analysis of variance , Nearest neighbor analysis (Statistics) , Trees--Classification**Description:**The purpose of the present thesis is to show that hyperspectral data can be used for discrimination between different tree species. The data set used in this study contains the hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: nearest neighbour classifiers and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates. However, binary classification works better. We construct binary classifiers for all possible binary classification problems and combine them with error correcting output codes. We show in particular that the use of 1-nearest neighbour binary classifiers results in no improvement compared to a direct 1-nearest neighbour 7-class predictor. In contrast to this negative result, the use of neural network binary classifiers improves accuracy by 10% compared to a direct neural network 7-class predictor, and error rates become acceptable. This can be further improved by choosing only suitable binary classifiers for combination.**Full Text:****Date Issued:**2011
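
A minimal sketch of the combination scheme, assuming scikit-learn and synthetic data in place of the hyperspectral leaf measurements: error correcting output codes wrap a binary base learner (here 1-nearest neighbour and a small neural network) to solve a 7-class problem.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic high-dimensional data with seven classes, standing in for
# the leaf-level hyperspectral measurements.
X, y = make_classification(n_samples=700, n_features=50, n_informative=20,
                           n_classes=7, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, base in [("1-NN", KNeighborsClassifier(n_neighbors=1)),
                   ("neural net", MLPClassifier(max_iter=2000, random_state=0))]:
    # ECOC trains one binary classifier per code bit and decodes by
    # nearest codeword, which is how the binary learners are combined.
    ecoc = OutputCodeClassifier(base, code_size=2.0, random_state=0)
    print(name, "ECOC accuracy:", ecoc.fit(X_tr, y_tr).score(X_te, y_te))
```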

Application of multiserver queueing to call centres

**Authors:**Majakwara, Jacob**Date:**2010**Subjects:**Call centers , ERLANG (Computer program language) , Queuing theory**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5578 , http://hdl.handle.net/10962/d1015461**Description:**The simplest and most widely used queueing model in call centres is the M/M/k system, sometimes referred to as Erlang-C. For many applications the model is an over-simplification. The Erlang-C model ignores, among other things, busy signals, customer impatience and services that span multiple visits. Although the Erlang-C formula is easily implemented, it is not easy to obtain insight from its answers (for example, to find an approximate answer to questions such as "how many additional agents do I need if the arrival rate doubles?"). An approximation of the Erlang-C formula that gives structural insight into this type of question would help in understanding economies of scale in call centre operations. Erlang-C based predictions can also turn out highly inaccurate because of violations of underlying assumptions, and these violations are not straightforward to model. For example, non-exponential service times lead one to the M/G/k queue which, in stark contrast to the M/M/k system, is difficult to analyse. This thesis deals mainly with the general M/GI/k model with abandonment: the arrival process conforms to a Poisson process, service durations are independent and identically distributed with a general distribution, there are k servers, and customer abandonment times are independent and identically distributed with a general distribution. This thesis analyses call centres using the M/GI/k model with abandonment, with data simulated using the EZSIM software. The paper by Brown et al. [3], entitled "Statistical Analysis of a Telephone Call Centre: A Queueing-Science Perspective," is the basis upon which this thesis is built.**Full Text:****Date Issued:**2010
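
For reference, the Erlang-C probability that an arriving call must wait in an M/M/k system can be computed directly; the staffing numbers below are illustrative.

```python
import math

def erlang_c(lam, mu, k):
    """P(wait) for an M/M/k queue with arrival rate lam, service rate mu
    and k agents; requires offered load a = lam/mu < k for stability."""
    a = lam / mu
    if a >= k:
        raise ValueError("unstable system: offered load must be below k")
    top = a**k / math.factorial(k) * k / (k - a)
    bottom = sum(a**n / math.factorial(n) for n in range(k)) + top
    return top / bottom

# 100 calls/hour, mean handling time 3 minutes (mu = 20/hour), 6 agents.
print(f"P(wait) = {erlang_c(lam=100, mu=20, k=6):.3f}")
```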

Lag length selection for vector error correction models

**Authors:**Sharp, Gary David**Date:**2010**Subjects:**Akaike Information Criterion , Mathematical models -- Evaluation , Autoregression (Statistics) , Error analysis (Mathematics)**Language:**English**Type:**Thesis , Doctoral , PhD**Identifier:**vital:5568 , http://hdl.handle.net/10962/d1002808**Description:**This thesis investigates the problem of model identification in a Vector Autoregressive framework. The study reviews the existing research and conducts an extensive simulation-based analysis of thirteen information theoretic criteria (IC), one of which is a novel derivation. The simulation exercise considers the evaluation of seven alternative error restricted vector autoregressive models with four different lag lengths. Alternative sample sizes and parameterisations are also evaluated and compared to results in the existing literature. The results of the comparative analysis provide strong support for the efficiency-based criterion of Akaike; in particular, the selection capability of the novel criterion, referred to as a modified corrected Akaike information criterion, demonstrates useful finite-sample properties.**Full Text:****Date Issued:**2010
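
A minimal sketch of information-criterion lag selection in this setting, using statsmodels on a simulated bivariate VAR(2); since the data are generated with two lags, the criteria should favour a lag order of two.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulate a bivariate VAR(2) with known coefficient matrices.
rng = np.random.default_rng(0)
n = 500
y = np.zeros((n, 2))
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])
for t in range(2, n):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + 0.5 * rng.standard_normal(2)

# Compare AIC, BIC, FPE and HQIC across candidate lag lengths.
selection = VAR(y).select_order(maxlags=8)
print(selection.summary())
print("AIC choice:", selection.aic)
```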

Analytic pricing of American put options

**Authors:**Glover, Elistan Nicholas**Date:**2009**Subjects:**Options (Finance) -- Prices -- Mathematical models , Derivative securities -- Prices -- Mathematical models , Finance -- Mathematical models , Martingales (Mathematics)**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5566 , http://hdl.handle.net/10962/d1002804 , Options (Finance) -- Prices -- Mathematical models , Derivative securities -- Prices -- Mathematical models , Finance -- Mathematical models , Martingales (Mathematics)**Description:**American options are the most commonly traded financial derivatives in the market. Pricing these options fairly, so as to avoid arbitrage, is of paramount importance. Closed form solutions for American put options cannot be utilised in practice, and so numerical techniques are employed. This thesis looks at the work done by other researchers to find an analytic solution to the American put option pricing problem and suggests a practical method that uses Monte Carlo simulation to approximate the American put option price. The theory behind option pricing is first discussed using a discrete model. Once the concepts of arbitrage-free pricing and hedging have been dealt with, this model is extended to a continuous-time setting. Martingale theory is introduced to put the option pricing theory in a more formal framework. The construction of a hedging portfolio is discussed in detail and it is shown how financial derivatives are priced according to a unique risk-neutral probability measure. The Black-Scholes model is discussed and utilised to find closed form solutions for European style options. American options are discussed in detail and it is shown that, under certain conditions, American style options admit closed form solutions. Various numerical techniques are presented to approximate the true American put option price. Chief among these is the Richardson extrapolation on a sequence of Bermudan option prices, a method developed by Geske and Johnson, which is extended here to a Repeated-Richardson extrapolation technique. Finally, Monte Carlo simulation is used to approximate Bermudan put option prices, which are then extrapolated to approximate the price of an American put option. The use of extrapolation techniques was hampered by the presence of non-uniform convergence of the Bermudan put option sequence. When convergence was uniform, the approximations were accurate to within a few cents.**Full Text:****Date Issued:**2009
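
A minimal sketch of the final step the abstract describes, under illustrative parameters: price Bermudan puts with one, two and three exercise dates by least-squares Monte Carlo, then apply the three-point Geske-Johnson Richardson extrapolation. The extrapolation weights amplify Monte Carlo noise, which is one practical reason the convergence issues mentioned above matter.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # illustrative parameters

def bermudan_put_lsmc(n_ex, n_paths=200_000, seed=0):
    """Least-squares Monte Carlo price of a Bermudan put with n_ex
    equally spaced exercise dates, the last at maturity T."""
    rng = np.random.default_rng(seed)
    dt = T / n_ex
    disc = np.exp(-r * dt)
    # Simulate risk-neutral paths at the exercise dates only.
    Z = rng.standard_normal((n_paths, n_ex))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))
    value = np.maximum(K - S[:, -1], 0.0)          # exercise value at maturity
    for t in range(n_ex - 2, -1, -1):              # step back through the dates
        value *= disc                              # continuation value at date t
        payoff = np.maximum(K - S[:, t], 0.0)
        itm = payoff > 0
        # Regress continuation value on a quadratic in the stock price
        # (in-the-money paths only); exercise where payoff beats the fit.
        coef = np.polyfit(S[itm, t], value[itm], 2)
        cont = np.polyval(coef, S[itm, t])
        value[itm] = np.where(payoff[itm] > cont, payoff[itm], value[itm])
    return disc * value.mean()                     # discount first date to time 0

p1, p2, p3 = (bermudan_put_lsmc(n) for n in (1, 2, 3))
# Three-point Geske-Johnson Richardson extrapolation.
american = p3 + 3.5 * (p3 - p2) - 0.5 * (p2 - p1)
print(f"P1={p1:.3f} P2={p2:.3f} P3={p3:.3f} extrapolated={american:.3f}")
```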

Clustering algorithms and their effect on edge preservation in image compression

**Authors:**Ndebele, Nothando Elizabeth**Date:**2009**Subjects:**Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms**Language:**English**Type:**Thesis , Masters , MSc**Identifier:**vital:5576 , http://hdl.handle.net/10962/d1008210 , Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms**Description:**Image compression aims to reduce the amount of data that is stored or transmitted for images. One technique that may be used to this end is vector quantization. Vectors may be used to represent images. Vector quantization reduces the number of vectors required for an image by representing a cluster of similar vectors by one typical vector that is part of a set of vectors referred to as the codebook. For compression, for each image vector, only the index of the closest codebook vector is stored or transmitted. For reconstruction, the image vectors are again replaced by the closest codebook vectors. Hence vector quantization is a lossy compression technique, and the quality of the reconstructed image depends strongly on the quality of the codebook. The design of the codebook is therefore an important part of the process. In this thesis we examine three clustering algorithms which can be used for codebook design in image compression: c-means (CM), fuzzy c-means (FCM) and learning vector quantization (LVQ). We give a description of these algorithms and their application to codebook design. Edges are an important part of the visual information contained in an image. It is essential therefore to use codebooks which allow an accurate representation of the edges. One of the shortcomings of using vector quantization is poor edge representation. We therefore carry out experiments using these algorithms to compare their edge preserving qualities. We also investigate the combination of these algorithms with classified vector quantization (CVQ) and the replication method (RM). Both these methods have been suggested as ways of improving edge representation. We use a cross-validation approach to estimate the mean squared error and so measure the performance of each of the algorithms and the edge preserving methods. The results reflect that the edges are less accurately represented than the non-edge areas when using CM, FCM and LVQ. The advantage of using CVQ is that the time taken for codebook design is reduced, particularly for CM and FCM. RM is found to be effective where the codebook is trained using a set that has a larger proportion of edges than the test set.**Full Text:****Date Issued:**2009
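
A minimal sketch of codebook design by c-means (k-means) clustering for vector quantization, with a random array standing in for a real image: 4x4 blocks are clustered, each block is replaced by its nearest codebook vector, and the quantization distortion is measured.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in "image"

# Split the image into 4x4 blocks, flattened into 16-dimensional vectors.
blocks = (image.reshape(16, 4, 16, 4)
               .transpose(0, 2, 1, 3)
               .reshape(-1, 16))

# c-means (hard k-means) codebook: cluster centres are the codebook vectors.
codebook_size = 32
km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(blocks)
indices = km.predict(blocks)                    # what would be stored/transmitted
reconstructed_blocks = km.cluster_centers_[indices]

# Reassemble the image and measure the distortion introduced by quantization.
reconstructed = (reconstructed_blocks.reshape(16, 16, 4, 4)
                                     .transpose(0, 2, 1, 3)
                                     .reshape(64, 64))
mse = np.mean((image - reconstructed) ** 2)
print(f"codebook vectors: {codebook_size}, reconstruction MSE: {mse:.1f}")
```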