Stochastic models in finance
- Authors: Mazengera, Hassan
- Date: 2017
- Subjects: Finance -- Mathematical models , C++ (Computer program language) , GARCH model , Lebesgue-Radon-Nikodym theorems , Radon measures , Stochastic models , Stochastic processes , Stochastic processes -- Computer programs , Martingales (Mathematics) , Pricing -- Mathematical models
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/162724 , vital:40976
- Description: Stochastic models for pricing financial securities are developed. First, we consider the Black-Scholes model, a classic example of a complete market model, before focusing on Lévy driven models. Jumps, induced in a model by the inclusion of a Poisson process, may render the market incomplete. Lévy driven models are more realistic for modelling asset price dynamics than the Black-Scholes model. Martingales are central to pricing, especially of derivatives, and they receive due attention in that context. A growing number of important pricing models admit no analytical solution, so computational methods come in handy, see Broadie and Glasserman (1997); computational methods are, of course, also applicable to models with analytical solutions. We computationally value selected stochastic financial models using C++. Computational methods are also used to value complex financial instruments such as path dependent derivatives, and this pricing procedure is applied in the computational valuation of a stochastic (revenue based) loan contract. Derivatives with simple payoff functions and models with analytical solutions are considered for illustrative purposes. The Black-Scholes PDE is difficult to solve analytically, and finite difference methods are widely used; an explicit finite difference scheme is considered in this thesis for the computational valuation of derivatives modelled by the Black-Scholes PDE. Stochastic modelling of asset prices is important for the valuation of derivatives: Gaussian, exponential and gamma variates are simulated for valuation purposes.
- Full Text:
- Date Issued: 2017
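The explicit finite difference valuation described in this abstract can be illustrated with a short sketch. The thesis implements its models in C++; the following is a minimal Python sketch, not code from the thesis, with the grid sizes, S-axis truncation and boundary conditions chosen as assumptions for illustration, and a closed-form European put included as a benchmark.

```python
import math

def bs_put_analytic(S, K, r, sigma, T):
    # Closed-form Black-Scholes European put, used as a benchmark.
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return K * math.exp(-r * T) * N(-d2) - S * N(-d1)

def bs_put_explicit_fd(S0, K, r, sigma, T, M=100, N_t=5000):
    # Explicit finite-difference scheme for the Black-Scholes PDE,
    # stepping backwards in time from the terminal payoff max(K - S, 0).
    # N_t must be large enough for the scheme's stability restriction
    # (roughly dt <= 1 / (sigma^2 * M^2)).
    S_max = 4.0 * K                     # assumed truncation of the S-axis
    dS = S_max / M
    dt = T / N_t
    V = [max(K - i * dS, 0.0) for i in range(M + 1)]   # value at maturity
    for _ in range(N_t):
        new = V[:]
        for i in range(1, M):
            S = i * dS
            delta = (V[i + 1] - V[i - 1]) / (2.0 * dS)
            gamma = (V[i + 1] - 2.0 * V[i] + V[i - 1]) / dS ** 2
            # discretised PDE: V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0
            new[i] = V[i] + dt * (0.5 * sigma ** 2 * S ** 2 * gamma
                                  + r * S * delta - r * V[i])
        new[0] = (1.0 - r * dt) * V[0]  # at S = 0 the PDE reduces to V_t = r V
        new[M] = 0.0                    # deep out-of-the-money boundary
        V = new
    i = int(S0 / dS)                    # linear interpolation at S0
    w = (S0 - i * dS) / dS
    return (1.0 - w) * V[i] + w * V[i + 1]
```

With enough time steps to satisfy the stability restriction, the scheme agrees with the closed-form price to within a few cents for at-the-money parameters.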
Statistical analyses of artificial waterpoints: their effect on the herbaceous and woody structure composition within the Kruger National Park
- Authors: Goodall, Victoria Lucy
- Date: 2007
- Subjects: South African National Parks , Ecology -- Statistical methods , Regression analysis , Log-linear models , Game reserves -- South Africa , Kruger National Park (South Africa)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5570 , http://hdl.handle.net/10962/d1002810 , South African National Parks , Ecology -- Statistical methods , Regression analysis , Log-linear models , Game reserves -- South Africa , Kruger National Park (South Africa)
- Description: The objective of this project is to link the statistical theory used in the ecological sciences with an actual project developed for South African National Parks Scientific Services. It investigates the changes that have occurred in the herbaceous and woody structure due to the closure of artificial waterpoints, including the impacts that elephants and other herbivores have on the vegetation of the Kruger National Park. The project was designed in conjunction with South African National Parks (SANP) Scientific Services and is registered with this department; its results will be submitted to Scientific Services in accordance with the terms and conditions of a SANP research project. A major concern within the KNP is the declining numbers of rare antelope, and numerous projects have been developed to investigate possible ways of halting this decline and thus protecting the heterogeneity of the Kruger National Park. Three datasets were investigated, covering three aspects of vegetation structure and composition within the KNP. The first investigated the changes that have occurred since the N'washitsumbe enclosure in the Far Northern KNP was fenced off from the rest of the park. The results show that over the 40 years since the enclosure was built, a significant difference has developed in the abundance of Increaser 2 and Decreaser grass species between the inside and the outside of the enclosure. Increaser 2 and Decreaser are categories of a grass species classification based on whether a species thrives under, or is depressed by, heavy grazing. The difference in grass species composition and structure between the inside and the outside of the enclosure indicates that the grazing animals within the KNP have influenced the grass composition in a way that favours the dominant animals.
This has resulted in a declining roan antelope population, one of the species considered a 'rare antelope'. Many artificial waterpoints (boreholes and dams) have been closed throughout the KNP in the hope of shifting vegetation structure and composition in favour of the roan. Veld condition assessment data for 87 boreholes throughout the Park were analyzed to determine whether the veld in the vicinity is beginning to change towards a more Decreaser-dominated sward, which would favour the roan. The results were analyzed for the different regions of the Park and indicate that changes are becoming evident, although they are not yet conclusive. The majority of the boreholes were closed between 1994 and 1998, which means that little data were available for analysis; a similar study conducted in another 10 years' time might reveal more meaningful results. Nevertheless, the results are moving in the direction hoped for by the management of the KNP: the grass composition has a higher proportion of Decreaser grasses since the closure of the waterpoints, and the grass biomass around these areas has also improved. The results were analyzed on an individual basis and then on a regional basis, as the minimal data meant that the individual analyses did not provide any significant results. A third study examined the impact of the rapidly increasing elephant population on the vegetation within the riparian zone along three rivers in the Far Northern region of the KNP. The riparian zone is an important part of the landscape, providing food and shade for many animals. The elephant population has increased substantially since the termination of the culling program, so the feeding requirements of the population have increased, which could result in severe damage to the vegetation, as elephants can be very destructive feeders.
The results show surprising differences between the three years of data analyzed; they indicate that elephants target specific height ranges of trees when feeding, but do not seem to consistently target specific tree species. This is positive for the diversity of the riparian zone, a region that is very important both ecologically and aesthetically for the tourists who visit the Park.
- Full Text:
- Date Issued: 2007
An analysis of neural networks and time series techniques for demand forecasting
- Authors: Winn, David
- Date: 2007
- Subjects: Time-series analysis , Neural networks (Computer science) , Artificial intelligence , Marketing -- Management , Marketing -- Data processing , Marketing -- Statistical methods , Consumer behaviour
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5572 , http://hdl.handle.net/10962/d1004362 , Time-series analysis , Neural networks (Computer science) , Artificial intelligence , Marketing -- Management , Marketing -- Data processing , Marketing -- Statistical methods , Consumer behaviour
- Description: This research examines the plausibility of developing demand forecasting techniques that can consistently and accurately predict demand. Both Time Series Techniques and Artificial Neural Networks are investigated, with deodorant sales in South Africa as the specific case study. Marketing techniques used to influence consumer buyer behaviour are considered, and these factors are integrated into the forecasting models wherever possible. The results of this research suggest that Artificial Neural Networks can be developed which consistently outperform industry forecasting targets as well as Time Series forecasts, indicating that producers could reduce costs by adopting this more effective method.
- Full Text:
- Date Issued: 2007
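As a toy illustration of the kind of neural network forecaster investigated in this thesis (not the thesis's model, which is trained on South African deodorant sales data), the following sketch trains a one-hidden-layer network by stochastic gradient descent to predict the next value of a synthetic seasonal series from its recent lags. The architecture, learning rate and data are all assumptions made for illustration.

```python
import math, random

def train_mlp_forecaster(series, lags=4, hidden=6, lr=0.05, epochs=400, seed=0):
    # A tiny one-hidden-layer network (tanh units, linear output) trained
    # by per-sample gradient descent on (lagged window -> next value) pairs.
    rng = random.Random(seed)
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(lags)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    data = [(series[t - lags:t], series[t]) for t in range(lags, len(series))]

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    for _ in range(epochs):
        for x, y in data:
            h, yhat = forward(x)
            err = yhat - y                      # d(0.5*err^2)/d(yhat)
            for i in range(hidden):
                gh = err * W2[i] * (1.0 - h[i] ** 2)   # backprop through tanh
                W2[i] -= lr * err * h[i]
                for j in range(lags):
                    W1[i][j] -= lr * gh * x[j]
                b1[i] -= lr * gh
            b2 -= lr * err
    mse = sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)
    return (lambda x: forward(x)[1]), mse
```

On a smooth seasonal series the trained network's in-sample error falls well below the variance of the series, i.e. it comfortably beats a forecast of the mean.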
Bayesian logistic regression models for credit scoring
- Authors: Webster, Gregg
- Date: 2011
- Subjects: Bayesian statistical decision theory , Credit scoring systems , Regression analysis , Logistic regression analysis , Monte Carlo method , Markov processes , Financial institutions
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5574 , http://hdl.handle.net/10962/d1005538
- Description: The Bayesian approach to logistic regression modelling for credit scoring is useful when there are data quantity issues, as might occur when a bank is opening in a new location or there is a change in the scoring procedure. Making use of prior information (available from coefficients estimated on other data sets, or from expert knowledge about the coefficients), a Bayesian approach is proposed to improve credit scoring models. To achieve this, a data set is split into two sets, “old” data and “new” data. Priors are obtained from a model fitted on the “old” data; this model is assumed to be the scoring model used by a financial institution in its current location. The financial institution is then assumed to expand into a new economic location where data are limited. The priors from the model on the “old” data are combined in a Bayesian model with the “new” data to obtain a model which represents all the available information. The predictive performance of this Bayesian model is compared to that of a model which does not make use of any prior information. It is found that the use of relevant prior information improves predictive performance when the size of the “new” data is small; as the size of the “new” data increases, the importance of including prior information decreases.
- Full Text:
- Date Issued: 2011
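The prior-combination idea described in this abstract can be sketched as a MAP (maximum a posteriori) estimator for logistic regression under an independent Gaussian prior centred on coefficients taken from the “old” data. This is an illustrative sketch, not the thesis's implementation; the gradient-ascent fitting routine, the Gaussian prior form and all parameter values are assumptions. A flat prior (precision 0) recovers ordinary maximum likelihood, while a high precision pins the estimate near the prior mean.

```python
import math

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def fit_logistic_map(X, y, prior_mean, prior_prec, lr=0.1, iters=2000):
    # MAP estimate of logistic-regression coefficients under independent
    # Gaussian priors N(prior_mean[j], 1/prior_prec) on each coefficient,
    # fitted by simple gradient ascent on the log posterior.
    p = len(prior_mean)
    n = len(y)
    beta = prior_mean[:]
    for _ in range(iters):
        # gradient of the log prior ...
        grad = [prior_prec * (prior_mean[j] - beta[j]) for j in range(p)]
        # ... plus the gradient of the log likelihood
        for xi, yi in zip(X, y):
            mu = sigmoid(sum(b * x for b, x in zip(beta, xi)))
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta
```

On a small separable data set, the flat-prior fit produces a large positive slope, while a strong prior centred at zero shrinks the slope towards zero, mirroring how the “old”-data prior dominates when the “new” data are scarce.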
Reliability analysis: assessment of hardware and human reliability
- Authors: Mafu, Masakheke
- Date: 2017
- Subjects: Bayesian statistical decision theory , Reliability (Engineering) , Human machine systems , Probabilities , Markov processes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/6280 , vital:21077
- Description: Most reliability analyses involve the analysis of binary data. Practitioners in the field of reliability place great emphasis on analysing the time periods over which items or systems function (failure time analyses), which make use of different statistical models. This study introduces, reviews and investigates four statistical models for modelling the failure times of non-repairable items, using a Bayesian methodology throughout. The exponential, Rayleigh, gamma and Weibull distributions are considered, the performance of two non-informative priors is investigated, and an application to two failure time distributions is carried out. To meet these objectives, the failure rate and reliability functions of the failure time distributions are calculated. Two non-informative priors, the Jeffreys prior and the general divergence prior, and the corresponding posteriors, are derived for each distribution. Simulation studies are carried out for each distribution, in which coverage rates and credible interval lengths are calculated and discussed, and the gamma and Weibull distributions are applied to failure time data. The Jeffreys prior is found to have a better coverage rate than the general divergence prior. The general divergence prior shows undercoverage when used with the Rayleigh distribution, while the Jeffreys prior produces conservative coverage rates when used with the exponential distribution. Both priors give, on average, similar interval lengths, which increase as the value of the parameter increases, and the two priors perform similarly when used with the gamma and Weibull distributions. A thorough discussion and review of human reliability analysis (HRA) techniques is also presented: twenty HRA techniques are discussed, with a background, description, and advantages and disadvantages given for each.
Case studies in the nuclear, railway and aviation industries are presented to show the importance and applications of HRA. Human error has been shown to be the major contributor to system failure.
- Full Text:
- Date Issued: 2017
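For the exponential distribution the objects described in this abstract have a simple closed form: the Jeffreys prior is p(λ) ∝ 1/λ, and the posterior given n observed failure times with total time Σtᵢ is Gamma(n, Σtᵢ). The sketch below is illustrative, not code from the thesis; in particular, drawing a Monte Carlo credible interval from the Gamma posterior is an assumed shortcut in place of exact gamma quantiles.

```python
import random

def exponential_jeffreys_posterior(times):
    # Under the Jeffreys prior p(lam) ~ 1/lam for exponential failure
    # times, the posterior for the rate lam is Gamma(n, sum(times)),
    # parameterised here as (shape, rate).
    return len(times), sum(times)

def posterior_interval(shape, rate, level=0.95, draws=20000, seed=1):
    # Equal-tailed credible interval from Monte Carlo draws of the Gamma
    # posterior. Note random.gammavariate takes a SCALE parameter, hence
    # the 1/rate below.
    rng = random.Random(seed)
    samples = sorted(rng.gammavariate(shape, 1.0 / rate) for _ in range(draws))
    lo = samples[int((1.0 - level) / 2.0 * draws)]
    hi = samples[int((1.0 + level) / 2.0 * draws)]
    return lo, hi
```

The posterior mean of the rate is shape/rate = n/Σtᵢ, which for a moderate sample lands close to the true rate and well inside the equal-tailed credible interval.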
Lag length selection for vector error correction models
- Authors: Sharp, Gary David
- Date: 2010
- Subjects: Akaike Information Criterion , Mathematical models -- Evaluation , Autoregression (Statistics) , Error analysis (Mathematics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5568 , http://hdl.handle.net/10962/d1002808
- Description: This thesis investigates the problem of model identification in a vector autoregressive framework. The study reviews the existing research and conducts an extensive simulation-based analysis of thirteen information theoretic criteria (IC), one of which is a novel derivation. The simulation exercise considers the evaluation of seven alternative error-restricted vector autoregressive models with four different lag lengths; alternative sample sizes and parameterisations are also evaluated and compared to results in the existing literature. The results of the comparative analysis provide strong support for the efficiency-based criterion of Akaike; in particular, the selection capability of the novel criterion, referred to as a modified corrected Akaike information criterion, demonstrates useful finite sample properties.
- Full Text:
- Date Issued: 2010
Pricing exotic options using C++
- Authors: Nhongo, Tawuya D R
- Date: 2007
- Subjects: C++ (Computer program language) , Monte Carlo method , Simulation methods , Options (Finance) -- Mathematical models , Pricing -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5577 , http://hdl.handle.net/10962/d1008373 , C++ (Computer program language) , Monte Carlo method , Simulation methods , Options (Finance) -- Mathematical models , Pricing -- Mathematical models
- Description: This document demonstrates the use of the C++ programming language as a simulation tool in the efficient pricing of exotic European options. Extensions to the basic problem of simulation pricing are undertaken, including variance reduction by conditional expectation, control variates and antithetic variates. Ultimately we were able to produce a modularized, easily extendable program which effectively makes use of Monte Carlo simulation techniques to price lookback, Asian and barrier exotic options. Theories of variance reduction were validated, except in cases where control variates were combined with the other variance reduction techniques, where we observed increased variance. The main aim of this half-thesis was to produce a C++ program which would produce stable pricings of exotic options.
- Full Text:
- Date Issued: 2007
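The Monte Carlo pricing of an Asian option with antithetic variates can be sketched as follows. The thesis implements this in C++; this is an illustrative Python sketch under geometric Brownian motion, with all parameter values assumed. Each simulated path is paired with its mirror image (negated normals), and the pair average is treated as a single, lower-variance sample.

```python
import math, random

def asian_call_mc(S0, K, r, sigma, T, steps=50, paths=5000,
                  antithetic=False, seed=0):
    # Arithmetic-average Asian call under geometric Brownian motion,
    # priced by plain or antithetic Monte Carlo.
    rng = random.Random(seed)
    dt = T / steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)

    def payoff(zs):
        # Simulate one GBM path from the given normals and return the
        # payoff of the arithmetic-average call.
        S, total = S0, 0.0
        for z in zs:
            S *= math.exp(drift + vol * z)
            total += S
        return max(total / steps - K, 0.0)

    samples = []
    for _ in range(paths):
        zs = [rng.gauss(0.0, 1.0) for _ in range(steps)]
        if antithetic:
            # pair each path with its mirror image; the pair average
            # counts as one (lower-variance) sample
            samples.append(0.5 * (payoff(zs) + payoff([-z for z in zs])))
        else:
            samples.append(payoff(zs))
    return math.exp(-r * T) * sum(samples) / paths
```

Both estimators agree to within Monte Carlo error; the antithetic version simply needs fewer paths for the same accuracy, which is the variance reduction validated in the thesis.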
The application of Classification Trees in the Banking Sector
- Authors: Mtwa, Sithayanda
- Date: 2021-04
- Subjects: To be added
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178514 , vital:42946
- Description: Access restricted until April 2026. , Thesis (MSc) -- Faculty of Science, Statistics, 2021
- Full Text:
- Date Issued: 2021-04
Analytic pricing of American put options
- Authors: Glover, Elistan Nicholas
- Date: 2009
- Subjects: Options (Finance) -- Prices -- Mathematical models , Derivative securities -- Prices -- Mathematical models , Finance -- Mathematical models , Martingales (Mathematics)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5566 , http://hdl.handle.net/10962/d1002804 , Options (Finance) -- Prices -- Mathematical models , Derivative securities -- Prices -- Mathematical models , Finance -- Mathematical models , Martingales (Mathematics)
- Description: American options are the most commonly traded financial derivatives in the market. Pricing these options fairly, so as to avoid arbitrage, is of paramount importance. Closed form solutions for American put options cannot be utilised in practice, so numerical techniques are employed. This thesis looks at the work done by other researchers to find an analytic solution to the American put option pricing problem and suggests a practical method, using Monte Carlo simulation, to approximate the American put option price. The theory behind option pricing is first discussed using a discrete model. Once the concepts of arbitrage-free pricing and hedging have been dealt with, this model is extended to a continuous-time setting, and martingale theory is introduced to put the option pricing theory in a more formal framework. The construction of a hedging portfolio is discussed in detail and it is shown how financial derivatives are priced according to a unique risk-neutral probability measure. The Black-Scholes model is discussed and utilised to find closed form solutions for European style options. American options are discussed in detail and it is shown that, under certain conditions, American style options admit closed form solutions. Various numerical techniques are presented to approximate the true American put option price. Chief among these is the Richardson extrapolation on a sequence of Bermudan options, a method developed by Geske and Johnson; this is extended to a Repeated-Richardson extrapolation technique. Finally, Monte Carlo simulation is used to approximate the prices of Bermudan put options, and these values are then extrapolated to approximate the price of an American put option. The use of extrapolation techniques was hampered by the presence of non-uniform convergence of the Bermudan put option sequence; when convergence was uniform, the approximations were accurate to within a few cents.
- Full Text:
- Date Issued: 2009
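The Geske-Johnson approach summarised in the abstract above can be sketched numerically. The following is a minimal illustration of two- and three-point Richardson extrapolation over a sequence of Bermudan put prices; the input prices are hypothetical, not values from the thesis:

```python
def richardson_2pt(p1, p2):
    """Two-point Geske-Johnson extrapolation: p1 is the price of a
    Bermudan put with one exercise date (i.e. European), p2 with two.
    The American price is approximated as p2 + (p2 - p1)."""
    return 2.0 * p2 - p1

def richardson_3pt(p1, p2, p3):
    """Three-point Geske-Johnson extrapolation using Bermudan prices
    with 1, 2 and 3 equally spaced exercise dates."""
    return p3 + 3.5 * (p3 - p2) - 0.5 * (p2 - p1)

# Illustrative monotone Bermudan price sequence
approx_2pt = richardson_2pt(4.00, 4.20)
approx_3pt = richardson_3pt(4.00, 4.20, 4.27)
```

The sketch shows why non-uniform convergence hurts: both formulas amplify the differences between successive Bermudan prices, so a non-monotone sequence produces erratic extrapolated values.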
A review of generalized linear models for count data with emphasis on current geospatial procedures
- Authors: Michell, Justin Walter
- Date: 2016
- Subjects: Spatial analysis (Statistics) , Bayesian statistical decision theory , Geospatial data , Malaria -- Botswana -- Statistics , Malaria -- Botswana -- Research -- Statistical methods
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5582 , http://hdl.handle.net/10962/d1019989
- Description: Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana, using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details of how a spatial database is compiled for a statistical analysis to proceed are provided, and the automation of the entire process is also explored. The final Bayesian spatial model, derived from the non-spatial variable selection procedure using Markov chain Monte Carlo simulation, was fitted to the data. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could only be achieved for parts of the process. The non-spatial selection procedure proved practical, identified stable explanatory variables and provided an objective means for selecting one variable over another; however, it was ultimately not entirely successful because a unique set of spatial variables could not be selected.
- Full Text:
- Date Issued: 2016
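The Markov chain Monte Carlo simulation mentioned in the abstract above can be illustrated in miniature. This is a hedged sketch of a random-walk Metropolis sampler for a single prevalence parameter under a uniform prior; the counts are hypothetical, not MARA data:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 27, 100  # hypothetical positives out of n individuals tested

def log_post(p):
    # Log-posterior of a binomial likelihood under a uniform prior on (0, 1)
    if not 0.0 < p < 1.0:
        return -np.inf
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

p, chain = 0.5, []
for _ in range(5000):
    prop = p + rng.normal(0.0, 0.05)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop                               # Metropolis accept step
    chain.append(p)

posterior_mean = float(np.mean(chain[1000:]))  # discard burn-in
```

A real spatial analysis replaces this scalar parameter with regression coefficients and a spatial random effect, but the accept/reject mechanics are the same.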
Missing values: a closer look
- Authors: Thorpe, Kerri
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/d1017827 , vital:20798
- Description: Problem: In today’s world, missing values are more prevalent than ever. Given the ever-changing and fast-paced global society in which we live, most business and research data produced around the world contain missing data. Locating meticulously precise data can therefore be a hard task in itself, yet may at times prove essential, as the consequences of using incomplete data could be disastrous. The reasons for missing data cropping up in almost all forms of work are numerous and are discussed in this dissertation. For example, those being interviewed or polled may simply ignore questions posed to them, recording equipment may malfunction or be misplaced, or organisers may be unable to locate the respondent in order to rectify the missing data. Whatever the reasons for data being incomplete, it is necessary to avoid using inefficient and incomplete data resulting from the above problems. Therefore, various strategies have been developed to handle these missing values. It is important, however, that these strategies are applied carefully, as missing data treatment can introduce bias into the analysis. This dissertation examines these and other problems in more detail using a data set of records for 581 children who were interviewed in 1990 as part of the National Longitudinal Survey of Youth (NLSY). Approach: Many strategies have been developed to deal with missing values. Traditional methods such as complete case analysis, available case analysis and single imputation are widely used by researchers and are discussed herein. Although these methods are simple and easy to implement, they require assumptions about the data that are not often satisfied in practice.
Over the years, more modern methods, such as multiple imputation and maximum likelihood, have been developed. These methods rely on weaker assumptions and have superior statistical properties compared to the traditional techniques. In this dissertation, the traditional methods are reviewed and assessed in SAS and compared to the more modern techniques. Results: Ad hoc techniques for handling missing data, such as the complete case and available case methods, produce biased parameter estimates when the data are not missing completely at random (MCAR). Single imputation techniques likewise produce biased estimates and result in the underestimation of standard errors. Although the expectation maximisation (EM) algorithm yields unbiased parameter estimates, the lack of convenient standard errors makes it unsuitable for hypothesis testing. Multiple imputation, however, yields unbiased parameter estimates and correctly estimates standard errors. Conclusion: Ignoring missing data in any analysis produces biased parameter estimates. Using single imputation to handle missing values is not recommended, as replacing missing values with a single value does not account for the variation that would have been present had the variables been observed; as a result, the variance is greatly underestimated. The more modern missing data methods, such as the EM algorithm and multiple imputation, are preferred over the traditional techniques, as they require less stringent assumptions and mitigate the downsides of the older methods.
- Full Text:
- Date Issued: 2017
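The abstract's point that single imputation understates the variance is easy to verify numerically. A minimal sketch with simulated MCAR data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, 1000)        # the complete, never-observed data
x_obs = x.copy()
mask = rng.uniform(size=1000) < 0.3    # 30% missing completely at random
x_obs[mask] = np.nan

# Single (mean) imputation: every gap is filled with the observed mean
x_imp = np.where(mask, np.nanmean(x_obs), x_obs)

var_complete_case = float(np.nanvar(x_obs))  # variance over observed values
var_imputed = float(np.var(x_imp))           # variance after mean imputation
# Imputed values contribute zero deviation, shrinking the variance
```

Because every imputed point sits exactly at the mean, the sum of squared deviations is unchanged but is divided by a larger n, so the imputed variance is necessarily smaller than the complete-case variance.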
Cointegration in equity markets: a comparison between South African and major developed and emerging markets
- Authors: Petrov, Pavel
- Date: 2011
- Subjects: Cointegration , Stock exchanges -- South Africa , Stock exchanges -- Developing countries , Stock exchanges -- Developed countries , South Africa -- Economic conditions , Portfolio management -- South Africa , Econometrics , Autoregression (Statistics)
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5575 , http://hdl.handle.net/10962/d1005539
- Description: Cointegration has important implications for portfolio diversification. One of these is that, in order to spread risk, it is advisable to invest in markets that are not cointegrated. Over the last several decades communication technology has made the world a smaller place, and hence cointegration in equity markets has become more prevalent. The bulk of research into cointegration focuses on developed and Asian markets, with little research done on African markets. This study compares the Engle-Granger and Johansen tests for cointegration and uses them to calculate the level of cointegration between South African and other global equity markets. Each market is compared pair-wise with South Africa, and the results indicate that in general South Africa is cointegrated with other emerging markets but not with African or developed markets. Short-run analysis with the error correction model was carried out and showed that markets generally respond slowly to any disequilibrium. Innovation accounting methods showed that the country placed first in the Cholesky ordering dominates the other. Multivariate cointegration was carried out using three selections of 4, 6 and 8 market portfolios. One of the markets was SA; the others were all chosen on the criterion that they are not pair-wise cointegrated with SA. The level of cointegration varied depending on the portfolio, as did the error correction rates, impulse responses and variance decompositions. The one constant was that the USA dominated any portfolio in which it was introduced. Recommendations were finally made about which market portfolio an investor should consider most favourable.
- Full Text:
- Date Issued: 2011
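The first step of the Engle-Granger procedure discussed above can be sketched with simulated data. In practice one would use a packaged test such as statsmodels' `coint`; this numpy-only sketch regresses one hypothetical index on another and checks informally that the cointegrating residual is far less persistent than the raw series:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two hypothetical index series sharing one common stochastic trend
trend = np.cumsum(rng.normal(size=500))
x = trend + rng.normal(scale=0.5, size=500)
y = 1.5 * trend + rng.normal(scale=0.5, size=500)

# Engle-Granger step 1: OLS regression of y on x (with intercept)
A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

def lag1_autocorr(s):
    # Lag-1 autocorrelation as a crude persistence measure
    return float(np.corrcoef(s[:-1], s[1:])[0, 1])

rho_resid = lag1_autocorr(resid)  # near zero: residual is stationary
rho_y = lag1_autocorr(y)          # near one: raw series is a random walk
```

Step 2 of the actual test applies an augmented Dickey-Fuller unit-root test to `resid` with adjusted critical values; the autocorrelation comparison here only conveys the intuition.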
Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers
- Authors: Dastile, Xolani Collen
- Date: 2011
- Subjects: Mathematical statistics , Analysis of variance , Nearest neighbor analysis (Statistics) , Trees--Classification
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5567 , http://hdl.handle.net/10962/d1002807 , Mathematical statistics , Analysis of variance , Nearest neighbor analysis (Statistics) , Trees--Classification
- Description: The purpose of the present thesis is to show that hyperspectral data can be used for discrimination between different tree species. The data set used in this study contains hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: G-nearest neighbour and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates; however, binary classification works better. We constructed binary classifiers for all possible binary classification problems and combined them with Error Correcting Output Codes. In particular, we show that the use of 1-nearest neighbour binary classifiers yields no improvement over a direct 1-nearest neighbour 7-class predictor. In contrast to this negative result, the use of neural network binary classifiers improves accuracy by 10% compared to a direct neural network 7-class predictor, and error rates become acceptable. This can be further improved by choosing only suitable binary classifiers for combination.
- Full Text:
- Date Issued: 2011
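The combination of binary classifiers via Error Correcting Output Codes described above has a standard implementation in scikit-learn; a hedged sketch on synthetic 7-class data (a stand-in, not the hyperspectral leaf measurements):

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 7-class leaf-spectra problem
X, y = make_classification(n_samples=700, n_features=20, n_informative=10,
                           n_classes=7, n_clusters_per_class=1,
                           random_state=0)

# ECOC: each column of the code book defines one binary problem,
# here solved by a 1-nearest-neighbour classifier
ecoc = OutputCodeClassifier(KNeighborsClassifier(n_neighbors=1),
                            code_size=2.0, random_state=0)
train_acc = ecoc.fit(X, y).score(X, y)
```

Swapping the base estimator for a small `MLPClassifier` mirrors the thesis's neural-network variant; a proper comparison would of course use held-out data rather than training accuracy.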
Bayesian accelerated life tests for the Weibull distribution under non-informative priors
- Authors: Mostert, Philip
- Date: 2020
- Subjects: Accelerated life testing -- Statistical methods , Accelerated life testing -- Mathematical models , Failure time data analysis , Bayesian statistical decision theory , Monte Carlo method , Weibull distribution
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172181 , vital:42173
- Description: In a competitive world where products are designed to last for long periods of time, obtaining time-to-failure data is both difficult and costly. Hence, for products with high reliability, accelerated life testing is required to obtain relevant life-data quickly. This is done by placing the products under higher-than-use stress levels, thereby causing the products to fail prematurely. Part of the analysis of accelerated life-data requires a life distribution that describes the lifetime of a product at a given stress level, and a life-stress relationship: a function that describes the way in which the life distribution changes across different stress levels. In this thesis it is assumed that the underlying life distribution is the well-known Weibull distribution, with the shape parameter constant over all stress levels and the scale parameter a log-linear function of stress. The primary objective of this thesis is to obtain estimates from Bayesian analysis, and five types of non-informative prior distributions are considered: Jeffreys’ prior, reference priors, the maximal data information prior, the uniform prior and probability matching priors. Since the associated posterior distributions under all the derived non-informative priors are of an unknown form, the propriety of the posterior distributions is assessed to ensure admissible results. For comparison purposes, estimates obtained via the method of maximum likelihood are also considered. Finding these estimates requires solving non-linear equations, hence the Newton-Raphson algorithm is used. A simulation study based on the time-to-failure of accelerated data is conducted to compare maximum likelihood and Bayesian estimates. As the Bayesian posterior distributions are analytically intractable, two methods of obtaining Bayesian estimates are considered: Markov chain Monte Carlo methods and Lindley’s approximation technique.
In the simulation study the posterior means and the root mean squared error values of the estimates are considered under the symmetric squared error loss function and two asymmetric loss functions: the LINEX loss function and the general entropy loss function. Furthermore, the coverage rates for the Bayesian Markov chain Monte Carlo and maximum likelihood estimates are found and compared by their average interval lengths. A case study using a dataset based on accelerated time-to-failure of an insulating fluid is considered. The fit of these data to the Weibull distribution is studied and compared to that of other popular life distributions. A full simulation study is conducted to illustrate convergence of the proper posterior distributions. Both maximum likelihood and Bayesian estimates are found for these data. The deviance information criterion is used to compare Bayesian estimates across the prior distributions. The case study concludes by finding reliability estimates of the data at use-stress levels.
- Full Text:
- Date Issued: 2020
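The modelling assumption in the abstract above, a Weibull lifetime whose scale is log-linear in stress with constant shape, can be sketched by simulation; the parameter values below are hypothetical, not estimates from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)
shape = 2.0          # Weibull shape, assumed constant over stress levels
b0, b1 = 8.0, -0.05  # hypothetical log-linear life-stress coefficients

def simulate_lifetimes(stress, n=20000):
    """Draw n Weibull lifetimes at the given stress level, with
    scale = exp(b0 + b1 * stress) (the log-linear relationship)."""
    scale = np.exp(b0 + b1 * stress)
    return scale * rng.weibull(shape, size=n)

low_stress = simulate_lifetimes(40.0)
high_stress = simulate_lifetimes(80.0)
# Higher stress shortens life, which is the premise of accelerated testing
```

Data simulated this way is the input to both the Newton-Raphson maximum likelihood fit and the MCMC posterior sampling that the simulation study compares.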
Protein secondary structure prediction using neural networks and support vector machines
- Authors: Tsilo, Lipontseng Cecilia
- Date: 2009
- Subjects: Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5569 , http://hdl.handle.net/10962/d1002809 , Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Description: Predicting the secondary structure of proteins is important in biochemistry because the 3D structure can be determined from the local folds that are found in secondary structures. Moreover, knowing the tertiary structure of proteins can assist in determining their functions. The objective of this thesis is to compare the performance of Neural Networks (NN) and Support Vector Machines (SVM) in predicting the secondary structure of 62 globular proteins from their primary sequence. For both NN and SVM, we created six binary classifiers to distinguish between the classes helix (H), strand (E) and coil (C). For NN we use Resilient Backpropagation training with and without early stopping, with either no hidden layer or one hidden layer with 1, 2, ..., 40 hidden neurons. For SVM we use a Gaussian kernel with its parameter fixed at 0.1 and varying cost parameter C in the range [0.1, 5]. 10-fold cross-validation is used to obtain overall estimates of the probability of making a correct prediction. Our experiments indicate, for both NN and SVM, that the different binary classifiers have varying accuracies: from 69% correct predictions for coil vs. non-coil up to 80% correct predictions for strand vs. non-strand. It is further demonstrated that NN with no hidden layer, or with no more than 2 hidden neurons in the hidden layer, are sufficient for better predictions. For SVM we show that the estimated accuracies do not depend on the value of the cost parameter. As a major result, we demonstrate that the accuracy estimates of the NN and SVM binary classifiers cannot be distinguished. This contradicts a modern belief in bioinformatics that SVM outperforms other predictors.
- Full Text:
- Date Issued: 2009
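The SVM setup in the abstract above, a Gaussian (RBF) kernel with fixed kernel parameter, varying cost C and 10-fold cross-validation, maps directly onto scikit-learn; a hedged sketch on synthetic binary data (a stand-in for one of the six helix/strand/coil binary problems):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for one binary secondary-structure problem
X, y = make_classification(n_samples=300, n_features=25, random_state=0)

# RBF (Gaussian) kernel, kernel parameter fixed, cost parameter C varied
accs = [cross_val_score(SVC(kernel="rbf", gamma=0.1, C=C), X, y, cv=10).mean()
        for C in (0.1, 1.0, 5.0)]
```

Comparing the cross-validated accuracies across the three C values is the same experiment, in miniature, as the thesis's check that the estimates do not depend on the cost parameter.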
- Authors: Tsilo, Lipontseng Cecilia
- Date: 2009
- Subjects: Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5569 , http://hdl.handle.net/10962/d1002809 , Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Description: Predicting the secondary structure of proteins is important in biochemistry because the 3D structure can be determined from the local folds that are found in secondary structures. Moreover, knowing the tertiary structure of proteins can assist in determining their functions. The objective of this thesis is to compare the performance of Neural Networks (NN) and Support Vector Machines (SVM) in predicting the secondary structure of 62 globular proteins from their primary sequence. For each NN and SVM, we created six binary classifiers to distinguish between the classes’ helices (H) strand (E), and coil (C). For NN we use Resilient Backpropagation training with and without early stopping. We use NN with either no hidden layer or with one hidden layer with 1,2,...,40 hidden neurons. For SVM we use a Gaussian kernel with parameter fixed at = 0.1 and varying cost parameters C in the range [0.1,5]. 10- fold cross-validation is used to obtain overall estimates for the probability of making a correct prediction. Our experiments indicate for NN and SVM that the different binary classifiers have varying accuracies: from 69% correct predictions for coils vs. non-coil up to 80% correct predictions for stand vs. non-strand. It is further demonstrated that NN with no hidden layer or not more than 2 hidden neurons in the hidden layer are sufficient for better predictions. For SVM we show that the estimated accuracies do not depend on the value of the cost parameter. As a major result, we will demonstrate that the accuracy estimates of NN and SVM binary classifiers cannot distinguish. This contradicts a modern belief in bioinformatics that SVM outperforms other predictors.
- Full Text:
- Date Issued: 2009
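The k-fold accuracy estimation described in the record above can be sketched in a few lines. This is a minimal illustration only: it uses a deliberately simple stand-in classifier (nearest centroid) on synthetic two-class data, since the NN and SVM classifiers the thesis actually trains would need far more machinery; all names and data here are invented.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Split indices 0..n-1 into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def centroid(vectors):
    d = len(vectors[0])
    return [sum(v[j] for v in vectors) / len(vectors) for j in range(d)]

def nearest_centroid_predict(x, c0, c1):
    """Assign x to whichever class centroid is closer (squared distance)."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

def cv_accuracy(X, y, k=10):
    """Overall probability of a correct prediction, estimated by k-fold CV."""
    folds = k_fold_indices(len(X), k)
    correct = 0
    for fold in folds:
        held_out = set(fold)
        train = [i for i in range(len(X)) if i not in held_out]
        c0 = centroid([X[i] for i in train if y[i] == 0])
        c1 = centroid([X[i] for i in train if y[i] == 1])
        correct += sum(nearest_centroid_predict(X[i], c0, c1) == y[i]
                       for i in fold)
    return correct / len(X)

# Toy data: two shifted Gaussian clusters standing in for encoded windows.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] + \
    [[rng.gauss(2, 1), rng.gauss(2, 1)] for _ in range(50)]
y = [0] * 50 + [1] * 50
print(round(cv_accuracy(X, y), 2))
```

Each of the six H/E/C binary tasks in the thesis would reuse the same cross-validation loop with a different 0/1 labelling of the residues.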
An analysis of the Libor and Swap market models for pricing interest-rate derivatives
- Authors: Mutengwa, Tafadzwa Isaac
- Date: 2012
- Subjects: LIBOR market model , Monte Carlo method , Interest rates -- Mathematical models , Derivative securities
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5573 , http://hdl.handle.net/10962/d1005535
- Description: This thesis focuses on the no-arbitrage (fair) pricing of interest rate derivatives, in particular caplets and swaptions, using the LIBOR market model (LMM) developed by Brace, Gatarek, and Musiela (1997) and the Swap market model (SMM) developed by Jamshidian (1997), respectively. Today, in most financial markets, interest rate derivatives are priced using the renowned Black-Scholes formula developed by Black and Scholes (1973). We present new pricing models for caplets and swaptions that can be implemented in the financial markets as alternatives to the Black-Scholes model. We theoretically construct these "new market models" and then test their practical aspects. We show that the dynamics of the LMM imply a pricing formula for caplets that has the same structure as the Black-Scholes pricing formula for a caplet used by market practitioners. For the SMM we likewise theoretically construct an arbitrage-free interest rate model that implies a pricing formula for swaptions with the same structure as the Black-Scholes pricing formula for swaptions. We empirically compare the pricing performance of the LMM against the Black-Scholes model for pricing caplets using Monte Carlo methods.
- Full Text:
- Date Issued: 2012
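The caplet comparison described above rests on the fact that, under the LMM, each forward LIBOR rate is lognormal under its own forward measure, so a Monte Carlo estimate should agree with the closed-form Black caplet price. The sketch below checks that agreement; the parameter values are illustrative, not taken from the thesis.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_caplet(F, K, sigma, T, delta, P):
    """Black caplet price: F is the forward LIBOR, K the strike, sigma the
    lognormal volatility, T the fixing time, delta the accrual period, and
    P the discount factor to the payment date T + delta."""
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return P * delta * (F * norm_cdf(d1) - K * norm_cdf(d2))

def mc_caplet(F, K, sigma, T, delta, P, n=200_000, seed=7):
    """Monte Carlo caplet price: simulate the lognormal forward LIBOR at T
    (driftless under its forward measure) and average the discounted payoff."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        L_T = F * math.exp(-0.5 * sigma ** 2 * T + sigma * math.sqrt(T) * z)
        total += max(L_T - K, 0.0)
    return P * delta * total / n

F, K, sigma, T, delta, P = 0.05, 0.05, 0.20, 1.0, 0.5, 0.95
print(black_caplet(F, K, sigma, T, delta, P),
      mc_caplet(F, K, sigma, T, delta, P))
```

The two prices should match to within Monte Carlo error, which shrinks like one over the square root of the number of simulated paths.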
Clustering algorithms and their effect on edge preservation in image compression
- Authors: Ndebele, Nothando Elizabeth
- Date: 2009
- Subjects: Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5576 , http://hdl.handle.net/10962/d1008210
- Description: Image compression aims to reduce the amount of data that is stored or transmitted for images. One technique that may be used to this end is vector quantization. Vectors may be used to represent images. Vector quantization reduces the number of vectors required for an image by representing a cluster of similar vectors by one typical vector that is part of a set of vectors referred to as the codebook. For compression, for each image vector, only the closest codebook vector is stored or transmitted. For reconstruction, the image vectors are again replaced by the closest codebook vectors. Hence vector quantization is a lossy compression technique and the quality of the reconstructed image depends strongly on the quality of the codebook. The design of the codebook is therefore an important part of the process. In this thesis we examine three clustering algorithms which can be used for codebook design in image compression: c-means (CM), fuzzy c-means (FCM) and learning vector quantization (LVQ). We give a description of these algorithms and their application to codebook design. Edges are an important part of the visual information contained in an image. It is therefore essential to use codebooks which allow an accurate representation of the edges. One of the shortcomings of using vector quantization is poor edge representation. We therefore carry out experiments using these algorithms to compare their edge-preserving qualities. We also investigate the combination of these algorithms with classified vector quantization (CVQ) and the replication method (RM). Both these methods have been suggested as methods for improving edge representation. We use a cross-validation approach to estimate the mean squared error to measure the performance of each of the algorithms and the edge-preserving methods. The results reflect that the edges are less accurately represented than the non-edge areas when using CM, FCM and LVQ.
The advantage of using CVQ is that the time taken for codebook design is reduced, particularly for CM and FCM. RM is found to be effective where the codebook is trained using a set that has larger proportions of edges than the test set.
- Full Text:
- Date Issued: 2009
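The codebook-design-then-quantize pipeline described above can be sketched with plain c-means (k-means), the first of the three algorithms the thesis compares. The data below are toy 2-D blocks rather than real image vectors, and FCM and LVQ are not shown.

```python
import random

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def design_codebook(vectors, k, iters=20, seed=0):
    """Plain c-means (k-means) codebook design: alternate nearest-codeword
    assignment and centroid update. Farthest-point initialisation spreads
    the initial codewords out so each cluster is likely to get one."""
    rng = random.Random(seed)
    codebook = [rng.choice(vectors)]
    while len(codebook) < k:
        codebook.append(max(vectors,
                            key=lambda v: min(sq_dist(v, c) for c in codebook)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda i: sq_dist(v, codebook[i]))
            clusters[j].append(v)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old codeword if its cluster went empty
                d = len(cluster[0])
                codebook[i] = [sum(v[j] for v in cluster) / len(cluster)
                               for j in range(d)]
    return codebook

def quantize(v, codebook):
    """Compression step: store only the index of the closest codeword."""
    return min(range(len(codebook)), key=lambda i: sq_dist(v, codebook[i]))

# Toy 2-D "image blocks": two well-separated groups -> a 2-word codebook.
rng = random.Random(3)
blocks = [[rng.gauss(0, 0.1), rng.gauss(0, 0.1)] for _ in range(30)] + \
         [[rng.gauss(1, 0.1), rng.gauss(1, 0.1)] for _ in range(30)]
cb = design_codebook(blocks, k=2)
```

Reconstruction replaces each stored index with its codeword, which is what makes the scheme lossy: everything within a cluster collapses onto one representative vector.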
Default in payment, an application of statistical learning techniques
- Authors: Gcakasi, Lulama
- Date: 2020
- Subjects: Credit -- South Africa -- Risk assessment , Risk management -- Statistical methods -- South Africa , Credit -- Management -- Statistical methods , Commercial statistics
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/141547 , vital:37984
- Description: The ability of financial institutions to detect whether a customer will default on their credit card payment is essential for their profitability. To that effect, financial institutions have credit scoring systems in place to estimate the credit risk associated with a customer. Various classification models are used to develop credit scoring systems, such as k-nearest neighbours, logistic regression and classification trees. This study aims to assess the performance of different classification models on the prediction of credit card payment default. Credit data is usually of high dimension, and as a result dimension reduction techniques, namely principal component analysis and linear discriminant analysis, are used in this study as a means to improve model performance. Two classification models are used, namely neural networks and support vector machines. Model performance is evaluated using accuracy and area under the ROC curve (AUC). The neural network classifier performed better than the support vector machine classifier as it produced higher accuracy rates and AUC values. Dimension reduction techniques were not effective in improving model performance but did result in less computationally expensive models.
- Full Text:
- Date Issued: 2020
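The two evaluation metrics named in the record above can be computed directly from classifier scores. The sketch below implements accuracy at a fixed threshold and AUC via the Mann-Whitney rank statistic; the scores and labels are invented for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic: the
    probability that a randomly chosen positive scores above a randomly
    chosen negative (ties counted half)."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum, i = 0.0, 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average 1-based rank of a tied run
        rank_sum += avg_rank * sum(lab for _, lab in pairs[i:j])
        i = j
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

def accuracy(scores, labels, threshold=0.5):
    """Fraction of correct predictions when scores are cut at a threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

scores = [0.1, 0.3, 0.35, 0.8, 0.7, 0.9]
labels = [0,   0,   1,    1,   0,   1]
print(accuracy(scores, labels), auc(scores, labels))
```

Unlike accuracy, AUC is threshold-free, which is one reason studies like this one report both.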
Prediction of protein secondary structure using binary classification trees, naive Bayes classifiers and the logistic regression classifier
- Authors: Eldud Omer, Ahmed Abdelkarim
- Date: 2016
- Subjects: Bayesian statistical decision theory , Logistic regression analysis , Biostatistics , Proteins -- Structure
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5581 , http://hdl.handle.net/10962/d1019985
- Description: The secondary structure of proteins is predicted using various binary classifiers. The data are adopted from the RS126 database. The original data consist of protein primary and secondary structure sequences encoded using alphabetic letters. These data are re-encoded into unary vectors comprising ones and zeros only. Different binary classifiers, namely naive Bayes, logistic regression and classification trees, using hold-out and 5-fold cross-validation, are trained on the encoded data. For each of the classifiers three classification tasks are considered, namely helix against not helix (H/∼H), sheet against not sheet (S/∼S) and coil against not coil (C/∼C). The performance of these binary classifiers is compared using the overall accuracy in predicting the protein secondary structure for various window sizes. Our results indicate that hold-out cross-validation achieved higher accuracy than 5-fold cross-validation. The naive Bayes classifier, using 5-fold cross-validation, achieved the lowest accuracy for predicting helix against not helix. The classification tree classifiers, using 5-fold cross-validation, achieved the lowest accuracies for both coil against not coil and sheet against not sheet classifications. The accuracy of the logistic regression classifier depends on the window size; there is a positive relationship between accuracy and window size. The logistic regression approach achieved the highest accuracy of the three classifiers on each classification task: helix against not helix with accuracy 77.74 percent, sheet against not sheet with accuracy 81.22 percent and coil against not coil with accuracy 73.39 percent. It is noted that it would be easier to compare classifiers if the classification process could be facilitated entirely in R.
Alternatively, it would be easier to assess these logistic regression classifiers if SPSS had a function to determine the accuracy of the logistic regression classifier.
- Full Text:
- Date Issued: 2016
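A minimal version of one of the binary classifiers described above, a Bernoulli naive Bayes on unary (0/1) encoded windows, might look as follows. The toy vectors are invented and far smaller than real RS126 windows; this is a sketch of the model family, not the thesis implementation.

```python
import math

def train_bernoulli_nb(X, y):
    """Bernoulli naive Bayes with Laplace smoothing on 0/1 feature vectors.
    Returns, per class, the log prior and the smoothed probability that
    each feature equals 1."""
    n, d = len(X), len(X[0])
    model = {}
    for c in (0, 1):
        rows = [x for x, lab in zip(X, y) if lab == c]
        log_prior = math.log(len(rows) / n)
        p = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2) for j in range(d)]
        model[c] = (log_prior, p)
    return model

def predict(model, x):
    """Pick the class with the highest posterior log-probability, assuming
    features are conditionally independent given the class."""
    best, best_lp = None, -math.inf
    for c, (log_prior, p) in model.items():
        lp = log_prior + sum(math.log(p[j]) if x[j] else math.log(1 - p[j])
                             for j in range(len(x)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy unary-encoded windows: class 1 tends to switch on the first two bits.
X = [[1, 1, 0, 0], [1, 1, 1, 0], [1, 0, 1, 0],
     [0, 0, 1, 1], [0, 1, 0, 1], [0, 0, 0, 1]]
y = [1, 1, 1, 0, 0, 0]
model = train_bernoulli_nb(X, y)
print([predict(model, x) for x in X])
```

Each of the three tasks (H/∼H, S/∼S, C/∼C) would train one such model with the corresponding 0/1 relabelling of the structure classes.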
Eliciting and combining expert opinion : an overview and comparison of methods
- Authors: Chinyamakobvu, Mutsa Carole
- Date: 2015
- Subjects: Decision making -- Statistical methods , Expertise , Bayesian statistical decision theory , Statistical decision , Delphi method , Paired comparisons (Statistics)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5579 , http://hdl.handle.net/10962/d1017827
- Description: Decision makers have long relied on experts to inform their decision making. Expert judgment analysis is a way to elicit and combine the opinions of a group of experts to facilitate decision making. The use of expert judgment is most appropriate when there is a lack of data for obtaining reasonable statistical results. The experts are asked for advice by one or more decision makers who face a specific real decision problem. The decision makers are outside the group of experts and are jointly responsible and accountable for the decision and committed to finding solutions that everyone can live with. The emphasis is on the decision makers learning from the experts. The focus of this thesis is an overview and comparison of the various elicitation and combination methods available. These include the traditional committee method, the Delphi method, the paired comparisons method, the negative exponential model, Cooke’s classical model, the histogram technique, using the Dirichlet distribution in the case of a set of uncertain proportions which must sum to one, and the employment of overfitting. The supra Bayes approach, the determination of weights for the experts, and combining the opinions of experts where each opinion is associated with a confidence level that represents the expert’s conviction of his own judgment are also considered.
- Full Text:
- Date Issued: 2015
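Of the combination rules surveyed in the record above, one of the simplest is the weighted linear opinion pool, which averages the experts' probability distributions. The sketch below uses illustrative weights; in practice the weights might come from a calibration scheme such as Cooke's classical model.

```python
def linear_opinion_pool(distributions, weights):
    """Combine expert probability distributions over the same set of
    mutually exclusive outcomes as a weighted average. With non-negative
    weights summing to 1, the pooled result is again a distribution."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    n_outcomes = len(distributions[0])
    return [sum(w * d[i] for w, d in zip(weights, distributions))
            for i in range(n_outcomes)]

# Three experts' probabilities for three mutually exclusive outcomes.
experts = [
    [0.70, 0.20, 0.10],
    [0.50, 0.30, 0.20],
    [0.60, 0.30, 0.10],
]
# Illustrative weights, e.g. reflecting the decision maker's confidence
# in each expert; not derived from any real calibration data.
weights = [0.5, 0.3, 0.2]
pooled = linear_opinion_pool(experts, weights)
print(pooled)
```

The pool is linear, so it preserves convexity: the combined probability of each outcome lies between the lowest and highest expert assessments for that outcome.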