Missing values: a closer look
- Authors: Thorpe, Kerri
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/d1017827 , vital:20798
- Description: Problem: Missing values are more prevalent than ever. In today’s fast-paced global society, most business and research data produced around the world contain missing values, so obtaining truly complete, accurate data can be difficult, yet it may be essential, since the consequences of analysing incomplete data can be serious. The reasons for missing data arising in almost all forms of work are numerous and are discussed in this dissertation: respondents may simply ignore questions posed to them, recording equipment may malfunction or be misplaced, or organisers may be unable to locate a respondent in order to rectify the missing data. Whatever the cause, analyses should not simply proceed with inefficient, incomplete data; various strategies have therefore been developed to handle missing values. These strategies must be used carefully, however, as the treatment of missing data can itself introduce bias into an analysis. This dissertation examines these and related problems using a data set of 581 children interviewed in 1990 as part of the National Longitudinal Survey of Youth (NLSY). Approach: Traditional methods such as complete case analysis, available case analysis and single imputation are widely used by researchers and are discussed here. Although these methods are simple and easy to implement, they require assumptions about the data that are seldom satisfied in practice. More modern methods, such as multiple imputation and maximum likelihood, rely on weaker assumptions and have superior statistical properties compared with the traditional techniques. In this dissertation the traditional methods are reviewed and assessed in SAS and compared with the more modern techniques. Results: Ad hoc techniques for handling missing data, such as complete case and available case analysis, produce biased parameter estimates when the data are not missing completely at random (MCAR). Single imputation likewise produces biased estimates and underestimates standard errors. Although the expectation-maximisation (EM) algorithm yields unbiased parameter estimates, the lack of convenient standard errors makes it unsuitable for hypothesis testing. Multiple imputation, by contrast, yields unbiased parameter estimates and correctly estimated standard errors. Conclusion: Ignoring missing data in an analysis produces biased parameter estimates. Single imputation is not recommended: replacing missing values with a single value does not account for the variation that would have been present had the values been observed, so the variance is greatly underestimated. The more modern methods, the EM algorithm and multiple imputation, are preferred over the traditional techniques because they require less stringent assumptions and mitigate the drawbacks of the older methods. (A small illustrative simulation of this variance effect follows this record.)
- Full Text:
- Date Issued: 2017
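The thesis carries out these comparisons in SAS on the NLSY data. Purely as a minimal illustrative sketch of the conclusion that single (mean) imputation understates variability, the following C++ simulation on synthetic data compares the sample variance of a fully observed variable, of the complete cases after values are deleted completely at random, and of the mean-imputed data; the sample size, missingness rate and normal parameters are assumptions made here for illustration, not values from the thesis.

```cpp
// Minimal illustration (not the thesis' SAS analysis): mean imputation
// shrinks the sample variance relative to the fully observed data.
#include <iostream>
#include <random>
#include <vector>

double sample_variance(const std::vector<double>& x) {
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();
    double ss = 0.0;
    for (double v : x) ss += (v - mean) * (v - mean);
    return ss / (x.size() - 1);   // unbiased sample variance
}

int main() {
    std::mt19937 gen(42);
    std::normal_distribution<double> value(100.0, 15.0);   // assumed "true" variable
    std::bernoulli_distribution missing(0.3);              // assumed 30% missing, MCAR

    const int n = 10000;
    std::vector<double> full, observed;
    for (int i = 0; i < n; ++i) {
        double v = value(gen);
        full.push_back(v);
        if (!missing(gen)) observed.push_back(v);          // complete cases only
    }

    // Single (mean) imputation: replace every missing value with the observed mean.
    double obs_mean = 0.0;
    for (double v : observed) obs_mean += v;
    obs_mean /= observed.size();
    std::vector<double> imputed = observed;
    imputed.resize(full.size(), obs_mean);

    std::cout << "variance, fully observed : " << sample_variance(full)     << '\n';
    std::cout << "variance, complete cases : " << sample_variance(observed) << '\n';
    std::cout << "variance, mean-imputed   : " << sample_variance(imputed)  << '\n';
    // The mean-imputed variance is markedly smaller: the imputed values add no
    // variation of their own, which is the underestimation the abstract describes.
    return 0;
}
```

Under MCAR the complete-case variance stays roughly unbiased, while the mean-imputed variance shrinks by roughly the fraction of values imputed, illustrating why the conclusion advises against single imputation.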
Reliability analysis: assessment of hardware and human reliability
- Authors: Mafu, Masakheke
- Date: 2017
- Subjects: Bayesian statistical decision theory , Reliability (Engineering) , Human machine systems , Probabilities , Markov processes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/6280 , vital:21077
- Description: Most reliability analyses involve the analysis of binary data. Practitioners in the field of reliability place great emphasis on analysing the time periods over which items or systems function (failure time analyses), which make use of different statistical models. This study introduces, reviews and investigates four statistical models for the failure times of non-repairable items, using a Bayesian methodology: the exponential, Rayleigh, gamma and Weibull distributions. The performance of two non-informative priors is investigated, and an application of two failure time distributions is carried out. To meet these objectives, the failure rate and reliability functions of the failure time distributions are calculated. Two non-informative priors, the Jeffreys prior and the general divergence prior, and the corresponding posteriors are derived for each distribution. Simulation studies are carried out for each distribution, in which coverage rates and credible interval lengths are calculated and the results discussed. The gamma and Weibull distributions are applied to failure time data. The Jeffreys prior is found to have better coverage rates than the general divergence prior. The general divergence prior shows undercoverage when used with the Rayleigh distribution, while the Jeffreys prior produces conservative coverage rates when used with the exponential distribution. The two priors give, on average, the same interval lengths, which increase as the value of the parameter increases, and both perform similarly when used with the gamma and Weibull distributions. A thorough discussion and review of human reliability analysis (HRA) techniques is also given: twenty HRA techniques are discussed, each with a background, description, and advantages and disadvantages. Case studies in the nuclear, railway and aviation industries illustrate the importance and applications of HRA; human error has been shown to be the major contributor to system failure. (A small sketch of the Weibull reliability and failure-rate functions follows this record.)
- Full Text:
- Date Issued: 2017
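The abstract refers to calculating the failure rate and reliability functions of the failure time distributions before any priors are introduced. As a minimal sketch of that first step for the Weibull case only (the shape and scale values below are illustrative assumptions, not quantities from the study), the reliability function R(t) = exp(-(t/λ)^k) and failure rate h(t) = (k/λ)(t/λ)^(k-1) can be tabulated in C++ as follows.

```cpp
// Sketch: reliability R(t) and failure (hazard) rate h(t) for a Weibull
// lifetime with shape k and scale lambda; parameter values are illustrative only.
#include <cmath>
#include <cstdio>

double weibull_reliability(double t, double k, double lambda) {
    return std::exp(-std::pow(t / lambda, k));            // R(t) = exp(-(t/lambda)^k)
}

double weibull_failure_rate(double t, double k, double lambda) {
    return (k / lambda) * std::pow(t / lambda, k - 1.0);  // h(t) = f(t) / R(t)
}

int main() {
    const double k = 1.5, lambda = 1000.0;                // assumed shape and scale (hours)
    std::printf("%8s %12s %14s\n", "t", "R(t)", "h(t)");
    for (double t = 100.0; t <= 1000.0; t += 100.0) {
        std::printf("%8.0f %12.4f %14.6f\n",
                    t, weibull_reliability(t, k, lambda), weibull_failure_rate(t, k, lambda));
    }
    // With k > 1 the failure rate increases with t (wear-out); k = 1 recovers the
    // constant-rate exponential model, and k < 1 gives a decreasing rate.
    return 0;
}
```

Setting k = 1 recovers the exponential model and, up to parametrisation, k = 2 corresponds to the Rayleigh model, which is why these distributions are naturally studied together in failure time analysis.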
Stochastic models in finance
- Authors: Mazengera, Hassan
- Date: 2017
- Subjects: Finance -- Mathematical models , C++ (Computer program language) , GARCH model , Lebesgue-Radon-Nikodym theorems , Radon measures , Stochastic models , Stochastic processes , Stochastic processes -- Computer programs , Martingales (Mathematics) , Pricing -- Mathematical models
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/162724 , vital:40976
- Description: Stochastic models for pricing financial securities are developed. We first consider the Black-Scholes model, a classic example of a complete market model, and then focus on Lévy-driven models. Jumps, induced in a model by the inclusion of a Poisson process, may render the market incomplete; Lévy-driven models are therefore more realistic for modelling asset price dynamics than the Black-Scholes model. Martingales are central to pricing, especially of derivatives, and are given due attention in the context of pricing. For an increasing number of important pricing models analytical solutions are not available, so computational methods are essential (see Broadie and Glasserman, 1997); such methods are, of course, also applicable to models with analytical solutions. We computationally value selected stochastic financial models using C++. Computational methods are also used to price complex financial instruments such as path-dependent derivatives, and this pricing procedure is applied in the computational valuation of a stochastic (revenue-based) loan contract. Derivatives with simple payoff functions and models with analytical solutions are considered for illustrative purposes. The Black-Scholes PDE is difficult to solve analytically, so finite difference methods are widely used; an explicit finite difference scheme is considered in this thesis for the computational valuation of derivatives modelled by the Black-Scholes PDE. Stochastic modelling of asset prices is important for the valuation of derivatives: Gaussian, exponential and gamma variates are simulated for valuation purposes. (A small pricing sketch follows this record.)
- Full Text:
- Date Issued: 2017
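The thesis values its models computationally in C++, simulating Gaussian, exponential and gamma variates. As a self-contained sketch in the same spirit (not the thesis code; the contract parameters below are illustrative assumptions), the following program prices a European call under the Black-Scholes model by Monte Carlo simulation of Gaussian variates and checks the result against the closed-form Black-Scholes price.

```cpp
// Sketch: Monte Carlo valuation of a European call under Black-Scholes
// (geometric Brownian motion), checked against the closed-form price.
// Parameter values are illustrative assumptions only.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

// Standard normal CDF via the complementary error function.
double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Closed-form Black-Scholes price of a European call.
double bs_call(double S0, double K, double r, double sigma, double T) {
    double d1 = (std::log(S0 / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S0 * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

int main() {
    const double S0 = 100.0, K = 105.0, r = 0.05, sigma = 0.2, T = 1.0;  // assumed contract
    const long n_paths = 1000000;

    std::mt19937_64 gen(2017);
    std::normal_distribution<double> z(0.0, 1.0);                        // Gaussian variates

    double payoff_sum = 0.0;
    for (long i = 0; i < n_paths; ++i) {
        // Terminal price under risk-neutral GBM: S_T = S0*exp((r - sigma^2/2)T + sigma*sqrt(T)*Z)
        double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T + sigma * std::sqrt(T) * z(gen));
        payoff_sum += std::max(ST - K, 0.0);
    }
    double mc_price = std::exp(-r * T) * payoff_sum / n_paths;           // discounted mean payoff

    std::cout << "Monte Carlo price : " << mc_price                     << '\n';
    std::cout << "Closed-form price : " << bs_call(S0, K, r, sigma, T)  << '\n';
    return 0;
}
```

With a million paths the Monte Carlo estimate typically agrees with the closed-form value to within a few cents; path-dependent contracts such as the revenue-based loan mentioned in the abstract would instead require simulating the whole price path rather than only the terminal value.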