A framework to measure human behaviour whilst reading
- Authors: Salehzadeh, Seyed Amirsaleh , Greyling, Jean
- Date: 2019
- Subjects: Computational intelligence , Machine learning , Artificial intelligence , Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , DPhil
- Identifier: http://hdl.handle.net/10948/43578 , vital:36921
- Description: The brain is the most complex object in the known universe; it gives humans a sense of being and characterises human behaviour. Building models of brain functions is perhaps the most fascinating scientific challenge of the 21st century. Reading is a significant cognitive process in the human brain that plays a critical role in the vital process of learning and in performing some daily activities. The study of human behaviour during reading has been an area of interest for researchers in different fields of science. This thesis presents a novel framework, called ARSAT (Assisting Researchers in the Selection of Appropriate Technologies), for measuring the behaviour of humans while they read text. The ARSAT framework aims to assist researchers in the selection and application of appropriate technologies to measure the behaviour of a person who is reading text. It will assist researchers who investigate the reading process and find it difficult to select appropriate theories, metrics, data collection methods and data analytics techniques. The framework enhances the ability of its users to select appropriate metrics indicating the factors that characterise different aspects of human behaviour during the reading process. As will be shown in this research study, human behaviour is characterised by a complicated interplay of action, cognition and emotion. The ARSAT framework also facilitates the selection of appropriate sensory technologies that can be used to monitor and collect data for the metrics. Moreover, this research study introduces BehaveNet, a novel Deep Learning modelling approach that can be used to train Deep Learning models of human behaviour from the collected sensory data. This thesis presents a comprehensive literature study that was conducted to acquire adequate knowledge for designing the ARSAT framework. In order to identify the contributing factors that affect the reading process, an overview of some existing theories of the reading process is provided. Furthermore, a number of sensory technologies and techniques that can be applied to monitoring changes in the metrics indicating these factors are demonstrated. Only technologies that are commercially available are recommended by the ARSAT framework. A variety of Machine Learning techniques were investigated when designing BehaveNet, which takes advantage of the complementarity of Convolutional Neural Networks, Long Short-Term Memory networks and Deep Neural Networks. The design of a Human Behaviour Monitoring System (HBMS), utilising the ARSAT framework to recognise three activities that compete for a person's attention, is also presented in this research study. Reading printed text, speaking aloud and watching a television programme were proposed as activities between which a person may unintentionally shift attention away from reading. Among the sensory devices recommended by the ARSAT framework, the Muse headband, an Electroencephalography (EEG) and head-motion-sensing wearable device, was selected to track forehead EEG and head movements. The EEG and 3-axis accelerometer data were recorded from eight participants while they read printed text and while they performed the two other activities.
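The abstract describes BehaveNet as combining Convolutional Neural Networks, Long Short-Term Memory networks and Deep Neural Networks over raw EEG and accelerometer streams. A minimal sketch of such a hybrid stack is given below; the window length, channel count, layer sizes and the use of the Keras API are illustrative assumptions, not the BehaveNet configuration reported in the thesis.

```python
# Hypothetical CNN + LSTM + DNN hybrid for windowed sensor data.
# Window length (128 samples), channel count (7: four Muse EEG channels
# plus a 3-axis accelerometer) and layer sizes are assumptions, not the
# thesis's actual BehaveNet design.
from tensorflow.keras import layers, models

def build_hybrid_model(window=128, channels=7, n_classes=3):
    model = models.Sequential([
        # CNN block: learns local temporal features within each window
        layers.Conv1D(64, kernel_size=5, activation="relu",
                      input_shape=(window, channels)),
        layers.MaxPooling1D(pool_size=2),
        # LSTM block: models longer-range temporal dependencies
        layers.LSTM(64),
        # DNN block: maps the learned features to the three activities
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_hybrid_model()
model.summary()
```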
An imbalanced dataset consisting of over 1.2 million rows of noisy data was created and used to build a model of the activities (60% of the data for training and 20% for validation) and to evaluate the model (the remaining 20%). The efficiency of the framework is demonstrated by comparing the performance of the models built with BehaveNet against a number of competing Deep Learning models for raw EEG and accelerometer data that have attained state-of-the-art performance. The classification results are evaluated with several metrics, including classification accuracy, F1 score, the confusion matrix, the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) score. Based on these results, BehaveNet contributes to the body of knowledge as an approach for measuring human behaviour using sensory devices. In comparison with the other models, the models built with BehaveNet attained better performance when classifying data of two EEG channels (accuracy = 95%; AUC = 0.99; F1 = 0.95), data of a single EEG channel (accuracy = 85%; AUC = 0.96; F1 = 0.83), accelerometer data (accuracy = 81%; AUC = 0.9; F1 = 0.76) and all of the data in the dataset (accuracy = 97%; AUC = 0.99; F1 = 0.96). The dataset and the source code of this project are also published on the Internet for the benefit of the scientific community. The Muse headband is also shown to be an economical, standard wearable device that can be successfully used in behavioural research.
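The metrics named above (accuracy, F1 score, confusion matrix, ROC curve and AUC) can all be computed with standard tooling. A hypothetical sketch using scikit-learn follows; the placeholder labels, the macro averaging and the one-vs-rest AUC are assumptions, since the abstract does not specify how the scores were computed.

```python
# Hypothetical evaluation sketch for the metrics named in the abstract.
# y_true and y_prob are placeholders standing in for the held-out 20%
# test split; macro F1 and one-vs-rest AUC are assumed conventions.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             confusion_matrix, roc_auc_score)

y_true = np.array([0, 1, 2, 1, 0])            # true activity labels
y_prob = np.array([[0.8, 0.1, 0.1],           # softmax outputs per class
                   [0.1, 0.7, 0.2],
                   [0.2, 0.2, 0.6],
                   [0.3, 0.6, 0.1],
                   [0.9, 0.05, 0.05]])
y_pred = y_prob.argmax(axis=1)                # predicted activity labels

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
# One-vs-rest AUC across the three activity classes
print("AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```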
- Full Text:
- Date Issued: 2019
The selection and evaluation of a sensory technology for interaction in a warehouse environment
- Authors: Zadeh, Seyed Amirsaleh Saleh , Greyling, Jean
- Date: 2016
- Subjects: Human-computer interaction , User interfaces (Computer systems) , Computer architecture
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: http://hdl.handle.net/10948/13193 , vital:27160
- Description: In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life, as it has improved human performance in the completion of daily tasks using computerised systems. The increasing variety of bio-sensing and wearable technologies on the market has propelled designers towards designing more efficient, effective and fully natural User Interfaces (UI), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric input to a system and improving sports performance. Workers in a warehouse environment experience various challenges. Because they often have to carry objects (referred to as hands-full), it is difficult for them to interact with traditional devices. Noise undeniably exists in some industrial environments and is known as a major cause of communication problems; this has reduced the popularity of verbal interfaces with computer applications, such as Warehouse Management Systems. Another factor that affects the performance of workers is action slips caused by a lack of concentration during, for example, routine picking activities. This can have a negative impact on job performance and lead a worker to execute a task incorrectly in a warehouse environment. This research project investigated the current challenges workers experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those which have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural behaviours of humans which are applicable to interaction with a computerised environment were described and discussed. The interactive behaviours included vision, hearing, speech production and physiological movement, while other natural human behaviours such as paying attention, action slips and the action of counting items were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a specific biological or physiological activity. Machine Learning (ML) is then applied in order to train a computer system to detect these features and convert them into a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in both hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand and finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface Electromyography (sEMG) signals acquired by the MYO Armband around the forearm muscles. Employing the ANN, the MCIp classified the data for gesture recognition to an accuracy level of 34.87% in the hands-free situation.
The MCIp furthermore enabled users to provide numeric input to the system hands-full with an accuracy of 59.7%, after a training session of only 10 seconds per gesture. The results were obtained from eight participants. No similar experimentation with the MYO Armband had been found reported in the literature at the time this document was submitted. Based on this novel experimentation, the main contribution of this research study is the suggestion that the MYO Armband, as a commercially available muscle-sensing device, has potential as an MCI for recognising finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to different applications in a warehouse where noise and hands-full activities pose a challenge. Future work to improve its accuracy is proposed.
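The abstract describes an ANN classifying features extracted from the MYO Armband's surface EMG signals. A hypothetical sketch of that kind of pipeline is given below; the RMS-per-channel feature, the network size and the synthetic data are assumptions standing in for the MCIp's actual design.

```python
# Hypothetical sketch: an ANN classifying gestures from sEMG features.
# The MYO Armband streams 8 EMG channels; the RMS-per-channel feature,
# the network size and the synthetic data are illustrative assumptions,
# not the MCIp's actual design.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rms_features(window):
    """Root-mean-square of each of the 8 EMG channels in one window."""
    return np.sqrt(np.mean(np.square(window), axis=0))

# Placeholder training data: 100 windows of 50 samples x 8 channels
rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 50, 8))
labels = rng.integers(0, 10, size=100)     # e.g. ten numeric gestures

X = np.array([rms_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
clf.fit(X, labels)
print(clf.predict(X[:5]))                  # predicted gesture labels
```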
- Full Text:
- Date Issued: 2016