A dynamically weighted multi-modal biometric security system
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/476629 , vital:77945 , ISBN 9780620724180
- Description: The face, fingerprint and palmprint feature vectors are automatically extracted and dynamically selected for fusion at the feature-level, toward improved human identification accuracy. The feature-level has a higher potential accuracy than the match score-level. However, leveraging this potential requires a new approach. This work demonstrates a novel dynamic weighting algorithm for improved image-based biometric feature-fusion. A comparison is performed on uni-modal, bi-modal, tri-modal and the proposed dynamic approaches. The proposed dynamic approach yields a high genuine acceptance rate of 99.25% at a false acceptance rate of 1% on challenging datasets and large impostor datasets.
- Full Text:
- Date Issued: 2016
A dynamically weighted multi-modal biometric security system
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/473684 , vital:77672 , xlink:href="https://www.researchgate.net/publication/315839228_A_Dynamically_Weighted_Multi-Modal_Biometric_Security_System"
- Description: The face, fingerprint and palmprint feature vectors are automatically extracted and dynamically selected for fusion at the feature-level, toward improved human identification accuracy. The feature-level has a higher potential accuracy than the match score-level. However, leveraging this potential requires a new approach. This work demonstrates a novel dynamic weighting algorithm for improved image-based biometric feature-fusion. A comparison is performed on uni-modal, bi-modal, tri-modal and the proposed dynamic approaches. The proposed dynamic approach yields a high genuine acceptance rate of 99.25% at a false acceptance rate of 1% on challenging datasets and large impostor datasets.
- Full Text:
- Date Issued: 2016
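The dynamic feature-level fusion the abstract above describes can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the per-modality weights here are fixed placeholders, whereas the paper derives them dynamically per sample.

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit length so modalities are comparable."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def fuse_features(modalities, weights):
    """Feature-level fusion: normalize each modality's feature vector,
    scale it by its normalized weight, and concatenate the results
    into a single fused template."""
    total = sum(weights)
    fused = []
    for vec, w in zip(modalities, weights):
        fused.extend(x * w / total for x in l2_normalize(vec))
    return fused

# Toy face/fingerprint/palmprint vectors of differing dimensionality.
face = [0.2, 0.9, 0.4]
fingerprint = [5.0, 1.0]
palmprint = [0.3, 0.3, 0.3, 0.1]
template = fuse_features([face, fingerprint, palmprint], weights=[0.5, 0.3, 0.2])
print(len(template))  # 9: the concatenated dimensions of all three modalities
```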
A multi-biometric feature-fusion framework for improved uni-modal and multi-modal human identification
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/473696 , vital:77673 , xlink:href="https://ieeexplore.ieee.org/abstract/document/7568927"
- Description: The lack of multi-biometric fusion guidelines at the feature-level is addressed in this work. A feature-fusion framework is geared toward improving human identification accuracy for both single and multiple biometrics. The foundation of the framework is the improvement over a state-of-the-art uni-modal biometric verification system, which is extended into a multi-modal identification system. A novel multi-biometric system is thus designed based on the framework, which serves as a set of fusion guidelines for multi-biometric applications that fuse at the feature-level. This framework was applied to the face and fingerprint to achieve a 91.11% recognition accuracy when using only a single training sample. Furthermore, an accuracy of 99.69% was achieved when using five training samples.
- Full Text:
- Date Issued: 2016
A Practical Use for AI-Generated Images
- Boby, Alden, Brown, Dane L, Connan, James
- Authors: Boby, Alden , Brown, Dane L , Connan, James
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463345 , vital:76401 , xlink:href="https://link.springer.com/chapter/10.1007/978-3-031-43838-7_12"
- Description: Collecting data for research can be costly and time-consuming, and available methods to speed up the process are limited. This research paper compares real data and AI-generated images for training an object detection model. The study aimed to assess how the utilisation of AI-generated images influences the performance of an object detection model. The study used a popular object detection model, YOLO, and trained it on a dataset with real car images as well as a synthetic dataset generated with a state-of-the-art diffusion model. The results showed that while the model trained on real data performed better on real-world images, the model trained on AI-generated images, in some cases, showed improved performance on certain images and was good enough to function as a licence plate detector on its own. The study highlights the potential of using AI-generated images for data augmentation in object detection models and sheds light on the trade-off between real and synthetic data in the training process. The findings of this study can inform future research in object detection and help practitioners make informed decisions when choosing between real and synthetic data for training object detection models.
- Full Text:
- Date Issued: 2023
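The trade-off between real and synthetic training data discussed above comes down to how the two sources are blended. A hypothetical sketch of building a mixed training list at a chosen synthetic fraction (the function and ratio are illustrative, not taken from the paper):

```python
import random

def build_training_set(real, synthetic, synthetic_fraction, seed=0):
    """Blend real and AI-generated samples at a chosen ratio.

    Keeps all real samples and adds enough synthetic ones that they
    make up `synthetic_fraction` of the final, shuffled training list.
    """
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    n_synth = round(len(real) * synthetic_fraction / (1.0 - synthetic_fraction))
    mixed = list(real) + rng.sample(synthetic, min(n_synth, len(synthetic)))
    rng.shuffle(mixed)
    return mixed

real_images = [f"real_{i}.jpg" for i in range(8)]
synth_images = [f"synth_{i}.png" for i in range(8)]
train = build_training_set(real_images, synth_images, synthetic_fraction=0.2)
print(len(train))  # 8 real + 2 synthetic = 10
```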
A Robust Portable Environment for First-Year Computer Science Students
- Brown, Dane L, Connan, James
- Authors: Brown, Dane L , Connan, James
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465113 , vital:76574 , xlink:href="https://link.springer.com/chapter/10.1007/978-3-030-92858-2_6"
- Description: Computer science education, in both South African universities and worldwide, often aims at making students confident at problem solving by introducing various programming exercises. Standardising a computer environment where students can apply their computational thinking knowledge on a more even playing field – without worrying about software issues – can be beneficial for problem solving in a classroom of diverse students. Research shows that consistent access to such an environment exposes students to core concepts of Computer Science. However, with the diverse student base in South Africa, not everyone has access to a personal computer or expensive software. This paper describes a new approach at first-year level that uses the power of a modified Linux distro on a flash drive to give every student access to the same fully-fledged, free and open-source environment, with the added convenience of portability. This serves to even the playing field in a diverse country like South Africa and to address the lack of consistent access to a problem-solving environment. Feedback from students and staff at the institution was heeded and, where possible, measured.
- Full Text:
- Date Issued: 2021
Adaptive machine learning based network intrusion detection
- Chindove, Hatitye E, Brown, Dane L
- Authors: Chindove, Hatitye E , Brown, Dane L
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/464052 , vital:76471 , xlink:href="https://doi.org/10.1145/3487923.3487938"
- Description: Network intrusion detection system (NIDS) adoption is essential for mitigating computer network attacks in various scenarios. However, the increasing complexity of computer networks and attacks makes it challenging to classify network traffic. Machine learning (ML) techniques in a NIDS can be affected by different scenarios, and thus the recency, size and applicability of datasets are vital factors to consider when selecting and tuning a machine learning classifier. The proposed approach evaluates relatively new datasets constructed to depict real-world scenarios. It includes analyses of dataset balancing and sampling, feature engineering and systematic ML-based NIDS model tuning focused on the adaptive improvement of intrusion detection. A comparison between machine learning classifiers forms part of the evaluation process. Results on the effectiveness of the proposed approach for NIDS are discussed. Recurrent neural network and random forest models consistently achieved high F1-score results, with macro F1-scores of 0.73 and 0.87 on the CICIDS 2017 dataset, and 0.73 and 0.72 on the CICIDS 2018 dataset, respectively.
- Full Text:
- Date Issued: 2021
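The macro F1-scores reported above average the per-class F1 so that rare attack classes count as much as common benign traffic. A small, self-contained illustration of the metric (not the authors' evaluation code; the class labels are invented):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted
    mean, so minority attack classes weigh as much as the majority class."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

y_true = ["benign", "ddos", "ddos", "benign", "probe"]
y_pred = ["benign", "ddos", "benign", "benign", "probe"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.822
```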
Adaptive network intrusion detection using optimised machine learning models
- Chindove, Hatitye E, Brown, Dane L
- Authors: Chindove, Hatitye E , Brown, Dane L
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465634 , vital:76627 , xlink:href="https://www.researchgate.net/publication/358046953_Adaptive_Network_Intrusion_Detection_using_Optimised_Machine_Learning_Models"
- Description: Network intrusion detection system (NIDS) adoption is essential for mitigating computer network attacks in various scenarios. However, the increasing complexity of computer networks and attacks makes it challenging to classify network traffic. Machine learning (ML) techniques in a NIDS can be affected by different scenarios, and thus the recency, size and applicability of datasets are vital factors to consider when selecting and tuning a machine learning classifier. The proposed approach evaluates relatively new datasets constructed to depict real-world scenarios. It includes empirical analyses of practical, systematic ML-based NIDS with significant network traffic for improved intrusion detection. A comparison between machine learning classifiers, including deep learning, forms part of the evaluation process. Results on how the proposed approach increased model effectiveness for NIDS in a more practical setting are discussed. Recurrent neural network and random forest models consistently achieved the best results.
- Full Text:
- Date Issued: 2021
An evaluation of hand-based algorithms for sign language recognition
- Marais, Marc, Brown, Dane L, Connan, James, Boby, Alden
- Authors: Marais, Marc , Brown, Dane L , Connan, James , Boby, Alden
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465124 , vital:76575 , xlink:href="https://ieeexplore.ieee.org/abstract/document/9856310"
- Description: Sign language recognition is an evolving research field in computer vision, assisting communication for hearing-impaired people. Hand gestures contain the majority of the information when signing. Focusing on feature extraction methods to obtain the information stored in hand data may improve classification accuracy in sign language recognition. Pose estimation is a popular method for extracting body and hand landmarks. We implement and compare different feature extraction and segmentation algorithms, focusing only on the hands, on the LSA64 dataset. To extract hand landmark coordinates, MediaPipe Holistic is applied to the sign images. Classification is performed using popular CNN architectures, namely ResNet and a Pruned VGG network. A separate 1D-CNN is utilised to classify the hand landmark coordinates extracted using MediaPipe. The best performance was achieved on the unprocessed raw images using a Pruned VGG network with an accuracy of 95.50%. However, the more computationally efficient model using the hand landmark data and a 1D-CNN for classification achieved an accuracy of 94.91%.
- Full Text:
- Date Issued: 2022
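Feeding pose-estimation output into a 1D-CNN, as above, requires flattening each frame's landmarks into a fixed-length vector. A hypothetical sketch follows; the wrist-relative normalisation is a common preprocessing choice, not necessarily the paper's, and the dummy data merely stands in for MediaPipe Holistic output:

```python
def landmarks_to_vector(landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a 63-dimensional vector,
    translated so the wrist (landmark 0) sits at the origin. The result
    can be fed directly to a 1D-CNN's input layer."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    wx, wy, wz = landmarks[0]
    vector = []
    for (x, y, z) in landmarks:
        vector.extend((x - wx, y - wy, z - wz))
    return vector

# Dummy landmarks standing in for MediaPipe Holistic hand output.
hand = [(0.5 + 0.01 * i, 0.5 - 0.01 * i, 0.0) for i in range(21)]
features = landmarks_to_vector(hand)
print(len(features))  # 63
```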
An Evaluation of Machine Learning Methods for Classifying Bot Traffic in Software Defined Networks
- Van Staden, Joshua, Brown, Dane L
- Authors: Van Staden, Joshua , Brown, Dane L
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463357 , vital:76402 , xlink:href="https://link.springer.com/chapter/10.1007/978-981-19-7874-6_72"
- Description: Internet security is an ever-expanding field. Cyber-attacks occur very frequently, and so detecting them is an important aspect of preserving services. Machine learning offers a helpful tool with which to detect cyber-attacks. However, it is impossible to deploy a machine-learning algorithm to detect attacks in a non-centralized network. Software Defined Networks (SDNs) offer a centralized view of a network, allowing machine learning algorithms to detect malicious activity within it. The InSDN dataset is a recently released dataset containing packets sniffed within a virtual SDN. These packets correspond to various attacks, including DDoS attacks, Probing and Password-Guessing, among others. This study aims to evaluate various machine learning models against this new dataset. Specifically, we evaluate their classification ability and runtimes when trained on fewer features. The machine learning models tested include a Neural Network, Support Vector Machine, Random Forest, Multilayer Perceptron, Logistic Regression, and K-Nearest Neighbours. Cluster-based algorithms such as K-Nearest Neighbours and Random Forest proved to be the best performers. Linear-based algorithms such as the Multilayer Perceptron performed the worst. This suggests a good level of clustering in the top few features, with little room for linear separability. The reduction of features significantly reduced training time, particularly in the better-performing models.
- Full Text:
- Date Issued: 2023
An Evaluation of Machine Learning Methods for Classifying Bot Traffic in Software Defined Networks
- Van Staden, Joshua, Brown, Dane L
- Authors: Van Staden, Joshua , Brown, Dane L
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465645 , vital:76628 , xlink:href="https://link.springer.com/chapter/10.1007/978-981-19-7874-6_72"
- Description: Internet security is an ever-expanding field. Cyber-attacks occur very frequently, and so detecting them is an important aspect of preserving services. Machine learning offers a helpful tool with which to detect cyber-attacks. However, it is impossible to deploy a machine-learning algorithm to detect attacks in a non-centralized network. Software Defined Networks (SDNs) offer a centralized view of a network, allowing machine learning algorithms to detect malicious activity within it. The InSDN dataset is a recently released dataset containing packets sniffed within a virtual SDN. These packets correspond to various attacks, including DDoS attacks, Probing and Password-Guessing, among others. This study aims to evaluate various machine learning models against this new dataset. Specifically, we evaluate their classification ability and runtimes when trained on fewer features. The machine learning models tested include a Neural Network, Support Vector Machine, Random Forest, Multilayer Perceptron, Logistic Regression, and K-Nearest Neighbours. Cluster-based algorithms such as K-Nearest Neighbours and Random Forest proved to be the best performers. Linear-based algorithms such as the Multilayer Perceptron performed the worst. This suggests a good level of clustering in the top few features, with little room for linear separability. The reduction of features significantly reduced training time, particularly in the better-performing models.
- Full Text:
- Date Issued: 2021
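Training on fewer features, as evaluated above, can be sketched with a simple filter-style selector. Ranking columns by variance is an illustrative stand-in; the study's actual feature-selection method may differ:

```python
def column_variance(rows, j):
    """Population variance of feature column j across all rows."""
    values = [row[j] for row in rows]
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def select_top_k(rows, k):
    """Keep only the k highest-variance feature columns, returning
    (reduced_rows, kept_column_indices)."""
    n_cols = len(rows[0])
    ranked = sorted(range(n_cols), key=lambda j: column_variance(rows, j),
                    reverse=True)
    kept = sorted(ranked[:k])
    return [[row[j] for j in kept] for row in rows], kept

X = [
    [1.0, 10.0, 0.5],
    [1.0, 20.0, 0.6],
    [1.0, 30.0, 0.4],
]
reduced, kept = select_top_k(X, k=2)
print(kept)  # column 0 is constant, so columns 1 and 2 survive
```

Dropping low-information columns this way shrinks each training example, which is the mechanism behind the reduced training times the abstract reports.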
An Evaluation of YOLO-Based Algorithms for Hand Detection in the Kitchen
- Van Staden, Joshua, Brown, Dane L
- Authors: Van Staden, Joshua , Brown, Dane L
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465134 , vital:76576 , xlink:href="https://ieeexplore.ieee.org/abstract/document/9519307"
- Description: Convolutional Neural Networks have offered an accurate method with which to run object detection on images. Specifically, the YOLO family of object detection algorithms has proven to be relatively fast and accurate. Since its inception, the different variants of this algorithm have been tested on different datasets. In this paper, we evaluate the performances of these algorithms on the recent Epic Kitchens-100 dataset. This dataset provides egocentric footage of people interacting with various objects in the kitchen. Most prominently shown in the footage is an egocentric view of the participants' hands. We aim to use the YOLOv3 algorithm to detect these hands within the footage provided in this dataset. In particular, we examine the YOLOv3 algorithm using two different backbones: MobileNet-lite and VGG16. We trained them on a mixture of samples from the Egohands and Epic Kitchens-100 datasets. In a separate experiment, average precision was measured on an unseen Epic Kitchens-100 subset. We found that the models are relatively simple and lead to lower scores on the Epic Kitchens-100 dataset. This is attributed to the high background noise in the Epic Kitchens-100 dataset. Nonetheless, the VGG16 architecture was found to have a higher Average Precision (AP) and is, therefore, more suited for retrospective analysis. None of the models was suitable for real-time analysis due to complex egocentric data.
- Full Text:
- Date Issued: 2021
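The average-precision evaluation mentioned in this abstract can be sketched with two standard object-detection building blocks: intersection-over-union (IoU) for matching predicted hand boxes to ground truth, and the step-wise area under the precision-recall curve. This is a generic sketch of those metrics, not code from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-Union for two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(matches, n_gt):
    """AP from detections ranked by confidence; matches[i] is True when
    the i-th detection overlaps a ground-truth box above the IoU
    threshold, and n_gt is the number of ground-truth boxes."""
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, hit in enumerate(matches, start=1):
        if hit:
            tp += 1
            recall, precision = tp / n_gt, tp / rank
            ap += (recall - prev_recall) * precision
            prev_recall = recall
    return ap
```

For example, two boxes `(0, 0, 10, 10)` and `(5, 5, 15, 15)` share a 25-pixel overlap out of 175 pixels of union, giving an IoU of 1/7.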
An investigation of face and fingerprint feature-fusion guidelines
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/473751 , vital:77678 , xlink:href="https://doi.org/10.1007/978-3-319-34099-9_45"
- Description: There is a lack of multi-modal biometric fusion guidelines at the feature level. This paper investigates the strengths and weaknesses of face and fingerprint features. These findings serve as a set of guidelines for authors who are planning face and fingerprint feature-fusion applications or who aim to extend them into a general framework. Applying the proposed guidelines to face and fingerprint data achieved a recognition accuracy of 91.11% when using only a single training sample, and 99.69% when using five training samples.
- Full Text:
- Date Issued: 2016
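Feature-level fusion of the kind this abstract discusses is commonly implemented by normalising each modality's feature vector to a shared scale, weighting it, and concatenating. A minimal sketch under those assumptions (the weights and normalisation scheme here are illustrative, not the paper's):

```python
import numpy as np

def zscore(v):
    """Normalise a feature vector so modalities share a common scale."""
    std = v.std()
    return (v - v.mean()) / std if std > 0 else v - v.mean()

def fuse_features(face_vec, finger_vec, w_face=0.5, w_finger=0.5):
    """Feature-level fusion: normalise each modality, weight it, and
    concatenate into a single template for the classifier."""
    return np.concatenate([w_face * zscore(face_vec),
                           w_finger * zscore(finger_vec)])
```

The fused template is then fed to a single classifier, in contrast to score-level fusion, which combines the outputs of per-modality matchers.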
Darknet Traffic Detection Using Histogram-Based Gradient Boosting
- Brown, Dane L, Sepula, Chikondi
- Authors: Brown, Dane L , Sepula, Chikondi
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/464063 , vital:76472 , xlink:href="https://link.springer.com/chapter/10.1007/978-981-99-1624-5_59"
- Description: The network security sector has observed a rise in severe attacks emanating from the darknet, or encrypted networks, in recent years. Network intrusion detection systems (NIDS) capable of detecting darknet or encrypted traffic must be developed to increase system security. Machine learning algorithms can effectively detect darknet activities when trained on encrypted and conventional network data. However, system performance may be influenced by, among other things, the choice of machine learning models, data preparation techniques, and feature selection methodologies. The histogram-based gradient boosting strategy known as categorical boosting (CatBoost) was evaluated for its ability to detect darknet traffic. The model's performance was examined using feature selection strategies such as the correlation coefficient, variance thresholding, SelectKBest, and recursive feature elimination (RFE). After traffic was classified as “darknet” or “regular”, multi-class classification was used to determine the software application associated with the traffic. Well-known machine learning methods such as random forests (RF), decision trees (DT), the linear support vector classifier (SVC Linear), and long short-term memory (LSTM) were also studied. The proposed model achieved good results, with 98.51% binary classification accuracy and 88% multi-class classification accuracy.
- Full Text:
- Date Issued: 2023
Deep face-iris recognition using robust image segmentation and hyperparameter tuning
- Authors: Brown, Dane L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465145 , vital:76577 , xlink:href="https://link.springer.com/chapter/10.1007/978-981-16-3728-5_19"
- Description: Biometrics are increasingly being used for tasks that involve sensitive or financial data. Hitherto, security on devices such as smartphones has not been a priority. Furthermore, users tend to ignore the security features in favour of more rapid access to the device. A bimodal system is proposed that enhances security by utilizing face and iris biometrics from a single image. The motivation behind this is the ability to acquire both biometrics simultaneously in one shot. The system’s biometric components (face, iris(es) and their fusion) are evaluated and compared with related studies. The best results were yielded by a proposed lightweight Convolutional Neural Network architecture, which outperformed tuned VGG-16, Xception, SVM and the related works. The system advances ‘at-a-distance’ biometric recognition on computing devices of both limited and high computational capacity. All deep learning algorithms are provided with augmented data, included in the tuning process, enabling additional accuracy gains. Highlights include near-perfect fivefold cross-validation accuracy on the IITD-Iris dataset when performing identification. Verification tests were carried out on the challenging CASIA-Iris-Distance dataset and performed well with few training samples. The proposed system is practical for small or large amounts of training data and shows great promise for at-a-distance recognition and biometric fusion.
- Full Text:
- Date Issued: 2022
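The one-shot face-plus-iris idea in this abstract amounts to segmenting two regions from the same image and fusing their features. A toy sketch, where the bounding boxes stand in for the outputs of hypothetical face and iris detectors and flattened pixels stand in for learned features:

```python
import numpy as np

def crop(img, box):
    """Extract a region given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return img[y1:y2, x1:x2]

def fuse_face_iris(image, face_box, iris_box):
    """Crop face and iris regions from one image, flatten, normalise
    each to zero mean and unit variance, and concatenate into a single
    bimodal feature vector."""
    face = crop(image, face_box).astype(float).ravel()
    iris = crop(image, iris_box).astype(float).ravel()
    face = (face - face.mean()) / (face.std() + 1e-8)
    iris = (iris - iris.mean()) / (iris.std() + 1e-8)
    return np.concatenate([face, iris])
```

In the actual system the per-region features would come from a CNN rather than raw pixels, but the single-image acquisition and fusion structure is the same.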
Deep Learning Approach to Image Deblurring and Image Super-Resolution using DeblurGAN and SRGAN
- Kuhlane, Luxolo L, Brown, Dane L, Connan, James, Boby, Alden, Marais, Marc
- Authors: Kuhlane, Luxolo L , Brown, Dane L , Connan, James , Boby, Alden , Marais, Marc
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465157 , vital:76578 , xlink:href="https://www.researchgate.net/profile/Luxolo-Kuhlane/publication/363257796_Deep_Learning_Approach_to_Image_Deblurring_and_Image_Super-Resolution_using_DeblurGAN_and_SRGAN/links/6313b5a01ddd44702131b3df/Deep-Learning-Approach-to-Image-Deblurring-and-Image-Super-Resolution-using-DeblurGAN-and-SRGAN.pdf"
- Description: Deblurring is the task of restoring a blurred image to a sharp one, retrieving the information lost to the blur. Image deblurring and super-resolution, as representative image restoration problems, have been studied for a decade. Owing to their wide range of applications, numerous techniques have been proposed to tackle these problems, inspiring innovations for better performance. Deep learning has become a robust framework for many image processing tasks, including restoration. In particular, generative adversarial networks (GANs), proposed by [1], have demonstrated remarkable performance in generating plausible images. However, training GANs for image restoration is a non-trivial task. This research investigates optimization schemes for GANs that improve image quality by providing meaningful training objective functions. In this paper, we apply DeblurGAN and the Super-Resolution Generative Adversarial Network (SRGAN) to the chosen dataset.
- Full Text:
- Date Issued: 2022
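Restoration quality for deblurring and super-resolution is conventionally reported as peak signal-to-noise ratio (PSNR) against the sharp reference. The abstract does not specify its metrics, so the following is the generic definition, sketched in numpy:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB: higher means the restored
    image is closer to the sharp reference."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 10 grey levels on an 8-bit image (MSE of 100) gives roughly 28.1 dB; typical SRGAN-style results land in the 25-30 dB range on standard benchmarks.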
Deep Palmprint Recognition with Alignment and Augmentation of Limited Training Samples
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/440249 , vital:73760 , xlink:href="https://doi.org/10.1007/s42979-021-00859-3"
- Description: This paper builds upon a previously proposed automatic palmprint alignment and classification system. The proposed system was geared towards palmprints acquired from either contact or contactless sensors. It was robust to finger location and fist shape changes, accurately extracting the palmprints in images without fingers. An extension to this previous work includes comparisons of traditional and deep learning models, both with hyperparameter tuning. The proposed methods are compared with related verification systems, and a detailed evaluation of open-set identification is given. The best results were yielded by a proposed Convolutional Neural Network based on VGG-16, which outperformed tuned VGG-16 and Xception architectures. All deep learning algorithms are provided with augmented data, included in the tuning process, enabling significant accuracy gains. Highlights include near-zero and zero EER on IITD-Palmprint verification using one training sample and the leave-one-out strategy, respectively. Therefore, the proposed palmprint system is practical, as it is effective on data containing many and few training examples.
- Full Text:
- Date Issued: 2022
Deep palmprint recognition with alignment and augmentation of limited training samples
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/464074 , vital:76473 , xlink:href="https://doi.org/10.1007/s42979-021-00859-3"
- Description: This paper builds upon a previously proposed automatic palmprint alignment and classification system. The proposed system was geared towards palmprints acquired from either contact or contactless sensors. It was robust to finger location and fist shape changes, accurately extracting the palmprints in images without fingers. An extension to this previous work includes comparisons of traditional and deep learning models, both with hyperparameter tuning. The proposed methods are compared with related verification systems, and a detailed evaluation of open-set identification is given. The best results were yielded by a proposed Convolutional Neural Network based on VGG-16, which outperformed tuned VGG-16 and Xception architectures. All deep learning algorithms are provided with augmented data, included in the tuning process, enabling significant accuracy gains. Highlights include near-zero and zero EER on IITD-Palmprint verification using one training sample and the leave-one-out strategy, respectively. Therefore, the proposed palmprint system is practical, as it is effective on data containing many and few training examples.
- Full Text:
- Date Issued: 2022
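The equal error rate (EER) highlighted in these palmprint abstracts is the operating point where the false accept rate equals the false reject rate. A minimal sketch of how it is computed from genuine and impostor match scores (generic definition, not the paper's code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from match scores: scan candidate thresholds and return the
    smallest value at which false accepts (impostor scores at or above
    the threshold) balance false rejects (genuine scores below it)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # false accept rate
        frr = np.mean(genuine < t)     # false reject rate
        best = min(best, max(far, frr))
    return best
```

Perfectly separated score distributions give an EER of zero, matching the paper's zero-EER leave-one-out result on IITD-Palmprint.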
Early dehydration detection using infrared imaging
- Poole, Louise C, Brown, Dane L, Connan, James
- Authors: Poole, Louise C , Brown, Dane L , Connan, James
- Date: 2021
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465656 , vital:76629 , xlink:href="https://www.researchgate.net/profile/Louise-Poole-3/publication/357578445_Early_Dehydration_Detection_Using_Infrared_Imaging/links/61d5664eb8305f7c4b231d50/Early-Dehydration-Detection-Using-Infrared-Imaging.pdf"
- Description: Crop loss and failure have devastating impacts on a country’s economy and food security. Developing effective and inexpensive systems to minimize crop loss has become essential. Recently, multispectral imaging, in particular visible and infrared imaging, has become popular for analyzing plants and shows potential for early identification of plant stress. We created a directly comparable visible and infrared image dataset for dehydration in spinach leaves. We trained and compared various models on both datasets and concluded that the models trained on the infrared dataset outperformed all of those trained on the visible dataset. In particular, the models trained to identify early signs of dehydration differed in accuracy by 45 percentage points, with the infrared model obtaining 70% accuracy and the visible model 25% accuracy. Infrared imaging thus shows promising potential for early plant stress and disease identification.
- Full Text:
- Date Issued: 2021
Early Plant Disease Detection using Infrared and Mobile Photographs in Natural Environment
- De Silva, Malitha, Brown, Dane L
- Authors: De Silva, Malitha , Brown, Dane L
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/464085 , vital:76474 , xlink:href="https://link.springer.com/chapter/10.1007/978-3-031-37717-4_21"
- Description: Plant disease identification is a critical aspect of plant health management. Identifying plant diseases is challenging since they manifest in various forms and tend to occur while the plant is still in its juvenile stage. Plant disease also has cascading effects on food security, livelihoods and environmental safety, so early detection is vital. This work demonstrates the effectiveness of mobile and multispectral images captured in the visible and Near-Infrared (NIR) ranges for identifying plant diseases under realistic environmental conditions. The data sets were classified using the popular CNN models Xception, DenseNet121 and ResNet50V2, yielding greater than 92% training and 74% test accuracy on all the data collected with various Kolari vision lenses. Moreover, an openly available balanced data set was used to compare the effect of balanced and unbalanced data sets on classification accuracy. The results showed that data set balance did not impact the outcome.
- Full Text:
- Date Issued: 2023
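The balanced-versus-unbalanced comparison in this abstract is often handled in practice by weighting classes inversely to their frequency during training. A small sketch of that standard correction (illustrative; the paper compares data sets directly rather than reweighting):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    so that rare classes contribute as much to the loss as common
    ones: weight_c = n_samples / (n_classes * count_c)."""
    classes, counts = np.unique(labels, return_counts=True)
    n = len(labels)
    return {c: n / (len(classes) * cnt) for c, cnt in zip(classes, counts)}
```

For labels `[0, 0, 0, 1]` this yields weights of 2/3 for the majority class and 2.0 for the minority class.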
Efficient Biometric Access Control for Larger Scale Populations
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465667 , vital:76630 , xlink:href="https://www.researchgate.net/profile/Dane-Brown-2/publication/335378829_Efficient_Biometric_Access_Control_for_Larger_Scale_Populations/links/5d61159ea6fdccc32ccd2c8a/Efficient-Biometric-Access-Control-for-Larger-Scale-Populations.pdf"
- Description: Biometric applications and databases are growing at an alarming rate. Processing large or complex biometric data induces longer wait times that can limit usability during application. This paper focuses on increasing the processing speed of biometric data and calls for a parallel approach to data processing that is beyond the capability of a central processing unit (CPU). The graphics processing unit (GPU) is effectively utilized through the compute unified device architecture (CUDA), resulting in at least triple the processing speed compared with a previously presented accurate and secure multimodal biometric system. When the CPU-only implementation is saturated with more individuals than the available thread count, the GPU-assisted implementation outperforms it exponentially. The GPU-assisted implementation is also validated to have the same accuracy as the original system, and thus shows promising advancements in both accuracy and processing speed in the challenging big-data world.
- Full Text:
- Date Issued: 2018
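The data-parallel matching that makes GPU offloading pay off here can be illustrated without CUDA: comparing one probe template against every enrolled template is a single vectorised operation, exactly the shape of work that maps onto GPU threads. A numpy sketch (the template sizes are illustrative, not the paper's):

```python
import numpy as np

def match_all(probe, gallery):
    """Euclidean distances from one probe template to every enrolled
    template, computed as one vectorised operation instead of a
    per-enrollee Python loop."""
    return np.linalg.norm(gallery - probe, axis=1)

# One matrix operation scores the probe against 10,000 enrollees.
gallery = np.random.rand(10000, 128)   # enrolled feature vectors
probe = np.random.rand(128)
distances = match_all(probe, gallery)
best_match = int(np.argmin(distances))
```

On a GPU, the same broadcast-subtract-and-reduce runs with one thread per enrollee, which is why throughput keeps scaling after a CPU's thread count is saturated.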