Enabling Vehicle Search Through Robust Licence Plate Detection
- Authors: Boby, Alden , Brown, Dane L , Connan, James , Marais, Marc , Kuhlane, Luxolo L
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463372 , vital:76403 , xlink:href="https://ieeexplore.ieee.org/abstract/document/10220508"
- Description: Licence plate recognition has many practical applications in security and surveillance. This paper presents a robust licence plate detection system that uses string-matching algorithms to identify a vehicle in data. Object detection models have had limited application in the character recognition domain. The system utilises the YOLO object detection model to perform character recognition, ensuring more accurate character predictions, and incorporates super-resolution techniques to enhance the quality of licence plate images and increase character recognition accuracy. The proposed system can accurately detect licence plates in diverse conditions and can handle licence plates with varying fonts and backgrounds. Its effectiveness is demonstrated through experimentation on the system's components, showing promising licence plate detection and character recognition accuracy. The overall system combines these components to track vehicles by matching a target string against detected licence plates in a scene. It has potential applications in law enforcement, traffic management and parking systems, and can significantly advance surveillance and security through automation.
- Full Text:
- Date Issued: 2023
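The vehicle-search step described in the abstract above, matching a target string against plate strings read by the character recogniser, can be illustrated with a small sketch. The function name, the similarity threshold and the sample plate strings are illustrative assumptions, not taken from the paper.

```python
from difflib import SequenceMatcher


def match_plate(target: str, detected: str, threshold: float = 0.8) -> bool:
    """Return True when a detected plate string is close enough to the target.

    A similarity ratio (rather than exact equality) tolerates isolated
    character-recognition errors, which are common on low-resolution plates.
    The 0.8 threshold is an assumed value for illustration.
    """
    ratio = SequenceMatcher(None, target.upper(), detected.upper()).ratio()
    return ratio >= threshold


# One misread character ('5' read as 'S') still matches the target plate.
print(match_plate("CA123456", "CA1234S6"))  # → True
```

A stricter threshold trades missed matches for fewer false alarms; the right setting depends on the recogniser's per-character error rate.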
Plant Disease Detection using Vision Transformers on Multispectral Natural Environment Images
- Authors: De Silva, Malitha , Brown, Dane L
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463456 , vital:76410 , xlink:href="https://ieeexplore.ieee.org/abstract/document/10220517"
- Description: Enhancing agricultural practices has become essential in mitigating global hunger. Over the years, significant technological advancements have been introduced to improve the quality and quantity of harvests by effectively managing weeds, pests, and diseases. Many studies have focused on identifying plant diseases, as this information aids in making informed decisions about applying fungicides and fertilizers. Advanced systems often employ a combination of image processing and deep learning techniques to identify diseases based on visible symptoms. However, these systems typically rely on pre-existing datasets or images captured in controlled environments. This study showcases the efficacy of utilizing multispectral images captured in the visible and Near Infrared (NIR) ranges for identifying plant diseases in real-world environmental conditions. The collected datasets were classified using popular Vision Transformer (ViT) models, including ViT-S16, ViT-B16, ViT-L16 and ViT-B32. The results showed impressive training and test accuracies across all the data collected using diverse Kolari vision lenses, at 93.71% and 90.02%, respectively. This work highlights the potential of utilizing advanced imaging techniques for accurate and reliable plant disease identification in practical field conditions.
- Full Text:
- Date Issued: 2023
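The ViT model names in the abstract above encode their patch size (e.g. ViT-B16 tokenises an image into 16 x 16 patches). That tokenisation step, common to all the listed variants, can be sketched as follows; the function name and the 224 x 224 input size are assumptions for illustration.

```python
import numpy as np


def patchify(image: np.ndarray, patch: int) -> np.ndarray:
    """Split an H x W x C image into flattened non-overlapping patches.

    This is the tokenisation step shared by ViT-S16, ViT-B16, ViT-L16
    and ViT-B32: each patch becomes one input token for the transformer.
    """
    h, w, c = image.shape
    rows, cols = h // patch, w // patch
    # Crop to a whole number of patches, then regroup into a patch grid.
    patches = image[:rows * patch, :cols * patch].reshape(
        rows, patch, cols, patch, c).swapaxes(1, 2)
    return patches.reshape(rows * cols, patch * patch * c)


tokens = patchify(np.zeros((224, 224, 3)), 16)
print(tokens.shape)  # → (196, 768): 14 x 14 patches, each 16 x 16 x 3 values
```

A larger patch size (as in ViT-B32) yields fewer, coarser tokens, trading spatial detail for compute.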
Real-Time Detecting and Tracking of Squids Using YOLOv5
- Authors: Kuhlane, Luxolo , Brown, Dane L , Marais, Marc
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463467 , vital:76411 , xlink:href="https://ieeexplore.ieee.org/abstract/document/10220521"
- Description: This paper proposes a real-time system for detecting and tracking squids using the YOLOv5 object detection algorithm. The system utilizes a large dataset of annotated squid images and videos to train a YOLOv5 model optimized for detecting and tracking squids. The model is fine-tuned to minimize false positives and optimize detection accuracy. The system is deployed on a GPU-enabled device for real-time processing of video streams and tracking of detected squids across frames. The accuracy and speed of the system make it a valuable tool for marine scientists, conservationists, and fishermen to better understand the behavior and distribution of these elusive creatures. Future work includes incorporating additional computer vision techniques and sensor data to improve tracking accuracy and robustness.
- Full Text:
- Date Issued: 2023
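The tracking step described in the abstract above, carrying detections across frames, can be sketched with a simple greedy IoU association; this is a generic illustration under assumed names and thresholds, not the paper's implementation.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


def track(prev_tracks, detections, next_id, thresh=0.3):
    """Greedy frame-to-frame association: a track ID is carried forward
    when a new detection overlaps the previous frame's box, otherwise a
    fresh ID is started. The 0.3 threshold is an assumed value."""
    tracks = {}
    unmatched = list(detections)
    for tid, box in prev_tracks.items():
        best = max(unmatched, key=lambda d: iou(box, d), default=None)
        if best is not None and iou(box, best) >= thresh:
            tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:
        tracks[next_id] = det
        next_id += 1
    return tracks, next_id


# Frame 1 seeds track 0; frame 2's overlapping box keeps the same ID.
tracks, next_id = track({}, [(0, 0, 10, 10)], next_id=0)
tracks, next_id = track(tracks, [(1, 1, 11, 11)], next_id)
print(tracks)  # → {0: (1, 1, 11, 11)}
```

In a full pipeline the `detections` list would come from the YOLOv5 model's per-frame output; more robust trackers add motion prediction and appearance cues on top of this association step.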
Spatiotemporal Convolutions and Video Vision Transformers for Signer-Independent Sign Language Recognition
- Authors: Marais, Marc , Brown, Dane L , Connan, James , Boby, Alden
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463478 , vital:76412 , xlink:href="https://ieeexplore.ieee.org/abstract/document/10220534"
- Description: Sign language is a vital tool of communication for individuals who are deaf or hard of hearing. Sign language recognition (SLR) technology can assist in bridging the communication gap between deaf and hearing individuals. However, existing SLR systems are typically signer-dependent, requiring training data from the specific signer for accurate recognition. This presents a significant challenge for practical use, as collecting data from every possible signer is not feasible. This research focuses on developing a signer-independent isolated SLR system to address this challenge. The system implements two model variants on the signer-independent datasets: an R(2+1)D spatiotemporal convolutional block and a Video Vision Transformer (ViViT). These models learn to extract features from raw sign language videos from the LSA64 dataset and classify signs without needing handcrafted features, explicit segmentation or pose estimation. Overall, the R(2+1)D model architecture significantly outperformed the ViViT architecture for signer-independent SLR on the LSA64 dataset. The R(2+1)D model achieved a near-perfect accuracy of 99.53% on the unseen test set, with the ViViT model yielding an accuracy of 72.19%, showing that spatiotemporal convolutions are effective at signer-independent SLR.
- Full Text:
- Date Issued: 2023
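The R(2+1)D block named in the abstract above factorises a full t x k x k 3D convolution into a 1 x k x k spatial convolution followed by a t x 1 x 1 temporal one, with the intermediate channel count chosen so the factorised block keeps the same parameter budget. A small worked computation of that channel count; the function name is an assumption for illustration.

```python
def r2plus1d_mid_channels(c_in: int, c_out: int, t: int, k: int) -> int:
    """Intermediate channels M for an R(2+1)D factorisation.

    A full 3D conv has t*k*k*c_in*c_out parameters; the factorised pair
    has k*k*c_in*M (spatial) + t*M*c_out (temporal). Solving for equal
    parameter counts gives M below (floor division for an integer width).
    """
    return (t * k * k * c_in * c_out) // (k * k * c_in + t * c_out)


# For a 3 x 3 x 3 kernel mapping 64 -> 64 channels:
m = r2plus1d_mid_channels(64, 64, 3, 3)
print(m)  # → 144; both forms then use 110,592 parameters
```

The factorisation inserts an extra nonlinearity between the spatial and temporal convolutions, which is one reason such blocks can outperform plain 3D convolutions at the same parameter count.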
Comparison of fluorophore and peroxidase labeled aptamer assays for MUC1 detection in cancer cells
- Authors: Flanagan, Shane , Limson, Janice , Fogel, Ronen
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/431076 , vital:72742 , xlink:href="10.1109/BioCAS.2014.6981720"
- Description: Aptamers hold great promise for cancer diagnosis and therapy. Several biosensors incorporate aptamers as biorecognition elements for tumor markers, although few evaluate their detection in a native conformation and cellular micro-environment. In this study, fluorophore- and peroxidase-labeled aptamer configurations were compared for the detection of MCF7 breast and SW620 colon cancer cell lines expressing the tumor marker MUC1. Fluorescence-based detection showed selective binding to the cell lines relative to a non-binding control sequence, with sequence-specific binding differences between MUC1 aptamers attributed to variation in the glycosylation state of expressed MUC1. The peroxidase-labeled assay showed high detection sensitivity, although low binding specificity was observed for the MUC1 aptamers to the cell lines. Results suggest that aptamers susceptible to non-specific binding to cells may limit the applicability of enzymatic amplification to improve aptasensor sensitivity.
- Full Text:
- Date Issued: 2014
Electrochemical inclusion of catechol into single-walled carbon nanotubes: application for sensors
- Authors: Oni, Joshua , Limson, Janice
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/431090 , vital:72743 , xlink:href="10.1109/BioCAS.2014.6981727"
- Description: We report on the use of catechol for the electrochemical activation of acid-functionalised single-walled carbon nanotubes immobilised on glassy carbon electrodes. Following well-published methods for catechol activation of bare glassy carbon electrodes, these studies show the efficacy of extending the method to activation of carbon nanotubes. Voltammetric scans in catechol show an increase in current response of 37 μA for the catechol redox pair over a maximum of three cycles during the catechol activation step. An increase in the ease of electron flow is indicated by a larger value for K_app, which corresponds to a decrease in R_ct obtained during impedance measurements. Catechol activation enhanced electron transfer, potentially afforded by an ease of electron passage due to a decrease in the resistance of the layer.
- Full Text:
- Date Issued: 2014