Plant Disease Detection using Vision Transformers on Multispectral Natural Environment Images
- De Silva, Malitha, Brown, Dane L
- Authors: De Silva, Malitha , Brown, Dane L
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/463456 , vital:76410 , xlink:href="https://ieeexplore.ieee.org/abstract/document/10220517"
- Description: Enhancing agricultural practices has become essential in mitigating global hunger. Over the years, significant technological advancements have been introduced to improve the quality and quantity of harvests by effectively managing weeds, pests, and diseases. Many studies have focused on identifying plant diseases, as this information aids in making informed decisions about applying fungicides and fertilizers. Advanced systems often employ a combination of image processing and deep learning techniques to identify diseases based on visible symptoms. However, these systems typically rely on pre-existing datasets or images captured in controlled environments. This study showcases the efficacy of utilizing multispectral images captured in visible and Near Infrared (NIR) ranges for identifying plant diseases in real-world environmental conditions. The collected datasets were classified using popular Vision Transformer (ViT) models, including ViT-S16, ViT-B16, ViT-L16 and ViT-B32. The results showed impressive training and test accuracies for all the data collected using diverse Kolari vision lenses with 93.71% and 90.02%, respectively. This work highlights the potential of utilizing advanced imaging techniques for accurate and reliable plant disease identification in practical field conditions.
- Full Text:
- Date Issued: 2023
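The ViT variants named in the abstract (S16, B16, L16, B32) differ in backbone size and in the patch size used to tokenise the input image. As an illustration of that tokenisation step only, and not code from the paper, the following NumPy sketch splits an image into the flattened non-overlapping patches a ViT embeds; the 224x224 input size is an assumption, matching the common ViT default.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split an H x W x C image into non-overlapping square patches,
    each flattened to a vector, as in the ViT tokenisation step."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)          # group the two patch-grid axes
             .reshape(-1, patch_size * patch_size * c)
    )

# A 224x224 RGB image yields 196 patches of dimension 768 at patch size 16
# (ViT-S16/B16/L16) and 49 patches of dimension 3072 at patch size 32 (ViT-B32).
image = np.zeros((224, 224, 3))
print(extract_patches(image, 16).shape)  # (196, 768)
print(extract_patches(image, 32).shape)  # (49, 3072)
```

The patch size trades sequence length against per-token dimensionality: halving it quadruples the number of tokens the transformer must attend over.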
Plant disease detection using deep learning on natural environment images
- De Silva, Malitha, Brown, Dane L
- Authors: De Silva, Malitha , Brown, Dane L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465212 , vital:76583 , xlink:href="https://ieeexplore.ieee.org/abstract/document/9855925"
- Description: Improving agriculture is one of the major concerns today, as it helps reduce global hunger. In past years, many technological advancements have been introduced to enhance harvest quality and quantity by controlling and preventing weeds, pests, and diseases. Several studies have focused on identifying diseases in plants, as it helps to make decisions on spraying fungicides and fertilizers. State-of-the-art systems typically combine image processing and deep learning methods to identify conditions with visible symptoms. However, they use already available data sets or images taken in controlled environments. This study was conducted on two data sets of ten plants collected in a natural environment. The first dataset contained RGB Visible images, while the second contained Near-Infrared (NIR) images of healthy and diseased leaves. The visible image dataset showed higher training and validation accuracies than the NIR image dataset with ResNet, Inception, VGG and MobileNet architectures. For the visible image and NIR dataset, ResNet-50V2 outperformed other models with validation accuracies of 98.35% and 94.01%, respectively.
- Full Text:
- Date Issued: 2022
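The best-performing model in this abstract, ResNet-50V2, is built from residual blocks whose defining feature is the identity shortcut y = x + F(x). The toy NumPy sketch below is purely illustrative (not the authors' code, and F is reduced to two small linear layers) to show why the shortcut helps: when the residual branch contributes nothing, the block degenerates to the identity, which is what makes very deep stacks trainable.

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = x + F(x): the skip connection at the heart of ResNet.
    F here is a toy two-layer branch with a ReLU in between."""
    h = np.maximum(w1 @ x, 0.0)   # first layer + ReLU
    return x + w2 @ h             # add the identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8))
w2 = np.zeros((8, 8))
# With a zero second layer, F(x) = 0 and the block passes x through unchanged.
print(np.allclose(residual_block(x, w1, w2), x))  # True
```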