Coconut Disease Detection Using Deep Learning: A Novel Approach

SUMMARY

Coconut palm (Cocos nucifera) is a vital economic resource in tropical regions, providing essential food products, raw materials, and income to farmers. However, coconut diseases such as Gray Leaf Spot, Leaf Rot, Stem Bleeding, Bud Rot, and Bud Root Dropping can drastically reduce yields and threaten the livelihoods of smallholder farmers. Early detection is crucial to minimizing damage and avoiding costly treatments. In this research, we developed a deep learning-based model using a dataset of 33,250 labeled images of coconut trees. Our model combines Convolutional Neural Networks (CNNs) with transfer learning to improve disease-classification accuracy. By experimenting with MobileNetV2 and ResNet50, we balanced predictive performance against computational efficiency. The results show that this AI-driven approach provides a feasible and scalable solution for diagnosing coconut diseases, enabling early intervention and reducing losses.


INTRODUCTION

Coconuts are a primary agricultural product in many developing tropical countries, with their production providing critical income to millions of farmers. Coconut trees, however, are vulnerable to a variety of diseases that threaten both yield and tree health, ultimately affecting food security and livelihoods. Among the most prevalent coconut diseases are Gray Leaf Spot, Leaf Rot, Stem Bleeding, Bud Rot, and Bud Root Dropping. These diseases can spread rapidly if not detected early, leading to substantial financial losses for smallholder farmers who rely on coconuts for income. In many regions, expert consultation is not readily accessible, making automated, AI-based disease detection a valuable tool for disease management.

Traditional approaches to diagnosing coconut diseases rely on manual inspection by trained professionals, which is slow, expensive, and impractical for large-scale farms. Advances in machine learning (ML) and deep learning (DL), particularly in image recognition, have opened new possibilities for automated disease detection in agriculture. Studies have shown that CNNs are effective at detecting plant diseases from images, providing a scalable solution to this widespread problem.

Our research leverages these technological advancements to develop a deep learning model capable of detecting coconut diseases from images. By using field-collected data from farms in southern India, we aim to offer a cost-effective solution for early disease detection, enabling timely intervention and disease management. This paper explores the performance of various deep learning models on our dataset, compares their accuracy, and discusses the potential applications of this technology in sustainable agriculture.


RELATED WORK

Several studies have demonstrated the efficacy of AI and deep learning models in agriculture, particularly for crop and plant disease detection. Kamilaris and Prenafeta-Boldú (2018) presented an extensive survey of deep learning applications in agriculture, noting its potential for image-based disease detection. Mohanty et al. (2016) used CNNs to identify 26 different plant diseases with high accuracy, highlighting the ability of AI to handle complex classification tasks in agriculture.

In the domain of coconut disease detection specifically, fewer studies exist, though some promising work has been done. Ferentinos (2018) explored deep learning models for plant disease detection, achieving high accuracy in identifying multiple types of crop diseases. That work demonstrated the power of CNNs applied to image data, particularly when combined with transfer learning, which lets models leverage pre-trained networks to improve performance on new tasks. Our work builds on these studies, applying similar deep learning techniques to the specific task of coconut disease detection.


DATASET AND DATA PREPROCESSING
Dataset Collection

The dataset for this study was collected from 40 farms in southern India using drone photography and field-based imaging. It consists of 33,250 labeled images spanning six classes: Healthy, Gray Leaf Spot, Leaf Rot, Stem Bleeding, Bud Rot, and Bud Root Dropping. Images were captured under varying environmental conditions, including different lighting and backgrounds, to reflect the variability of real-world use.

Category            Number of Images
Healthy             6,500
Gray Leaf Spot      6,320
Leaf Rot            5,650
Stem Bleeding       5,050
Bud Rot             4,970
Bud Root Dropping   4,760

Preprocessing

All images were resized to 224x224 pixels to standardize the input size for the CNN models. Image augmentation was applied to increase the dataset's variability and prevent overfitting, using random rotations, flips, zoom, and brightness adjustments. We also normalized pixel values to the range [0, 1].
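
To make the pipeline concrete, the sketch below shows how such an augmentation and normalization setup could be expressed with tf.keras's ImageDataGenerator. The specific parameter values and directory layout are illustrative assumptions, not the exact configuration used in this study.

```python
import tensorflow as tf

# Augmentation + [0, 1] normalization, mirroring the steps described above.
# Parameter values here are illustrative, not the study's exact settings.
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,            # normalize pixel values to [0, 1]
    rotation_range=20,            # random rotations
    horizontal_flip=True,         # random horizontal flips
    zoom_range=0.15,              # random zoom
    brightness_range=(0.8, 1.2),  # random brightness adjustments
)

# Resize every image to 224x224 on the fly to match the CNN input size.
# "data/train" is a hypothetical directory with one subfolder per class.
train_generator = train_datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",     # six one-hot-encoded classes
)
```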

Data augmentation plays a critical role in deep learning, as it increases the effective number of training samples and thereby improves model generalization. As shown in Krizhevsky et al.'s (2012) work on ImageNet, augmentation is essential for preventing overfitting in image classification tasks.


MODEL ARCHITECTURE

We experimented with two deep learning architectures: MobileNetV2 and ResNet50. Both models were fine-tuned using transfer learning techniques to adapt to the coconut disease dataset.

MobileNetV2

MobileNetV2 was selected for its lightweight architecture, which is particularly useful for deploying models on mobile devices or systems with limited computational resources. MobileNetV2 employs depthwise separable convolutions, which significantly reduce computational cost while maintaining high accuracy in image classification tasks.
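
A quick back-of-the-envelope calculation illustrates the savings. For a single 3x3 convolution layer (the channel sizes below are arbitrary examples, not MobileNetV2's actual layer configuration), the depthwise separable factorization needs roughly an order of magnitude fewer parameters:

```python
# Parameter count: standard 3x3 convolution vs. depthwise separable
# convolution (3x3 depthwise followed by 1x1 pointwise). Channel sizes
# are arbitrary examples, not MobileNetV2's actual configuration.
k, c_in, c_out = 3, 128, 256

standard = k * k * c_in * c_out          # 294,912 parameters
separable = k * k * c_in + c_in * c_out  # 33,920 parameters (~8.7x fewer)

print(f"standard:  {standard:,}")
print(f"separable: {separable:,}")
```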

The pre-trained MobileNetV2 model, trained on the ImageNet dataset, was fine-tuned for our classification task by replacing the final classification layer with a softmax layer designed to classify the six disease categories. We used a learning rate of 0.0001 and trained the model for 50 epochs with a batch size of 32.
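
A minimal sketch of this transfer-learning setup, assuming a tf.keras implementation, is shown below. The global-average-pooling head and the initial freezing of the backbone are our assumptions; the learning rate, epoch count, and batch size follow the values above.

```python
import tensorflow as tf

def build_model(base_cls, learning_rate, num_classes=6):
    """Attach a six-way softmax head to an ImageNet-pre-trained backbone."""
    base = base_cls(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3))
    # Freeze the backbone initially (an assumption; the text does not
    # specify whether or when pre-trained layers were unfrozen).
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),  # assumed pooling head
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

mobilenet = build_model(tf.keras.applications.MobileNetV2, learning_rate=1e-4)
# mobilenet.fit(train_generator, validation_data=val_generator, epochs=50)
# (the batch size of 32 is set on the data generator)
```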

ResNet50

ResNet50, known for its "residual connections," was selected for its ability to train deeper networks without the vanishing gradient problem. The model has been widely used in medical image analysis and agricultural disease detection due to its accuracy and robustness on complex image datasets.

Like MobileNetV2, ResNet50 was pre-trained on ImageNet and fine-tuned for our task. We experimented with different learning rates, finally selecting 0.00001 to ensure stable training. ResNet50 was trained for 60 epochs using a batch size of 16.
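
Under the hypothetical build_model() helper from the MobileNetV2 sketch above, the ResNet50 variant differs only in the backbone and hyperparameters:

```python
# Reuses the hypothetical build_model() helper from the earlier sketch.
resnet = build_model(tf.keras.applications.ResNet50, learning_rate=1e-5)
# resnet.fit(train_generator, validation_data=val_generator, epochs=60)
# (the batch size of 16 is set on this model's data generator)
```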

Training and Optimization

Both models were optimized using the Adam optimizer, which is well suited to deep learning tasks involving large datasets. Training ran on an NVIDIA Tesla V100 GPU to speed up the process. Early stopping was implemented to prevent overfitting: training halted if the validation loss showed no improvement for 10 epochs.
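
This early-stopping rule maps directly onto a standard Keras callback, sketched below; restoring the best weights at the end is our assumption rather than something stated in the text.

```python
import tensorflow as tf

# Stop training once validation loss has not improved for 10 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=10,
    restore_best_weights=True,  # assumption: revert to the best checkpoint
)
# model.fit(train_generator, validation_data=val_generator,
#           epochs=60, callbacks=[early_stop])
```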


EVALUATION METRICS

We evaluated the models using four performance metrics: accuracy, precision, recall, and F1-score (the harmonic mean of precision and recall).

Confusion matrices were generated to provide insight into misclassifications across the disease categories, and stratified K-fold cross-validation was employed to verify the models' robustness and ability to generalize.
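
A minimal scikit-learn sketch of this evaluation is shown below; the label arrays are synthetic placeholders standing in for the models' held-out predictions, and the fold count is an assumption.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import StratifiedKFold

labels = ["Healthy", "Gray Leaf Spot", "Leaf Rot",
          "Stem Bleeding", "Bud Rot", "Bud Root Dropping"]

# Synthetic stand-ins for the true labels and model predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=500)
noise = rng.integers(0, 6, size=500)
y_pred = np.where(rng.random(500) < 0.9, y_true, noise)

# Per-class precision, recall, and F1, plus overall accuracy.
print(classification_report(y_true, y_pred,
                            labels=list(range(6)), target_names=labels))

# Rows are true classes, columns are predicted classes.
print(confusion_matrix(y_true, y_pred))

# Stratified K-fold keeps each fold's class proportions representative.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (tr_idx, va_idx) in enumerate(skf.split(np.zeros(len(y_true)),
                                                  y_true)):
    print(f"fold {fold}: {len(tr_idx)} train / {len(va_idx)} val")
```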


RESULTS
Model Performance

Both models showed strong performance in detecting coconut diseases, with ResNet50 slightly outperforming MobileNetV2.

Model         Accuracy   Precision   Recall   F1-Score
MobileNetV2   87.2%      86.8%       87.0%    86.9%
ResNet50      89.4%      89.1%       89.5%    89.3%

ResNet50’s deeper architecture allowed it to capture more complex features, improving its overall performance, particularly in differentiating between visually similar diseases like Bud Rot and Bud Root Dropping. The confusion matrix for ResNet50 showed fewer misclassifications in these categories than MobileNetV2.

Disease-wise Performance

Disease          Accuracy (MobileNetV2)   Accuracy (ResNet50)
Healthy          92.1%                    94.3%
Gray Leaf Spot   88.0%                    90.5%
Leaf Rot         85.5%                    87.8%
Stem Bleeding    83.0%                    85.6%
Bud Rot          82.9%                    85.4%

DISCUSSION
Model Selection and Performance

Both MobileNetV2 and ResNet50 proved effective for detecting coconut diseases. MobileNetV2’s efficiency makes it a suitable candidate for real-time deployment on mobile devices or in field conditions where computational resources are limited. However, ResNet50’s deeper architecture provides better overall accuracy, especially for diseases with subtle differences in symptoms. The trade-off between model size and performance should be considered based on the application—whether for large-scale farm management or individual farmer use.

Challenges

Despite the high accuracy achieved, several challenges were observed during the training and testing phases. Diseases with similar symptoms, such as Bud Rot and Bud Root Dropping, were sometimes misclassified, especially in images with poor lighting or resolution. This aligns with findings from previous research, where visually similar diseases often lead to classification challenges.

To address these issues, future work could explore multimodal approaches that incorporate additional data sources such as spectral imaging or soil analysis. Such inputs would provide more contextual information, allowing the model to differentiate visually similar diseases more effectively.


CONCLUSION

This study developed a deep learning-based model for detecting coconut diseases using CNN architectures. Both MobileNetV2 and ResNet50 demonstrated strong performance, with ResNet50 achieving slightly higher accuracy across all disease categories. The model has practical implications for early disease detection, enabling smallholder farmers to take timely action to protect their crops. Future work could extend the model to additional disease categories and improve its robustness through multimodal data fusion.


REFERENCES

Ferentinos, K. P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture, 145, 311-318.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778.

Kamilaris, A., & Prenafeta-Boldú, F. X. (2018). Deep learning in agriculture: A survey. Computers and Electronics in Agriculture, 147, 70-90.

Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 25.

Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7, 1419.

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4510-4520.