From Code to Field: Evaluating the Robustness of Convolutional Neural Networks for Disease Diagnosis in Mango Leaves

By: Gabriel Vitorino de Andrade, Saulo Roberto dos Santos, Itallo Patrick Castro Alves da Silva, Emanuel Adler Medeiros Pereira, Erick de Andrade Barboza

Published: 2025-12-16

arXiv category: cs.AI

Abstract

This paper evaluates the robustness of CNNs for diagnosing diseases in mango leaves, highlighting practical applications of AI in agricultural crop-health monitoring. The research contributes directly to sustainable farming by enabling early, accurate disease detection, improving yields and reducing losses.

Impact: practical

💡 Simple Explanation

Imagine an app that acts as a doctor for mango trees. Usually, these apps are trained on perfect, studio-quality photos of leaves. But in the farm, photos are blurry, dark, or messy. This research tested how well these AI 'doctors' perform on bad photos and found they often fail. The scientists then fixed this by teaching the AI with 'messy' examples, making it much better at diagnosing diseases in real farm conditions.

🎯 Problem Statement

Deep learning models suffer significant performance degradation when moving from validation on clean, curated datasets to inference in dynamic, noisy agricultural environments, making in-field disease diagnosis unreliable.

🔬 Methodology

The study employs a comparative robustness framework:

1. **Dataset curation**: a labeled mango-leaf dataset split into 'Clean' (lab) and 'Field' (wild) subsets.
2. **Model training**: CNNs (ResNet, EfficientNet, MobileNet) trained via transfer learning.
3. **Perturbation**: 15 algorithmically generated corruptions (noise, blur, weather, digital) applied at 5 severity levels.
4. **Evaluation**: measuring the accuracy drop via the mean Corruption Error (mCE) and testing mitigation with 'AugMix' data augmentation.
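The perturbation step can be sketched in a few lines. This is a minimal, illustrative example, assuming numpy float images in [0, 1]; the severity-to-sigma mapping and the choice of Gaussian noise as the corruption are assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical severity-to-noise-strength mapping (illustrative only).
SIGMAS = {1: 0.04, 2: 0.08, 3: 0.12, 4: 0.18, 5: 0.26}

def gaussian_noise(image: np.ndarray, severity: int, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to an image in [0, 1] at a given severity."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, SIGMAS[severity], image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: a fake 32x32 RGB "leaf" image, corrupted at every severity level.
img = np.full((32, 32, 3), 0.5)
corrupted = {s: gaussian_noise(img, s) for s in range(1, 6)}
```

The paper's full suite covers 15 corruption types (noise, blur, weather, digital); each would follow the same pattern with its own severity parameterization.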

📊 Results

The study found that while ResNet-50 achieved 98.5% accuracy on the clean test set, its accuracy dropped to ~68% under severe noise and blur corruptions. MobileNetV3 offered better latency but similar fragility. Applying 'AugMix' augmentation improved average accuracy under corruption by 12-15% and significantly narrowed the gap between clean and field performance. EfficientNet-B0 offered the best trade-off between parameter count and robustness.
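The accuracy drops above are typically summarized with the mean Corruption Error (mCE) named in the methodology. A minimal sketch, assuming the standard formulation (per-corruption error summed over severities, normalized by a baseline model's errors, then averaged); the error rates below are made up for illustration:

```python
import numpy as np

def mce(model_err: dict, baseline_err: dict) -> float:
    """model_err / baseline_err: corruption name -> list of error rates per severity."""
    ces = [sum(model_err[c]) / sum(baseline_err[c]) for c in model_err]
    return float(np.mean(ces))

# Illustrative (fabricated) error rates at severities 1..5:
model_err = {"noise": [0.10, 0.15, 0.22, 0.30, 0.40],
             "blur":  [0.08, 0.12, 0.18, 0.25, 0.35]}
baseline_err = {"noise": [0.20, 0.30, 0.40, 0.50, 0.60],
                "blur":  [0.15, 0.25, 0.35, 0.45, 0.55]}
score = mce(model_err, baseline_err)  # < 1.0 means more robust than the baseline
```

A lower mCE is better; 1.0 means the model is exactly as fragile as the normalization baseline.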

Key Takeaways

  • High leaderboard accuracy does not guarantee field utility in AgTech.
  • Models must be stress-tested against environmental corruptions.
  • Simple augmentation strategies like AugMix are cost-effective ways to improve reliability without changing the model architecture.
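The AugMix idea behind that last point is to mix several short chains of random augmentations with the clean image. A toy numpy sketch, assuming float images in [0, 1]; the real AugMix uses a richer operation set and a Jensen-Shannon consistency loss during training, neither of which is shown here:

```python
import numpy as np

def augmix(image, ops, width=3, depth=2, alpha=1.0, rng=None):
    """Blend `width` random augmentation chains, then mix with the clean image."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = rng.dirichlet([alpha] * width)   # convex weights across the chains
    m = rng.beta(alpha, alpha)           # how much of the mix vs. the clean image
    mixed = np.zeros_like(image)
    for i in range(width):
        chain = image.copy()
        for _ in range(depth):
            chain = ops[rng.integers(len(ops))](chain)  # random op per step
        mixed += w[i] * chain
    return np.clip((1 - m) * image + m * mixed, 0.0, 1.0)

# Toy ops standing in for real augmentations:
ops = [lambda x: np.clip(x * 1.2, 0, 1),   # contrast-like
       lambda x: np.clip(x + 0.1, 0, 1),   # brightness shift
       lambda x: np.flip(x, axis=1)]       # horizontal flip

leaf = np.full((8, 8, 3), 0.5)
augmented = augmix(leaf, ops)
```

Because only the input pipeline changes, this style of augmentation can be bolted onto any of the architectures in the study without retraining from a different backbone.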

🔍 Critical Analysis

The paper provides a much-needed reality check for agricultural AI, moving beyond sterile lab accuracy. However, it leans heavily on synthetic corruptions as a proxy for field conditions, which do not fully capture occlusion, cluttered backgrounds, or multi-disease symptoms. The method is sound, but the innovation lies in evaluation rigor rather than architectural novelty.

💰 Practical Applications

  • Freemium mobile app for farmers (basic diagnosis free, history/analytics paid).
  • API licensing to large agro-holdings for automated drone surveys.
  • Selling aggregated disease spread data to fungicide manufacturers.

🏷️ Tags

#Computer Vision · #Agriculture · #Robustness · #CNN · #Mango Diseases · #Deep Learning

🏢 Relevant Industries

Agriculture · AgTech · Artificial Intelligence · Mobile Software