Pneumonia Detection with Explainable AI

Date: June 2025

Overview

A DenseNet201-based chest X-ray classification API that achieves 88.4% precision and 93.0% recall for pneumonia detection and pairs each prediction with an explainable Grad-CAM visualization. The system enables radiologists to validate AI predictions through visual saliency maps, supporting clinical decision-making with transparent, interpretable results.

Problem / Opportunity

Medical imaging diagnosis is time-intensive and requires expert interpretation, creating bottlenecks in healthcare delivery. Radiologists need AI assistance that not only achieves high accuracy but also provides interpretable results to validate findings. Existing models often lack explainability, making clinical adoption difficult. This project delivers a production-ready AI system that augments radiologist capabilities through accurate detection and transparent reasoning visualizations.

Approach

  1. Fine-tuned a DenseNet201 architecture for binary chest X-ray classification (pneumonia vs. normal), balancing precision and recall for clinical use (model sketch below)
  2. Applied extensive data augmentation, including resizing, color jitter, Gaussian blur, random perspective, and affine transformations, to improve model robustness (augmentation sketch below)
  3. Integrated Grad-CAM visualization, generating saliency maps that highlight the regions influencing each prediction for radiologist validation (Grad-CAM sketch below)
  4. Deployed a FastAPI production service with dedicated endpoints for inference and visual explanation generation (API sketch below)
  5. Created an interactive demo on Hugging Face Spaces enabling instant testing with X-ray image uploads

Technical Details: Trained for 20 epochs using the AdamW optimizer with a OneCycleLR scheduler (training sketch below). The architecture adds a custom classifier head with ReLU activation and 50% dropout for regularization.
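
A minimal sketch of that training configuration; the learning rates, weight decay, and `train_loader` are illustrative assumptions:

```python
import torch

EPOCHS = 20
model = build_model()  # from the model sketch above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3,
    steps_per_epoch=len(train_loader), epochs=EPOCHS)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR steps once per batch, not per epoch
```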

Outcomes & Impact

Model Performance

The fine-tuned DenseNet201 achieves 88.4% precision and 93.0% recall for pneumonia detection on chest X-rays.

Explainability & Trust

Grad-CAM saliency maps show which regions of each X-ray influenced a prediction, giving radiologists visual evidence with which to validate or challenge the model's output.

Deployment & Accessibility

The model is served by a production FastAPI service with dedicated inference and explanation endpoints, and an interactive Hugging Face Spaces demo allows instant testing with uploaded X-ray images.

Project Visualizations

[Figure: X-ray analysis visualization]
[Figure: Grad-CAM heatmap visualization]

Code Repository

Technical Skills

Python, PyTorch, Torchvision, FastAPI, scikit-learn, NumPy, OpenCV, Grad-CAM, DenseNet-201, Medical Imaging

Learnings/Takeaways

A key takeaway is the effectiveness of transfer learning: fine-tuning a pre-trained DenseNet adapted it well to a specialized medical imaging task. Implementing Grad-CAM underscored how important model interpretability (XAI) is for building trust in clinical applications, since every prediction comes with visual evidence. Finally, packaging the entire pipeline into a production-ready FastAPI service provided practical experience in model deployment and MLOps, bridging the gap between research and real-world application.
