I build deployable computer-vision systems for real-world sensing and decision support, with a focus on trustworthy, interpretable AI and end-to-end deployment (edge capture → cloud analytics → stakeholder-facing tools). My work sits at the intersection of machine learning, public health, and systems engineering, where models are judged not only by accuracy but also by robustness, interpretability, and operational usability.
Current focus areas
- MosquitoAI: AI-powered mosquito surveillance from heterogeneous imagery and field-deployed traps
- Dense-object localization: robust detection under occlusion, clutter, and domain shift
- Trustworthy AI: explainability, uncertainty-aware monitoring, and human-in-the-loop review
Highlights
- 10+ publications in interdisciplinary venues spanning AI, biomedicine, and entomology
- 1 U.S. patent application on computer-vision–based biological inference
- Field-facing collaborations with public-health partners to translate models into real surveillance workflows
Technical strengths
Modeling
- CNNs + Transformers for classification, detection, and segmentation
- Handling imbalance: sampling strategies, loss design, threshold selection, and metric-driven tuning (see the sketch after this list)
- Interpretable ML: ROI-based analysis and saliency methods
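
To make the imbalance bullet above concrete, here is a minimal, illustrative sketch using synthetic labels and scores with a hypothetical ~5% positive class: inverse-frequency class weights in the loss, plus choosing a decision threshold from the validation precision-recall curve instead of defaulting to 0.5.

```python
# Minimal sketch: weighted loss + metric-driven threshold selection.
# Labels, scores, and the 5% positive rate are synthetic/illustrative.
import numpy as np
import torch
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)

# Inverse-frequency class weights for a weighted cross-entropy loss
labels = rng.choice([0, 1], size=1000, p=[0.95, 0.05])        # rare positive class
counts = np.bincount(labels, minlength=2)
class_weights = counts.sum() / (len(counts) * counts)          # inverse class frequency
criterion = torch.nn.CrossEntropyLoss(
    weight=torch.tensor(class_weights, dtype=torch.float32)
)
loss = criterion(torch.randn(8, 2), torch.tensor(labels[:8]))  # dummy logits/targets

# Threshold selection on (fake) validation scores rather than a fixed 0.5 cutoff
val_scores = np.clip(rng.normal(0.35 + 0.4 * labels, 0.15), 0.0, 1.0)
precision, recall, thresholds = precision_recall_curve(labels, val_scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = int(np.argmax(f1[:-1]))                                 # last PR point has no threshold
print(f"threshold={thresholds[best]:.3f}  P={precision[best]:.2f}  R={recall[best]:.2f}")
```

The same pattern carries over to detection: pick the operating threshold that meets the surveillance team's precision/recall target on held-out data, then keep monitoring it after deployment.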
Engineering
- Python ML stacks (training, evaluation, visualization, packaging)
- Dataset + experiment management, reproducibility, and reporting
- System integration patterns for edge devices and cloud inference (minimal sketch below)
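
As a small illustration of the edge-to-cloud pattern, the sketch below shows how a field device might hand a trap image off to a cloud inference service. The endpoint URL, request fields, and response shape are assumptions for illustration, not a real MosquitoAI API.

```python
# Minimal sketch of an edge-capture -> cloud-inference handoff.
# INFERENCE_URL, the payload fields, and the JSON response are hypothetical.
import time
from pathlib import Path

import requests

INFERENCE_URL = "https://example.org/api/v1/detect"   # placeholder endpoint


def submit_capture(image_path: Path, trap_id: str, retries: int = 3) -> dict:
    """Upload one trap image and return the (assumed) JSON detection result.

    Retries with exponential backoff, since field devices often sit on
    intermittent cellular links.
    """
    for attempt in range(retries):
        try:
            with image_path.open("rb") as fh:
                resp = requests.post(
                    INFERENCE_URL,
                    files={"image": (image_path.name, fh, "image/jpeg")},
                    data={"trap_id": trap_id},
                    timeout=30,
                )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)   # back off: 1s, 2s, 4s, ...


# Example (would run on the edge device after a capture):
# result = submit_capture(Path("capture_0421.jpg"), trap_id="trap-07")
```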