Paper Contents
Abstract
The increasing adoption of AI in health care has produced powerful systems that can assist with diagnosis, symptom checking, and patient monitoring. However, many of these systems are black-box models whose decision processes are opaque to users and clinicians, reducing trust, acceptance, and safety. In this paper, we propose a Virtual Health Assistant (VHA) that integrates symptom-based disease prediction with explainable AI (XAI) feedback. Using machine learning classification models (e.g., Random Forest, XGBoost) trained on a curated symptom-disease dataset, and employing XAI methods such as SHAP and LIME, the system not only predicts likely diseases from reported symptoms but also provides interpretable explanations of those predictions. We evaluate the system in terms of prediction performance (accuracy, precision, recall) and explanation quality assessed through user feedback, and we compare trust and acceptance metrics against a non-explainable baseline. The results indicate that XAI feedback significantly improves user trust and understanding, with little or no compromise in predictive performance. The proposed design can serve as a foundation for trustworthy Virtual Health Assistants.
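To illustrate the fit-then-explain pipeline the abstract describes (a tree-ensemble classifier whose predictions are accompanied by feature-level explanations), a minimal sketch in Python follows. The symptom data here is synthetic and hypothetical, and scikit-learn's permutation importance stands in for SHAP/LIME as the post-hoc explanation step; the actual system's dataset, models, and XAI methods are as described in the paper, not shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, hypothetical symptom-disease data: 300 patients, 5 binary
# symptom flags; the disease label depends mainly on symptoms 0 and 1.
X = rng.integers(0, 2, size=(300, 5))
y = ((X[:, 0] & X[:, 1]) | (rng.random(300) < 0.05)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: fit the black-box classifier (Random Forest as in the paper).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Step 2: post-hoc explanation. Permutation importance is used here as a
# simple stand-in for SHAP/LIME: it measures how much shuffling each
# symptom column degrades held-out accuracy.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranked = np.argsort(imp.importances_mean)[::-1]  # symptoms, most important first
```

The same two-step structure (train a predictor, then attach a feature-attribution explainer to it) is what SHAP's `TreeExplainer` and LIME's tabular explainer implement; they differ in how the attribution scores are computed, not in where they sit in the pipeline.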
Copyright
Copyright © 2025 Ritu Raj, Sagar Choudhary. This is an open access article distributed under the Creative Commons Attribution License.