Voice-Activated E-Commerce: Bridging the Gap for Visually Impaired Shoppers
Harshal Jadhav
Abstract
A systematic review of voice-activated e-commerce systems is presented, with the goal of improving online shopping accessibility for users with visual impairments. Ten seminal and recent studies are analyzed, encompassing human-computer interaction methodologies, established accessibility standards (e.g., WCAG), voice user interface design principles, and machine-learning techniques for speech recognition and personalized recommendations. Through cross-study synthesis, key design patterns and technical challenges are identified, alongside quantitative usability outcomes that inform best practices for inclusive platforms. A comparative summary table captures each system's principal features, performance metrics, and limitations. The review reveals critical gaps in real-time adaptability, error-handling mechanisms, and support for multimodal interaction, motivating the proposal of an enhanced architecture. The proposed architecture integrates a Next.js frontend, a Flask-based backend, SQLite data storage, advanced NLP models, and a robust voice API to enable context-aware dialogue management and on-the-fly recommendation updates, while maintaining strict compliance with accessibility guidelines. A roadmap for future research is outlined, emphasizing the development of more naturalistic, resilient, and personalized voice-driven shopping experiences through multimodal feedback integration, improved error-recovery strategies, and dynamic conversational adaptation. This work thereby offers a comprehensive foundation for researchers and practitioners aiming to advance voice-first e-commerce solutions that fully accommodate the needs of visually impaired users.
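To make the proposed architecture concrete, the following is a minimal illustrative sketch of the Flask backend described above: a single endpoint that accepts a transcribed voice utterance from the Next.js client, performs toy intent detection, and queries a SQLite product table. The endpoint path, the products table schema, and the keyword-based intent logic are assumptions introduced here for illustration, not details specified by the paper; a real system would use the NLP models and voice API the abstract describes.

# Illustrative sketch only: endpoint name, table schema, and intent
# keywords are assumptions, not taken from the reviewed systems.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "store.db"  # hypothetical SQLite file with a `products` table

@app.post("/voice/query")
def voice_query():
    # The frontend is assumed to send the speech-to-text transcript
    # produced by the voice API, e.g. {"utterance": "find running shoes"}.
    utterance = request.get_json(force=True).get("utterance", "").lower()

    # Toy intent detection; context-aware dialogue management would
    # replace this with the advanced NLP models named in the abstract.
    if utterance.startswith(("find", "search", "show")):
        term = utterance.split(maxsplit=1)[1] if " " in utterance else ""
        with sqlite3.connect(DB_PATH) as conn:
            rows = conn.execute(
                "SELECT name, price FROM products WHERE name LIKE ?",
                (f"%{term}%",),
            ).fetchall()
        # Return a spoken-friendly payload the Next.js client can read
        # aloud via a screen reader or text-to-speech.
        return jsonify(
            reply=f"I found {len(rows)} matching products.",
            results=[{"name": n, "price": p} for n, p in rows],
        )

    # Explicit fallback keeps the dialogue recoverable, echoing the
    # review's emphasis on robust error-handling mechanisms.
    return jsonify(reply="Sorry, I didn't catch that. Try saying 'find' followed by a product name.")

if __name__ == "__main__":
    app.run(debug=True)

The deliberate fallback branch reflects one of the gaps the review identifies: rather than failing silently, the system returns a spoken recovery prompt so that a visually impaired user always receives audible feedback on the state of the dialogue.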
Copyright
Copyright © 2025 Harshal Jadhav. This is an open access article distributed under the Creative Commons Attribution License.