VoiceDrive

VoiceDrive is a concept prototype of an automotive infotainment system that explores interactions built on natural language processing and speech recognition, viewed through the lens of design and human factors. Comprehensive research was carried out to outline the advantages and drawbacks of speech recognition, how it is implemented in the automotive environment, how it has grown, and its future in the domain. The research investigates the feedback mechanisms a car might provide when interacted with predominantly by speech, and explores ways of reducing our dependence on a visual interface.

Breaking away from the strictly closed, guideline-bound realm of embedded automotive HMI design, VoiceDrive proposes a new, more open ecosystem that allows third-party developers and other players to co-create new applications in the automotive space, with access to some of the 150+ sensors on board. VoiceDrive was designed at Tata Elxsi as my undergraduate thesis.

Contributors
Nick Talbot (Guide)
Madhavan Ayyavu (help with Android prototyping)
Role
Research
Concept Design
VVV prototyping
Android App Development
Year
2012

Research

Over the past decade, road crashes have become the 10th leading cause of death in the world, and are predicted to rise to fifth position by 2030. India is the number one contributor to global road crash mortality and morbidity figures (SaveLIFE Foundation, 2017).

Interaction

Distracted driving is a major cause of road accidents. VoiceDrive uses a what-you-say-is-what-you-get approach to take cognitive load off the driver. With hands on the wheel and eyes on the road, the driver interacts with VoiceDrive without any physical “balancing acts”.

Setup-hardware

VoiceDrive was designed to be used by the driver and all passengers in the car. Buttons and microphones were strategically placed so that the system could be activated easily.

Screen Hierarchy

A big insight from research and testing was that most people had difficulty interacting with the system without first knowing its features and capabilities. The screens were there to ease the learning curve; after the first four to five uses, dependence on them declined.

The possibility of voice and speech interactions allowed me to visualise the hierarchy of screens differently: navigation could afford a novel flat structure, letting people cut through extraneous menus and bring up information simply by vocalising what they needed.
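To illustrate the idea of a flat structure (this is a sketch for explanation only, not the original prototype's code, and every command name and handler in it is hypothetical): instead of routing a request through nested menu screens, each spoken intent maps directly to a feature.

```python
# Hypothetical handlers standing in for infotainment features.
def show_fuel():
    return "Fuel level: 62%"

def start_navigation():
    return "Navigating to last destination"

def play_music():
    return "Playing music"

# Flat lookup: every feature is one utterance away,
# with no intermediate menu levels to traverse.
INTENTS = {
    "fuel": show_fuel,
    "navigate": start_navigation,
    "music": play_music,
}

def handle_utterance(utterance: str) -> str:
    """Match the first known keyword in the utterance and dispatch."""
    lowered = utterance.lower()
    for keyword, handler in INTENTS.items():
        if keyword in lowered:
            return handler()
    return "Sorry, I didn't catch that."

print(handle_utterance("How much fuel do I have left?"))
```

The keyword match here is a placeholder for real speech recognition; the point is the dispatch table itself, which replaces a menu tree with a single flat layer.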

Project Documentation

Undergraduate thesis document, covering the process, research, design, and development in detail.