

Voice recognition technology employed by Google Assistant, Google Home, and Google Translate could soon become a transcription tool for documenting patient-doctor conversations.

In a recent proof-of-concept study, Google researchers described their experiences developing two automatic speech recognition (ASR) methodologies for multi-speaker medical conversations, and concluded that both models could be used to streamline practitioners' workflows. "It is a foundational technology that information extraction and summarization technologies can build on top of to help relieve the documentation burden," the researchers noted.

"With the widespread adoption of EHR systems, doctors are now spending ~6 hours of their 11-hour workdays inside the EHR and ~1.5 hours on documentation alone," the researchers wrote in the study. "With the growing shortage of primary care physicians and higher burnout rates, an ASR technology that could accelerate transcription of the clinical visit seemed imminently useful."

Currently, most ASR products designed specifically for healthcare transcription are limited to doctors' dictations, which consist of a single speaker using predictable terminology.

Conversations between doctors and their patients, on the other hand, have presented more difficulties due to overlapping dialogue, varying voice distance and quality, differing speech patterns, and differences in vocabulary.
