3Play Media’s State of Automatic Speech Recognition (ASR) Report Finds ASR Technology Has Advanced Significantly
3Play Media, the leading media accessibility provider, has released its annual State of Automatic Speech Recognition (ASR) report. The study examines the general state of speech-to-text technology and evaluates how nine major speech recognition engines perform at captioning and transcription. According to the study, the accuracy of the technology has improved measurably since the company’s last report, published in January 2021.
3Play Media tested all nine engines on a large dataset representative of its diverse customer base. Accuracy was evaluated using two metrics: Word Error Rate (WER) and Formatted Error Rate (FER), the latter of which counts formatting errors, such as grammar, speaker identification, and non-speech elements, in addition to word errors.
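WER is conventionally computed as the word-level edit distance (insertions, deletions, and substitutions) between a reference transcript and the ASR output, divided by the number of reference words. The sketch below illustrates that standard calculation; it is a simplified example, not 3Play Media's evaluation methodology, and real evaluations also normalize text (casing, punctuation) before scoring.

```python
# Illustrative Word Error Rate (WER) sketch: word-level Levenshtein
# distance between a reference transcript and an ASR hypothesis,
# divided by the number of reference words.
# NOTE: simplified example, not 3Play Media's actual methodology.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for the word-level edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference yields a 25% WER:
print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```

An engine with 99% accuracy, the industry standard cited below, corresponds to a WER of 0.01 or lower on this scale.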
In both WER and FER, Speechmatics with 3Play modeling and post-processing led the pack, followed by Speechmatics alone and Microsoft. Rev, Google VM, and Voicegain followed with respectable scores close enough that these vendors are difficult to differentiate. Despite improvement across the board, all engines performed well below the industry standard of 99% accuracy, confirming that ASR on its own still falls short of being “good enough” for compliance with closed captioning legal requirements.
“As the AI models driving ASR continue to evolve, many of the engines we evaluated have shown significant strides in their transcription accuracy over the last two years,” Chris Antunes, Co-CEO and Co-Founder, 3Play Media, said. “We run this report every year because we use ASR in our own transcription process, and we have a vested interest in making sure we’re utilizing the best engine on the market. Speechmatics remains a clear industry leader in both pre-recorded and live automated transcription, and applying 3Play’s mappings and post-processing resulted in an exciting improvement in word error rate of over 8%.”
The study showed a wide range in accuracy among the engines tested, with the highest- and lowest-performing engines differing by over 15 percentage points. This suggests that different engines are optimizing for different goals, and that some ASR engines will not perform well for transcription. Compared with other uses of speech-to-text technology, such as automated assistants that can train on a specific voice, transcription is a very difficult task, involving diverse sentence structure, spontaneous speech, specialized terminology, and complex conditions such as multiple speakers, accents, and background noise.