Wednesday, August 31, 2011

Speech Recognition Leaps Forward - Is it a revolution?


Great news from Microsoft about substantial progress in LVCSR (large-vocabulary continuous speech recognition).
Please comment if you have tried this technology and whether you indeed see it as a revolution.
Thx, Ofer




Speech Recognition Leaps Forward
August 29, 2011 12:01 AM PT
During Interspeech 2011, the 12th annual Conference of the International Speech Communication Association being held in Florence, Italy, from Aug. 28 to 31, researchers from Microsoft Research will present work that dramatically improves the potential of real-time, speaker-independent, automatic speech recognition.
Dong Yu, researcher at Microsoft Research Redmond, and Frank Seide, senior researcher and research manager with Microsoft Research Asia, have been spearheading this work, and their teams have collaborated on what has developed into a research breakthrough in the use of artificial neural networks for large-vocabulary speech recognition.

The Holy Grail of Speech Recognition

Commercially available speech-recognition technology is behind applications such as voice-to-text software and automated phone services. Accuracy is paramount, and voice-to-text typically achieves this by having the user “train” the software during setup and by adapting more closely to the user’s speech patterns over time. Automated voice services that interact with multiple speakers do not allow for speaker training because they must be usable instantly by any user. To cope with the lower accuracy, they either handle only a small vocabulary or strongly restrict the words or patterns that users can say.
The ultimate goal of automatic speech recognition is to deliver out-of-the-box, speaker-independent speech-recognition services—a system that does not require user training to perform well for all users under all conditions.


“This goal has increased importance in a mobile world,” Yu says, “where voice is an essential interface mode for smartphones and other mobile devices. Although personal mobile devices would be ideal for learning their user’s voices, users will continue to use speech only if the initial experience, which is before the user-specific models can even be built, is good.”
Speaker-independent speech recognition also addresses other scenarios where it’s not possible to adapt a speech-recognition system to individual speakers—call centers, for example, where callers are unknown and speak only for a few seconds, or web services for speech-to-speech translation, where users would have privacy concerns over stored speech samples.

Renewed Interest in Neural Networks

Artificial neural networks (ANNs), mathematical models of the low-level circuits in the human brain, have been a familiar concept since the 1950s. The notion of using ANNs to improve speech-recognition performance has been around since the 1980s, and a model known as the ANN-Hidden Markov Model (ANN-HMM) showed promise for large-vocabulary speech recognition. Why, then, are commercial speech-recognition solutions not using ANNs?
“It all came down to performance,” Yu explains. “After the invention of discriminative training, which refines the model and improves accuracy, the conventional, context-dependent Gaussian mixture model HMMs (CD-GMM-HMMs) outperformed ANN models when it came to large-vocabulary speech recognition.”
Yu and members of the Speech group at Microsoft Research Redmond became interested in ANNs when recent progress in building more complex “deep” neural networks (DNNs) began to show promise at achieving state-of-the-art performance for automatic speech-recognition tasks. In June 2010, intern George Dahl, from the University of Toronto, joined the team, and researchers began investigating how DNNs could be used to improve large-vocabulary speech recognition.
“George brought a lot of insight on how DNNs work,” Yu says, “as well as strong experience in training DNNs, which is one of the key components in this system.”
A speech recognizer is essentially a model of fragments of the sounds of speech. Examples of such fragments are “phonemes,” the roughly 30 or so pronunciation symbols used in a dictionary. State-of-the-art speech recognizers use shorter fragments, numbering in the thousands, called “senones.”
Earlier work on DNNs had used phonemes. The research took a leap forward when Yu, after discussions with principal researcher Li Deng and Alex Acero, principal researcher and manager of the Speech group, proposed modeling the thousands of senones, much smaller acoustic-model building blocks, directly with DNNs. The resulting paper, Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition by Dahl, Yu, Deng, and Acero, describes the first hybrid context-dependent DNN-HMM (CD-DNN-HMM) model applied successfully to large-vocabulary speech-recognition problems.
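To make the hybrid idea concrete, here is a minimal Python/NumPy sketch (not the authors' code) of a DNN that maps a window of acoustic feature frames to posteriors over thousands of senones. The layer sizes and the sigmoid activations are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative sketch of the hybrid CD-DNN-HMM idea: a feed-forward DNN maps a
# window of acoustic feature frames to a posterior distribution over thousands
# of context-dependent senones. Sizes below are assumptions.
N_INPUT   = 39 * 11    # e.g. 39-dim features over an 11-frame window (assumed)
N_HIDDEN  = 2048       # hidden-layer width (assumed)
N_LAYERS  = 5          # number of hidden layers (assumed)
N_SENONES = 9000       # output classes: tied context-dependent HMM states ("senones")

rng = np.random.default_rng(0)
sizes = [N_INPUT] + [N_HIDDEN] * N_LAYERS + [N_SENONES]
layers = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def senone_posteriors(frame_window):
    """Forward pass: acoustic feature window -> P(senone | acoustics)."""
    h = frame_window
    for W, b in layers[:-1]:
        h = 1.0 / (1.0 + np.exp(-(h @ W + b)))   # sigmoid hidden units
    W, b = layers[-1]
    logits = h @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()                            # softmax over senones

# In the hybrid system, these posteriors (divided by senone priors) stand in
# for the GMM likelihoods inside a conventional HMM decoder.
print(senone_posteriors(rng.standard_normal(N_INPUT)).shape)   # (9000,)
```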
“Others have tried context-dependent ANN models,” Yu observes, “using different architectural approaches that did not perform as well. It was an amazing moment when we suddenly saw a big jump in accuracy when working on voice-based Internet search. We realized that by modeling senones directly using DNNs, we had managed to outperform state-of-the-art conventional CD-GMM-HMM large-vocabulary speech-recognition systems by a relative error reduction of more than 16 percent. This is extremely significant when you consider that speech recognition has been an active research area for more than five decades.”
The team also accelerated the experiments by using general-purpose graphics-processing units to train and decode speech. The computation for neural networks is similar in structure to 3-D graphics as used in popular computer games, and modern graphics cards can process almost 500 such computations simultaneously. Harnessing this computational power for neural networks contributed to the feasibility of the architectural model.
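The GPU point can be illustrated with a present-day framework; this is only a sketch of the kind of matrix arithmetic involved (PyTorch here is a convenient stand-in, not the tooling the team used in 2011, and all sizes are assumed).

```python
import torch

# The heavy lifting in DNN training and decoding is large matrix multiplication,
# which a GPU executes across thousands of parallel threads.
device = "cuda" if torch.cuda.is_available() else "cpu"

frames  = torch.randn(1024, 429, device=device)   # a minibatch of feature windows (assumed shape)
weights = torch.randn(429, 2048, device=device)   # one hidden layer's weight matrix (assumed shape)

hidden = torch.sigmoid(frames @ weights)          # one layer's forward pass, computed in parallel
print(hidden.shape, hidden.device)
```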
In October 2010, when Yu presented the paper to an internal Microsoft Research Asia audience, he spoke about the challenges of scalability and finding ways to parallelize training as the next steps toward developing a more powerful acoustic model for large-vocabulary speech recognition. Seide was excited by the research and joined the project, bringing with him experience in large-vocabulary speech recognition, system development, and benchmark setups.

Benchmarking on a Neural Network

“It has been commonly assumed that hundreds or thousands of senones were just too many to be accurately modeled or trained in a neural network,” Seide says. “Yet Yu and his colleagues proved that doing so is not only feasible, but works very well with notably improved accuracy. Now, it was time to show that the exact same CD-DNN-HMM could be scaled up effectively in terms of training-data size.”
The new project applied CD-DNN-HMM models to speech-to-text transcription and was tested against Switchboard, a highly challenging phone-call transcription benchmark recognized by the speech-recognition research community.
First, the team had to migrate the DNN training tool to support a larger training data set. Then, with help from Gang Li, research software-development engineer at Microsoft Research Asia, they applied the new model and tool to the Switchboard benchmark with more than 300 hours of speech-training data. To support that much data, the researchers built giant ANNs, one of which contains more than 66 million inter-neural connections, the largest ever created for speech recognition.
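For a sense of where a figure in the tens of millions comes from, here is a back-of-the-envelope sketch of counting the weights of a fully connected network; the layer sizes are assumptions for illustration, since the article only gives the total.

```python
# Counting the weights ("inter-neural connections") of a fully connected DNN.
def count_connections(layer_sizes):
    return sum(m * n for m, n in zip(layer_sizes[:-1], layer_sizes[1:]))

# Hypothetical topology: feature window -> seven wide hidden layers -> senone outputs.
layers = [429] + [2048] * 7 + [9000]
print(f"{count_connections(layers):,}")   # ~44.5 million with these assumed sizes;
                                          # wider layers or more senones push past 66 million
```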
The subsequent benchmarks achieved an astonishing word-error rate of 18.5 percent, a 33-percent relative improvement compared with results obtained by a state-of-the-art conventional system.
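For clarity, "relative improvement" here means the reduction in word-error rate divided by the baseline word-error rate; the baseline value in the sketch below is inferred from the two numbers in the article, not stated in it.

```python
# relative improvement = (baseline_WER - new_WER) / baseline_WER
new_wer      = 0.185
baseline_wer = 0.274                  # inferred: roughly 0.185 / (1 - 0.33)
gain = (baseline_wer - new_wer) / baseline_wer
print(f"{gain:.1%}")                  # 32.5%, i.e. about a one-third relative improvement
```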


“When we began running the Switchboard benchmark,” Seide recalls, “we were hoping to achieve results similar to those observed in the voice-search task, between 16- and 20-percent relative gains. The training process, which takes about 20 days of computation, emits a new, slightly more refined model every few hours. I impatiently tested the latest model every few hours. You can’t imagine the excitement when it went way beyond the expected 20 percent, kept getting better and better, and finally settled at a gain of more than 30 percent. Historically, there have been very few individual technologies in speech recognition that have led to improvements of this magnitude.”
The resulting paper, titled Conversational Speech Transcription Using Context-Dependent Deep Neural Networks by Seide, Li, and Yu, is scheduled for presentation on Aug. 29. The work already has attracted considerable attention from the research community, and the team hopes that taking the paper to the conference will ignite a new line of research that will help advance the state of the art for DNNs in large-vocabulary speech recognition.

Bringing the Future Closer

With a novel way of using artificial neural networks for speaker-independent speech recognition, and with results a third more accurate than what conventional systems can deliver, Yu, Seide, and their teams have brought fluent speech-to-speech applications much closer to reality. This innovation simplifies speech processing and delivers high accuracy in real time for large-vocabulary speech-recognition tasks.
“This work is still in the research stages, with more challenges ahead, most notably scalability when dealing with tens of thousands of hours of training data. Our results represent just a beginning to exciting future developments in this field,” Seide says. “Our goal is to open possibilities for new and fluent voice-based services that were impossible before. We believe this research will be used for services that change how we work and live. Imagine applications such as live speech-to-speech translation of natural, fluent conversations, audio indexing, or conversational, natural language interactions with computers.”

Monday, June 13, 2011

iOS Speech Recognition Settings Confirm Nuance-Apple Partnership

Interesting post @ http://www.macrumors.com/2011/06/11/ios-speech-recognition-settings-confirm-nuance-apple-partnership/

A couple of screenshots posted on Twitter by @ChronicWire reveal hidden Nuance preferences found in the latest internal iOS builds, confirming that Apple has been actively working on building speech recognition into iOS.

Rumors of a Nuance-Apple partnership had been heavy in the weeks prior to WWDC, though no announcements were made during the keynote. Later, comments by Robert Scoble indicated that the deals were simply not completed in time for WWDC but were still in the works:
I was told weeks ago by my source (same one who told me Twitter would be integrated deeply into the OS) that Siri wouldn't be done in time. Maybe for this fall's release of iPhone 5? After all, they need to have some fun things to demo for us in August, no?
The source of the screenshots (@Chronic / @SonnyDickson) has been known to have legitimate sources in the past. So, it seems certain that Apple is actively working on bringing Nuance speech recognition into iOS, perhaps as early as iOS 5 this fall.

Saturday, June 11, 2011

Again: Nuance Slaps Vlingo With Another Patent Lawsuit Over Voice Recognition Technology

I guess Nuance is trying again to acquire Vlingo (given its standard sue-before-acquire strategy).

See below from Techcrunch: http://techcrunch.com/2011/06/09/nuance-sues-vlingo-again-over-voice-recognition-patents/#comments


Well, this is interesting. Nuance, a company that develops imaging and voice recognition technologies, is once again suing competitor Vlingo, which also develops a voice search technology and is backed by Yahoo, AT&T and Charles River Ventures.
According to the suit, which we’ve embedded below, Nuance claims Vlingo is infringing on a number of Nuance’s patents, including U.S. patent no. 6,487,534 B1, which relates to a “Distributed Client-Server Speech Recognition System.” By making, using, selling, offering to sell, and/or importing its products and services related to speech recognition, Nuance says, Vlingo is infringing on its patent.
Nuance is also claiming that Vlingo is infringing on U.S. patent no. 6,785,653 B1, titled “Distributed Voice Web Architecture and Associated Components and Methods;” U.S. patent no. 6,839,669 B1, titled “Performing Actions Identified in Recognized Speech;” U.S. patent no. 7,058,573 B1, titled “Speech Recognition System to Selectively Utilize Different Speech Recognition Techniques Over Multiple Speech Recognition Passes;” and U.S. patent no. 7,127,393 B2, titled “Dynamic Semantic Control of a Speech Recognition System.”
Nuance is requesting that Vlingo pay damages for infringing and profiting off the patents, but it’s unclear what the dollar amount of those damages would be.
The two companies have a bit of a storied past. Nuance slapped Vlingo with a patent lawsuit back in 2008. Vlingo then bought a number of patents relating to voice and speech recognition last year, a move aimed at forcing Nuance to drop its suit.
Dave Grannan, CEO of Vlingo, recently compared the act of competing with Nuance to “having a venereal disease that’s in remission.” He tells Bloomberg BusinessWeek, “We crush them whenever we go head-to-head with them. But just when you’re thinking life is great – boom, there’s a sore on your lip.” Gross.
Nuance is a massive company with a $6 billion market cap and is a formidable competitor. In fact, Apple appears to be licensing Nuance’s technology in OS X Lion. And we heard that Nuance was in negotiations with Apple for a partnership to license and use the company’s voice recognition technology, though Nuance was missing from the lineup of products revealed at this week’s WWDC conference. And we’ve learned that Apple may already be using Nuance technology in its massive new data center in North Carolina.

Tuesday, January 11, 2011

The Search for a Clearer Voice - How Google's Voice Search is getting so good.

An interesting post by Paul Boutin: http://www.technologyreview.com/blog/guest/26242/?p1=A2

It raises again the issue of having to talk to your phone with the "right" (i.e., US) accent.

The Search for a Clearer Voice

How Google's Voice Search is getting so good.
Paul Boutin 01/10/2011



Smart phones are great at a lot of things, with one exception: Typing on a touch screen or a downsized keyboard is still frustrating compared to a full-size computer keyboard. That's probably why Google says that, even before the release of its new personalized Voice Search app for Android in mid-December, one in four mobile searches were already input by voice rather than from a keyboard.
The improved Voice Search takes speech recognition to its next level: Google's servers will now log up to two years of your voice commands in order to more precisely parse exactly what you're saying.
In tests on the new app, which appeared in Google's Android Market a week before Christmas, it originally got about three out of five searches correct. After a few days, the ratio crept up to four out of five. It's surprisingly good at searches that involve common nouns ("heathen child lyrics") and at what search experts call vertical searches for popular topics like airline flights and movie listings. Voice Search knows "United Flight 714" and "True Grit show times 90066" when it hears them. Less successful are searches involving people's names. In repeated attempts to Google up WikiLeaks founder Julian Assange, Voice Search got no closer than "wikileaks founder julian of songs."
How does it work? Rather than try to use the phone itself to do speech recognition, Voice Search digitizes the user's input commands and sends them off to Google's gargantuan server farms. There, the spoken words are broken down and compared both to statistical models of what words other people mean when they utter those syllables and to a history of the user's own voice commands, through which Google refines its matching algorithm for that particular voice. The app recognizes five different flavors of English—American, British, Australian, Indian and South African—plus Afrikaans, Cantonese, Czech, Dutch, French, German, Italian, Japanese, Korean, Mandarin, Polish, Portuguese, Russian, Spanish, Turkish, and Zulu.
The tricky part—and the motive for a personalized search app—is that human voices vary wildly between men and women, between young people and old people, and among those with various accents and dialects. By storing hundreds, perhaps thousands of what speech recognition experts call "utterances" by the same person over months of use, Voice Search can better guess at what that particular person is saying.
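A toy sketch of that personalization idea (my illustration, not Google's implementation): the server keeps a per-user history of recognized utterances and uses it to bias the recognizer's hypotheses toward what that particular user tends to say. All names and weights here are assumptions.

```python
from collections import Counter

user_history = Counter()          # per-user store of previously recognized queries

def record_utterance(text):
    user_history[text] += 1

def rerank(hypotheses):
    """hypotheses: list of (text, score) pairs from the general recognizer."""
    def personalized_score(item):
        text, score = item
        bias = 0.1 * user_history[text]      # assumed weighting of personal history
        return score + bias
    return max(hypotheses, key=personalized_score)

record_utterance("true grit show times 90066")
record_utterance("true grit show times 90066")

candidates = [("true grit show times 90066", 0.62),
              ("true grits no time 90066",   0.65)]
print(rerank(candidates))   # personal history tips the choice toward the first hypothesis
```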
That mathematical model used to recognize phrases was refined over three years using voice samples from Google's now-defunct GOOG-411 automated directory assistance service, which the company operated from 2007 through late last year specifically to capture a wide-ranging set of voice samples for analysis. The company's first Voice Search app, for iPhone only, was launched a year after GOOG-411 in November 2008.
Voice Search doubles as a spoken-command system for the phone. As shown in this video, it understands commands such as, "Send mail to Mike LeBeau. How's life in New York treating you? The weather's beautiful here." The app will find LeBeau in your contacts—it's better at matching names here than in a Web search, because it's working with a limited set—and will fill in the subject line with your first sentence. You can speak additional text into the message, or edit it with the phone's keyboard, before sending it.
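Here is a minimal sketch of the kind of command handling described above: split the transcribed utterance into an action, a recipient matched against a small contact list, and a message body. The structure and names are illustrative assumptions, not the app's actual parser.

```python
import re
from difflib import get_close_matches

CONTACTS = ["Mike LeBeau", "Dave Grannan", "Frank Seide"]   # the phone's small, known contact set

def parse_mail_command(utterance):
    m = re.match(r"send mail to (?P<name>[^.]+)\.\s*(?P<body>.*)", utterance, re.IGNORECASE)
    if not m:
        return None
    # Matching against a limited contact list is far easier than open web search.
    matches = get_close_matches(m.group("name").strip(), CONTACTS, n=1, cutoff=0.6)
    if not matches:
        return None
    return {"action": "send_mail", "to": matches[0], "body": m.group("body")}

print(parse_mail_command(
    "Send mail to Mike LeBeau. How's life in New York treating you? The weather's beautiful here."))
```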
Google has clearly put a lot of effort into its speech recognition technology. But the impact on its bottom line is obvious: By removing the aggravation of typing on tiny keys, the company hopes to get customers to reach for its search and e-mail services much more often.