Saturday, May 12, 2012

How Much is Your Enterprise Wasting on Repeat Customer Service and Sales Calls? - Enkata


How Much is Your Enterprise Wasting on Repeat Customer Service and Sales Calls?
May 10, 2012 | Joe McFadden — Sr. Director, Marketing

For most contact centers, 40% of incoming calls are unnecessary repeats. Improving first contact resolution (FCR) is a top priority for many call centers because it can dramatically reduce operating costs (by up to 25% for large enterprises) AND improve the overall customer experience. As Emily Yellin, a consultant and author of Your Call Is (Not That) Important to Us, explains: “When customers have a problem, we only call the company as a last resort. So if we can’t get what we need the first time around and have to call back, we remember the frustration and lost time and are more likely to go elsewhere the next time we need to buy whatever that company is selling.” The more times a customer needs to call back, the worse the customer experience becomes.
On the other hand, when a customer’s service or sales issue is addressed quickly and efficiently, they walk away from that call feeling good about your brand and their overall experience. This customer experience is often what sets two companies in the same industry apart and what creates brand-loyal customers.
However, sometimes it’s hard to get the “powers that be” in a large enterprise to view the contact center as a branding tool, and not just as a line item on a budget. If you need compelling data to help convince the higher-ups in your organization to invest in an FCR solution, these figures might just do the trick:
According to a recent analysis of Enkata’s own in-house customer data, North American companies spent an estimated $20 billion on unnecessary repeat calls and call transfers.
These estimates are based on the average repeat-call reductions achieved by Enkata customers deploying Enkata’s First Contact Resolution solution, applied across the total estimated customer service calls placed in North America.
So how much did your company waste on unnecessary repeat calls in 2011? How much are you prepared to waste in 2012?
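As a rough, back-of-the-envelope way to put a number on that question for your own operation, the sketch below simply multiplies annual call volume by an assumed repeat-call rate and an assumed cost per call. All three inputs are hypothetical placeholders, not Enkata figures; substitute your own contact center's numbers.

```python
# Back-of-the-envelope estimate of annual spend on unnecessary repeat calls.
# All inputs below are hypothetical placeholders, not Enkata data.

def repeat_call_waste(calls_per_year, repeat_rate, cost_per_call):
    """Estimated annual spend attributable to unnecessary repeat calls."""
    return calls_per_year * repeat_rate * cost_per_call

calls_per_year = 2_000_000   # total inbound calls handled per year (assumed)
repeat_rate = 0.40           # share of calls that are unnecessary repeats
cost_per_call = 6.50         # fully loaded cost per handled call, USD (assumed)

waste = repeat_call_waste(calls_per_year, repeat_rate, cost_per_call)
print(f"Estimated annual spend on repeat calls: ${waste:,.0f}")
# With these placeholder inputs: 2,000,000 * 0.40 * 6.50 = $5,200,000
```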



http://www.enkata.com/how-much-is-your-enterprise-wasting-on-repeat-calls/


A Conversation On The Role Of Big Data In Marketing And Customer Service





Big data is here! And marketers are one of the professional groups that stand to gain the most from these newfound capabilities to analyze data that, until recently, would have been too complex to capture, store, and make sense of. Behind the hype lies a golden opportunity for marketing and customer service teams to help their organizations get ahead of the competition. Cutting through all the noise can be a challenge, so it's important to understand what big data can achieve, what data is most useful, and how to go about using it.
In the following conversation, Verint's Daniel Ziv and Ovum analyst Keith Dawson share their perspectives on the sudden allure of the term "big data" and what it means for companies in the coming year.
1. Big data is a buzzword that seems to be making its way into conversations more frequently. How would you define big data, and how is this new business concept different from traditional business intelligence?
Keith: Big data is a buzzword partly because the definition of "big" changes all the time, as processing power improves and data storage capabilities grow vaster. However, it does have a real meaning, and is usually shorthanded by the four V's, which are as follows:
  • Volume: the amount of data being worked with
  • Variety: the number of different sources and data types
  • Velocity: the speed at which the data is generated and changes, which is typically very fast
  • Value: the ability of an organization to process and leverage machine-derived insights
Daniel: The first three V's seem to have become the de facto definition for big data, but Keith's addition of the fourth V, representing "value," may be the most important yet. Many organizations have a lot of data, but not all are generating significant value from it. This may be partly because big data initiatives do not always involve the business early enough in the process.
2. Is big data something IT departments need to manage and address? How does it have an impact on the marketing organization as well?
Keith: Big data starts with IT, most definitely, because they are responsible for acquiring and deploying the infrastructure. From a business point of view, marketing is poised to reap significant value out of big data. Look at the wealth of actionable knowledge inherent in CRM data, social media mining, or even basic customer call recording -- there is enormous potential not being realized with traditional analytics structures. This theory applies beyond the contact center and also to business intelligence and ERP systems, which are still trying to figure out how to put data from some of those external sources to work.
Daniel: I agree completely! We, as an industry, have witnessed some phenomenal examples of how much value this data can represent -- especially when driven effectively by the business, including marketing departments. For example, I've seen a large telecom provider do this very thing. The company is BI savvy and has traditionally analyzed structured data. It added a speech analytics solution to help analyze contact center calls, and as a result, in the first year of deployment, it identified $180 million worth of savings while increasing customer satisfaction by 30%. What's even more interesting about this particular initiative is that it was driven mostly by marketing, not by the IT department, where many big data deployments reside.
3. Do you believe that a company's corporate big data assets and its use of analytics could become something as powerful as the company's brand?
Keith: There are already companies for which the ability to analyze big data is functionally equivalent to their brand. Facebook is the obvious example, and Google too. Then you get beyond that to customer-facing companies like Netflix or Amazon, where their ability to determine patterns of customer preference and behavior stems from analysis of huge data sets. They are making business decisions on automated data analysis -- the kinds of things that used to be done with focus groups. The difference is that they are coming to much richer, statistically valid and more insightful conclusions.
Daniel: Facebook definitely has the potential to be a key data analytics force. The acquisition of Instagram brings yet another dimension of unstructured data to social media. It will be interesting to track how Facebook uses and monetizes the tremendous amounts of data now available. Google is an example of a company where almost all of its services and revenue are driven by collecting, indexing, and analyzing data. Its recently updated privacy policy also provides Google with the ability to link customer insights across its many different services. That holds tremendous potential value for Google, as well as for consumers, provided customer privacy concerns are properly addressed. Recommendation engines from Netflix, Pandora, and Amazon have also proven to be tremendously effective in driving sales and building loyalty.
4. Can you share other examples of how companies use this new asset to effectively compete?
Keith: You have to look at the social networking space to really see the most advanced use of big data analysis to make lightning-fast business decisions. Ad traffic on Facebook is monetized almost exclusively through big data analytics; without big data, Facebook wouldn't be the giant company it is seen as today. The financial services industry has also been applying this kind of analysis to credit card and loan customer transactions for quite a while. You see it in airlines too: dynamic pricing of tickets, for example, and segmentation of customers.
Daniel: The social networking space has created a tremendous amount of new customer data, and may be partly responsible for the emergence of the big data concept -- given the explosive growth in the amount and velocity of this information. According to Twitter's own research in early 2012, it sees roughly 175 million tweets every day and has more than 465 million accounts. What many organizations neglect to realize is that their internal corporate data assets may significantly exceed this in terms of content and value. While a typical tweet is only a handful of words or abbreviations with limited context, an average five-minute contact center call typically runs over 1,000 words -- providing much richer context that drives more actionable insights when mined with the proper tools. I've heard industry estimates that for every word tweeted, there are over 200 words spoken in the contact center directly by your customers and CSRs. The challenge is connecting the dots between the different sources.
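To make that per-interaction comparison concrete, here is a minimal sketch of the underlying arithmetic; the speaking rate, call length, and tweet length are illustrative assumptions, not Verint or Twitter data.

```python
# Rough per-item comparison of text volume: one tweet vs. one contact center call.
# All numbers below are illustrative assumptions.

words_per_minute = 200   # assumed combined speaking rate of agent and caller
call_minutes = 5         # assumed average call length
tweet_words = 15         # assumed typical tweet length in words

call_words = words_per_minute * call_minutes   # ~1,000 words per call
ratio = call_words / tweet_words

print(f"Words in one five-minute call: {call_words}")
print(f"Roughly {ratio:.0f}x the words of a typical tweet, per interaction")
```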
5. One of the key challenges of big data is transforming the common silo approach where each department has its own data assets, which prevents organizations from getting a unified view of the customer. What strategy and technology solutions are available to handle these challenges?
Keith: My view is that the main barriers are more cultural than technical. You need business structures in place to share data, and to encourage the deployment and use of data warehouses that cross departments and functions. The idea of big data isn't really a product; rather, it's a process, a label that describes the very strategies implied by the question. I don't really see siloization as a challenge of big data -- instead, it's a challenge of organizational problem solving and priority setting. Once those silos have been broken down, companies can forge ahead with a more intelligent data analytics strategy. Big data isn't an end in itself, because organizations can just as easily find themselves in a situation where mountains of data are being analyzed and they still don't know how to act on or monetize it. From that point of view, big data is an IT issue. My personal sense is that as teams outside IT begin to understand the potential value embedded in their data, they'll start to look internally for collaborators who can help them unlock it. That's going to be a unique process in every organization.
Daniel: I think technology can help make this process easier, but I agree that the key issue is the organizational structures and processes. The emergence of the chief customer officer role and of customer experience departments that own the end-to-end customer journey can help drive the right attention and actions. By making sure the organization has a unified, 360-degree view of the voice of the customer, these teams will know better how to act on valuable insights.
6. What industries do you think have the strongest potential to leverage big data as a competitive marketing advantage?
Keith: I'd have to say the financial services industry has the strongest potential to leverage big data, because it has been doing big data analysis since long before the trend even had a name. Retail also has a lot of potential; look at the data gathered from supermarket loyalty cards, as just one example. Travel and hospitality, telecom -- really, any market where there is a high volume of transactions or interactions that have historically been too "low value" to examine individually, but that add up to a collective picture that's attractive to mine.
Daniel: Financial institutions use things like credit scores to segment customers and offer differentiated products and pricing. However, in the past most of the data they used was structured in nature and not necessarily leveraged as a competitive differentiator. When these organizations start mining the tremendous amount of unstructured content they have, for example in their contact center calls, emails, or even social media, they can go much further in their ability to customize offers and leverage that as a competitive force.
Thank you, Keith, for the insight and perspective. Big data is a trend worth monitoring, as its evolution will greatly impact the way organizations -- including marketing departments -- take advantage of the huge potential value.
It would be interesting to hear what readers think. Please share your comments and input on what big data means to you!

Wednesday, August 31, 2011

Speech Recognition Leaps Forward - Is it a revolution?


Great news from Microsoft about substantial progress in large-vocabulary continuous speech recognition (LVCSR).
Please comment if you have experienced this technology and whether you indeed view it as a revolution.
Thx, Ofer




Speech Recognition Leaps Forward
August 29, 2011 12:01 AM PT
During Interspeech 2011, the 12th annual Conference of the International Speech Communication Association being held in Florence, Italy, from Aug. 28 to 31, researchers from Microsoft Research will present work that dramatically improves the potential of real-time, speaker-independent, automatic speech recognition.
Dong Yu, researcher at Microsoft Research Redmond, and Frank Seide, senior researcher and research manager with Microsoft Research Asia, have been spearheading this work, and their teams have collaborated on what has developed into a research breakthrough in the use of artificial neural networks for large-vocabulary speech recognition.

The Holy Grail of Speech Recognition

Commercially available speech-recognition technology is behind applications such as voice-to-text software and automated phone services. Accuracy is paramount, and voice-to-text typically achieves this by having the user “train” the software during setup and by adapting more closely to the user’s speech patterns over time. Automated voice services that interact with multiple speakers do not allow for speaker training because they must be usable instantly by any user. To cope with the lower accuracy, they either handle only a small vocabulary or strongly restrict the words or patterns that users can say.
The ultimate goal of automatic speech recognition is to deliver out-of-the-box, speaker-independent speech-recognition services—a system that does not require user training to perform well for all users under all conditions.


“This goal has increased importance in a mobile world,” Yu says, “where voice is an essential interface mode for smartphones and other mobile devices. Although personal mobile devices would be ideal for learning their user’s voices, users will continue to use speech only if the initial experience, which is before the user-specific models can even be built, is good.”
Speaker-independent speech recognition also addresses other scenarios where it’s not possible to adapt a speech-recognition system to individual speakers—call centers, for example, where callers are unknown and speak only for a few seconds, or web services for speech-to-speech translation, where users would have privacy concerns over stored speech samples.

Renewed Interest in Neural Networks

Artificial neural networks (ANNs), mathematical models of the low-level circuits in the human brain, have been a familiar concept since the 1950s. The notion of using ANNs to improve speech-recognition performance has been around since the 1980s, and a model known as the ANN-Hidden Markov Model (ANN-HMM) showed promise for large-vocabulary speech recognition. Why then, are commercial speech-recognition solutions not using ANNs?
“It all came down to performance,” Yu explains. “After the invention of discriminative training, which refines the model and improves accuracy, the conventional, context-dependent Gaussian mixture model HMMs (CD-GMM-HMMs) outperformed ANN models when it came to large-vocabulary speech recognition.”
Yu and members of the Speech group at Microsoft Research Redmond became interested in ANNs when recent progress in building more complex “deep” neural networks (DNNs) began to show promise at achieving state-of-the-art performance for automatic speech-recognition tasks. In June 2010, intern George Dahl, from the University of Toronto, joined the team, and researchers began investigating how DNNs could be used to improve large-vocabulary speech recognition.
“George brought a lot of insight on how DNNs work,” Yu says, “as well as strong experience in training DNNs, which is one of the key components in this system.”
A speech recognizer is essentially a model of fragments of the sounds of speech. Examples of such sounds are “phonemes,” the roughly 30 or so pronunciation symbols used in a dictionary. State-of-the-art speech recognizers use shorter fragments, numbering in the thousands, called “senones.”
Earlier work on DNNs had used phonemes. The research took a leap forward when Yu, after discussions with principal researcher Li Deng and Alex Acero, principal researcher and manager of the Speech group, proposed modeling the thousands of senones, much smaller acoustic-model building blocks, directly with DNNs. The resulting paper, Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition by Dahl, Yu, Deng, and Acero, describes the first hybrid context-dependent DNN-HMM (CD-DNN-HMM) model applied successfully to large-vocabulary speech-recognition problems.
“Others have tried context-dependent ANN models,” Yu observes, “using different architectural approaches that did not perform as well. It was an amazing moment when we suddenly saw a big jump in accuracy when working on voice-based Internet search. We realized that by modeling senones directly using DNNs, we had managed to outperform state-of-the-art conventional CD-GMM-HMM large-vocabulary speech-recognition systems by a relative error reduction of more than 16 percent. This is extremely significant when you consider that speech recognition has been an active research area for more than five decades.”
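As a rough illustration of the idea, the sketch below shows a small feedforward network that maps a window of acoustic feature frames to a posterior distribution over senone classes, which is the role a DNN plays inside a CD-DNN-HMM. The layer sizes, ReLU activations, and random weights are assumptions chosen for illustration; they are not the configuration reported in the paper.

```python
import numpy as np

# Minimal sketch of the DNN half of a CD-DNN-HMM acoustic model: a window of
# acoustic feature frames goes in, a posterior over senone classes comes out.
# Layer sizes and weights are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical dimensions: 11 frames of 39-dim features in, a few thousand
# senone classes out, with three hidden layers in between.
layer_sizes = [11 * 39, 1024, 1024, 1024, 4000]

weights = [rng.normal(0.0, 0.01, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def senone_posteriors(feature_window):
    """Forward pass: acoustic feature window -> posterior over senone classes."""
    h = feature_window
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return softmax(h @ weights[-1] + biases[-1])

frame_window = rng.normal(size=11 * 39)        # stand-in acoustic features
posteriors = senone_posteriors(frame_window)
print(posteriors.shape, round(posteriors.sum(), 6))   # (4000,) 1.0
```

In the real system, the weights would come from the pretraining and discriminative training the article describes, and the senone posteriors would feed the HMM decoder rather than being printed.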
The team also accelerated the experiments by using general-purpose graphics-processing units to train and decode speech. The computation for neural networks is similar in structure to 3-D graphics as used in popular computer games, and modern graphics cards can process almost 500 such computations simultaneously. Harnessing this computational power for neural networks contributed to the feasibility of the architectural model.
In October 2010, when Yu presented the paper to an internal Microsoft Research Asia audience, he spoke about the challenges of scalability and finding ways to parallelize training as the next steps toward developing a more powerful acoustic model for large-vocabulary speech recognition. Seide was excited by the research and joined the project, bringing with him experience in large-vocabulary speech recognition, system development, and benchmark setups.

Benchmarking on a Neural Network

“It has been commonly assumed that hundreds or thousands of senones were just too many to be accurately modeled or trained in a neural network,” Seide says. “Yet Yu and his colleagues proved that doing so is not only feasible, but works very well with notably improved accuracy. Now, it was time to show that the exact same CD-DNN-HMM could be scaled up effectively in terms of training-data size.”
The new project applied CD-DNN-HMM models to speech-to-text transcription and was tested against Switchboard, a highly challenging phone-call transcription benchmark recognized by the speech-recognition research community.
First, the team had to migrate the DNN training tool to support a larger training data set. Then, with help from Gang Li, research software-development engineer at Microsoft Research Asia, they applied the new model and tool to the Switchboard benchmark with more than 300 hours of speech-training data. To support that much data, the researchers built giant ANNs, one of which contains more than 66 million inter-neural connections, the largest ever created for speech recognition.
The subsequent benchmarks achieved an astonishing word-error rate of 18.5 percent, a 33-percent relative improvement compared with results obtained by a state-of-the-art conventional system.
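For readers who want to check the arithmetic, relative improvement here means the drop in word-error rate divided by the baseline word-error rate. The baseline value below is back-calculated from the 18.5 percent result and the roughly 33 percent gain quoted above; it is an inferred figure, not one taken from the paper.

```python
# Relative word-error-rate (WER) improvement, as quoted in the article.
# baseline_wer is inferred from the 18.5% result and the ~33% relative gain.

def relative_improvement(baseline_wer, new_wer):
    return (baseline_wer - new_wer) / baseline_wer

baseline_wer = 0.276   # implied WER of the conventional baseline (assumption)
new_wer = 0.185        # CD-DNN-HMM result quoted in the article

print(f"Relative improvement: {relative_improvement(baseline_wer, new_wer):.1%}")
# -> Relative improvement: 33.0%
```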


“When we began running the Switchboard benchmark,” Seide recalls, “we were hoping to achieve results similar to those observed in the voice-search task, between 16- and 20-percent relative gains. The training process, which takes about 20 days of computation, emits a new, slightly more refined model every few hours. I impatiently tested the latest model every few hours. You can’t imagine the excitement when it went way beyond the expected 20 percent, kept getting better and better, and finally settled at a gain of more than 30 percent. Historically, there have been very few individual technologies in speech recognition that have led to improvements of this magnitude.”
The resulting paper, titled Conversational Speech Transcription Using Context-Dependent Deep Neural Networks by Seide, Li, and Yu, is scheduled for presentation on Aug. 29. The work already has attracted considerable attention from the research community, and the team hopes that taking the paper to the conference will ignite a new line of research that will help advance the state of the art for DNNs in large-vocabulary speech recognition.

Bringing the Future Closer

With a novel way of using artificial neural networks for speaker-independent speech recognition, and with results a third more accurate than what conventional systems can deliver, Yu, Seide, and their teams have brought fluent speech-to-speech applications much closer to reality. This innovation simplifies speech processing and delivers high accuracy in real time for large-vocabulary speech-recognition tasks.
“This work is still in the research stages, with more challenges ahead, most notably scalability when dealing with tens of thousands of hours of training data. Our results represent just a beginning to exciting future developments in this field,” Seide says. “Our goal is to open possibilities for new and fluent voice-based services that were impossible before. We believe this research will be used for services that change how we work and live. Imagine applications such as live speech-to-speech translation of natural, fluent conversations, audio indexing, or conversational, natural language interactions with computers.”

Monday, June 13, 2011

iOS Speech Recognition Settings Confirm Nuance-Apple Partnership

Interesting post @ http://www.macrumors.com/2011/06/11/ios-speech-recognition-settings-confirm-nuance-apple-partnership/


A couple of screenshots posted on Twitter by @ChronicWire reveal hidden Nuance preferences found in the latest internal iOS builds, confirming that Apple has been actively working on building speech recognition into iOS.

Rumors of a Nuance-Apple partnership had been heavy in the weeks prior to WWDC, though no announcements were made during the keynote. Later, comments by Robert Scoble indicated that the deals were simply not completed in time for WWDC but were still in the works:
I was told weeks ago by my source (same one who told me Twitter would be integrated deeply into the OS) that Siri wouldn't be done in time. Maybe for this fall's release of iPhone 5? After all, they need to have some fun things to demo for us in August, no?
The source of the screenshots (@Chronic / @SonnyDickson) has been known to have legitimate sources in the past. So, it seems certain that Apple is actively working on bringing Nuance speech recognition into iOS, perhaps as early as iOS 5 this fall.

Saturday, June 11, 2011

Again: Nuance Slaps Vlingo With Another Patent Lawsuit Over Voice Recognition Technology

I guess Nuance is trying again to acquire Vlingo (given its standard sue-before-acquiring strategy).

See below from Techcrunch: http://techcrunch.com/2011/06/09/nuance-sues-vlingo-again-over-voice-recognition-patents/#comments


Well, this is interesting. Nuance, a company that develops imaging and voice recognition technologies, is once again suing competitor Vlingo, which also develops a voice search technology and is backed by Yahoo, AT&T and Charles River Ventures.
According to the suit, which we’ve embedded below, Nuance claims Vlingo is infringing on a number of Nuance’s patents, including U.S. patent no. 6,487,534 B1, which relates to a “Distributed Client-Server Speech Recognition System.” By making, using, selling, offering to sell, and/or importing its products and services related to speech recognition, Nuance says, Vlingo is infringing on its patent.
Nuance is also claiming that Vlingo is infringing on U.S. patent no. 6,785,653 B1, titled “Distributed Voice Web Architecture and Associated Components and Methods;” U.S. patent no. 6,839,669 B1, titled “Performing Actions Identified in Recognized Speech;” U.S. patent no. 7,058,573 B1, titled “Speech Recognition System to Selectively Utilize Different Speech Recognition Techniques Over Multiple Speech Recognition Passes;” and U.S. patent no. 7,127,393 B2, titled “Dynamic Semantic Control of a Speech Recognition System.”
Nuance is requesting that Vlingo pay damages for infringing on and profiting from the patents, but it’s unclear what the dollar amount of those damages would be.
The two companies have a bit of a storied past. Nuance slapped Vlingo with a patent lawsuit back in 2008. Vlingo then bought a number of patents relating to voice and speech recognition last year, a move aimed at forcing Nuance to drop its suit.
Dave Grannan, CEO of Vlingo, recently compared the act of competing with Nuance to “having a venereal disease that’s in remission.” He tells Bloomberg BusinessWeek, “We crush them whenever we go head-to-head with them. But just when you’re thinking life is great – boom, there’s a sore on your lip.” Gross.
Nuance is a massive company with a $6 billion market cap and is a formidable competitor. In fact, Apple appears to be licensing Nuance’s technology in OS X Lion. We had also heard that Nuance was in negotiations with Apple for a partnership to license and use the company’s voice recognition technology, though Nuance was missing from the lineup of products revealed at this week’s WWDC conference. And we’ve learned that Apple may already be using Nuance technology in its massive new data center in North Carolina.

Tuesday, January 11, 2011

The Search for a Clearer Voice - How Google's Voice Search is getting so good.

An interesting post by Paul Boutin: http://www.technologyreview.com/blog/guest/26242/?p1=A2

It raises again the issue of having to speak to your phone with the “right” (i.e., US) accent.

The Search for a Clearer Voice

How Google's Voice Search is getting so good.
Paul Boutin 01/10/2011



Smart phones are great at a lot of things, with one exception: Typing on a touch screen or a downsized keyboard is still frustrating compared to a full-size computer keyboard. That's probably why Google says that, even before the release of its new personalized Voice Search app for Android in mid-December, one in four mobile searches were already input by voice rather than from a keyboard.
The improved Voice Search takes speech recognition to its next level: Google's servers will now log up to two years of your voice commands in order to more precisely parse exactly what you're saying.
In tests, the new app, which appeared in Google's Android Market a week before Christmas, originally got about three out of five searches correct. After a few days, the ratio crept up to four out of five. It's surprisingly good at searches that involve common nouns ("heathen child lyrics") and what search experts call vertical searches for popular topics like airline flights and movie listings. Voice Search knows "United Flight 714" and "True Grit show times 90066" when it hears them. Less successful are searches involving people's names. In repeated attempts to Google up WikiLeaks founder Julian Assange, Voice Search got no closer than "wikileaks founder julian of songs."
How does it work? Rather than trying to use the phone itself to do speech recognition, Voice Search digitizes the user's spoken commands and sends them off to Google's gargantuan server farms. There, the spoken words are broken down and compared both to statistical models of what words other people mean when they utter those syllables and to a history of the user's own voice commands, through which Google refines its matching algorithm for that particular voice. The app recognizes five different flavors of English—American, British, Australian, Indian and South African—plus Afrikaans, Cantonese, Czech, Dutch, French, German, Italian, Japanese, Korean, Mandarin, Polish, Portuguese, Russian, Spanish, Turkish, and Zulu.
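A minimal sketch of that client-server split might look like the following. The endpoint URL, request format, and response fields are hypothetical placeholders invented for illustration, not Google's actual Voice Search protocol.

```python
import requests

# Hypothetical client for a server-side speech recognizer: the phone only
# records and uploads audio; all recognition happens on remote servers.
# The URL, parameters, and JSON fields are illustrative assumptions.

RECOGNIZER_URL = "https://speech.example.com/v1/recognize"

def recognize(audio_path, user_id, language="en-US"):
    """Upload one recorded utterance and return the server's best transcript."""
    with open(audio_path, "rb") as f:
        response = requests.post(
            RECOGNIZER_URL,
            params={"user": user_id, "lang": language},
            data=f.read(),
            headers={"Content-Type": "audio/wav"},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()["transcript"]

# Example usage with a hypothetical recording:
# print(recognize("utterance.wav", user_id="abc123"))
```

Tagging each request with a user identifier is what would let the server accumulate the per-user history of utterances described next.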
The tricky part—and the motive for a personalized search app—is that human voices vary wildly between men and women, between young people and old people, and among those with various accents and dialects. By storing hundreds, perhaps thousands of what speech recognition experts call "utterances" by the same person over months of use, Voice Search can better guess at what that particular person is saying.
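One simple way to picture that personalization, offered purely as an assumed illustration of the general approach rather than Google's actual method, is to interpolate a general word model with a per-user model built from that user's stored utterances:

```python
# Sketch of speaker personalization by blending a general model with a
# per-user model learned from stored utterances. The probabilities are
# made-up illustrative values, not real model outputs.

def interpolate(general_probs, user_probs, user_weight=0.3):
    """Blend general and per-user word probabilities into one score."""
    words = set(general_probs) | set(user_probs)
    return {
        w: (1 - user_weight) * general_probs.get(w, 0.0)
           + user_weight * user_probs.get(w, 0.0)
        for w in words
    }

general = {"assange": 0.02, "of songs": 0.05}   # generic model's guesses
user = {"assange": 0.40, "of songs": 0.01}      # learned from this user's history

blended = interpolate(general, user)
print(max(blended, key=blended.get))   # -> "assange": the history tips the choice
```

The more utterances the service has stored for a given speaker, the more weight such a per-user component can safely carry.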
The mathematical model used to recognize phrases was refined over three years using voice samples from Google's now-defunct GOOG-411 automated directory assistance service, which the company operated from 2007 through late last year specifically to capture a wide-ranging set of voice samples for analysis. The company's first Voice Search app, for iPhone only, was launched in November 2008, a year after GOOG-411.
Voice Search doubles as a spoken-command system for the phone. As shown in this video, it understands commands such as, "Send mail to Mike LeBeau. How's life in New York treating you? The weather's beautiful here." The app will find LeBeau in your contacts—it's better at matching names here than in a Web search, because it's working with a limited set—and will fill in the subject line with your first sentence. You can speak additional text into the message, or edit it with the phone's keyboard, before sending it.
Google has clearly put a lot of effort into its speech recognition technology. But the impact on its bottom line is obvious: by removing the aggravation of typing on tiny keys, the company hopes to get customers to reach for its search and e-mail services much more often.