Sentiment Analysis Moves into Voice Interactions

From Speech Technology

Sentiment analysis technology has been around for decades, with the earliest iterations centered on opinion polarity, gauging whether someone had a positive, negative, or neutral opinion about something.

Today, sentiment analysis is one of the most heavily researched and fastest-growing areas in computer science. That research has moved the technology from simple polarity detection to more nuanced emotion detection, going beyond merely flagging negativity, for example, to distinguishing between anger and grief.

The most significant advances, though, have come in moving the technology from what was primarily a text analysis tool to one that can uncover insights from voice interactions.
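To make that shift concrete, the sketch below shows one way a voice interaction can be fed into a text-based sentiment model: a hypothetical transcribe() step stands in for the speech-to-text stage, and an off-the-shelf polarity classifier (the Hugging Face Transformers sentiment pipeline) scores each sentence of the transcript. The function name, the canned transcript, and the choice of model are illustrative assumptions, not the stack of any vendor mentioned in this article.

    from transformers import pipeline

    def transcribe(audio_path: str) -> str:
        # Hypothetical stand-in for a speech-to-text step; a real pipeline
        # would call an ASR engine here. A canned transcript keeps the
        # example self-contained.
        return ("I have been waiting on hold for forty minutes. "
                "Thank you for finally fixing my billing issue.")

    # Off-the-shelf text sentiment model (polarity only: POSITIVE / NEGATIVE).
    sentiment = pipeline("sentiment-analysis")

    def score_call(audio_path: str):
        transcript = transcribe(audio_path)
        # Score each sentence so sentiment can be tracked across the call
        # rather than collapsed into a single overall label.
        sentences = [s.strip() for s in transcript.split(".") if s.strip()]
        return [(s, sentiment(s)[0]["label"]) for s in sentences]

    print(score_call("example_call.wav"))

Even this simplified sketch shows why transcription quality matters: the sentiment model only ever sees the text the speech-to-text stage produces.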

In this article from Speech Technology, TTEC Digital's Robert Wakefield-Carl joins subject matter experts from NICE, Verint, and more to discuss advances in sentiment analysis -- particularly in the realm of voice interactions. 

“Contact centers have wanted to understand customers and their sentiment for decades, but it was not until 10 or 15 years ago that we actually had the natural language processing models in place to properly transcribe calls and to understand fully not only what is being said but how it portrays the customer’s concerns or feelings,” says Robert Wakefield-Carl, senior director of innovation architects at TTEC Digital.