Today, Amazon published a developer blog post in which Alexa AI director of applied science Ruhi Sarikaya explained advancements in machine self-learning techniques that allow Alexa to understand users through contextual clues. According to Sarikaya, these advancements have played an important role in reducing user friction and making Alexa more conversational.
For the past few months, Amazon has been working on self-learning techniques that teach Alexa to automatically correct its own mistakes. The process requires no human annotation; according to Sarikaya, Alexa uses customers’ “implicit or explicit contextual signals to detect unsatisfactory interactions or failures of understanding.”
These contextual signals range from customers’ historical activity and preferences to device attributes, such as where the device is located or what kind of Alexa device it is. They also power name-free skill interaction, which guides customers to Alexa skills through a more natural process. For instance, if you say “Alexa, get me a car,” the voice assistant will understand the command without your specifying the name of a ride-sharing service.
The name-free feature has expanded beyond the US to the UK, Canada, Australia, India, Germany, and Japan. With name-free interaction rolling out in the US today, customers can simplify commands to “Alexa, start cleaning”; previously, they had to remember and specify skills by saying, “Alexa, ask Roomba to start cleaning.”
“For example, if a customer says ‘What’s the weather in Seattle?’ and, after Alexa’s response, says ‘How about Boston?’, Alexa infers that the customer is asking about the weather in Boston. If, after Alexa’s response about the weather in Boston, the customer asks, ‘Any good restaurants there?’, Alexa infers that the customer is asking about restaurants in Boston,” Sarikaya writes.
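The carryover behavior Sarikaya describes can be pictured as a dialogue state that remembers slot values across turns, filling in whatever the follow-up utterance omits. The sketch below is purely illustrative and not Amazon’s implementation; the `resolve_turn` helper and its slot names are hypothetical, assumed only for this example:

```python
# Illustrative sketch (not Alexa's actual system): carrying slot values
# across dialogue turns, as in "What's the weather in Seattle?" ->
# "How about Boston?" -> "Any good restaurants there?".

def resolve_turn(parsed, context):
    """Merge this turn's extracted slots with carried-over context.

    `parsed` holds slots explicitly stated in the current utterance;
    `context` holds slot values remembered from earlier turns.
    Explicit slots override carried-over ones.
    """
    resolved = {**context, **parsed}
    context.update(resolved)  # remember merged state for later turns
    return resolved

context = {}
# Turn 1: intent and location are both explicit.
t1 = resolve_turn({"intent": "weather", "location": "Seattle"}, context)
# Turn 2: only a new location; the weather intent carries over.
t2 = resolve_turn({"location": "Boston"}, context)
# Turn 3: "there" gives no location; Boston carries over to the new intent.
t3 = resolve_turn({"intent": "restaurants"}, context)
```

Here `t2` resolves to the weather intent with Boston as the location, and `t3` resolves to a restaurant query for Boston, mirroring the exchange in the quote above.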
Meanwhile, context carryover and Follow-Up Mode expand beyond the US to Canada, the UK, Australia, New Zealand, India, and Germany today.