BARCELONA, Spain, February 26, 2017 (ENS) – At the Mobile World Congress in Barcelona this week, Ford will reveal its latest advance in mobility and connectivity – a car that not only understands what we say, but knows our moods, needs and desires – and can help us get what we want.
From February 27 to March 2, Barcelona’s Fira Gran Via conference facility will host the world’s largest gathering for the mobile industry, featuring wirelessly connected devices from sectors as varied as automotive, education, healthcare and utilities.
It’s not an auto show, but this is the venue Ford has chosen to introduce the latest evolution of its “empathetic car.”
By 2022, nearly 90 percent of all new cars are expected to offer voice recognition capability.
The Fords of tomorrow will also be able to pick up on tiny changes in our facial expressions as well as modulations and inflections in our speaking voices – in more than 20 different languages.
“We’re well on the road to developing the empathetic car, which might tell you a joke to cheer you up, offer advice when you need it, remind you of birthdays and keep you alert on a long drive,” said Fatima Vital, senior director, Marketing Automotive with Nuance Communications, the Massachusetts-based company that is working with Ford to develop voice recognition for the SYNC in-car communication and infotainment system.
Ford’s SYNC in-car connectivity system has been powered by Nuance technology since it was first released in 2007, allowing drivers to make hands-free telephone calls, control music and perform other functions with the use of voice commands.
The current SYNC voice system understands more than 10,000 first-level commands, up from 100 with the first-generation SYNC system.
At the Consumer Electronics Show, CES, in Las Vegas in January, Ford showed the industry’s first in-car Alexa integration with SYNC 3 AppLink. Drivers can command internet-enabled devices such as lights, security systems, garage doors and other Alexa smart home devices. Electric vehicle owners can remotely start and stop their cars, lock and unlock doors, and monitor vehicle readings, including fuel level and battery range.
This summer, Ford’s in-car connectivity system SYNC 3 will enable drivers to connect to Amazon’s virtual assistant Alexa and communicate in 23 different languages and many local accents.
SYNC 3 already enables voice control, Apple CarPlay™ and Android Auto™.
Apple CarPlay provides a simplified way to use the iPhone interface on a car’s touch screen, giving users access to Siri Eyes-Free voice controls, as well as Apple Maps, Apple Music, Phone, Messages, and a variety of third-party apps.
Android Auto delivers Google Maps and music to a car’s screen while enabling voice controls for phone calls and messaging.
By accessing cloud-based resources, cars of the future could enable even more drivers to speak their native languages and communicate with their vehicles.
Cloud-based voice control is anticipated to be available on 75 percent of new cars by 2022, and future systems are predicted to evolve into personal assistants that shuffle appointments and order takeaways when drivers are held up in traffic jams.
“Voice commands like ‘I’m hungry’ to find a restaurant and ‘I need coffee’ have already brought SYNC 3 into personal assistant territory,” said Mareike Sauer, voice control engineer, Connectivity Application Team, Ford of Europe.
“For the next step, drivers will not only be able to use their native tongue, spoken in their own accent, but also use their own wording, for more natural speech,” she said.
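How such free-form phrases might be routed can be seen in miniature. The Python sketch below is an illustration only, with invented intents and vocabulary; it does not represent Ford’s or Nuance’s actual system, which relies on far more sophisticated speech and language models.

    # Hypothetical sketch: routing a free-form phrase to an in-car intent.
    # The intents and keywords here are invented for illustration.
    INTENT_KEYWORDS = {
        "find_restaurant": {"hungry", "restaurant", "eat", "food"},
        "find_coffee": {"coffee", "espresso", "latte"},
    }

    def route_intent(utterance):
        """Return the first intent sharing a keyword with the utterance."""
        words = set(utterance.lower().replace("'", " ").split())
        for intent, keywords in INTENT_KEYWORDS.items():
            if words & keywords:  # any overlap counts as a match
                return intent
        return "unknown"

    print(route_intent("I'm hungry"))     # find_restaurant
    print(route_intent("I need coffee"))  # find_coffee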
With scientists at Germany’s RWTH Aachen University, Ford is researching the use of multiple microphones to improve speech processing and reduce the effects of external noise and potential disruptions.
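The intuition behind multi-microphone processing can be sketched with textbook delay-and-sum beamforming: if each microphone’s signal is time-aligned toward the talker and the signals are averaged, the voice adds up coherently while uncorrelated cabin noise partly cancels. The Python example below is a toy illustration of that general technique, not the RWTH Aachen research itself.

    import numpy as np

    def delay_and_sum(signals, delays):
        """Undo each microphone's delay (in samples), then average."""
        aligned = [np.roll(sig, -d) for sig, d in zip(signals, delays)]
        return np.mean(aligned, axis=0)  # voice adds coherently, noise averages down

    # Toy cabin: the same voice reaches 4 mics with different delays and noise.
    rng = np.random.default_rng(0)
    voice = np.sin(np.linspace(0, 20 * np.pi, 1600))
    delays = [0, 2, 4, 6]
    mics = [np.roll(voice, d) + 0.5 * rng.standard_normal(1600) for d in delays]
    enhanced = delay_and_sum(mics, delays)
    # Residual noise falls from about 0.5 per mic to about 0.25 after averaging.
    print(round(float(np.std(enhanced - voice)), 3))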
Soon, advanced in-vehicle systems equipped with sophisticated microphones and cameras will learn which music we like to hear when we are stressed or joyous, and identify occasions when we prefer silence. Interior lighting could adjust automatically to complement our moods.
Brigitte Richardson, Ford’s lead engineer for global voice control technology and speech systems, says, “By combining our advanced voice recognition capabilities with intent and language understanding, we’re not only able to hear what drivers are saying, but better understand what they are looking to accomplish – be it listen to songs by Train or change the temperature to 75 degrees.”
“In the end, we want the user’s wish to be the system’s command,” says Richardson.
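In miniature, understanding what drivers are looking to accomplish means turning an utterance into an intent plus structured parameters. The sketch below uses hypothetical regular-expression patterns in Python; a production system would rely on statistical language understanding rather than hand-written rules like these.

    import re

    # Hypothetical slot-filling grammar; not Ford's actual one.
    PATTERNS = [
        ("play_artist", re.compile(r"songs by (?P<artist>.+)", re.I)),
        ("set_temperature", re.compile(r"temperature to (?P<degrees>\d+)", re.I)),
    ]

    def parse(utterance):
        """Return (intent, slots) for the first pattern that matches."""
        for intent, pattern in PATTERNS:
            match = pattern.search(utterance)
            if match:
                return intent, match.groupdict()
        return "unknown", {}

    print(parse("listen to songs by Train"))
    # ('play_artist', {'artist': 'Train'})
    print(parse("change the temperature to 75 degrees"))
    # ('set_temperature', {'degrees': '75'})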
Within the next two years, voice control systems could prompt us with questions such as, “Would you like to order flowers for your mum for Mother’s Day?” “Shall I choose a less congested but slower route home?” and “You’re running low on your favorite chocolate and your favorite store has some in stock. Want to stop by and pick some up?”
Future gesture and eye control could allow drivers to answer calls by nodding, adjust the volume with short twisting motions, and set the navigation with a quick glance at a destination on a map.
So is there a danger that, as in the Scarlett Johansson movie “Her,” we might fall in love with our advanced voice recognition systems?
Dominic Watt, senior lecturer in the Department of Language and Linguistic Science at the University of York, thinks that’s quite possible.
“Lots of people already love their cars, but with new in-car systems that learn and adapt, we can expect some seriously strong relationships to form,” said Watt. “The car will soon be our assistant, travel companion and sympathetic ear, and you’ll be able to discuss everything and ask anything, to the point many of us might forget we’re even talking to a machine.”
Copyright Environment News Service (ENS) 2017. All rights reserved.