
Is Google’s Duplex AI helpful or plain creepy?

Last week, Google CEO Sundar Pichai used the company’s annual I/O event to demo an experimental new feature of Google Assistant: Duplex.

It consisted of two ordinary-sounding one-minute voice conversations, one to book a hair appointment, the other to make a restaurant reservation.
The unusual aspect of those conversations – which Google said were not staged – is that in both cases the caller was a computer, powered by Google’s Duplex AI technology and capable of talking and responding to the human on the other end using natural language.
The clever (or creepy) bit is that had Pichai not told audience members about the AI they would have been unlikely to have detected it.
Computer-generated voice systems are supposed to be stilted, synthesised, and limited in their responses, but this one sounded convincingly human in every way, right down to its reassuringly disfluent use of “mhmm” and “um” as part of its chatter.
Duplex is robust enough that Google will start offering it to a small number of Google Assistant users on Android this summer, who will be able to use it to make simple reservations like the ones in the demo.
As I/O attendees applauded, and online watchers wondered aloud whether Duplex might be good enough to pass the famous Turing test, the doubters offered a less optimistic assessment of Google’s cleverness.
Might criminals use voice AI to deceive people? What are the implications of people delegating social interaction to machines? Will it put millions of service industry workers out of a job?
Then there are the nuanced ethical issues Google faces from day one, such as whether people have a right to know they are talking to a machine.


To many, this looked like a big tech firm doing something simply because it could. One New York Times writer, who described the demo as “horrifying”, wrote:

Silicon Valley is ethically lost, rudderless and has not learned a thing.

Stung, Google clarified:

We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.

But if people talking to Duplex will be told they are talking to a machine, why make it sound so convincingly human?
Technically, Duplex combines several systems, including automatic speech recognition (ASR) and ‘deep learning’ neural networks, whose capabilities have surged in the last five years on the back of processing improvements and the high salaries offered to PhDs.
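
To make that pipeline concrete, here is a minimal sketch, in Python, of the turn-taking loop such a system might run. Every name in it is a hypothetical stand-in invented for illustration; Google has not published Duplex’s internals beyond the broad recognise-decide-speak description above.

```python
# Hypothetical sketch of a Duplex-style call loop. All functions below are
# invented stand-ins, not Google APIs: in the real system each would be a
# large trained model rather than a canned stub.

def transcribe(audio: bytes) -> str:
    """ASR stand-in: turn the other party's speech into text."""
    return "We could do seven o'clock."    # canned output for the sketch

def generate_reply(history: list, goal: str) -> str:
    """Dialogue-model stand-in: the trained neural network would go here."""
    return "Mhmm, seven works. Could I book it for four people?"

def synthesize(text: str) -> bytes:
    """Text-to-speech stand-in: render the reply as spoken audio."""
    return text.encode()                   # placeholder for audio data

def handle_turn(incoming_audio: bytes, history: list, goal: str) -> bytes:
    """One conversational turn: hear, decide what to say, speak."""
    heard = transcribe(incoming_audio)
    history.append(heard)
    reply = generate_reply(history, goal)
    history.append(reply)
    return synthesize(reply)

if __name__ == "__main__":
    audio = handle_turn(b"<caller audio>", [], goal="book a table for four")
    print(audio.decode())
```

The hard part, of course, is the middle stage: a dialogue model good enough to decide, mid-call, what a plausible human would say next.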
For now, the technology can only be used for a narrow set of tasks, but inevitably this will expand quickly, which in turn will lead to calls for more rules and regulation.
Google can probably cope with this, but changing how people see AI will be harder, especially where it automates social interactions whose deeper meanings are slow to evolve.
As with the ‘Turk’, the famous 18th-century chess-playing automaton that turned out to conceal a human chess master inside its cabinet, it’s as if AI makes us feel we are being deceived.
Google’s Duplex is no clever trick, but on some unconscious level the pattern has been set – people feel compelled to look inside the cabinet or behind the curtain for something that reminds them of themselves.

4 Comments

Is this going to replace the crummy AI that does robocalls (with locally spoofed phone numbers)?
Some day people may stop answering the phone altogether.


Can this system enter into a binding contract? If it calls a beauty salon to book an appointment for a wedding party and nobody shows up, what is the recourse?


How long will it be before this technology is incorporated into robo-calling software? It seems made-to-order for call center scammers.


I am thinking that this will be like the classic 1950s secretary handling phone contact, and it will become commonplace: you will receive a call, the caller will identify themselves as User Q. Name’s artificial personal assistant, Insert Name Here, and the conversation will continue exactly as if it were human. It just makes sense and will probably make Google money. Money will always drive these innovations, and Google is simply trying to get in front of the 8-ball.
The ironic twist would be if everyone who currently handles these calls by voice started using Google’s voice-to-text reader and automatic calendar system to handle them instead, setting up calendar entries from the information originally sent to Google as a command by the originator. I wonder if Google is already anticipating that, and enabling “reverse audio lookup” to look up the original data from the original command and insert it directly into its own calendar system, or whatever system the end-point user wishes to use. It would be approximately the same amount of processing as interpreting the artificial Google audio conversation and processing it into data, commands, and database entry references.
There are still plenty of people who need to be addressed “the old fashioned” way with voice, so I think this will take off, regardless of ethical dilemmas.

Reply
