Every few years, a big shift in how we interact with computers arrives. In the 1970s we swapped out punch cards for terminals, and then moved on to using a mouse and desktop icons in the 1980s. Today, touch screens and mobile devices are the face of computing.
But in the near future, voice technology and artificial intelligence will bring about another huge change, says Paul Cutsinger, who is in charge of coding for Amazon's voice assistant, Alexa.
"Design for the ear, not the eye: throw away what we know about design today and start fresh," Cutsinger told the Kairos Global Summit in New York on Friday. "We need to focus on how things sound, not how they look."
Cutsinger believes we’re not far away from a world of “ambient computing” in which today’s simple interactions (such as “Alexa, turn on the lights”) will be replaced with rich and layered discussions. He gave the example of a student having a conversation with a biology textbook.
For it to work, though, Cutsinger says developers and designers must reevaluate the use of text and writing. For instance, he points out that the way we convey information in writing is very different from how we speak: describing things in a comma-separated list (as we do when we write) is efficient on the page, but comes across as cold and mechanical when spoken aloud.
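Cutsinger's point about lists can be illustrated with a small sketch. This is our own illustration of the design idea, not Amazon's API; the function name and phrasing are hypothetical:

```python
# Hypothetical sketch: rendering a written, comma-separated list as
# phrasing that sounds more natural when read aloud by a voice assistant.
def spoken_list(items):
    """Join items conversationally rather than as a flat written list."""
    if not items:
        return "nothing"
    if len(items) == 1:
        return items[0]
    # A bare "A, B, C" reads mechanically when spoken; leading with a
    # count gives the listener a sense of how long the answer will be.
    lead = f"There are {len(items)} options. "
    body = ", ".join(items[:-1]) + f", and {items[-1]}"
    return lead + body

print(spoken_list(["milk", "eggs", "bread"]))
# → There are 3 options. milk, eggs, and bread
```

The design choice here mirrors his argument: the written and spoken renderings carry the same information, but the spoken one is structured for a listener who cannot scan ahead.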
That’s why he says computer design must start afresh, and not try to extend existing paradigms of design to conversational AI.
“Applying what we learned from responsive design for mobile won’t work for voice. If you apply the old principles, you’ll have a command-and-control system,” says Cutsinger, explaining that command-and-control makes sense for touchscreen design but not for voice-based interactions.
So how and when will this design revolution arrive? After all, Alexa currently still struggles to recite basic Wikipedia entries, and interacting with her feels far from genuine conversation.
While Cutsinger thinks conversational AI is starting to flourish in niches like gaming, he also cited fields like finance and medicine where computer programs “designed for the ear” are making progress.
But whatever the pace of progress, there appears to be no going back in the push toward voice computing. As my colleague Adam Lashinsky suggested while describing Google Home's latest breakthrough, the new ear is already upon us.
“If the voice-controlled web were to get even better, you could begin to imagine a world where the value of a smartphone was diminished,” he wrote.