Google Workspace VP: 'Losing the trust of the user is the ultimate point of no return'


Google (GOOG, GOOGL) on Wednesday rolled out a slew of new generative AI features for its vast ecosystem of products as part of its I/O conference. The event was equal parts a chance for the company to show off what it’s been working on for the past year and to prove to its users, and investors, that it, not Microsoft (MSFT), is the AI leader.

One of the major announcements during the show included the addition of a generative AI feature called Help Me Write, which will, well, help you write emails. But because it uses generative AI, Google Workspace VP Aparna Pappu says the company needs to take care that it doesn’t push out inappropriate responses or the company risks losing users’ trust.

“Losing the trust of the user is the ultimate point of no return,” she told Yahoo Finance. “That's our North Star: ‘Cannot lose user trust.’”

Generative AI has raised a slew of thorny questions ranging from whether it will make jobs, like yours truly’s, obsolete to whether it should be allowed in schools. How much users can trust the technology to produce accurate responses to their prompts has also become a point of contention.

Google CEO Sundar Pichai speaks on-stage during the Google I/O keynote session at Shoreline Amphitheatre in Mountain View, California, on May 10, 2023. (Photo by Josh Edelson / AFP) (Photo by JOSH EDELSON/AFP via Getty Images)

Pappu says that’s also part of the reason Google is rolling out its generative AI features only to trusted testers before making them public.

“Before we get into [general availability] we have to test it and put it through the wringer. Even to get it to Labs [Google’s early user test program], it goes through so much rigorous usage testing, and responsibility testing, safety testing before we even let a single external user try it,” Pappu explained.

Both Google’s and Microsoft’s generative AI offerings specifically state that they’re in early testing phases or that some answers may not be accurate. Moreover, generative AI in general is prone to “hallucinate,” an elaborate way of saying it can fabricate responses to some user queries that sound plausible but are wrong.

During a “60 Minutes” segment, Google’s Bard chatbot hallucinated a book that doesn’t exist. CEO Sundar Pichai explained during the segment that the problem is one that many chatbots share at this point and something engineers are trying to better understand.

To that end, Pappu says Google continues to test its systems as a means of trying to prevent false answers or inappropriate responses.


“Responsible and safe AI does not get born overnight. It is years of working on AI and knowing how to do adversarial testing,” Pappu explained. “So there's fundamentally responsible, safe AI baked into how we build these products.”

One challenge Pappu says is unique to Workspace is its scale: it’s used by billions of people, each with their own level of comfort with technology. Introducing a feature like generative AI without confusing those users is a conundrum all its own.

“We have 3 billion users,” she said. “We have a responsibility to make these things really simple and easy to use.”

By Daniel Howley, tech editor at Yahoo Finance. Follow him @DanielHowley
