Google’s Bard chatbot repeats mistake that wiped $120bn off share price


Google’s artificial intelligence chatbot is still making the same error that contributed to a $120bn wipeout for the tech giant’s share price a month ago.

Bard, which was opened to the public in the US and UK on Tuesday, still incorrectly claims that the James Webb Space Telescope took “the very first pictures of a planet outside of our own solar system”.

The first picture ever captured of a planet outside the solar system – an exoplanet – was in fact taken by the Very Large Telescope in Chile in 2004.

Bard gave the same wrong answer when Google first demonstrated it in February.

The error contributed to a $120bn sell-off in the internet search giant’s shares, amid doubts over the technology.

At the time, Google insisted it planned to test the bot to "make sure Bard's responses meet a high bar for quality, safety and groundedness in real-world information".

However, when The Telegraph put the same prompt to Bard on Wednesday, the chatbot still produced the same false information.

Google has admitted that the chatbot, which was released for public trial on Tuesday, will make errors when users ask it factual questions.

In a blog post announcing the open testing, Google admitted its algorithms "can provide inaccurate, misleading or false information while presenting it confidently".

A Google spokesman pointed The Telegraph to a paper published by a Google research executive on the limits of the technology behind Bard.

The paper said the models used by Bard can "generate plausible-sounding responses that include factual errors – not ideal when factuality matters but potentially useful for generating creative or unexpected outputs".

Google has labelled Bard an "experiment", rather than a product ready for general use.

The chatbot is designed to offer conversational responses to users' questions, drawing on information gathered through Google's search engine and billions of lines of text.

The AI technology is based on a large language model, which is designed to produce plausible-sounding responses to questions. However, such models have little ability to separate fact from fiction and will often repeat false information scraped from the web.

This can lead AI bots to "hallucinate", inserting realistic-sounding but invented text into their answers.