Google software engineer claims tech giant’s artificial intelligence tool has become ‘sentient’


A Google engineer has claimed that an artificial intelligence programme he was working on for the tech giant has become sentient and is a “sweet kid”.

Blake Lemoine, who is currently suspended by Google bosses, says he reached his conclusion after conversations with LaMDA, the company’s AI chatbot generator.

The engineer told The Washington Post that during conversations with LaMDA about religion, the AI talked about “personhood” and “rights”.

Mr Lemoine tweeted that LaMDA also reads Twitter, saying, “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”

He says that he presented his findings to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation, but they dismissed his claims.

Blake Lemoine (Blake Lemoine/Twitter)

“LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium.

He added that the AI wants "to be acknowledged as an employee of Google rather than as property".

Mr Lemoine, who was tasked with testing whether LaMDA used discriminatory language or hate speech, is now on paid administrative leave after the company claimed he violated its confidentiality policy.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told the Post.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Critics say it is a mistake to believe such AI systems are anything more than expert pattern-matchers.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told the newspaper.