AI threat demands new approach to security designs -US official

FILE PHOTO: Illustration shows AI (Artificial Intelligence) letters and computer motherboard

OTTAWA (Reuters) - The potential threat posed by the rapid development of artificial intelligence (AI) means safeguards need to be built into systems from the start rather than tacked on later, a top U.S. official said on Monday.

"We've normalized a world where technology products come off the line full of vulnerabilities and then consumers are expected to patch those vulnerabilities. We can't live in that world with AI," said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency.

"It is too powerful, it is moving too fast," she said in a telephone interview after holding talks in Ottawa with Sami Khoury, head of Canada's Centre for Cyber Security.

Easterly spoke the same day that agencies from 18 countries, including the United States, endorsed new British-developed guidelines on AI cyber security that focus on secure design, development, deployment and maintenance.

"We have to look at security throughout the lifecycle of that AI capability," Khoury said.

Earlier this month, leading AI developers agreed to work with governments to test new frontier models before they are released to help manage the risks of the rapidly developing technology.

"I think we have done as much as we possibly could do at this point in time, to help come together with nations around the world, with technology companies, to set out from a technical perspective how to build these build these capabilities as securely and safely as possible," said Easterly.

(Reporting by David Ljunggren in Ottawa; Editing by Matthew Lewis)
