President Biden signed an executive order on artificial intelligence regulation — the first broadly sweeping U.S. government action on AI — demanding safety and privacy guardrails for users.
Sinan Aral, MIT Professor of Management, Marketing, IT and Data Science, joins Yahoo Finance Live to weigh in on the order and how these regulations could affect tech giants, and even pharma and biotech companies.
“This is big, it’s bold, it’s broad,” Aral says of the regulation’s comprehensiveness, suggesting it is a bigger step than Congress has taken amid ongoing division in the House. Aral insists bipartisan efforts are needed to protect personal data and privacy.
For more expert insight and the latest market action, click here to watch this full episode of Yahoo Finance Live.
- This is an interesting executive order. First of all, just give us the big picture-- do you think it's done a good job?
SINAN ARAL: Well, I'll say this: in the absence of congressional action, given the difficulty of getting bipartisan-supported legislation through Congress, this is big, it's bold, it's broad. It has a number of provisions: for safety, for privacy, for equity, for workers, for competition and innovation, and for leadership abroad. And it really targets those foundation models, the big AI companies, in terms of their safety and security standards.
- And now there are companies-- at least the relatively big, well-established ones-- that have been asking to be regulated. How do you think companies, public and private, are going to respond to this? Will they be enthusiastic?
SINAN ARAL: Well, I've got to tell you, it depends on where they sit. When it comes to the safety provisions, the executive order directs NIST-- the National Institute of Standards and Technology-- to develop safety, security, and even red-teaming standards for these foundation AI models. Red teams are the teams that search for vulnerabilities and exploits.
So these companies must report testing on their training to the government before bringing any of these new models public. It also touches biosynthesis: it makes biosynthesis screening a requirement for federal funding and reporting. And it sets standards for detecting AI-generated content and watermarking it as coming from AI.
So this is going to affect biotech and pharmaceutical companies that use AI. This is going to affect foundation AI companies like Google, Microsoft, and OpenAI. It's going to affect anyone using AI to create content that goes out on social media or anywhere else on the internet. In that way, it's going to affect a lot of companies.
There are also provisions for privacy, including a big call-- leaning on Congress-- to pass bipartisan federal privacy and data protection legislation. Depending on how that turns out, it could affect every company that holds data on individuals.