Who is responsible for AI? It isn’t always the CEO

Fortune · Nick Little

Nearly a decade ago, business software giant Intuit kicked off its investments in artificial intelligence.

It began with so-called “traditional AI,” including natural language processing, machine learning, and recommendation systems. Intuit also invested in generative AI well before ChatGPT’s splashy launch caught the world’s attention a year ago.

But six years ago, Intuit—which serves 100 million customers through platforms including TurboTax and QuickBooks—hired Ashok Srivastava as chief data officer and empowered him to build a single capability for AI, data, and analytics that spans the entire company.

“Ultimately, I make the decisions, but I do it in a collaborative way with our chief technology officer, our chief executive officer, and with my team,” says Srivastava. His oversight of AI “allows for a clear vision to be implemented quickly.”

An estimated 40% of companies surveyed by McKinsey say they will increase their overall investments in AI because of advancements in generative AI. McKinsey also estimates that generative AI could add up to $4.4 trillion annually to global productivity. By comparison, gross domestic product in the United Kingdom—the world’s sixth largest national economy—was $3.1 trillion last year.

But who is ultimately responsible for the adoption and deployment of AI, a world-changing technology experts compare to the industrial revolution or the birth of the internet? The answer is as complicated as AI itself.

“The board should have oversight over the CEO and management team,” says Jeetu Patel, executive vice president of collaboration and security for Webex, which is owned by Cisco Systems. “The CEO should have oversight over their direct reports and the employee base. And each business owner should be thinking about what the implications of AI are going to be, for the good and bad, in their products, and in their daily work life.”

As Fortune interviewed Patel over Webex, he pointed to a feature that can transcribe the conversation into 120 different languages simultaneously. That’s AI hard at work in the background, enabling productivity.

Webex and Cisco have rolled out AI in two phases. The first made AI tools available so employees could experiment with them internally. The second, underway now, incentivizes employees to focus on the productivity improvements those tools can deliver.

Cisco has invested billions in AI over the past decade, and Patel says “there is no such thing as overengagement in AI. What we are undergoing right now is probably the largest platform shift that we have experienced yet in the history of humanity.”

Food production giant ADM’s top leadership is similarly assertive about AI. Direction flows from the CEO and executive council, says Jason Reynolds, who heads digital and analytics at ADM. “They all have a thirst for it,” Reynolds explains. “With generative AI, it is one of the first technologies that I can remember that has such a natural usability for even nontechnical people.”

With most technology investments, Reynolds says, a compelling case is needed to convince top leadership of its importance. But with AI, there are plenty of competing ideas and excitement about the technology’s potential. That’s led to AI being deployed in key parts of the business, including nutrition, a growth engine for ADM where consumer preferences tend to move quickly and AI can help speed up research and development.

Cloud services company Akamai Technologies takes a more decentralized approach. CEO Tom Leighton gives some direction from the top, but after that, “we allow a lot of autonomous decision-making to happen within the organization in terms of what are the best ways to solve these problems and what are the most interesting technologies to approach,” says Dr. Robert Blumofe, Akamai’s chief technology officer.

Today, various teams are experimenting with generative AI, always within the guardrails leadership has published to ensure customers’ proprietary data remains secure.

UKG, the human resource management company, has a similar strategy. “We try to be a bit federated in our approach,” says Hugo Sarrazin, UKG’s chief product and technology officer. UKG has been using predictive AI models since 2015 and has put more than 2,500 of them into production. AI can help with repetitive tasks such as payroll review and building shift schedules.

An oversight governance group—which includes diverse perspectives from Sarrazin’s team as well as HR and legal, among others—ensures AI is deployed responsibly and makes policies clear to employees, including a warning earlier this year not to put any company data into ChatGPT.

One way UKG encourages innovation is through hackathons. It hosts quarterly 48-hour generative AI hackathons that attract more than 1,000 participants, who share ideas for how the technology can solve specific business problems.

Juniper Networks, which sells networking and cybersecurity products, paid $405 million in 2019 to acquire Mist Systems, an AI-driven platform that helps make Wi-Fi more predictable. Sharon Mandell, chief information officer at Juniper Networks, says the company has long thought about how it would evaluate AI in a manner that protects intellectual property and individual privacy and supports ethical decision-making.

But how Juniper talks about AI has also evolved. For years, the company had a governance group that worked on a set of AI principles; a more structured group was put in place a little over two years ago, when outside investor groups started asking questions about the principles Juniper was relying on and why those ideals weren’t published publicly. The company soon made those values public, and internal meetings about AI have accelerated to a biweekly cadence.

“With the introduction of [generative] AI and how broad we feel that is going to touch the company, it has gotten much more structured,” says Mandell.

Juniper’s chief technology officer, Raj Yavatkar, is the executive responsible for AI, says Mandell, but “at the end of the day, our CEO is the one responsible and accountable.”

Software maker Adobe uses an AI ethics board that not only oversees its principles for responsible AI but also greenlights any AI capability that could have negative implications. The function sits under chief trust officer Dana Rao’s office.

This oversight is especially important because Adobe builds generative AI tools like Adobe Firefly, which has been used to create over 3.5 billion images since its launch in March. Firefly was designed for safe commercial use, so the images it creates don’t infringe on intellectual property or reflect bias. The feature was stress-tested by the ethics board and also tried out internally by the broader Adobe employee base.

“We had an internal beta test and they gave us so much feedback, they forced us to innovate very fast to solve some of the gaps we had,” says Alexandru Costin, Adobe vice president of generative AI. After shipping, customers were also invited to share feedback, giving Adobe an opportunity to address any concerns quickly.

Insurance provider Nationwide has deployed AI models for years. In 2018, it established an enterprise analytics office to own the most complex models used for predicting outcomes and decision-making. A few years later, Nationwide built a technical facility to house the models and deploy them across the business. A chief analytics officer partners closely with Nationwide’s chief technology officer, Jim Fowler, to pair the analytics and software development teams.

“It is becoming a very cross-functional team,” says Fowler. After ChatGPT launched, Nationwide’s CEO set up an executive-level steering committee, which includes leaders from every function inside the company, to determine the business use cases they would pursue.

That’s led to 15 active use cases, two of which are in production today, organized around a blue team and red team approach. The blue team is empowered to think of all the ways generative AI can do good for Nationwide, while the red team is equally mandated to raise questions about the risks.

“We think it is going to disrupt our industry in ways we can’t even imagine yet,” says Fowler. “And because of that, we’re going to put time and resources against it with real-life business use cases, cases that we know will have a big impact on Nationwide.”

This story was originally featured on Fortune.com
