Opinion: How to avoid the AI apocalypse

Pattern CEO Dave Wright speaks during the first Silicon Slopes Artificial Intelligence Summit at Utah Valley University in Orem on Thursday, June 15, 2023. | Kristin Murphy, Deseret News

Will artificial intelligence be the end of civilization?

The very question sounds like theoretical hyperventilating — the kind of thing someone in the future will dig up for laughs, along with predictions that the world would never develop a market for computers or that every home would have a nuclear-powered vacuum cleaner by the end of the 20th century.

And, as much as I tend to scoff at the notion myself, I can’t help but note that even the best predicted uses of AI, taken to their logical extensions, seem to end up in disaster — theoretically, at least.

Dave Wright, the co-founder and CEO of Pattern, a Utah e-commerce startup that now employs 1,400 people worldwide, is a huge believer in AI. He also exudes optimism.

At a Silicon Slopes Summit on artificial intelligence, held on the Utah Valley University campus last week, he showed how his company could use AI to draft a 246-page product sheet on a ceiling fan in about seven minutes, including content in eight languages and drawing on about 300 trillion data points for reference.

Ask him what he fears about AI, however, as I did after his presentation, and the first thing he mentions is a byproduct of this kind of instant analysis, the sort of work more traditional companies might take months to produce: Those who know how to use AI will be much more successful in business than those who don’t.

“That’s my biggest fear with it is the economic disparity that will start to happen,” he said, later adding, “I think the biggest thing will be the widening of the haves and have-nots.”

When that gulf widens, one of two things might happen. The have-nots could rise up in revolutionary zeal, or (more likely in the United States), they could lobby government to regulate the haves and keep them from getting such an advantage.


Of course, a third possibility is that we’ll all adapt, just as the economy did when automobiles replaced horses or when computers made typewriters obsolete. Leveraging AI could become as second nature in business as texting is today. It could reduce barriers to entry and create a new generation of wealthy, job-producing entrepreneurs.

But ask him what excites him the most and he will rhapsodize about health care. AI will help doctors diagnose problems much faster, even as it will develop treatments tuned to a specific patient’s DNA.

Then he quickly jumps to the logical conclusion of that thread. “I think in a couple of generations we might start broaching on more of an immortal human.”

A mortal form of immortality is fascinating to ponder. However, it might, among other things, eventually lead to an overpopulated world in which people kill each other to survive.

We never can seem to escape doomsday.

The world is at an interesting crossroads when it comes to artificial intelligence and the potential, for good and bad, of machine learning.

The UVU summit coincided roughly with the European Union’s decision last week to advance a law that could be a major step toward regulating AI. The AI Act, as it’s known, would limit the use of facial recognition software and require transparency from the makers of products such as ChatGPT, compelling them to reveal the data their programs use.

In the United States, members of Congress are worried about falling behind the curve. As The New York Times put it, “Policymakers everywhere from Washington to Beijing are now racing to control an evolving technology that is alarming even some of its earliest creators.”

But they’re doing it with different motives. China, for example, worries about chatbots violating censorship laws.

Wright’s concerns about economic disparity and health care are not as stark as the worries many other people have. Earlier in the same UVU summit, Utah Attorney General Sean Reyes spoke about so-called deepfake videos and voice cloning, which he said have already led to fake-kidnapping extortion crimes. Perpetrators secretly record a person’s voice, use AI to create a sound file in which that voice pleads for help, then contact a relative demanding ransom.

How hard, he wondered, will it be to someday prove your innocence in court against convincing fake video evidence? The answers may lie in both private-sector measures and careful government regulation.

“If we don’t architect into the DNA of AI certain safeguards, we will be too far behind and always playing catch up,” he said.

But laws can go only so far in a world where the lawless have access to the same technology. And if the United States passes laws that are too restrictive, nations with a more sensible approach could gain the upper hand with job-producing technology.

I have serious doubts about human-engineered immortality and machines taking over the world. But when Wright warns against governments being in a hurry to regulate, it makes a lot of sense.

“If you’re regulating all the people who follow the rules, you’re slowing them down,” he said, noting that he, an entrepreneur and CEO of a tech company, doesn’t know how to draft effective regulations, “so how do the policy makers know?”

I also like his brand of optimism. If civilization collapses, he said, it’s likely to be from a virus or some other destructive mechanism. “I think it’s far more likely than, say, AI.”
