What the latest flurry of AI news has in common with the Great Tea Race


Hello and welcome to Eye on AI.

In the 19th century, major shipping companies would compete to be the first to bring the season’s tea from China to London. There was money on the line—the first tea in the market commanded a premium. So investors built ever faster clipper ships with sleek, copper-bottomed hulls, carrying ever greater spreads of sail. To incentivize the captains to push vessels to their limits, the tea merchants funded a cash prize for the first crew to reach London’s docks. The first ship also won the right to fly a special “Blue Ribbon” pennant.

The race was about money, but for the captains and crews, it was as much about ego and prestige. It was also about risk—the clippers were built for speed, not stability. They took great skill to sail. The Taeping and Ariel, which split the winning prize in the 1866 Tea Race after an epic battle across the globe that saw them arrive at the mouth of the Thames within an hour of one another after 99 days at sea, both later sank. In fact, all five clippers that competed in the 1866 race were eventually wrecked or lost at sea.

What does this have to do with AI? Well, I feel like we’re kind of watching the 21st-century version of the Great Tea Race with AI today. The leading AI companies are leapfrogging one another across multiple dimensions of capability and performance in a contest that seems to be a little bit about money, but an awful lot about the ego and prestige of getting the credit for bringing a particular capability to market first. There’s also something about the seasonality of this flurry of new model releases—there was a similar glut of updates and releases in the first quarter of last year—that is reminiscent of the arrival of the new tea crates in London each September.

In the past two weeks, OpenAI and Google have both been unveiling new AI models and product features at a furious pace, each pushing the boundaries of what the technology can do. First OpenAI gave ChatGPT the ability to remember past conversations with users as well as their personal details and preferences. Then Google put its most powerful model, Gemini 1.0 Ultra, into wide release. It followed this with a limited launch of a new Gemini 1.5 Pro model that was as capable as Ultra, but in a smaller, less expensive package. What makes 1.5 Pro special, though, is its remarkably large “context window,” which is the amount of material you can feed it in a prompt. The 1.5 Pro can analyze an hour of video, 11 hours of audio, or about seven books’ worth of text. Then, on Thursday, OpenAI showed off Sora, a new text-to-video generation model that can produce minute-long videos of stunning quality.

There’s no sign of this pace letting up, with more announcements hinted at for the coming weeks. Plus, these developments will no doubt force other AI companies to move faster too. Cristóbal Valenzuela, the CEO of Runway, which had arguably been leading the field in text-to-video generation, simply tweeted “game on” in response to OpenAI’s Sora reveal. In January, Google DeepMind released a model called Lumiere that was competitive with Runway’s Gen 2.0 model, but DeepMind too will no doubt be working to release a more capable version in response to Sora. I wouldn’t be surprised if Anthropic, as well as tech giant Meta and well-funded startup Inflection, debut models in the coming weeks that match the long context window of Google’s 1.5 Pro.

For those of us watching from the shore, as it were, this is all as thrilling as it was to 19th-century newspaper readers who followed the Great Tea Race. But it also seems a bit dangerous. And unlike with the Great Tea Race, the risk is not just to those participating in the race, but to us all.

While giving ChatGPT memory makes it more useful for users, it also increases the risk that the model will leak users’ personal details, as already occurred once with an earlier version of the chatbot. Sora’s hyperrealistic videos could produce more convincing deepfakes. (For now, Sora is only available to “red teamers”—select individuals and companies that OpenAI hires specifically to test the model for safety and security vulnerabilities. OpenAI did not say when the model would be released to the wider public.) Many AI ethicists criticized OpenAI for not appending some kind of visual digital watermark to the videos it used for its demo that would clearly identify them as AI-generated. They also faulted the company for revealing next to nothing about how Sora was trained, with many suspecting that copyrighted material was probably used without the owners’ consent. In the future, a system like Sora could also put a lot of people in Hollywood out of work.

Then there are the even bigger risks—that this flurry of model enhancements is driving us ever faster toward superpowerful AI software that could pose a danger in the wrong hands, or even itself pose a risk to humanity. There’s certainly no evidence that the tech companies are paying a great deal of attention to safety as they race to roll out model after model.

OpenAI claimed that by learning through video footage, Sora had gained an intuitive understanding of physics and common-sense reasoning that models trained in other ways lacked. In making this claim, the company sought to position the model as an important step towards its official goal of creating artificial general intelligence—a single AI system able to do all the economically valuable cognitive tasks a person can. Except lots of people, including Elon Musk, were quick to point out that Sora’s grasp of physics seemed dubious. (Even the OpenAI researchers highlighted several instances where Sora did not quite grasp that chairs cannot flap about and fly like birds, as one appeared to in one of the videos they released.) It also seemed to have trouble properly portraying certain aspects of the natural world—such as the number of legs an ant has.

So perhaps the AGI framing is just hype, a way for OpenAI researchers to justify working on a project that is really only about the company showing the world that it can beat Google and Runway at the video generation game. But it is disturbing to see the OpenAI researchers frame Sora as a step toward AGI while spending little time detailing any testing they’ve done or precautions they’ve taken so far to make the new model safe. They did say they were red-teaming the model at the moment and were not revealing any information about when they planned to release it. But just revealing its existence will spur other companies, such as Runway and Google, to move faster on competing products. And in this race, as with the clipper ship captains, caution might take a back seat to speed.

In the Great Tea Race, the captains knew their destination, and the routes for getting there were well-established. The Great AI Race is different. In a way, it combines elements of that 19th-century competition with those of an even earlier age of sail: the Great Voyages of Discovery in the 15th through 18th centuries, when captains would set sail over the horizon, bound for the unknown. They were racing one another then too. The wealth and prestige of whole kingdoms rode the waves with them. What they would find on these journeys would transform the world. But they would also bring disease, death, conflict, and subjugation to the people they encountered.

I guess we have to hope the Great AI Race winds up a bit more like the Great Tea Race, where only the ships themselves were in jeopardy. With the tea race, the core technology itself—the clipper ship—was short-lived. Even in 1866, a ship equipped with a steam engine in addition to sails beat all the clippers back to London by 15 days. Three years later, the Suez Canal opened, shaving even more time off the journey. Within a decade, the clippers had been largely eclipsed by steamships in global trade. We may find that today’s transformer-based neural network AI models are similarly overtaken by some other AI technology that can grasp that chairs don’t normally fly like birds and ants have six legs, perhaps even without seeing millions of examples during training.

And there’s another lesson from the Tea Races too: In the years after 1866, bumper tea crops were harvested in China. The price fell dramatically and there was no longer much premium to be gained by being first to market. This too could happen with AI, as freely available open-source models quickly match the capabilities of today’s top proprietary software and businesses no longer feel they have to pay top dollar to access generative AI capabilities.

In the meantime, it’s exciting—and a little bit frightening—to watch the race. But we should all be more than a little skeptical of the motives involved and whether the risk will ultimately be worth the reward.

Below, there's more AI news. But before you go, if you want to learn about the latest developments in AI and how they will impact your business, please join me alongside leading figures from the business world, government, and academia at Fortune's inaugural London edition of our Brainstorm AI conference. It's April 15-16 in London. You can apply to attend here.

Also, I want to highlight a fantastic interview that Fortune CEO Alan Murray conducted recently for Fortune's Leadership Next podcast with Wasim Khaled, cofounder and CEO of Blackbird AI. You can check that out here.

Jeremy Kahn

This story was originally featured on Fortune.com