How AI brought Val Kilmer’s ‘Iceman’ back into Top Gun: Maverick

Artificial intelligence landed a major part in the new Top Gun sequel: Val Kilmer’s voice.

Scientists helped Kilmer reprise his role as Iceman by using A.I. to craft a computer-generated replica of the actor’s voice that could read his lines.

Kilmer—who has starred in films including Batman Forever, The Doors, and Heat—suffered irreparable damage to his voice after being diagnosed with throat cancer and undergoing a tracheotomy in 2014.

In an interview with Good Morning America, he later compared acting after the procedure to learning a new language.

In August last year, London-based tech startup Sonantic teamed up with Kilmer to create an A.I.-powered voice for the actor—which ultimately went on to be used in Top Gun: Maverick.

“From the beginning, our aim was to make a voice model that Val would be proud of,” John Flynn, CTO and cofounder of Sonantic, said in a blog post at the time.

“We were eager to give him his voice back, providing a new tool for whatever creative projects are ahead.”

Sonantic uses A.I. to create computer-generated voices that are either completely synthetic or mimic the voice of a real person. Its voices have been used in video games, Hollywood sound production, and speech therapy.

The voice the company generated for Kilmer is entirely computer-generated, modeled on old recordings of the actor.

The process

Ordinarily, when the company creates a voice model with an actor, the actor records performances read from a script, but Sonantic said Kilmer’s case required “a bit more manual work.”

After cleaning up old audio recordings of Kilmer, the startup used a “voice engine” to teach the voice model how to speak like Kilmer.
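Sonantic hasn’t detailed what that cleanup involved, but preparing decades-old recordings for training typically means resampling everything to one rate, trimming silence, and normalizing levels. Here is a minimal sketch of that kind of preprocessing using the open-source librosa and soundfile libraries; the filenames and parameter choices are hypothetical, not Sonantic’s pipeline.

```python
import librosa
import soundfile as sf

# A sketch of typical cleanup for archival audio. Sonantic hasn't published
# its actual pipeline; filenames and settings here are illustrative only.
def clean_clip(in_path: str, out_path: str, target_sr: int = 22050) -> None:
    audio, _ = librosa.load(in_path, sr=target_sr, mono=True)  # resample, downmix to mono
    trimmed, _ = librosa.effects.trim(audio, top_db=30)        # cut leading/trailing silence
    peak = float(abs(trimmed).max())
    if peak > 0:
        trimmed = trimmed / peak                               # peak-normalize the levels
    sf.write(out_path, trimmed, target_sr)

clean_clip("kilmer_interview_1995.wav", "kilmer_interview_1995_clean.wav")
```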

Sonantic said the engine had around a tenth of the data it would be given in a typical project, and that wasn’t enough. The company therefore developed new algorithms that could produce a higher-quality voice model from the available data.

“In the end, we generated more than 40 different voice models and selected the best, highest-quality, most expressive one,” Flynn said. “Those new algorithms are now embedded into our voice engine, so future clients can automatically take advantage of them as well.”

Once the voice model was produced, creative teams were able to input text and fine-tune the performance.
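Sonantic hasn’t published the engine’s internals, but the workflow it describes, training dozens of candidate models on the cleaned-up audio, keeping the best one, and then typing in lines and adjusting the delivery, can be outlined in code. Every name below (the model class, the training and synthesis functions, the performance controls) is an illustrative stand-in, not Sonantic’s actual software.

```python
import random
from dataclasses import dataclass

@dataclass
class VoiceModel:
    seed: int
    score: float  # stand-in quality metric; higher = closer to the target voice

def train_candidate(clean_clips: list[str], seed: int) -> VoiceModel:
    # Stand-in for training one voice model on the cleaned archival audio;
    # the score is random here so the example runs without real data.
    rng = random.Random(seed)
    return VoiceModel(seed=seed, score=rng.random())

def synthesize(model: VoiceModel, text: str, pitch: float = 0.0, pacing: float = 1.0) -> str:
    # Stand-in for inference: creative teams type in text and fine-tune
    # performance controls such as pitch and pacing.
    return f"[audio of {text!r} from model {model.seed}, pitch={pitch}, pacing={pacing}]"

clips = ["archival_clip_01.wav", "archival_clip_02.wav"]  # hypothetical filenames
candidates = [train_candidate(clips, seed=i) for i in range(40)]  # 40+ models, per Sonantic
best = max(candidates, key=lambda m: m.score)  # keep the highest-quality one
print(synthesize(best, "You can be my wingman any time.", pitch=-0.1, pacing=0.95))
```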

Kilmer said at the end of the project that Sonantic had “masterfully restored my voice in a way I’ve never imagined possible.”

“As human beings, the ability to communicate is the core of our existence, and the effects from throat cancer have made it difficult for others to understand me,” he added. “The chance to tell my story, in a voice that feels authentic and familiar, is an incredibly special gift.”

A “very special” role

Kilmer’s daughter told the New York Post that her father’s role in the movie was “very special.”

Sonantic’s “voices” are developed with proprietary deep-learning software. Deep learning has been responsible for many A.I. breakthroughs in recent years, but the company hasn’t given any details on the specific system it uses to generate its voices.

One of the biggest A.I. breakthroughs that can be used to create fake voices is WaveNet: a deep neural network for generating raw audio, pioneered by Google-owned DeepMind and detailed in a 2016 research paper.

Google has used WaveNet to advance the virtual assistants on its devices, deploying the technology to create realistic voices for its Google Duplex assistant.
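DeepMind’s 2016 paper spells out WaveNet’s core idea: stacks of dilated causal convolutions that model audio one sample at a time, each layer looking only at past samples over an exponentially growing window. The PyTorch sketch below is a toy illustration of that building block, with arbitrary channel counts and the paper’s skip connections omitted; it is not Google’s production system.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that only sees past samples (causal padding)."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        return out[..., :-self.pad] if self.pad else out  # trim the "future" positions

class WaveNetBlock(nn.Module):
    """Gated residual block in the style of the WaveNet paper."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.filter = CausalConv1d(channels, kernel_size=2, dilation=dilation)
        self.gate = CausalConv1d(channels, kernel_size=2, dilation=dilation)
        self.project = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.filter(x)) * torch.sigmoid(self.gate(x))  # gated activation
        return x + self.project(h)  # residual connection

# Dilations double each layer (1, 2, 4, ..., 128), so the stack's receptive
# field covers a long stretch of audio at little extra cost.
net = nn.Sequential(*[WaveNetBlock(channels=32, dilation=2 ** i) for i in range(8)])
waveform = torch.randn(1, 32, 16000)  # (batch, channels, samples) of dummy audio
print(net(waveform).shape)            # torch.Size([1, 32, 16000])
```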

However, the tech does have some potential drawbacks.

Fake voices were used in a multimillion-dollar case of bank fraud last year, with the culprits stealing $35 million after cloning the voice of a company director in the UAE.

This story was originally featured on Fortune.com
