Scarlett Johansson is angry at OpenAI

Scarlett Johansson said she “felt shocked and indignant” that GPT-4o’s voice eerily resembled hers, after Sam Altman had contacted her twice without obtaining her consent.

CNN quoted the actress as saying she had authorized a lawyer to deal with OpenAI after discovering that Sky, GPT-4o’s voice assistant, sounded eerily similar to her.

Johansson was the female lead in the 2013 science fiction film Her, in which she voiced a virtual assistant. The male lead, played by Joaquin Phoenix, falls deeply in love with the AI, but is heartbroken when it admits to loving hundreds of other users. Ultimately, the virtual assistant model collapses and becomes inaccessible.

On May 14, when announcing GPT-4o, OpenAI CEO Sam Altman posted the word “her” on his X account, an apparent nod to the film.

OpenAI has since pulled Sky from the new language model. In a post on X on May 20, the company said it had received many questions about how voices were chosen for ChatGPT, especially Sky, and was pausing the voice while it addressed them. Many users had previously remarked that the GPT-4o voice sounded somewhat flirtatious and suggestive.

Scarlett Johansson and the OpenAI logo behind. Illustration: Newsx

Meanwhile, Johansson revealed that last September, Sam Altman had asked her to voice the company’s AI, an offer she declined for personal reasons. “Two days before GPT-4o was launched, Altman contacted my representative to reconsider the offer. But OpenAI announced the platform before any deal was agreed upon,” the actress said.

Johansson said that after her lawyers sent two letters to Altman, OpenAI “reluctantly agreed” to take down Sky.

“In an era where people are grappling with deepfakes and protecting their image, work and identity, I believe these are questions that deserve to be clarified. I look forward to a transparent resolution ensuring that individual rights are protected under the law,” Johansson emphasized.

In response, OpenAI stated that GPT-4o’s voice is not related to Johansson but belongs to “another professional actress”, whose natural voice was used to train the AI. However, the company has not disclosed that person’s identity.

Rifts within OpenAI

The legal trouble involving Scarlett Johansson is only part of the turmoil at the company Sam Altman leads.

Immediately after the company launched GPT-4o, Jan Leike, its head of AI safety, and Ilya Sutskever, OpenAI’s chief scientist, both announced their resignations on X. Leike went further, publicly criticizing OpenAI’s leadership for putting “flashy products” above safety. Sam Altman shared Leike’s post and responded: “He’s right, we have a lot of work to do. We’re committed to doing it.”

According to CNBC, OpenAI last week disbanded the Superalignment team, established in 2023 with the mission of researching the potential long-term risks of artificial intelligence. Meanwhile, The Information reported that two AI safety researchers, Leopold Aschenbrenner and Pavel Izmailov, were fired by OpenAI for leaking internal information. The LinkedIn profile of Cullen O’Keefe, head of policy research, shows he left in April. Diane Yoon, Vice President of Human Resources, and Chris Clark, Director of Strategic and Nonprofit Initiatives, have also resigned from OpenAI.

According to Business Insider, the disbandment of the AI safety team has drawn considerable skepticism toward Sam Altman from observers. On the Joe Rogan podcast last year, he said: “Many of us are extremely worried about the safety of AI. With the ‘don’t destroy humanity’ version, we have a lot of work to do.” However, what is happening at OpenAI is eroding public trust in Altman. Former employee Daniel Kokotajlo told Vox that he “gradually lost faith in OpenAI’s leadership and their ability to handle AGI responsibly.”

Another staffing controversy at OpenAI is its “employee gag policy”. Vox reported that the company imposes strict agreements barring employees from sharing information about the company after they leave. On May 19, Sam Altman said on X that this was “one of the few times” he felt embarrassed running OpenAI, adding that he had not known the provision applied to former employees and was working to fix it.

Analysts say this is a rare instance of Altman admitting a mistake, a departure from the calm image he has cultivated amid the chaos at OpenAI.