ChatGPT Developer OpenAI Unveils AI Tool to Generate Video Clips

OpenAI logo seen on a screen with the ChatGPT website displayed on mobile in this illustration. Jonathan Raa/NurPhoto via Getty Images

AI giant OpenAI has revealed its newest creation, an AI system named Sora that can generate realistic videos from text descriptions. As AI-generated deepfake videos already cause problems online ranging from fake porn to realistic scam calls, the advent of easy-to-create AI video brings further potential for trouble.

CNBC reports that OpenAI, the company behind AI chatbot ChatGPT, announced on Thursday that it is expanding into video generation capabilities with Sora. Sora allows users to type out descriptions of desired video scenes, and it will generate high-definition video clips based on the text prompts.

According to OpenAI, Sora can not only generate videos from scratch, but also extend existing videos or fill in missing frames. The AI model can currently produce videos up to one minute in length.

Sam Altman, chief executive officer of OpenAI Inc. Photographer: David Paul Morris/Bloomberg via Getty Images

Sora represents OpenAI’s efforts to offer multimodal AI systems that can work with text, images, and now video. As OpenAI COO Brad Lightcap stated, “The world is multimodal. If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things — the world is much bigger than text.”

Axios reported that a representative from OpenAI emphasized that the company currently has no intention of releasing Sora to the general public. This is because OpenAI is still working on addressing safety concerns such as reducing the spread of misinformation, hateful content, and biased output from the model. Additionally, OpenAI is working on clearly labeling the output as generated by AI.

The launch of Sora positions OpenAI in direct competition with other tech giants working on similar video AI generators, including Mark Zuckerberg’s Meta, Google, and Adobe. Meta and Google have showcased comparable models that turn text into video clips.

Sora is based on a diffusion AI architecture, like OpenAI’s image generator DALL-E, combined with the transformer architecture that underpins ChatGPT. The company said Sora serves as a foundation for models that can simulate and understand the real world.

So far, OpenAI has only provided a small preview of Sora’s capabilities on its website with 10 sample videos. The company said it is initially limiting access to “red teamers” who test for potential dangers like bias and misinformation spreading.

The release of Sora raises concerns about the potential for AI-generated fake video content, known as deepfakes. The number of deepfakes online has reportedly grown 900 percent from last year. OpenAI stated it is developing tools to detect Sora-made videos and will embed metadata to identify AI-created content.

Breitbart News recently reported that at least six prominent technology firms intend to finalize an accord on AI election interference at the Munich Security Conference this week. The deal comes as over 50 countries prepare for significant national elections in 2024, with AI disinformation threats already emerging. For example, AI voice cloning robocalls sought to deter voting in New Hampshire’s primary election by impersonating President Joe Biden.

The companies — which reportedly include Adobe, Google, Meta, Microsoft, OpenAI, and TikTok — allegedly hope the agreement will guide joint efforts to halt the deceptive use of AI targeting voters. Details remain undisclosed, and it is unclear why the rest of the world would trust TikTok, accused of being a Chinese psyop on Western nations, to make any good faith effort to preserve election integrity.

Elections worldwide face rising threats from deepfake media — fabricated images and recordings produced using generative AI models. Deepfakes could be weaponized to undermine candidates and mislead voters via propaganda. The companies aim to counter these risks through a unified stance against AI disinformation campaigns.

Read more at CNBC here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

