OpenAI introduces text-to-video model Sora

Today OpenAI introduced Sora, its impressive text-to-video model. With Sora, you can easily generate videos of up to a minute long from a text prompt you provide.

See below an example of a video created from this text prompt:

"A litter of golden retriever puppies playing in the snow. Their heads pop out of the snow, covered in."

Or how about this one:

"Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes."

Sora can generate complex scenes with multiple characters, specific types of movement and accurate details of the subject and background. The model understands not only what the user has requested in the prompt, but also how those things exist in the physical world.

The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vivid emotions. Sora can also create multiple shots within a single generated video that accurately preserve characters and visual style.

For now, Sora is available only to a select group of users, but access is being rolled out incrementally.

See more video previews and information about Sora here.

Take a leap forward in your marketing AI transformation every week


Every Friday, we bring you the latest insights, news and real-world examples on the impact of AI in the marketing world. Whether you want to improve your marketing efficiency, increase customer engagement, sharpen your marketing strategy or digitally transform your business, "Marketing AI Friday" is your weekly guide.

Sign up for Marketing AI Friday for free.