OpenAI introduces Sora, its text-to-video AI model

OpenAI has unveiled Sora, a new text-to-video AI model capable of generating realistic and imaginative videos from text prompts. Sora can produce videos up to one minute long, featuring complex scenes with multiple characters, specific types of motion, and detailed backgrounds. It can also animate still images into videos, fill in missing frames, or extend existing footage. While the demos show the technology's promise, challenges remain, such as accurately simulating physics and cause-and-effect relationships.

The rapid advancement of video-generation AI is evident, with companies like Runway, Pika, and Google’s Lumiere developing similar capabilities. Sora, however, is currently in a testing phase, with “red teamers” evaluating potential risks and select visual artists and designers providing feedback. OpenAI acknowledges potential issues with the authenticity of AI-generated content and is working to mitigate the risk of its creations being mistaken for real footage, similar to the challenges it faced with its text-to-image tool DALL-E 3.
Read more at The Verge…
