OpenAI’s Sora team unveiled Sora 2, a next‑generation generative video model that uses diffusion transformers over space‑time tokens to simulate entire video sequences with physics‑consistent behavior. By treating video generation as world simulation, Sora 2 can maintain object permanence and produce realistic motion, avoiding the over‑optimistic failure modes of earlier models, which tended to bend physics to satisfy the prompt. The team emphasized an iterative rollout strategy to let society adapt to powerful simulation technology, and highlighted the model’s broad data mix—from real footage to anime—as key to building robust internal world models. They also speculated that future, more capable simulators could eventually stand in for physical labs in scientific experimentation.
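OpenAI has described its Sora models as compressing video into space‑time patches that a transformer then processes as tokens. The exact implementation is not public, but the basic idea can be sketched as chopping a video tensor into small 3D blocks spanning a few frames and a small spatial window, then flattening each block into one token vector. The function name and patch sizes below are illustrative assumptions, not OpenAI's code:

```python
import numpy as np

def video_to_spacetime_tokens(video: np.ndarray, pt: int, ph: int, pw: int) -> np.ndarray:
    """Split a video of shape (T, H, W, C) into flattened space-time patches.

    Each patch spans `pt` frames and a `ph` x `pw` spatial window (a hypothetical
    tokenization sketch; real models compress via a learned autoencoder first).
    Returns an array of shape (num_tokens, pt * ph * pw * C).
    """
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dims must divide evenly"
    # Carve the (T, H, W) grid into blocks, then group block-index axes together.
    x = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)  # (tT, tH, tW, pt, ph, pw, C)
    return x.reshape(-1, pt * ph * pw * C)

# Example: an 8-frame 16x16 RGB clip with 2x4x4 patches yields 64 tokens of size 96.
tokens = video_to_spacetime_tokens(np.zeros((8, 16, 16, 3)), pt=2, ph=4, pw=4)
print(tokens.shape)  # (64, 96)
```

In a diffusion transformer, each such token would be embedded and attended over jointly across time and space, which is one plausible mechanism behind properties like object permanence across frames.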