OpenAI is preparing an updated version of Sora, its text-to-video generator, that will include copyrighted content by default unless copyright holders explicitly opt out, the Wall Street Journal reported on Monday, citing people familiar with the matter. The shift marks a move away from seeking prior permission, placing the onus on studios, creators, and rights holders to act.
The company has begun notifying talent agencies and film studios about the opt-out process ahead of the product launch. Under the new policy, movie studios and other intellectual property owners must file specific requests if they do not want their copyrighted works included. A blanket opt-out covering an entire catalogue won't be accepted; instead, rights holders must identify specific violations.
How the Policy Works
OpenAI’s new approach means copyrighted materials are treated as “in” unless actively blocked. Rights holders must provide detailed information to prevent their works from being used, which adds monitoring and administrative burdens for creators. Unlike a universal exclusion, this selective opt-out model forces case-by-case action.
The company will continue to restrict the generation of recognisable public figures, separating likeness rights from copyright. OpenAI also notes it is applying safeguards similar to those rolled out with its image generation tool earlier in 2025. By extending the same framework, the company is seeking consistency across its generative AI product line.
OpenAI is also launching Sora 2, an app that generates 10-second vertical videos and offers optional identity verification for people who want to use their own likeness. At launch, users will not be able to upload existing media from their devices, limiting inputs to text-based prompts.
Reactions, Risks, and Future Outlook
The opt-out model has drawn pushback from creative industries. Rights holders argue that prior consent and compensation would be more appropriate than requiring constant monitoring. Critics see the strategy as part of a broader pattern in AI – prioritising rapid deployment over negotiated rights.
Legal experts point to risks in the training process. Independent reviews suggest that Sora can reproduce logos, watermarks, and characters, indicating that copyrighted content may have been included in its training data. If disputed outputs appear, lawsuits could follow, especially in markets with strict intellectual property laws.
Subscription models and user growth remain uncertain. Whether consumers will embrace Sora 2 as a creative tool depends not only on its technical capabilities but also on how OpenAI manages copyright conflicts. The company’s handling of opt-outs, compensation, and transparency will likely shape whether Sora becomes a standard tool for creators or a focal point for legal battles.