OpenAI’s (PC:OPAIQ) newest video generator, Sora 2, is drawing attention for the wrong reasons. Soon after its launch, users discovered that its content filters could be bypassed with simple tricks. Tests by 404 Media found that people could recreate copyrighted games and shows by misspelling names or tweaking descriptions. For example, when “Animal Crossing gameplay” was blocked, typing “crossing aminal” worked. Similar tricks produced clips of Fox’s American Dad and videos that mimic real people.
As a result, posts online now share lists of workarounds. This has raised new concerns that the tool can create protected content even when filters are active. OpenAI said it expects some “edge cases” and that it is talking directly with studios. However, it has not explained how it would remove copyrighted data already built into Sora 2’s model, since doing so would likely require expensive and slow retraining.
Pushback From Japan and Growing Legal Pressure
In a related development, Japan’s Content Overseas Distribution Association, which represents Studio Ghibli, Bandai Namco, and Square Enix, has asked OpenAI to stop using member content for training without consent. Japan’s government supported that request, saying such use could break copyright law. Meanwhile, U.S. advocacy group Public Citizen warned that Sora 2’s watermarks and identity checks are easy to remove. Researchers said they could erase the watermark in under four minutes with free tools.
At the same time, OpenAI is still in a court battle with The New York Times. The newspaper says the company used millions of its articles to train ChatGPT without permission. A judge recently ordered OpenAI to hand over 20 million anonymized chat logs so the Times can study how its content may have been used. OpenAI said sharing the logs would risk user privacy, but the court found that safeguards were already in place.
Why It Matters for Investors
The pressure from global media groups, governments, and advocacy bodies adds new legal risk for OpenAI and its main partner, Microsoft (MSFT). If courts decide that AI training used copyrighted content unfairly, it could lead to costly settlements or retraining efforts. That outcome could affect the pace of new AI products across the industry.
For now, OpenAI says it is improving safety measures and talking with rightsholders. Yet, questions remain over how AI models can create realistic videos without crossing legal lines. Investors watching the sector may see higher uncertainty ahead as regulators move to set clearer rules for AI-generated media.
We used TipRanks’ Comparison Tool to line up and compare some of the notable companies that employ AI chatbots similar to OpenAI’s ChatGPT.


