For U.S. tech firms, AI development still operates under few firm rules. There is no comprehensive federal law limiting what AI models can say or refuse to say. As a result, firms move fast and set their own limits.
In China, the path is very different. The Chinese government plays a direct role in deciding what AI models can do. Before release, models must pass strict tests set by state officials. The goal is to limit political risk and social harm.
According to a Wall Street Journal report, AI models must answer about 2,000 test questions, and the questions change over time as new risks emerge. To pass, models must refuse at least 95% of prompts tied to banned topics, including Tiananmen Square, party rule, and human rights issues.
As a result, Chinese AI tools are designed to avoid sensitive responses. This applies even when models aim to compete outside China.
Tradeoffs Between Control and Safety
At the same time, these rules create unintended side effects. The same limits that block political topics also block violent or adult content and restrict replies tied to self-harm. In contrast, U.S. firms typically adjust these safeguards after launch.
For investors, the gap matters. U.S. firms gain speed and scale. Chinese firms gain state backing but face limits on output and use. Over time, this split may shape how AI tools grow across markets.
In short, AI growth now follows two paths. One favors open release and fast change. The other favors control and risk limits.
We used TipRanks’ Comparison Tool to line up notable publicly traded companies, Chinese and American, that operate chatbots such as Alibaba’s (BABA) Qwen and Alphabet’s (GOOGL) Gemini. It’s an excellent tool for gaining a broader perspective on each stock and on the AI industry as a whole.


