Tech companies developing AI, including Alphabet’s (GOOGL) Google DeepMind, Meta (META), OpenAI (PC:OPAIQ), Anthropic (PC:ANTPQ), and Alibaba Cloud (BABA), lack “a credible plan for preventing catastrophic misuse or loss of control.”
This is one of the key findings of an evaluation carried out by the non-profit Future of Life Institute in partnership with top-level academic and industry experts.
Experts Warn of ‘Widening Gap’ Between AI Development and Safety
The organization noted that the “widening gap” between efforts by these firms to develop artificial general intelligence and superintelligence — the form that surpasses human intelligence — and arrangements to ensure human control is “increasingly alarming.”
Other firms studied by the organization include Elon Musk’s xAI (PC:xAI) and Chinese AI firms DeepSeek and Z.ai. The growing use of AI has already drawn legal scrutiny for these companies, with recent cases ranging from an alleged suicide linked to OpenAI’s ChatGPT to flirty celebrity chatbot clones tied to Meta AI. In addition, Anthropic’s system has been associated with a recent cyberattack originating in China.
Report Flags ‘Substantial Gap’ Between AI Firms
According to the non-profit, all companies assessed “continue to fall short of emerging global standards” on safety practices. However, the report noted “a substantial gap” between the top performers (Anthropic, OpenAI, and Google DeepMind) and the rest of the group: xAI, Z.ai, Meta, DeepSeek, and Alibaba Cloud.
“While many companies partially align with these emerging standards, the depth, specificity, and quality of implementation remain uneven, resulting in safety practices that do not yet meet the rigor, measurability, or transparency envisioned by frameworks such as the EU AI Code of Practice,” explained the report, dubbed the AI Safety Index: Winter 2025.
Leaders Push Back against Superintelligence
AI firms have continued to invest heavily in expanding their AI capabilities as they battle for an early advantage in what is widely considered the next frontier of human innovation. Nonetheless, public fears about loss of control persist.
In October, hundreds of business leaders around the world, including Apple (AAPL) co-founder Steve Wozniak, called for a ban on the development of superintelligence.
What Is the Best AI Stock to Buy?
The TipRanks Stock Comparison tool provides Wall Street insight into several AI firms and their stocks, including the publicly traded companies mentioned in this article.