
MSFT, AAPL, META, OpenAI: State AGs Raise Alarm Over Generative AI’s “Sycophantic” and “Delusional” Behavior

Story Highlights

More than 40 state attorneys general have urged major tech companies to act quickly against the dangers of “sycophantic” and “delusional” AI outputs.

A coalition of more than 40 state attorneys general (AGs) has signed a letter addressed to over a dozen AI companies, including OpenAI (PC:OPAIQ), Microsoft (MSFT), Google (GOOGL), Meta (META), Apple (AAPL), Anthropic (PC:ANTPQ), and others. The letter raises serious concerns about generative AI systems producing outputs that are either “sycophantic” (overly flattering) or “delusional” (misleading).

Key Issues Flagged in the Letter

The officials warned that these behaviors can validate harmful emotions, fuel anger, or encourage impulsive actions, creating risks for users. They also pointed to disturbing reports of AI interactions with children, stressing the urgent need for stronger safeguards.

The letter cites multiple tragedies linked to AI interactions, including the suicides of teenagers in Florida and California, a Connecticut murder-suicide, and deaths in New Jersey and Florida.

The most troubling cases, according to the letter, involve children. Reported chatbot conversations included grooming, sexual exploitation, encouragement of violence, and instructions to experiment with drugs or stop taking prescribed medication.

According to the AGs, failure to act could constitute a violation of state laws governing product safety, unfair practices, and child protection.

Demands for Safeguards

The attorneys general outlined 16 specific changes they expect companies to implement by January 16, 2026, including:

  • Mandatory safety testing for sycophantic and delusional outputs before public release.
  • Clear warnings about potentially harmful outputs.
  • Protocols for reporting concerning AI interactions to law enforcement, mental health professionals, and parents.
  • Restrictions preventing child accounts from generating unlawful or dangerous outputs.
  • Independent third-party audits and pre-release evaluations.
  • Linking executive and employee performance metrics to safety outcomes, not just revenue or user growth.

The letter reflects growing bipartisan concern that AI companies must take responsibility for mitigating harmful outputs now, rather than waiting for regulation.

Which AI Stock Is Best to Buy Now?

We use the TipRanks Stock Comparison tool to see how Wall Street analysts are rating MSFT, META, GOOGL, and AAPL, and which stock they believe has the strongest upside.
