Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas such as AI safety and youth risk, NPR's Bobby Allyn and Shannon Bond reported on Friday, citing internal documents viewed by NPR. Under the new system, product teams will be asked to fill out a questionnaire about their work and will then receive an “instant decision” with AI-identified risks, along with the requirements an update or feature must meet before it launches. Meta said in a statement that it has invested billions of dollars to support user privacy and that the changes to product risk reviews are intended to streamline decision-making, adding that “human expertise” is still used for “novel and complex issues” and that only “low-risk decisions” are being automated.