In a recent LinkedIn post, Musubi observes that 62% of newly posted trust-and-safety roles now include explicit AI safety responsibilities, up from 28% two years ago. The post highlights Musubi’s focus on operationalizing AI in trust-and-safety workflows and outlines lessons learned from its work in this area.
The post suggests four key practices for AI-enabled trust-and-safety teams, including using policy experts as prompt engineers and building evaluation frameworks before deployment. It also emphasizes designing workflows that reduce cognitive load and using AI to free senior analysts for higher-value investigative and policy work rather than simply cutting headcount.
For investors, the content points to Musubi’s positioning at the intersection of AI safety, policy design, and trust-and-safety operations, an area of growing regulatory and reputational scrutiny for large platforms. If Musubi can translate these operational insights into scalable products or advisory services, it could benefit from rising demand as companies adapt their T&S organizations to AI-heavy environments.
The broader data point on AI safety responsibilities in T&S roles may indicate a structurally expanding market for AI governance and risk tooling, which could support Musubi’s long-term addressable market. The linked guide on skills, team structure, and failure modes, though not detailed in the post, suggests an effort to deepen thought leadership, potentially improving brand visibility and deal flow among enterprise trust-and-safety buyers.

