In a recent LinkedIn post, Token Security draws attention to security risks associated with custom AI assistants embedded in enterprise workflows. The post highlights research by security researcher Sharon Shama, who examines how misconfigurations and overlooked integrations can create hidden exposure.
The post warns that as organizations connect customized GPT-based assistants to proprietary knowledge bases and internal systems via APIs, these tools gain the ability to query databases, call external services, and automate actions, expanding the potential attack surface. It also points readers to practical steps security teams can take to identify and mitigate these risks, emphasizing the need for a deliberate security model around AI assistants.
For investors, the focus on AI security and governance may signal Token Security’s intent to position itself as a specialist in securing emerging AI-driven workflows, an area likely to see rising demand as enterprises scale AI adoption. If the company can translate this thought leadership into product capabilities or consulting offerings, it could strengthen its competitive position within the cybersecurity segment and open additional revenue opportunities tied to AI safety and identity security.

