In a recent LinkedIn post, Noma Security highlighted emerging security risks associated with AI agents, Model Context Protocol (MCP) servers, and agent Skills. The post cites internal research suggesting that a large share of popular Skills and MCP servers carry risky or high-risk capabilities that could enable arbitrary code execution.
The post highlights findings from a research report titled “Lethal by Design,” which examines how different mechanisms for extending AI agents create distinct attack surfaces. It also references a proposed “No Excessive CAP” framework intended to govern what agents can do, decide, and access in enterprise environments.
For investors, this research-focused positioning suggests Noma is seeking to establish thought leadership in agentic AI security at an early stage of the market's development. If enterprises adopt similar governance frameworks, demand for specialized security tooling around AI agents and MCP infrastructure could expand, potentially benefiting vendors perceived as domain experts.
The emphasis on pre-incident preparation and visibility into agent behavior also points to a potential product focus on monitoring, traceability, and control of AI-powered workflows. As organizations scale AI deployments, these capabilities may become budget priorities, which could support Noma Security’s long-term growth prospects and competitive standing in the cybersecurity segment of the AI ecosystem.

