
FriendliAI Platform Highlighted in Cost-Efficient AI-Guided Security Testing

According to a recent LinkedIn post from FriendliAI, the company’s infrastructure was used in a security-focused collaboration involving GLM-5 and Team Atlanta, the 2025 DARPA AIxCC winners. The post highlights that on a 54-vulnerability benchmark, GLM-5 running on FriendliAI with the Gondar framework reportedly found 35 security bugs at an approximate compute cost of $392.

By comparison, the post notes that traditional fuzzing alone identified 8 bugs at an estimated compute cost of about $3,264, while a Gemini-2.5-Pro plus Gondar setup surfaced 41 bugs at roughly $2,400–$3,100 per run. The LinkedIn content positions open-weight models served on fast inference infrastructure as materially lowering the cost of large-scale, LLM-guided fuzzing for vulnerability discovery.
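Taken at face value, the reported figures imply a wide spread in cost per discovered bug across the three setups. A back-of-envelope calculation, using only the numbers cited in the post (the range for the Gemini setup is shown at both endpoints):

```python
# Illustrative cost-per-bug arithmetic from the figures reported in the post.
# (bugs found, approximate compute cost in USD)
runs = {
    "GLM-5 + Gondar (FriendliAI)": (35, 392),
    "Traditional fuzzing alone": (8, 3264),
    "Gemini-2.5-Pro + Gondar (low end)": (41, 2400),
    "Gemini-2.5-Pro + Gondar (high end)": (41, 3100),
}

for name, (bugs, cost) in runs.items():
    # Cost per bug is simply total compute cost divided by bugs found.
    print(f"{name}: ${cost / bugs:.2f} per bug")
```

On these figures, the GLM-5 setup works out to roughly $11 per bug, versus about $408 per bug for fuzzing alone and roughly $59–$76 per bug for the Gemini setup, which is the gap the post's cost argument rests on.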

For investors, the post suggests FriendliAI’s platform may be gaining traction in high-value cybersecurity research and testing workflows. If such use cases scale, FriendliAI could deepen its positioning in cost-sensitive, computation-intensive AI applications, potentially expanding its addressable market among security teams constrained by infrastructure budgets.

The focus on cost-per-bug metrics and performance on structured inputs hints at a potential competitive angle versus traditional fuzzing tools and more expensive closed models. Over time, broader adoption of open-weight models for security testing could support demand for FriendliAI’s inference infrastructure and reinforce its role within the AI security ecosystem.
