
FriendliAI Highlighted in Cost-Efficient LLM-Guided Fuzzing for Vulnerability Discovery

According to a recent LinkedIn post from FriendliAI, the company is featured in a security-focused collaboration that evaluates large-language-model-guided fuzzing on a 54-vulnerability benchmark. The post highlights work with Team Atlanta, winners of the 2025 DARPA AIxCC competition, using the GLM-5 open-weight model served on FriendliAI’s infrastructure.

The LinkedIn post describes comparative results across three approaches: traditional fuzzing, Gemini-2.5-Pro paired with Gondar, and GLM-5 on FriendliAI with Gondar. Traditional fuzzing reportedly uncovered 8 bugs at an estimated compute cost of $3,264, while the Gemini-2.5-Pro setup is described as finding 41 bugs at roughly $2,400–$3,100 per run.

By comparison, the post suggests GLM-5 on FriendliAI discovered 35 bugs at an estimated cost of $392, implying a materially lower cost per bug versus the closed-model configuration. The analysis in the post emphasizes that many real-world vulnerabilities reside behind structured inputs, such as valid XML or specific path patterns, which random mutation-based fuzzing may miss.
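Taking the post's figures at face value, the implied cost-per-bug can be worked out directly. A minimal sketch, using only the bug counts and dollar amounts reported above (estimates from the post, not audited data; the Gemini configuration's cost is given as a per-run range):

```python
# Back-of-the-envelope cost-per-bug implied by the post's reported figures.
# Each entry: (bugs found, low cost estimate in USD, high cost estimate in USD).
reported = {
    "Traditional fuzzing":          (8,  3264, 3264),
    "Gemini-2.5-Pro + Gondar":      (41, 2400, 3100),
    "GLM-5 on FriendliAI + Gondar": (35,  392,  392),
}

# Divide cost by bug count to get a per-bug range for each setup.
cost_per_bug = {
    name: (round(low / bugs, 2), round(high / bugs, 2))
    for name, (bugs, low, high) in reported.items()
}

for name, (low, high) in cost_per_bug.items():
    span = f"${low:,.2f}" if low == high else f"${low:,.2f}–${high:,.2f}"
    print(f"{name}: {span} per bug")
```

On these numbers, traditional fuzzing comes out around $408 per bug, the Gemini-2.5-Pro configuration around $59–$76, and GLM-5 on FriendliAI around $11 — the basis for the post's cost-efficiency claim.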

According to the content shared, LLM-guided fuzzing is portrayed as a way to penetrate deeper execution paths that are difficult to reach with traditional tools alone. The post further argues that open-weight models, when combined with high-performance inference infrastructure, could make advanced vulnerability discovery more financially accessible.
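The structured-input argument can be illustrated with a toy target (hypothetical, not drawn from the post or from Gondar itself): the interesting branch sits behind parsing and structural checks, so a random byte mutator almost never reaches it, whereas a model that can emit well-formed XML passes the gates directly.

```python
# Illustrative toy target: a planted "bug" hidden behind structured-input
# gates. Random mutation of seed bytes is overwhelmingly rejected at gate 1;
# an LLM-generated well-formed input walks straight through to the deep branch.
import xml.etree.ElementTree as ET

def process(data: bytes) -> str:
    try:
        root = ET.fromstring(data)        # gate 1: must parse as valid XML
    except ET.ParseError:
        return "rejected"
    if root.tag != "config":              # gate 2: expected root element
        return "ignored"
    path = root.get("path", "")
    if path.startswith("../"):            # gate 3: specific path pattern
        return "VULNERABLE: path traversal branch reached"
    return "ok"

print(process(b"\x00\xff random bytes"))             # fails gate 1
print(process(b'<config path="../etc/passwd"/>'))    # reaches the deep branch
```

Coverage-guided fuzzers such as Jazzer mitigate this with feedback and dictionaries, but the post's argument is that an LLM generating structurally valid inputs up front reaches such branches far more cheaply.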

For investors, the message suggests FriendliAI is positioning its platform as a cost-efficient backbone for running open-weight models in high-intensity security research workloads. If these performance and cost claims gain broader validation, FriendliAI could benefit from increased adoption by cybersecurity teams seeking scalable, lower-cost alternatives to proprietary frontier models.

The post also points readers to a full technical write-up detailing cost-per-bug metrics and how Gondar integrates with tools like Jazzer to probe deeper program behavior. This emphasis on quantitative benchmarks and integration with established fuzzing frameworks may help FriendliAI build credibility with enterprise and security-focused buyers, potentially supporting future revenue growth in infrastructure and model-serving services.
