According to a recent LinkedIn post from FriendliAI, the company is promoting its presence at GTC, where it is hosting daily sessions on optimizing AI workloads and designing model benchmarks, as well as a talk by Z.ai’s Head of Developer Ecosystem on the development of the GLM model. The post also emphasizes FriendliAI’s focus on GLM‑5 performance, positioning its infrastructure as delivering low latency for this open‑weight model.
The content suggests FriendliAI is targeting technically sophisticated AI customers that require high‑performance inference, which may support growth in usage‑based revenue if event exposure translates into new deployments. By highlighting both serverless and dedicated endpoints for GLM‑5, the post points to a product strategy built around flexible, enterprise‑grade AI serving, which could strengthen the company’s competitive position in the AI infrastructure and model‑serving segment.