According to a recent LinkedIn post from DeepSeek AI, the company is previewing its DeepSeek-V4 large language model as an open‑source release with support for context windows up to 1 million tokens. The post highlights two variants, DeepSeek‑V4‑Pro and DeepSeek‑V4‑Flash, which are described as targeting high performance and cost‑efficient deployment, respectively.
The LinkedIn post suggests that DeepSeek‑V4‑Pro uses a 1.6‑trillion‑parameter architecture with 49 billion active parameters, aiming to rival leading closed‑source models in capability. DeepSeek‑V4‑Flash is presented as a more lightweight option, with 284 billion total parameters and 13 billion active, positioned for faster and more economical inference while still offering extended context handling.
As described in the post, access is available via the company’s chat interface under Expert Mode and Instant Mode, and the associated API has reportedly been updated to support the new models. The post also points to a technical report and open weights, signaling an emphasis on transparency and developer adoption, which may help stimulate ecosystem growth and third‑party integrations.
From an investor perspective, the move toward open‑sourcing a high‑capacity, long‑context model could enhance DeepSeek AI’s visibility among enterprises and researchers, potentially accelerating usage and feedback cycles. If the performance claims prove competitive with top closed‑source systems while maintaining cost advantages, the company may strengthen its position in the rapidly evolving AI infrastructure and model‑as‑a‑service markets.
The emphasis on cost‑effective long‑context inference may be particularly relevant for use cases such as document analysis, software development assistance, and multi‑step reasoning, where larger context windows are increasingly in demand. Broad availability of open weights could also lower barriers for smaller firms and academic institutions, potentially expanding the model’s user base and creating indirect monetization opportunities through hosted services and support.
However, open‑sourcing advanced models can compress pricing power and intensify competition as others build on the same technology base. Investors may therefore focus on DeepSeek AI’s ability to differentiate through proprietary tooling, managed services, and enterprise features layered on top of the open models, as well as its capacity to monetize API usage at scale while maintaining cost efficiency.