According to a recent LinkedIn post from Databricks, Google’s Gemini 3.1 Pro models are now accessible on the Databricks platform for building and scaling generative AI applications on enterprise data. The post highlights capabilities around governance and production-ready operational tooling, with an emphasis on grounded question answering for PDF-based documents.
The company’s LinkedIn post suggests that Gemini 3.1 Pro performs strongly on OfficeQA, a Databricks benchmark focused on real-world document Q&A. This positioning indicates a push into high-value use cases such as extracting answers, summaries, and key takeaways from policies, contracts, reports, and manuals, which are central to many enterprise workflows.
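Document-grounded Q&A of this kind generally means constraining a model's answer to supplied text rather than its general knowledge. As a minimal, hypothetical sketch of the pattern (the endpoint name, workspace URL, and helper function below are illustrative assumptions, not details from the post), a developer might prompt a served model like this:

```python
# Hedged sketch: grounded document Q&A against a model serving endpoint.
# The endpoint name "databricks-gemini-3-1-pro" and the workspace URL are
# hypothetical placeholders, not confirmed identifiers.

def build_grounded_prompt(document_text: str, question: str) -> list[dict]:
    """Build a chat-style message list that grounds the answer in the document."""
    system = (
        "Answer only from the document below. "
        "If the answer is not in the document, say you cannot find it."
    )
    user = f"Document:\n{document_text}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


if __name__ == "__main__":
    # Databricks serving endpoints expose an OpenAI-compatible chat API,
    # so a request could look like this (requires a real token and URL):
    #
    # from openai import OpenAI
    # client = OpenAI(
    #     api_key="<DATABRICKS_TOKEN>",
    #     base_url="https://<workspace>/serving-endpoints",
    # )
    # resp = client.chat.completions.create(
    #     model="databricks-gemini-3-1-pro",  # hypothetical endpoint name
    #     messages=build_grounded_prompt(pdf_text, "What is the refund policy?"),
    # )
    # print(resp.choices[0].message.content)
    pass
```

The grounding instruction in the system message is what ties answers to the source document, which is the behavior the OfficeQA benchmark is described as measuring.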
In the post, Databricks positions itself as the only platform currently offering access to Gemini, Anthropic’s Claude, and OpenAI models in a single environment. For investors, this multi-model strategy may reinforce Databricks’ role as a neutral, interoperable AI infrastructure layer, potentially improving customer stickiness and cross-sell opportunities within its data and AI ecosystem.
The move also appears to deepen Databricks’ integration with major foundation model providers, which could enhance its competitive position versus rival data platforms and cloud-native AI services. If enterprises adopt these capabilities at scale for document intelligence and other GenAI use cases, the expanded model catalog could support higher consumption of Databricks’ compute and platform services over time.

