In a recent LinkedIn post, Dataiku highlights data privacy as a primary obstacle to scaling generative AI in large enterprises. The post introduces Kiji Privacy Proxy™, described as an open-source privacy layer designed to let organizations use external AI models without exposing personally identifiable information.
The post suggests that Kiji Privacy Proxy works by detecting and substituting sensitive data before it leaves the organization, sending anonymized requests to external AI models, and then re-associating the original data in the returned responses. The company also emphasizes that this approach is intended to fit into existing workflows without requiring changes to prompts or processes.
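As a rough illustration of that substitute-then-reassociate flow: the post does not detail Kiji Privacy Proxy's actual detection rules or mapping mechanism, so the regex-based email detection and placeholder scheme below are purely illustrative assumptions, not the product's implementation.

```python
import re

# Illustrative only: a real privacy proxy would detect many PII types
# (names, IDs, addresses); this sketch handles just email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(prompt: str):
    """Replace each detected email with a placeholder token.

    Returns the masked prompt plus a mapping used to restore the
    original values later; only the masked text leaves the organization.
    """
    mapping = {}

    def _swap(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_swap, prompt), mapping

def reassociate(response: str, mapping: dict) -> str:
    """Restore the original values in the external model's response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

masked, mapping = anonymize("Email alice@example.com about the renewal.")
# The external model only ever sees the masked prompt with "<PII_0>".
model_reply = "Draft sent to <PII_0>."  # stand-in for a real model call
restored = reassociate(model_reply, mapping)
```

Because the substitution happens transparently between the user and the model, this pattern matches the post's claim that prompts and workflows need not change.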
For investors, this focus on privacy-preserving AI infrastructure may indicate Dataiku’s attempt to address a key adoption barrier for enterprise GenAI deployments, particularly in regulated or data-sensitive industries. If Kiji Privacy Proxy gains traction as an open-source standard, it could strengthen Dataiku’s ecosystem influence and support higher platform stickiness, potentially improving its competitive position in the broader AI and data science market.
At the same time, the open-source nature of the tool may limit direct monetization, suggesting its strategic value could lie more in lead generation, brand positioning, and integration depth with enterprise clients than in standalone revenue. Investor attention may therefore center on how effectively Dataiku converts this type of privacy tooling into expanded platform usage, larger contracts, or higher retention over time.

