In a recent LinkedIn post, OpenOrigins draws attention to growing legal risks associated with deepfakes and synthetic media. The post references commentary from the company's Head of Legal in an IBT Media article and outlines concerns spanning fraud, impersonation, harassment, and copyright.
The LinkedIn post highlights how AI-generated voices and videos could be used to authorize payments, manipulate markets, or impersonate executives, creating material financial and reputational risks for enterprises. It also notes issues around non‑consensual content, training data for AI models, and the use of identity and likeness without clear ownership or control.
The post suggests that OpenOrigins sees an opportunity to address these challenges by "anchoring media at the point of creation" and making provenance publicly verifiable. For investors, this positioning signals a focus on compliance-driven, risk-mitigation technology that may gain relevance as regulators, financial institutions, and content platforms seek safeguards against synthetic media abuse.
If OpenOrigins can demonstrate robust verification tools that integrate into existing media and payment workflows, it could benefit from increased demand in sectors exposed to fraud and brand‑integrity risks. However, the scale of commercial impact will depend on adoption rates, evolving legal standards, and competition from other authenticity and watermarking solutions in the digital media security market.