Policies we adopt or choose not to adopt, on social and ethical issues, particularly regarding the development, deployment or use of our products, may be viewed as controversial by employees, customers, potential customers, or regulators who have varied, evolving and oftentimes conflicting expectations. These perceptions have in the past impacted, and may in the future impact, our ability to attract or retain employees and customers and may result in negative publicity or reputational harm. Our decisions about whether to conduct business with potential customers, or whether to continue or expand relationships with existing customers, may also impact our stakeholder relationships and reputation. Actions taken by our customers or employees, including through the use or misuse of our products or technologies for unlawful activities, improper information sharing or other harmful purposes, may result in reputational harm, regulatory scrutiny or legal liability. Regulatory frameworks such as the EU Digital Services Act ("DSA"), the EU AI Act and other rapidly evolving and sometimes conflicting global laws and regulations related to AI, privacy and consumer protection could increase compliance costs, restrict features or data flows, delay launches and expose us to penalties or litigation.
We are increasingly building AI into many of our offerings, including generative and agentic AI. As with many technical innovations, AI offerings such as Agentforce and our Agentforce 360 Platform present additional risks and challenges that could affect customer adoption and therefore our business. For example, the development and deployment of AI offerings, including those involving information regarding our customers' customers, raise emerging ethical, legal and operational considerations. If we enable or offer solutions that draw controversy due to their perceived or actual impact on human rights, privacy, employment or other social concerns, we may experience new or heightened governmental or regulatory scrutiny, investigations, enforcement actions, fines or penalties and reputational or competitive harm.
As we develop and deploy AI applications in our products and services, including in customer-facing contexts, the speed, scale and complexity of related data processing may increase. AI applications may be targeted or misused, or may perform in ways that are inaccurate, biased, unreliable or otherwise harmful, or that implicate personal or proprietary data in ways that may be difficult to anticipate, detect or control. Such issues could result in regulatory scrutiny, litigation, contractual disputes, loss of customer trust or reputational harm.
Inadequate or ineffective AI development, testing, deployment, content labeling, governance, monitoring or oversight, whether by us or others, could result in our AI applications not operating as intended or with reduced functionality, reduced acceptance of our products and services or diminished confidence in the decisions, predictions, analysis or other content that our AI applications produce. Such risks may be heightened as AI applications are integrated into a broader range of our products and services. This could subject us to regulatory scrutiny, competitive harm, legal liability and reputational damage.
We may rely in part on third-party models, datasets, cloud infrastructure or other technologies in developing or offering certain AI applications. Our reliance on such third parties exposes us to risks relating to performance, availability, security vulnerabilities, intellectual property claims, licensing restrictions, cost increases or changes in terms of service. If a third-party provider modifies, suspends or terminates access to its models, data or infrastructure, or if such technologies fail to perform as expected, our ability to offer, scale or support certain AI applications may be adversely affected, and we may incur additional costs to mitigate such impacts.
The rapid evolution of AI technologies and related regulatory requirements will require the allocation of significant resources to develop, test, monitor and maintain our products and services in order to comply with applicable laws and regulations and to address concerns relating to accuracy, bias, transparency, explainability, security and data provenance. Uncertainty surrounding new and emerging AI applications, including generative AI content creation and AI agents, may require additional investment in compliance programs, governance frameworks, licensing arrangements, documentation, auditing and risk mitigation processes, which may be complex and costly and could impact our profit margins. Moreover, the expansion of our AI capabilities to content generation, including through Agentforce and other generative AI offerings, introduces additional risks and responsibilities.
AI technologies may produce content that is inaccurate, misleading, biased, offensive or unsafe, and may implicate privacy, security or intellectual property rights, including through the generation of copyrighted or other protected material. If our customers or others rely on such content to their detriment, or if regulators or rights holders assert claims relating to AI-generated content, we may be exposed to litigation, regulatory investigations, enforcement actions, indemnification obligations, monetary penalties or other liabilities, or reputational harm. Developing, testing and deploying AI technologies may also increase the cost profile of our offerings due to the computing costs required. If we are unable to effectively mitigate these risks, or if our mitigation efforts result in excessive costs, our reputation, business, operating results and financial condition may be adversely affected.