We use, and plan to expand the use of, artificial intelligence and machine learning in our services with the goal of improving efficiency and reducing costs. AI initiatives are complex, costly and uncertain. They may not achieve anticipated performance, accuracy, efficiency or financial outcomes, and could adversely affect our business, results of operations and financial condition.
Model performance, explainability, and fairness. AI outputs may be inaccurate, incomplete, biased, or otherwise unreliable, which could contribute to errors in claims handling support, misassignment of work, missed fraud signals, or inaccurate data provided to clients. Where AI is used in HR functions, errors or bias could lead to employment claims. Failure to document model design, training data, testing, monitoring, and human oversight could impede explainability and remediation, invite client disputes, or attract adverse regulatory scrutiny.
Data governance, privacy, and security. AI systems rely on large volumes of data, including personal, sensitive, and client-confidential information. Inadequate data governance (including provenance, minimization, quality, retention, access controls, and vendor controls) may result in privacy or data protection violations, cross-border transfer non-compliance, or contractual breaches. Use of third-party AI tools can increase the risk of data leakage, prompt injection, model exfiltration, and other cyber threats. Any such incident could result in service disruption, regulatory investigations, litigation, fines, contractual liability, and reputational harm.
Emerging and evolving AI regulation. Multiple jurisdictions have adopted or proposed AI-specific frameworks that may classify certain uses as higher risk and impose obligations such as risk management systems, testing and validation, technical documentation, transparency notices, impact/risk assessments, incident reporting, human oversight, and vendor/processor due diligence. We may also face requirements under general consumer protection, unfair practices, anti-discrimination, and sectoral rules (including scrutiny of AI used in claims processes by insurance market stakeholders). Divergent or rapidly changing obligations can increase compliance costs, require product or process changes, delay deployments, and affect cross-border operations.
Contractual and client assurance pressures. Clients increasingly require AI-related representations, warranties, audit rights, transparency, impact assessments, and indemnities. Failure to meet client standards, or disagreements over the scope of AI use, data rights, performance, bias, or audit findings, could lead to lost business, disputes, or liabilities.
If any of these risks materialize, we could face investigations, enforcement actions, fines or penalties; private litigation; contractual liability to clients and partners; increased compliance and assurance costs; constraints on data use or model deployment; and loss of customers or market reputation. Compliance with new or differing AI requirements across jurisdictions could be costly and complex, and we may be unable to adapt our systems, processes, and vendor arrangements in a timely manner.