Our internal computer systems, or those of our customers, collaborators or other contractors, third-party service providers and vendors may be subject to cyber-attacks, compromises or security incidents, which could result in a material disruption of our product development programs. Despite the implementation of security measures, our internal computer systems and infrastructure and those of our customers, collaborators, contractors, third-party service providers, vendors or other third parties are vulnerable to damage, compromise or interruption from computer viruses, unauthorized access, misuse, or other security compromises or breaches. Cyber-attacks are increasing in frequency, sophistication and intensity, have become increasingly difficult to detect and may be enhanced or facilitated by AI. Cyber-attacks could include the deployment of harmful malware, ransomware, denial-of-service attacks, wrongful conduct by employees, vendors or other third parties, attacks by hostile foreign governments, industrial espionage, social engineering and business email compromises, and other means to affect service reliability and threaten or compromise the security, confidentiality, integrity and availability of systems and information. Cyber-attacks could also include phishing attempts or e-mail fraud that cause payments or information to be transmitted to an unintended recipient. Further, attempts to disrupt or gain unauthorized access to our and our third-party vendors’ information systems, whether by malicious third parties or insider threats, may incorporate widely varying and frequently changing tactics, which may be enhanced or facilitated by AI. Like other companies in our industry, we and our third-party vendors have in the past experienced threats and security incidents related to our data and systems, and we may in the future experience other threats, compromises, breaches, or incidents.
For example, in February 2026, a third-party service provider notified the Company of a ransomware attack affecting the service provider’s systems, which may have exposed data maintained on Company systems. At this time, the Company does not have sufficient information regarding the scope or impact of the incident to determine whether it is material to the Company. A cyber-attack or security compromise or incident could cause interruptions in our operations and could result in a material disruption of our business operations, damage to our reputation or a loss of revenues. In the ordinary course of our business, we collect and store confidential and/or proprietary information or other sensitive information, including, among other things, personal information about our employees and patients, intellectual property, and proprietary business information. Any cyber-attack or security compromise or incident that leads to unauthorized access, use, disclosure, loss, corruption or other compromise of confidential and/or proprietary information or other sensitive information could harm our reputation, cause us to fail to comply with federal and/or state breach notification laws and foreign law equivalents and otherwise subject us to liability under laws and regulations, including those that protect the privacy and security of personal information. In addition, we could be subject to risks caused by misappropriation, misuse, leakage, falsification or intentional or accidental release or loss of information maintained in the information technology systems, infrastructure, and networks of our company and our vendors, including personal information of our employees and patients, and company and vendor confidential data. In addition, outside parties may attempt to penetrate our systems and infrastructure or those of our vendors, or fraudulently induce our personnel or the personnel of our vendors to disclose sensitive information, in order to gain access to our data and/or systems.
If an incident or compromise of our information technology systems or infrastructure or those of our vendors occurs, the market perception of the effectiveness of our security measures could be harmed and our reputation and credibility could be damaged. We could be required to expend significant amounts of money and other resources to detect, mitigate and respond to these threats, compromises, or breaches and to repair or replace information technology systems, infrastructure or networks, and could suffer financial loss or the loss of valuable confidential and/or proprietary information. In addition, we could be subject to regulatory actions, inquiries, investigations, orders, penalties, fines, and/or claims made by individuals and groups in private litigation, including those involving privacy and security issues related to data collection and use practices and other data privacy and security laws and regulations, including claims for misuse or inappropriate disclosure of data, as well as unfair or deceptive practices. Although we develop and maintain systems and controls designed to prevent these events from occurring, and we have a process designed to identify and mitigate threats, the development and maintenance of these systems, controls and processes is costly and requires ongoing monitoring and updating as technologies change and efforts to overcome security measures become increasingly sophisticated. Moreover, despite our efforts, instances of unauthorized access to our computer systems have occurred in the past, though these events have not resulted in financial loss or disruption to our operations. The possibility of these events occurring in the future cannot be eliminated entirely. There can be no assurance that any measures we take will prevent or adequately address cyber-attacks or security compromises or incidents that could adversely affect our business.
Our contracts may not contain limitations of liability, and even where they do, there can be no assurance that limitations of liability in our contracts are sufficient to protect us from liabilities, damages, or claims related to our privacy and data security obligations. Further, although we maintain cyber liability insurance, this insurance may not provide adequate coverage against potential liabilities related to any experienced cybersecurity incident or data breach. We, our collaborators and our service providers may be subject to a variety of privacy and data protection laws, regulations and contractual obligations, which may require us to incur substantial compliance costs, and any failure or perceived failure by us to comply with them could expose us to fines or other penalties and otherwise harm our business and operations. In the United States, several layers of federal and state data protection laws and regulations may apply to our business, including HIPAA, the Federal Trade Commission (“FTC”) Act and state consumer privacy and health data privacy laws. For example, the California Consumer Privacy Act (“CCPA”) is a comprehensive privacy law that creates new individual privacy rights for California consumers (as defined in the law) and places increased privacy and security obligations on entities handling personal data of consumers or households in California. The CCPA requires covered companies to provide certain disclosures to consumers about their data collection, use and sharing practices, and to provide affected California residents with ways to opt out of certain sales or transfers of personal information. The CCPA went into effect on January 1, 2020, and the California State Attorney General became empowered to commence enforcement actions against violators as of July 1, 2020. Further, as of January 1, 2023, the California Privacy Rights Act created additional obligations with respect to processing and storing personal information.
Similar consumer privacy laws have passed or come into force in numerous U.S. states. Like the CCPA, these laws grant consumers rights in relation to their personal information and impose new obligations on regulated businesses, including, in some instances, broader data security requirements. In addition, federal and state legislators and regulators have signaled their intention to further regulate health and other sensitive information, and new and strengthened requirements relating to this information could impact our business. At the state level, some states have passed or proposed laws to specifically regulate health information. For example, Washington’s My Health My Data Act, which went into effect in March 2024, requires regulated entities to obtain consent to collect health information, grants consumers certain rights, including the right to request deletion, and provides for robust enforcement mechanisms, including enforcement by the Washington State Attorney General and a private right of action for consumer claims. At the federal level, the FTC has used its authority over “unfair or deceptive acts or practices” to impose stringent requirements on the collection and disclosure of sensitive categories of personal information, including health information. Moreover, the FTC’s expanded interpretation of a “breach” under its Health Breach Notification Rule could impose new disclosure obligations that would apply in the event of a qualifying breach. In Europe, data collection is subject to restrictive regulations governing the use, processing, and cross-border transfer of personal information.
The collection and use of personal data, including personal health data, in the European Economic Area (“EEA”) and the UK is governed by the provisions of the EU General Data Protection Regulation (“EU GDPR”) (with regard to the EEA) and the UK General Data Protection Regulation (“UK GDPR”) (with regard to the UK), as well as applicable data protection laws in effect in the member states of the EEA and in the UK (including the UK Data Protection Act 2018). In this Annual Report on Form 10-K, “GDPR” refers to both the EU GDPR and the UK GDPR, unless specified otherwise. The GDPR applies to the processing of personal data by any company established in the EEA/UK and to companies established outside the EEA/UK to the extent they process personal data in connection with the offering of goods or services to data subjects in the EEA/UK or the monitoring of the behavior of data subjects in the EEA/UK. The GDPR imposes a broad range of strict requirements on companies subject to it, including requirements relating to having legal bases or conditions for processing personal data relating to identifiable individuals and for transferring such information outside the EEA/UK, including to the United States, providing details to those individuals regarding the processing of their personal data, implementing safeguards to keep personal data secure, having data processing agreements with third parties who process personal data, responding to individuals’ requests to exercise their rights in respect of their personal data, obtaining, where required, the consent of the individuals to whom the personal data relates, reporting security and privacy breaches involving personal data to the competent national data protection authority and affected individuals, appointing data protection officers, conducting data protection impact assessments, and record-keeping.
In the event of any non-compliance with the GDPR and any supplemental EEA Member State or UK national data protection laws, we could be subject to warning letters, mandatory audits, orders to cease/change the use of data, and financial penalties, including fines of up to €20,000,000 (£17.5 million for the UK GDPR) or 4% of total annual global revenue, whichever is greater. The GDPR also confers a private right of action on data subjects and consumer associations to lodge complaints with supervisory authorities, seek judicial remedies, and obtain compensation for damages resulting from violations of the GDPR. The GDPR imposes strict rules on the transfer of personal data outside of the EEA or the UK to countries that do not ensure an adequate level of protection, such as the United States in certain circumstances, unless adequate safeguards are implemented (such as the European Commission-approved standard contractual clauses (“SCCs”) or the UK International Data Transfer Agreement/Addendum (“UK IDTA”)) and transfer impact assessments are carried out when relying on the SCCs or the UK IDTA. The international transfer obligations under the EU data protection laws will require significant effort and cost and may result in us needing to make strategic considerations around where EEA and UK personal data is transferred and which service providers we can utilize for the processing of EEA and UK personal data. Any inability to transfer personal data from the EEA and UK to the United States in compliance with data protection laws may impede our ability to conduct trials and may adversely affect our business and financial position. Although the UK is regarded as a third country under the EU GDPR, the European Commission (“EC”) issued a decision recognizing the UK as providing adequate protection under the EU GDPR and, therefore, transfers of personal data originating in the EEA to the UK remain unrestricted.
In December 2025, the European Commission adopted a decision to extend the validity of the UK adequacy decision for six years, until December 2031, determining that the UK continues to offer a level of data protection that is “essentially equivalent” to EU standards. This follows the UK’s adoption of the Data (Use and Access) Act 2025 (the “DUAA”) on June 19, 2025. Like the EU GDPR, the UK GDPR restricts personal data transfers outside the UK to countries not regarded by the UK as providing adequate protection. The UK government has confirmed that personal data transfers from the UK to the EEA remain free flowing. The UK’s data protection regime is independent from but aligned to the EU’s data protection regime. However, following the UK’s exit (“Brexit”) from the European Union (“EU”), there is increasing scope for divergence in the application, interpretation and enforcement of data protection laws between these territories. For example, the DUAA may have the effect of further reducing the alignment between the UK and EEA data protection regimes, which may lead to additional compliance costs and could increase our overall risk. This lack of clarity on future UK laws and regulations and their interaction with EU laws and regulations could add legal risk, complexity and cost to our handling of European personal data and our privacy and data security compliance programs, and could require us to implement different compliance measures for the UK and the EEA. Compliance with the GDPR will be a rigorous and time-intensive process that may increase our cost of doing business or require us to change our business practices, and despite those efforts, there is a risk that we may be subject to fines and penalties, litigation, and reputational harm in connection with any European and UK-based activities.
Issues relating to the use of artificial intelligence and machine learning could adversely affect our business and operating results.
While AI and machine learning present opportunities for enhanced productivity and innovation, they also introduce cybersecurity, data privacy, IT, intellectual property, regulatory, legal, operational, competitive, reputational and other risks that could adversely impact our business and reputation. Specifically, risks related to bias, AI hallucinations, discrimination, harmful content, misinformation, fraud, scams, targeted attacks, surveillance, data leakage, inequality, environmental harms, and other harms may flow from our development, use, or deployment of AI technologies. Further, the use of certain AI technology can give rise to intellectual property risks, including compromises to proprietary intellectual property and intellectual property infringement. The evolving regulatory landscape surrounding AI also poses a risk, as new laws and regulations could impose additional compliance burdens, resulting in increased operational costs to comply with U.S. and non-U.S. laws concerning the use of AI. We expect to see increasing regulation related to AI use and ethics, which may also significantly increase the burden and cost of research, development and compliance in this area. For example, the EU’s Artificial Intelligence Act (“AI Act”) entered into force on August 1, 2024, and is expected to be amended by proposals introduced in the EU’s November 2025 Digital Omnibus. As enacted, the AI Act imposes significant obligations on providers and deployers of high-risk AI systems and encourages providers and deployers of AI systems to account for EU ethical principles in their development and use of these systems. Likewise, in the United States, the regulatory environment is complex and uncertain. Over the past year, states have advanced, and in some cases passed, dozens of laws focusing on AI governance and regulation, including on the deployment of AI in healthcare settings.
At the federal level, the Trump Administration has endorsed a federal moratorium on the enforcement of state AI laws, including through a December 11, 2025, executive order on “Ensuring a National Policy Framework for Artificial Intelligence.” So far, these efforts have not been successful at curtailing state action on AI regulation, contributing to a complicated legislative patchwork, which may be litigated in state and federal courts. Various federal and state regulators have also issued guidance and focused enforcement efforts on the use of AI in regulated sectors, such as healthcare. The FDA, for example, has issued guidance on the use of AI in medical devices, requiring detailed risk management and review processes to obtain approvals. If we develop or use AI systems that are governed by these laws or regulations, we will need to meet higher standards of data quality, transparency, and human oversight, as well as adhere to specific and potentially burdensome and costly ethical, accountability, and administrative requirements. We may also be subject to significant enforcement or litigation in the event of any perceived non-compliance. We are committed to implementing governance and control mechanisms to mitigate these risks, but there can be no assurance that such measures will adequately prevent or mitigate the adverse effects that the integration and use of AI may have on our business, financial condition, and results of operations. To the extent that AI is integrated into our products and services, the rapid evolution of AI may require the application of significant resources to design, develop, test and maintain our products and services to help ensure that AI is implemented in accordance with applicable law and regulation and in a socially responsible manner and to minimize any real or perceived unintended harmful impacts.
Our vendors may in turn incorporate AI tools into their offerings, and the providers of these AI tools may not meet existing or rapidly evolving regulatory or industry standards, including with respect to privacy and data security. Further, bad actors around the world use increasingly sophisticated methods, including the use of AI, to engage in illegal activities involving the theft and misuse of personal information, confidential information and intellectual property. In addition, the use of generative AI models in our internal or third-party systems may create new attack surfaces or methods for adversaries, which could impact us and our vendors. The integration of AI systems, by us or by our vendors, may increase cybersecurity risk. Any of these effects could damage our reputation, result in the loss of valuable property and information, cause us to breach applicable laws and regulations, and adversely impact our business.