Disputes, regulatory enforcement and product liability
AI is a transformational technology for every organization. It offers numerous use cases, opportunities and benefits, but procuring, designing and deploying a new AI solution raises a complex matrix of legal risks and challenges. The increasingly mainstream use of GenAI adds a further layer of risk beyond ‘traditional’ AI implementations, as does the patchwork of regulatory sanctions coming into force in the near term. If these risks are appreciated and appraised holistically at an early stage, the risk of disputes, investigations and enforcement action can be reduced, and the organization will be better placed and prepared to deal with them if they arise.
Anyone designing, deploying or providing technology solutions will be familiar with the risks, challenges and disputes that can arise during the contract lifecycle, including requirement setting, delays and defects, acceptance criteria and testing, and contract governance. The design and deployment of a traditional AI solution carries similar risks, and we have seen particular issues arise around requirement definition, training datasets, acceptance criteria and testing, performance, IP ownership (IP risks are covered elsewhere in this hub) and contract exit. These issues matter as much for suppliers of AI as they do for customers, and they apply equally to the new applications and opportunities that GenAI presents.
AI is also upending the very foundations of product liability and product safety law. The basic premise of such laws, and of engineering practice, has been that a manufacturer should have a complete understanding of how its product will behave and function before it is placed on the market. AI-driven devices that learn by themselves present a significant challenge to this assumption. Lawmakers are beginning to rework product liability and product safety law to account for products that iteratively self-learn, but all roads lead to greater risk exposure for manufacturers, who find themselves in the vulnerable position of being responsible for devices without knowing entirely how they will behave.
Key legal risks / issues
1. Procurement and development: A lack of definition around requirements and deliverables, assumptions and dependencies, data use and training, and testing and acceptance criteria, together with misunderstandings about agile development, can lead to myriad issues and disputes as the solution is procured, developed and deployed.
2. Deployment, evolution and exit: Measuring performance following go-live can be challenging and, if performance falls short of expectations, so can assessing what went wrong and determining liability within a complex, integrated system. Audit rights and escrow can be additional areas of contention. How the contract adapts to changing or new technology and use cases, and what happens on contract exit, are further risk areas (particularly if not addressed at the outset and throughout the contract lifecycle).
3. Specific GenAI risks: Where GenAI is used in the operation of customer contracts or for internal purposes, there may be risks (for example) in relation to whether its use: (i) is expressly or impliedly prohibited; (ii) complies with specific contractual provisions or general obligations such as ‘reasonable skill and care’ or ‘good industry practice’ (or equivalents); (iii) involves inputs that would constitute misuse of confidential information or trade secrets, or a data security breach; or (iv) produces outputs that, if used incorrectly, could lead to negligent misstatement, misrepresentation, defamation (or equivalents), etc.
4. Employers’ vicarious liability: Employers will likely be vicariously liable for their employees’ use of AI solutions. Relevant inputs and outputs will likely be disclosable/subject to discovery in legal proceedings.
5. Regulatory risk: With increasing regulation of AI globally, there are potential risks in relation to the impact of that regulation on an organization’s operations, products and services, as well as potential regulatory investigations and enforcement action.
Actions for consideration
1. Ensure the contract is clear, including in relation to the issues identified in (1) and (2) above. This extends to the mechanisms and collaboration that allow the contract to remain flexible and adaptable as (for example) the solution is piloted, developed, deployed and changed over the life of the contract, and includes ensuring a clear allocation of risk and liability across those different phases. The dispute resolution mechanism, choice of law and jurisdiction should also provide a swift and certain route to resolving in-flight disputes should they occur.
2. From day one of an AI development, how an organization identifies and deals with issues is critical to success. The scope to resolve issues is greatest the earlier they are identified. However, while everyone is focused on keeping the program on track, issues may be missed, ignored or not addressed as early as they could be. This can lead to further issues, intractable positions, growing losses and a decreasing likelihood of a successful outcome; interventions then come too late, cost too much, prove ineffective or exacerbate the problems. This matters for both suppliers and customers. Best practice contract governance, together with a systematic approach to objective assessment and regular benchmarking that identifies risks and formulates timely interventions, is therefore critical.
3. In relation to regulation, organizations need to actively monitor and consider the impact of incoming regulation and connected codes of practice. A playbook for responding to any investigation and/or proposed regulatory enforcement action should be developed. Lobbying or challenging government should also be actively considered, especially where regulation is being developed at pace.
4. Engineering and quality teams need to tool up to ensure that the product liability and safety risks associated with AI are properly handled at the development stage, that data on product behavior in the field is closely monitored and that adaptations are made where appropriate. Producers have always been responsible in law for the safety of their products. That has not changed, but the challenge is how to gain sufficient assurance of safety where products are actively changing and adapting after they reach the market.
Related contacts
Nasser Ali Khasawneh
Global Head of AI
E: nasseralikhasawneh@eversheds-sutherland.com
T: +971 4 389 7003