Responsible AI and ethics
Responsible AI systems have significant potential to bring about positive changes in the business landscape globally. What it means for AI systems to behave in a responsible and ethical way is a discussion engaging governments, regulators and businesses worldwide.
Governmental and non-governmental organizations such as the US National Institute of Standards and Technology (NIST), the OECD, the G7 and the EU have created complimentary and overlapping frameworks with regards to using AI responsibly. In the UK, the Information Commissioner’s Office (ICO) summary of principles says that AI should be (i) transparent; (ii) accountable; (iii) considerate of the context in which it is operating; and (iv) reflective of the impact of an AI system on both individuals and society.
The private sector has developed similar types of advice. For instance, Google’s responsible AI guide offers a practice guide for fairness, interpretability, privacy and safety and security. And the ISO recently released a 42001 standard for AI management systems that builds on the NIST and OECD frameworks.
The key point of consensus among these various organizations is that any ethical use of AI begins with a governance and risk framework that is established within an organization in order to align outputs with ethics, control for issues that may arise and provide oversight and corrective action for any problems arising.
Quick links
Governance
The governance of AI systems plays a pivotal role in shaping the responsible and ethical deployment of artificial intelligence. AI governance refers to the guardrails that ensure AI tools and systems are safe, ethical, and aligned with societal values. It establishes frameworks, rules, and standards to direct AI investment, development, and use.
AI governance starts with the board of directors, who must understand how AI is used by the company and its competitors. They evaluate how AI may disrupt the business and industry, considering implications for strategy and risk. The board also assesses the impact of AI on the workforce and other stakeholders. And directors must oversee AI-related policies, controls, and internal systems that are delegated to management, emphasizing the importance of legal compliance, ethics, and risk management. Senior management collaborates with the board to execute AI strategies, align practices with values, and manage risks and opportunities.
AI governance provides a structured approach to mitigating the risks associated with human biases and errors in AI training and use. Regular assessments ensure that machine learning algorithms remain accurate and perform as intended, while preventing flawed or harmful decisions.
In summary, AI governance is essential for achieving compliance, trust and efficiency in the development and deployment of AI technologies, while safeguarding against unintended negative consequences.
Key legal risks / issues
1. Unintended bias in AI system's outputs that cause human harm.
2. Derogatory and harmful outputs from chatbots.
3. Misinformation that harms a company's reputation.
Actions for consideration
1. Training of board of directors and senior management in the benefits and risks of AI deployment.
2. Adopting of an AI Policy that outlines the company’s commitment to the ethical use of AI technologies.
3. Frequent independent assessments to ensure that the AI systems perform as intended and without causing harm, with results reported through the AI governance framework.
Transparency and explainability
Transparency in the AI context means that firstly, users should be aware that they are working with an AI tool and secondly, that users are aware of the nature of the AI system as well as its limitations. It is important for users to understand the types of information processed by an AI system so that users can make informed decisions and challenge any responses that may not be suitable to them in their situation.
Explainability means that users and individuals impacted by an AI outcome are aware generally as to how an AI has arrived at that outcome. The objective is to ensure that users are analytical and critical about the nature of the information they have received from an AI tool so that they are able to make informed decisions about AI outcomes.
Proponents believe that fostering transparency and explainability in AI models is the best way to create long-term trust in the technology.
Key legal risks / issues
1. Lawsuits from external users who rely on an AI system claiming they were deceived and unaware that the outcome was produced by an AI system.
2. Proprietary data claims against a company when internal users rely on AI outputs that are in breach of data licensing agreements.
3. Regulatory charges that an AI system had failed to adequately explain its decision, when the AI system had become so complex that explaining an outcome process was no longer possible.
Actions for consideration
1. Create clear policies with regards to the transparent use of AI systems including day-to-day operations as well as public-facing systems.
2. Analyze Terms and Conditions documentation of vendor AI systems to obtain a clear understanding of the specific risks, controls and workings of the AI system and ensure explainable outcomes.
3. Design and implement internal team education and communication strategies for any client-facing roles that regularly utilize AI systems.
Safety and security
Any risk associated with the use of an AI system should be mitigated to the greatest extent possible. AI systems should prioritize human safety and avoid causing harm.
There are a range of safety concerns that might arise. AI improperly deployed without safeguards may be modelled from training data that is unsuitable for its intended use or from data that has been infected with malicious code. Datasets can become stale or outdated and without maintenance lead to data or concept drift. Pre-trained models may not scale adequately when faced with new and challenging real-world situations.
From a cybersecurity perspective, the full range of AI system compromises from a malicious actor are not yet known. Vulnerabilities exist to vectors such as prompt engineering or prompt injection attacks that can lead to security breaches.
It is important for the organization to have a dynamic understanding of the emerging safety and security risks of AI systems so as to anticipate and mitigate any harmful consequences to stakeholders.
Key legal risks / issues
1. Lawsuit for damages when the inappropriate outputs of an AI system are not flagged or stopped in a timely manner, causing financial loss to stakeholders.
2. Government enforcement action when AI outputs use confidential, sensitive data from training models, causing a data privacy incident.
3. Reputational and financial damage when an AI system, using a ‘deepfake’ dataset, defrauds company personnel into giving malicious actors access to customer financial accounts.
Actions for consideration
1. Put strong security and safety policies in place and formally audit the implementation of those policies at regular intervals, including red teaming of significant AI systems.
2. Review and update incident reporting and vulnerability management programs on a regular basis.
3. Conduct regular review and oversight of security controls by a senior leader with authority to implement changes and who is accountable for gaps.
Bias, discrimination and fairness
As AI technologies become increasingly pervasive, ethical and legal concerns about bias and fairness have come to the forefront. AI relies on trained algorithms that analyze vast amounts of historical data. However, if not properly managed, this training data can introduce biases into the AI system, potentially leading to unintended discriminatory outcomes. When AI systems are used in decision-making, they may perpetuate existing inequalities and unfairly impact individuals. For instance, biased algorithms could inadvertently favor certain demographics and disfavor others. As companies embrace AI tools, it becomes increasingly important to proactively assess fairness and mitigate bias, both in the AI system and how it is being used to inform decision-making. In many jurisdictions such as in the United States, anti-discrimination laws are being deployed by regulatory authorities to combat algorithmic bias. In Europe, regulators are identifying high-risk AI applications and mandating continuous risk management systems including impact assessments and transparency requirements that aim to evaluate potential biases in AI systems and ensure accountability in its application. Where AI decision-making significantly affects individuals, such as financially, regulators are also requiring human oversight in the process, to reduce the risk of bias and unfairness from automated systems.
Key legal risks / issues
1. Regulatory enforcement action alleging violation of antidiscrimination laws that could lead to significant fines and requirements to disgorge the offending training data and algorithms
2. Civil lawsuit by affected individuals alleging bias, which may take the form of a class action complaint
3. Declining stock value, damage to reputation and consumer backlash
Actions for consideration
1. Identify AI Systems that have the potential to produce biased results
2. Conduct an AI Impact Assessment of the technology to identify legal irks
3. Test the outputs of the AI Systems for compliance with anti-discrimination laws, regulations, bulletins and guidance
4. Monitor its ongoing operation and implement human oversight measures as appropriate
5. Ensure compliance with transparency requirements, including making users aware as necessary
Accountability
Accountability in AI systems context involves both documenting internal behaviors and evidencing organizational choices. An accountable person should be able to justify and explain these choices to senior management, the board or even shareholders. Accountability outcomes will be measurable, trackable and recorded in a format that can be provided to a regulatory authority and auditor if the need arises.
Without suitable oversight, any system can be a target for exploitation or negative outcomes. Organizations that run AI systems need this senior accountability individual to own the responsibility behind AI governance. Accountability ownership also means that an AI systems governance teams will have an advocate for the internal resources needed to effect suitable compliance and a successful policy.
An accountability role ensures that governance structures are in place to provide guidelines for transparency, explainability, fairness, security and safety – as well as the performance of internal audits if required – to make sure the goals of creating responsible AI systems are met.
Key legal risks / issues
1. Lawsuits for damages when tampering of AI training data occurs because no senior individual is responsible for the safety and security of the AI system; that lack of process ownership within the organization creates a weakness that can be exploited and result in harms to the organization.
2. Liability for data breach and loss of sensitive data from an AI system that can occur if senior management fails to allocate responsibility for AI system security to a responsible individual in a timely manner.
3. Class action lawsuits from users claiming that a maliciously engineered prompt had been entered into the company’s internal AI system that led to harmful outcomes for them.
Actions for consideration
1. Appoint a Chief AI Officer who is responsible for creating and maintaining accountable AI systems enterprise-wide.
2. Create a robust internal and external audit mechanism for responsible AI systems.
3. Report data metrics of the AI system publicly and validate them frequently.
4. Identify indemnity insurance that will cover losses resulting from harmful AI outcomes.
Related contacts
Nasser Ali Khasawneh
Global Head of AI E: nasseralikhasawneh@eversheds-sutherland.com T: +97 1 43 89 70 03 View profile
Mary Jane Wilson-Bilik
US Lead, Artificial Intelligence in Financial Services E: mjwilson-bilik@eversheds-sutherland.com T: +1 202 383 0660 View profile
© Eversheds Sutherland. All rights reserved. Eversheds Sutherland is a global provider of legal and other services operating through various separate and distinct legal entities. Eversheds Sutherland is the name and brand under which the members of Eversheds Sutherland Limited (Eversheds Sutherland (International) LLP and Eversheds Sutherland (US) LLP) and their respective controlled, managed and affiliated firms and the members of Eversheds Sutherland (Europe) Limited (each an "Eversheds Sutherland Entity" and together the "Eversheds Sutherland Entities") provide legal or other services to clients around the world. Eversheds Sutherland Entities are constituted and regulated in accordance with relevant local regulatory and legal requirements and operate in accordance with their locally registered names. The use of the name Eversheds Sutherland, is for description purposes only and does not imply that the Eversheds Sutherland Entities are in a partnership or are part of a global LLP. The responsibility for the provision of services to the client is defined in the terms of engagement between the instructed firm and the client.
Share this page