Employment
The use of AI systems in employment is increasing rapidly, and this trend is expected to continue. For example, AI is transforming employee support services, such as automating HR helpdesks and administration. Employers are incorporating AI into recruitment, performance management and promotion processes to screen candidates, monitor employees, schedule work and analyze performance data, among other uses. Workers are also using AI in their jobs, sometimes without their employers' knowledge or consent.
While AI can improve the workplace and benefit both employees and employers, it is a growing area of sensitivity for workers and trade unions, reflecting concerns about fairness, privacy, security, accuracy and transparency in its use. AI's impact on jobs and the future of work is another area of focus as its workplace capabilities grow.
In response, some jurisdictions and regulators are classifying certain workplace AI systems as high risk, subjecting them to stricter requirements and governance. For example, the EU AI Act deems AI in the employment context high risk where it is used:
- in the recruitment or selection of candidates, including screening and evaluating candidates or placing targeted job adverts
- to make decisions affecting terms and conditions, promotion and termination
- to allocate tasks based on individual behavior, or personal traits or characteristics
- to monitor and evaluate workers’ performance and behavior
In addition, trade unions and works councils in many jurisdictions are demanding that workers be consulted on how AI is used in the workplace. A right to workforce information and/or consultation on AI is already a feature of some new regulatory frameworks, including in the EU, and is expected to appear in others. Depending on the jurisdiction, existing information and consultation obligations may also apply to the introduction of workplace AI.
Key legal risks / issues
1. Upholding the security, privacy and protection of workers' data: Workplace AI involves the monitoring, sharing and processing of workers' personal data, potentially including the sharing of sensitive data (such as on health or religion) with third-party AI services. This raises significant risks for the rights and freedoms of workers, as well as data protection compliance challenges for employers. Additional obligations may apply if AI is making solely automated decisions about workers.
2. Protecting workers from discrimination and bias: AI may be perceived as a neutral tool that eliminates human bias, but it can still result in discrimination: for example, where AI is trained on data containing bias, where algorithms reinforce gender or other stereotypes, or where its application has a disparate impact on certain workers, such as disabled workers with reduced access to AI-enabled technology.
3. Supporting employee relations, trust and confidence: If the use of AI is not explained to workers, particularly where it is integral to important decisions on performance, promotion and dismissal, the employer risks a breakdown in employee relations, potentially leading to disputes, strikes and legal claims.
4. Safeguarding the recruitment of talent: AI in recruitment is a growth area for employers. Employers should ensure that they can explain its role in decision-making and the main elements of a decision: first, to confirm that AI is positively supporting the employer's recruitment objectives, and second, to explain the decision to affected candidates. Being prepared to explain AI's role in decision-making reflects the direction of travel in AI regulation, such as the EU AI Act.
5. Ensuring a cohesive approach to AI governance: AI risks share common themes across different functions, departments, sectors, operations and countries. Ensuring that key stakeholders speak the same language, by applying consistent principles, policies and standards, is critical to the successful deployment of AI services by employers.
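The disparate impact described in point 2 above can be quantified. As an illustrative sketch only (not legal advice), one common heuristic is the "four-fifths rule" used in US adverse-impact analysis, which compares selection rates between applicant groups; the group names and figures below are hypothetical:

```python
# Illustrative sketch: screening an AI-assisted shortlisting tool's outcomes
# with the "four-fifths rule" heuristic. All figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Hypothetical shortlisting outcomes for two applicant groups
rate_group_a = selection_rate(selected=50, applicants=100)  # 0.50
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(rate_group_a, rate_group_b)
flagged = ratio < 0.8  # below four-fifths: warrants closer review

print(f"impact ratio: {ratio:.2f}, flagged for review: {flagged}")
```

A ratio below 0.8 does not itself establish discrimination, but it is the kind of signal an employer's audit or vendor due-diligence process might use to prompt a closer review of an AI tool.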
Actions for consideration
To mitigate AI workplace risks, employers should, as a minimum, consider:
1. New and updated policies: An AI policy is becoming more commonplace as employers seek to apply guardrails around the acceptable use of AI at work, covering both open-access and specialist products. For example, a policy may include:
- a description of the acceptable use of AI applications
- a list of authorized, and any prohibited, AI applications
- the consequences of non-compliance
- cross-references to data protection, diversity and other rules and policies
Employers should also consider whether employee representative bodies should be involved in the preparation and rollout of new and updated policies. This is often advisable from an employee relations perspective, and in some organizations and/or countries it will also be a legal requirement.
2. Refreshing contractual terms: Reviewing, and potentially strengthening, employment contractual terms relating to data confidentiality and security, IT security, and intellectual property rights is also recommended to address potential new risks associated with workplace AI.
3. Training needs: AI workplace training may include briefing the organization's leaders on key legal, compliance and reputational risks, as well as training the workforce on a new AI policy and the use of AI tools. Reflecting the diversity of knowledge among those being trained, some employers are adopting pre-training questionnaires to ensure that training is pitched appropriately.
4. Review processes for communicating new AI applications to the workforce: As above, ensuring that the workforce understand any new AI applications that impact them can be integral to good employee relations and help to reduce the risk of disputes. In addition, depending on the jurisdiction and the nature of the AI application being introduced, there may also be a legal requirement to inform and/or consult with workers and/or worker representatives.
5. Reviewing procurement processes: When procuring workplace AI applications, employers should scrutinize vendor assurances on the management of bias, discrimination, accuracy, data protection and other risks, given that employers may be liable in the event of a successful worker claim. This is particularly relevant where new AI regulation proposes new, specific employer liabilities.
6. Auditing workforce risks, including data and discrimination protection: Employers should identify existing AI applications in the workplace and conduct impact and risk assessments, having particular regard to any new AI regulation to ensure existing applications are compliant.
© Eversheds Sutherland. All rights reserved. Eversheds Sutherland is a global provider of legal and other services operating through various separate and distinct legal entities. Eversheds Sutherland is the name and brand under which the members of Eversheds Sutherland Limited (Eversheds Sutherland (International) LLP and Eversheds Sutherland (US) LLP) and their respective controlled, managed and affiliated firms and the members of Eversheds Sutherland (Europe) Limited (each an "Eversheds Sutherland Entity" and together the "Eversheds Sutherland Entities") provide legal or other services to clients around the world. Eversheds Sutherland Entities are constituted and regulated in accordance with relevant local regulatory and legal requirements and operate in accordance with their locally registered names. The use of the name Eversheds Sutherland, is for description purposes only and does not imply that the Eversheds Sutherland Entities are in a partnership or are part of a global LLP. The responsibility for the provision of services to the client is defined in the terms of engagement between the instructed firm and the client.