Regulation of AI in the Workplace

Introduction

The Department of Enterprise, Trade and Employment this week opened a public consultation on Ireland’s implementation of the EU Artificial Intelligence Act (the AI Act), which will, among other things, regulate certain uses of artificial intelligence in employment settings. The AI Act will apply on a phased basis over a three-year period once it enters into force, which is anticipated to happen in June.

Artificial intelligence has entered the mainstream over the past 18 months or so, with generative AI programs such as ChatGPT, Google Gemini and others very much to the fore. It has swiftly become apparent that AI technology is capable of much more than drawing up an itinerary for your upcoming holiday, or finally answering the question that has troubled humankind for centuries – whether the chicken or the egg came first (spoiler alert: the egg, apparently). More and more companies are recognising, and harnessing, the benefits of using AI in the workplace, in particular the potential efficiencies it can deliver across HR processes and functions.

Of course, as with any technology that ultimately impacts on individuals in an employment setting, AI cannot exist in a vacuum, and organisations that wish to integrate AI into their HR systems need to be mindful of the various ways in which its use is regulated.


The AI Act

The forthcoming AI Act is a significant piece of legislation and is set to be the first-ever legal framework on AI, addressing the risks that come with the technology.

The AI Act adopts a risk-based approach to the regulation of AI systems, with the aim that the accompanying obligations will be targeted, proportionate and commensurate with the level of risk inherent in the intended end use of the system.

The category of unacceptable-risk AI systems, namely those that would pose a clear threat to the safety and fundamental rights of individuals, will be largely prohibited. Low- and minimal-risk systems, comprising AI programs such as chatbots and email spam filters, will be subject to limited or no regulation.

The category of high-risk AI systems is extensive, incorporating AI systems used across such disparate areas as medical devices, critical infrastructure, migration and border control management, and the employment and management of workers. It is this latter area that employers need to be cognisant of.

Organisations that develop AI systems for use in recruitment, for example to sort through high volumes of CVs or to match candidates to job specifications, or as an aid in performance management, will be subject to strict obligations before such systems can be deployed.

Those compliance duties will include obligations dealing with:

  • Record Keeping
  • Transparency
  • Human Oversight
  • Technical Documentation
  • Risk Management
  • Data Governance
  • Activity Logs

A key question for companies will then be whether they are considered providers of a high-risk AI system, with responsibility for ensuring the accompanying compliance obligations are met, or merely deployers. An organisation that develops a high-risk AI system, or engages an external party to develop a system on its behalf, will likely be classed as a provider. An organisation that licenses an existing system, on the other hand, will likely be classified as a deployer.

The AI Act also regulates general-purpose AI models, the category of AI that includes well-known programs such as ChatGPT. General-purpose AI models, which can perform and be adapted to countless different tasks, are increasingly becoming components of AI systems. The AI Act will introduce transparency obligations for general-purpose AI models, in addition to further risk management obligations for the most powerful and impactful models.


Algorithmic Management and the Digital Gig Economy

Alongside, and distinct from, the AI Act, organisations operating in the digital gig economy also need to be mindful of the EU Platform Work Directive (the Directive). The draft Directive seeks to enhance the employment rights of “Platform Workers” engaged by digital platforms or websites that provide products and services in the gig economy, including, for example, taxi services, elder care, translation services and food delivery.

Platform companies that use algorithms for human resource management, such as automated decision making or the monitoring of workers, will have new obligations under the Directive. Concise information on the types and functions of the systems in use will need to be provided to workers from their first day of work. Platform companies will not be permitted to process certain types of data gathered from workers, including:

  • Data related to a worker’s emotional or psychological state;
  • Data which could be used to predict actual or potential trade union activity;
  • Data which could be used to infer certain protected characteristics of a worker (e.g. sexual orientation, religious beliefs);
  • Biometric data; or
  • Data relating to private conversations.

Monitoring of automated systems will be required, and “human oversight” of “significant decisions” must be provided.


Automated Decision Making and the Right to Privacy

Employers should also be aware that Article 22 GDPR gives data subjects the right, with limited exceptions, not to be subject to fully automated decision making which has a legal or similarly significant effect on them. As such, organisations using an AI system as part of an automated recruitment or performance management process should retain an element of human oversight and judgment throughout.

Employees also have a right to privacy under Article 8 of the European Convention on Human Rights, and the European Court of Human Rights has held in case law that employee surveillance can infringe that right. There can often be valid business reasons underpinning a certain degree of workplace monitoring, for example to ensure compliance with working time obligations. Where AI is used to enable more technologically sophisticated methods of monitoring, however, organisations need to be careful not to infringe employees’ right to privacy.


Bias and Discrimination

A criticism periodically levelled at AI programs centres on the way such systems learn from the underlying data sets on which they base their decisions. Flawed data can result in algorithms that repeatedly produce errors or unfair outcomes, or that amplify the bias inherent in that data.

In turn, that bias can lead to work-related discrimination. A company in the United States was last year forced to agree to a settlement of hundreds of thousands of dollars after a job applicant discovered that the company’s AI hiring tool was discriminating against older applicants, rejecting applications from women over 55 and men over 60. The applicant in question had re-applied using a fake younger age, and the application was accepted.

In Ireland, the Employment Equality Acts prohibit employment-related discrimination, whether direct or indirect, on the basis of nine protected grounds, including age, disability, race, membership of the Traveller community and religious belief. When using AI as a tool in recruitment, or across other HR functions, it is important to ensure that outputs are verified and that compliance with employment equality legislation is maintained.

Harry Wall, Associate Legal Director, Ibec