30.03.2022

Computer says no: unpicking the employment risks of AI

Last week several newspapers ran a story about three make-up artists who had been 'dismissed by algorithm' during a redundancy exercise. They sued their former employer and received an out-of-court settlement. I was intrigued. Had Estee Lauder effectively outsourced its decision making to a machine to determine which employees to retain or dismiss? Well yes, but mainly no. It had used AI (specifically facial recognition technology) to interview the women but, according to Estee Lauder, this only accounted for 1% of its decision making. The rest of the process was conducted by a human being.

We consider the legal implications for employers using AI to make significant decisions about their staff.  

Background

The development of new technologies, including those powered by AI and machine learning, is transforming the world of work. Automation has started to replace people in some sectors (particularly in manufacturing and retail). But it's not just about machines performing work that used to require physical labour - technology is also being used to make decisions about employees. Last year, the TUC published an important report into the use of AI, Technology managing people: the worker experience. It concluded that there had been:

'a quiet technological revolution taking place in how people are managed at work. AI-powered tools are now used at all stages of the employment relationship, from recruitment to line management to dismissal. This includes the use of algorithms to make decisions about people at work.'

According to a report prepared for the TUC, Technology Managing People – the legal implications, by September 2020 over 40% of enterprises across the EU27, Norway, Iceland and the UK had adopted at least one AI-powered technology, and a quarter had adopted at least two. A further 18% had plans to adopt AI in the next two years. The pandemic has accelerated the use of new technology, and it's likely that these figures underestimate the current number of businesses using, or planning to adopt, AI.

These systems appeal because they can perform some tasks much more quickly than humans. For example, in the employment context, employers can use AI to weed out unsuitable candidates. The programme scans CVs for keywords and identifies the candidates who have the essential qualities required for the job, creating a smaller pool of applications to be reviewed for shortlisting.
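To make the mechanics concrete, here is a minimal Python sketch of that kind of keyword screen. The keywords, threshold and function name are invented for illustration; real screening products are more sophisticated, but the filtering logic is similar in kind.

    # Illustrative keyword-based CV screen (hypothetical, not any vendor's product)
    ESSENTIAL_KEYWORDS = {"python", "sql", "stakeholder management"}  # assumed criteria

    def shortlist(cvs: dict, threshold: int = 2) -> list:
        """Return candidates whose CV mentions at least `threshold` keywords."""
        selected = []
        for candidate, text in cvs.items():
            text_lower = text.lower()
            hits = sum(1 for kw in ESSENTIAL_KEYWORDS if kw in text_lower)
            if hits >= threshold:
                selected.append(candidate)
        return selected

    cvs = {
        "Candidate A": "Python and SQL developer with stakeholder management experience",
        "Candidate B": "Self-taught programmer with database and reporting skills",
    }
    print(shortlist(cvs))  # ['Candidate A']

Note that Candidate B may have the same underlying skills but is filtered out for describing them in different words: a small-scale example of how an apparently neutral rule can produce contestable outcomes.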

However, AI is not confined to recruitment and is increasingly being used in a wider HR context to allocate shifts and holiday dates, to generate disciplinary or absence reviews and to train staff. There are also AI programmes which give instructions to staff, allocate work and monitor their productivity.

Legal issues

The key components driving most AI/machine learning applications are data input, data processing and the predictive models generated by machine learning. The instructions to the machine come from algorithms, and the machine can be programmed to learn and make decisions based on new data and to predict how someone is likely to behave. Each individual component of an AI system (the data, the predictive models, the decision rules and the resulting action) has relevance for workers who are being managed by AI.
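The deliberately simplified Python sketch below shows how those components fit together: input data, a predictive model, a decision rule and a resulting action. Every name, weight and threshold is an invented assumption; in a real system the weights would be learned from historical data rather than hard-coded.

    # Minimal sketch of the components named above; all values are illustrative
    from dataclasses import dataclass

    @dataclass
    class WorkerRecord:            # the data
        tasks_per_hour: float
        absences: int

    def predicted_performance(w: WorkerRecord) -> float:   # the predictive model
        # Stand-in for a trained model: a simple weighted score
        return 0.8 * w.tasks_per_hour - 0.5 * w.absences

    def decide(w: WorkerRecord) -> str:                    # the decision rule
        return "flag for review" if predicted_performance(w) < 5 else "no action"

    print(decide(WorkerRecord(tasks_per_hour=6.0, absences=3)))  # the resulting action

Laying the system out this way shows why each component matters: the data may be incomplete, the learned weights may encode bias, and the threshold that triggers action is itself a policy choice.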

There's a perception that a decision reached by a machine is untainted by human biases and is therefore completely logical and objective. But that's not necessarily the case. These new forms of technology can create discrimination as well as replicate and embed existing discrimination through the machine learning process. Robin Allen QC and Dee Masters argue that:    

'It is obvious that the AI system an employer uses will be the product of many different actors: those who wrote the initial code for the first algorithm, those who wrote the machine-learning instructions, those who supplied the initial data on which the system was trained, and so on, down to those who determined that the system should be bought and used and those who then applied its outcomes. So, we can see that discrimination might be perpetrated by an employer in circumstances where a different organisation had created the discrimination in the first place.'

A good example of this is the algorithm Amazon used to recruit staff. It used data submitted by applicants over a 10-year period to determine who was the ideal candidate. The problem was that most of the data it relied on came from male applicants. And because the data was skewed in favour of men, the machine taught itself that male candidates were preferable.
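The mechanism is easy to reproduce in miniature. The toy Python model below, trained on a small invented dataset loosely modelled on the Amazon example, scores CV words according to whether they appeared in past hires or past rejections. Because the historical hires are skewed towards men, a word like "women's" acquires a negative score even though gender is never an input.

    # Toy model of bias inherited from skewed training data (invented data)
    from collections import Counter

    # Historical CVs and outcomes: most past hires were men, so words that
    # appear mainly in women's CVs co-occur with rejection
    history = [
        ("captain men's football team", True),
        ("men's debating society", True),
        ("chess club member", True),
        ("women's coding society", False),
        ("women's chess club captain", False),
    ]

    hired_words, rejected_words = Counter(), Counter()
    for text, hired in history:
        (hired_words if hired else rejected_words).update(text.split())

    def score(cv: str) -> int:
        # Words seen in past hires add a point; words seen in past
        # rejections subtract one - the model "learns" the skew
        return sum(hired_words[w] - rejected_words[w] for w in cv.split())

    print(score("women's chess club captain"))  # -2: penalised
    print(score("men's chess club captain"))    # 2: favoured

The model never sees the applicant's gender; it penalises vocabulary that happens to correlate with it, which is exactly the kind of inherited discrimination described above.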

Liability

An employer is legally responsible for the decisions it makes about its staff, even if these are generated by AI. And it's not a defence to say it didn't know that a particular programme was discriminatory or reached an unfair outcome.

Discrimination claims

In the context of discrimination claims, although the Equality Act 2010 was drafted without AI in mind, it does provide a framework for candidates and employees to bring claims. 

It's arguable that the algorithms and databases used to train machine learning systems and reach fully or semi-automated decisions can amount to a 'provision, criterion or practice', as required to support a claim of indirect discrimination. Even if an employer can establish that it had a legitimate aim in using AI (which probably won't be too difficult), it also has to show that using it in the way it did was proportionate and that it couldn't have achieved the same result in a less discriminatory way. That's likely to be a more difficult hurdle to clear. It's also possible that, if the employer can't explain how a decision has been reached, a tribunal might infer discrimination from the lack of transparency within the AI-powered process itself.

Unfair dismissal

There's also a risk that employees who are dismissed as a result of computer-generated decisions will be able to bring unfair dismissal claims. Ordinarily, an employee can ask questions or seek to challenge a human-made decision to understand whether it is rational and fair. That's much harder where decisions have been made by AI. The TUC report found that managers often didn't understand why certain decisions had been made and were, therefore, unable to explain them properly.

Employees have a right to be treated fairly, both in terms of the process adopted and the outcome reached. That means that employers need to be able to explain their decisions, give the employee a right of reply and usually allow them to appeal. If the rationale behind the decision is not transparent, employees will be able to argue that their treatment is unfair or unlawful.

Trust and confidence

There's also a wider point. The relationship between an employee and employer is based on trust and confidence. It's difficult to have a personal relationship with a member of staff if most of the decisions made about them are automated. An extreme example is gig economy workers, who may be managed almost entirely by an app and have little personal interaction with the company they work for.

We know from the TUC report that many workers are suspicious about AI and fear that it is being deployed in ways they haven't agreed to and don't properly understand. If they don't trust the processes their employers are using to manage them, they are likely to see any decision that adversely affects them as unfair. And, if their employer can't reassure them that the process is both rational and fair (by explaining how the computer reached the decision it did), they are likely to complain. In serious cases, it could also undermine the relationship to such an extent that the employee resigns and claims constructive unfair dismissal.

Privacy and data protection

There are obvious privacy and data protection issues that employers also need to consider. The main issue is that, under Article 22 of the UK GDPR, solely automated decision making (i.e. decisions made by machine without any meaningful human involvement) which has a legal or similarly significant effect on the individual is only permitted in very limited circumstances. Employers therefore need to be careful not to rely solely on decisions made by computer. If they intend to rely on automated decisions, they need to bring themselves within the scope of what is permitted.

Employers also need to carry out a data protection impact assessment to identify the privacy risks and demonstrate that they have considered how to address them. As with any use of personal data, employers should be transparent: they should have clear privacy notices and should ensure that they are meeting the accuracy, retention and data minimisation requirements of the UK GDPR.

The Information Commissioner's Office is focusing on the increased use of machine learning and AI and has updated its guidance, Big data, artificial intelligence, machine learning and data protection.

Acas recommendations

In March 2020, Acas published a research paper, My boss the algorithm: an ethical look at algorithms in the workplace, which makes a number of recommendations (some of which are directed at government, rather than employers):

  1. Algorithms should be used to advise and work alongside human line managers but not to replace them. A human manager should always have final responsibility for any workplace decisions.
  2. Employers should understand clearly the problem they are trying to solve and consider alternative options before adopting an algorithmic management approach. 
  3. Line managers need to be trained in how to understand algorithms and how to use an ever-increasing amount of data.
  4. Algorithms should never be used to mask intentional discrimination by managers. 
  5. There needs to be greater transparency for employees (and prospective employees) about when algorithms are being used and how they can be challenged, particularly in recruitment, task allocation and performance management.
  6. We need agreed standards on the ethical use of algorithms around bias, fairness, surveillance and accuracy. 
  7. Early communication and consultation between employers and employees are the best way to ensure new technology is well implemented and improves workplace outcomes. 
  8. Company reporting on equality and diversity, such as around the gender pay gap, should include information on any use of relevant algorithms in recruitment or pay decisions and how they are programmed to minimise biases. 
  9. The benefits of algorithms at work should be shared with workers as well as employers. 
  10. We need a wider debate about the likely winners and losers in the use of all forms of technology at work. 

Need help?

This is a very tricky area. Please speak to one of our employment law experts to find out how we can help you.