30.04.2024

Uber Eats compensates driver after its AI facial recognition tool discriminated against him

The development of new technologies, including those powered by AI and machine learning, is transforming the world of work. Automation has replaced people in some sectors, and technology is also being used to make decisions about staff.

Background 

Mr Manjang works as a delivery driver for the online food delivery service Uber Eats. He was required to use its app when he was available for work, and the app occasionally asked him to send “selfies” to register for jobs. However, after the company switched to the Microsoft-powered Uber Eats app, Mr Manjang received repeated requests to verify his identity. If he did not pass these checks, he could not access the platform or obtain any work.

Mr Manjang was removed from the platform due to “continued mismatches” in the photos he submitted to the facial recognition system. Every image he submitted was of himself, and there were no obvious changes to his appearance. Despite this, the facial recognition AI consistently failed to recognise him.

Mr Manjang believed that he had not been recognised because he was black and that the system discriminated against him. He was not given any opportunity to challenge his suspension and brought legal proceedings against the company.  

Legal Claim

Mr Manjang argued that he had been discriminated against on the grounds of his race and had been harassed and victimised. He was supported by the Equality and Human Rights Commission (EHRC) and the App Drivers and Couriers Union, who shared Mr Manjang’s concerns about how the use of AI and automation could lead to the permanent suspension of a driver’s access to the app and, consequently, to their livelihood.

Baroness Kishwer Falkner, the chairwoman of the EHRC, highlighted the importance of transparency and openness when implementing AI systems, stating that Mr Manjang should not have had to commence legal action to understand the process that affected his access to work.

Mr Manjang settled with Uber before the final hearing. He was reinstated and continues to work for the company in Oxfordshire. He said the out-of-court settlement marked the end of a "long and difficult" period for him, that his case "shines a spotlight" on the potential problems with AI, particularly for "low-paid workers in the gig economy", and that he hoped the decision would help strengthen the "rights and protections of workers in relation to AI, particularly ethnic minorities".

What we can learn 

AI systems, whilst seemingly neutral, can embed unintended biases that stem from the data used to train them. For instance, consider facial recognition: if an AI system learns from a dataset primarily composed of white, middle-aged male faces, it may struggle to accurately identify young non-white women. Microsoft has previously admitted that its facial-recognition software works less well for people belonging to ethnic minorities.
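To make the mechanism concrete, the toy simulation below is purely illustrative: the groups, features and labels are invented, and it is not based on any real facial recognition product. It trains a simple classifier on data dominated by one group and then checks its accuracy on each group separately; the underrepresented group typically fares noticeably worse.

```python
# Illustrative sketch only: invented data showing how a model trained on an
# imbalanced dataset can perform worse for the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Toy "face embeddings" plus a binary label for one demographic group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > shift * 5).astype(int)
    return X, y

# Training data: 95% group A, 5% group B (the underrepresented group).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, balanced samples from each group; accuracy on group B
# is usually markedly lower because the model has mostly learned group A.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```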

This kind of bias is not limited to facial recognition and can be found in a wide range of AI systems.

The problem for employers is that they won't know whether their shiny new AI system has inherent biases until they start to use it. You should test any new system before rolling it out, but you may still not be able to spot every problem. If you are going to introduce technology with limited human oversight, you must make sure that your staff can raise issues, which you will then need to investigate and deal with.
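One practical way to test a verification system before (and after) rollout is to compare false rejection rates across demographic groups. The sketch below is a minimal illustration only, assuming you can export verification attempts tagged with an outcome and a self-reported group; the field names and figures are invented.

```python
# Minimal sketch: per-group false rejection rates from exported attempt records.
from collections import defaultdict

def false_rejection_rates(attempts):
    # attempts: iterable of dicts like
    # {"group": "A", "genuine": True, "accepted": False}  (hypothetical fields)
    rejected = defaultdict(int)
    genuine = defaultdict(int)
    for a in attempts:
        if a["genuine"]:                 # only count genuine users
            genuine[a["group"]] += 1
            if not a["accepted"]:
                rejected[a["group"]] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

sample = [
    {"group": "A", "genuine": True, "accepted": True},
    {"group": "A", "genuine": True, "accepted": True},
    {"group": "B", "genuine": True, "accepted": False},
    {"group": "B", "genuine": True, "accepted": True},
]
# A large gap between groups is a signal to investigate before relying on the system.
print(false_rejection_rates(sample))
```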

Our newsletters

We publish monthly employment newsletters. If you'd like to be added to the mailing list, please let me know. 
