AI and Age Assessments: A remedy or reinforcement of the Home Office’s troubled age assessment record?


In 2024, over 4,000 unaccompanied children sought asylum in the UK. These children should be afforded the same legal protections as any other ‘child in need’. But, for individuals whose age is in dispute, arbitrary age assessments can result in significant numbers of children being misclassified as adults – often exposing them to serious safeguarding risks and undermining their legal protections.

23.03.2026

Following the Immigration White Paper (May 2025) setting out the UK Government’s intention to explore different scientific and technological methods in asylum age assessments, Dame Angela Eagle (then Minister of State for Border Security and Asylum) announced plans on 22 July 2025 to introduce AI into the asylum system, as an age verification tool where the age of individuals is uncertain or disputed. This could be rolled out later this year.

Background and legal framework

Age assessments are conducted to evaluate the age of an individual claiming asylum. When the Home Office does not accept that an individual is a child, an initial assessment will be conducted. This may result in a decision to treat the individual as an adult, or as a child. If there remains uncertainty, they should be treated as a child pending further consideration, which may include a more robust assessment conducted by the local authority.

Assessing age is a difficult and imprecise endeavour. However, the outcomes of age assessments have profound implications for an individual’s rights, as under the Children Act 1989, children are afforded significant legal protections. In particular:

  • Section 17 imposes a general duty on local authorities to safeguard and promote the welfare of children in need (including unaccompanied asylum-seeking children); and
  • Section 20 places a duty on local authorities to provide accommodation to children in need in their area.

Children incorrectly assessed as adults can lose access to those vital legal provisions. Instead of receiving the care and protection they need, they are sometimes exposed to isolation and re-traumatisation in environments intended for adults, where they are also vulnerable to exploitation. A report by the Independent Chief Inspector of Borders and Immigration (ICIBI), published in July 2025, documented alarming safeguarding concerns, including cases where children wrongly assessed as adults were forced to share rooms with unrelated male adults in asylum accommodation, detained in Immigration Removal Centres, and subjected to abuse.

Data suggests that between January 2022 and June 2023, more than 1,300 unaccompanied children were initially misclassified by the Home Office as adults. In the same period, the Humans for Rights Network recorded 832 safeguarding cases of individuals strongly believed to be children sharing accommodation with an unrelated adult.

While the ICIBI acknowledged the inherent challenges of accurately assessing age in the absence of a “foolproof test”, it is evident that the current system is often failing vulnerable children. However, it does not automatically follow that AI offers a safe and reliable alternative. 

The Role of AI in age assessments

The Government plans to roll out Facial Age Estimation technology in age assessments, relying on AI trained on millions of images of faces. This was described by Dame Eagle as the “most cost-effective option”, offering results at a “fraction of the cost” of other methods such as bone x-rays and MRI scans. Whilst some may welcome the shift away from assessments being made on the basis of somewhat nebulous concepts of physical appearance and demeanour, the introduction of AI into the asylum system raises a host of other concerns - not least whether speed and cost-efficiency are worth the risk of compromising the rights and safety of children.

Dame Eagle has described Facial Age Estimation as “able to produce an age estimate with a known degree of accuracy.” However, AI systems are only as reliable as the data they are trained on. AI decision-making risks reflecting and reinforcing any biases present in its training data, particularly in relation to ethnicity, gender, and age-related facial characteristics. This risk is compounded by the opacity of machine decision-making, often described as the ‘black box’, which could hinder accountability in the asylum process. In a system where the consequences are profound, the margin for error must be exceptionally narrow, and individuals must be able to scrutinise decisions in order to challenge them effectively.

Human rights and AI inquiry

In response to the growing role of AI, in July 2025 the Joint Committee on Human Rights announced an inquiry into Human Rights and the Regulation of Artificial Intelligence. Chair of the Committee, Lord David Alton, stated:

“AI is set to transform the world we live in. We need to make sure the human rights landscape is prepared for it. The development of AI won’t wait for regulation to meet its potential size and scope. That is why we have launched this inquiry – to understand the risks and rewards that AI poses for human rights and work out if greater protections are needed.”

This inquiry has recently heard oral evidence, and its findings could be a key development within the landscape of AI and human rights. However, in the meantime, introducing AI into the asylum system, potentially before fully exploring some of the possible human rights implications, raises serious ethical concerns - particularly when the technology is being trialled on unaccompanied children, one of the most vulnerable groups in society. In such a high-stakes context, where errors can have life-altering consequences, the question remains: is the asylum system truly an appropriate testing ground for this? 

Conclusion

NGOs and children’s rights advocates have warned against the proposed use of AI age estimation, emphasising instead that age assessments should be conducted by independent, trained social workers using Merton-compliant principles (meaning that they should take a holistic approach and adhere to established guidance set out in case law). Despite the existing issues and the intrinsic complexities of age determination in the asylum context, many would argue that Merton-compliant assessments are currently a safer alternative to AI – one that (at least in theory) ought to promote fairness and transparency through a human-led process.

It is not yet clear what level of accuracy can be expected of the facial age estimation technology on asylum seeking children. Concerns have been raised as to whether software could accurately estimate ages for children of different ethnicities who have experienced displacement and trauma. The potential for racial bias, in particular, is a cause for serious concern. Human Rights Watch have also queried how AI tools would take into account factors such as malnutrition and sleep deprivation which can profoundly affect a child’s appearance. At present, further operational details about the proposed use of age verification software, including its possible interface with human-led assessment, remain to be seen. 

Ultimately, it is clear that the Government needs to tread carefully and, above all, maintain a commitment to protecting, not eroding, children’s rights. AI presents both risk and opportunity, and balancing the two is a delicate act. As the Government moves forward, it must ask a host of difficult questions, and since the use of AI in this context is uncharted and complex territory, it should not expect straightforward answers.

Find out more about Irwin Mitchell's expertise in upholding a person's human rights at our dedicated protecting your rights section.
