17.07.2025

AI and Professional Negligence: Lessons from Ayinde

In 2025, the integration of artificial intelligence (“AI”) into legal practice has not only become commonplace but is actively being encouraged.

Legal professionals increasingly rely on AI tools for tasks such as streamlining research and enhancing operational efficiency.

Recent High Court decisions, however, in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank ([2025] EWHC 1383) have marked a critical turning point in the legal profession’s relationship with AI, highlighting the risks of misuse, or indeed irresponsible use, and the evolving standards of professional responsibility.

The Ayinde Case

The Ayinde case arose from a judicial review application challenging a local authority’s housing decision. The Claimant’s legal team, comprising a solicitor, a paralegal and a pupil barrister, submitted judicial review grounds that cited five legal authorities. Upon scrutiny, it was established that none of the cited cases existed. The citations were fabricated, and statutory provisions were inaccurately stated; both were products of unverified outputs and ‘hallucinations’ of generative AI tools.

The High Court found that the legal representatives had failed in their fundamental duty to verify the accuracy of their submissions. The judgment emphasised that while AI can be a valuable tool, it must be used with rigorous oversight. It also made clear that uncritical reliance on AI-generated content is capable of amounting to gross negligence and of undermining the integrity of the legal profession.

The judgment specified that this duty rests on lawyers who use AI to conduct research themselves, or who rely on the work of others who have done so: their responsibility is no different from that of a senior or supervising lawyer who relies on the work of a trainee solicitor or a pupil barrister, for example, or on information obtained from a basic internet search.

The judgment further identified serious implications for the proper administration of justice and for public confidence in the judicial system if AI is misused. Practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities and by those responsible for regulating the provision of legal services.

The judgment further specified that “those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using [AI]”. 

The Al-Haroun Case

In Al-Haroun the Claimant sought nearly £89.4 million in damages for alleged breaches of a financing agreement. His legal submissions included 45 citations, 18 of which referred to non-existent cases. Other citations were misquoted or irrelevant to the case in point. The Claimant admitted to using generative AI tools for legal research, believing the outputs to be accurate. 

The High Court heard both cases under its “Hamid” jurisdiction, which allows it to regulate its own proceedings and uphold lawyers’ duties to the court. While contempt proceedings were ultimately not initiated in either case, the court made clear that its decision should not be seen as a precedent and that the profession can expect the court to ensure leadership responsibilities are fulfilled in future cases. The individuals involved were instead referred to their professional regulators (the Bar Standards Board and the Solicitors Regulation Authority) for further investigation.

Legal and Professional Implications 

The decisions in both Ayinde and Al-Haroun reinforce the principle that legal professionals owe a dual duty of care to both their client and to the court. This duty of care includes an obligation to ensure that all legal submissions are accurate, properly sourced and based on genuine authorities. The use of AI does not diminish this obligation. Rather, it heightens the need for vigilance as AI tools can still produce plausible but entirely fictitious content. 

The court’s findings in Ayinde and Al-Haroun suggest that failure to verify AI-generated material may constitute a breach of the standard of care expected of a reasonably competent legal professional, potentially giving rise to a claim in professional negligence.

Further, the court warned that lawyers who fail to meet their professional obligations when using AI risk severe sanctions, which could include: 

  • Referral for criminal investigation 
  • Contempt proceedings
  • Referral to a regulator 
  • Striking out and adverse costs orders 
  • Admonishment 

These potential penalties reflect the seriousness with which the judiciary views the misuse of AI in legal practice. 

Lessons for Legal Practitioners

The Ayinde judgment highlights several key takeaways for legal professionals navigating the new and evolving AI era. 

  • Verification is non-negotiable: AI tools must not be used as a shortcut for legal research. Anything produced by an AI tool must be cross-checked against authoritative sources before being relied upon and filed at court. 
  • Ethical Oversight: Firms and chambers must implement training and supervision protocols to ensure responsible use of AI, especially if AI outputs will be incorporated into day-to-day practices. 
  • Professional Accountability: Courts will hold individuals accountable for the misuse of AI, regardless of intent. 
  • Regulatory Scrutiny is Increasing: Legal professionals and the practices which they represent may face disciplinary proceedings, negligence claims, or reputational damage for careless and irresponsible use of AI. 

The Ayinde and Al-Haroun cases mark a pivotal moment in the legal profession’s adaptation to AI. They serve as a stark reminder that while technology can enhance efficiency, it cannot replace the human element behind the profession. As the legal landscape evolves, so too must the standards of diligence and ethical responsibility, especially when the stakes involve both clients and the broader administration of justice. 

The key takeaway is that legal professionals cannot abdicate responsibility to modern technology such as AI; the duty to verify remains firmly on the human professional involved.