The Impact of AI on Litigation
The integration of artificial intelligence into the legal landscape is transforming how parties prepare, manage and argue cases. As the courts begin to grapple with AI-generated submissions, solicitors and legal practitioners must strike a balance between embracing new technology and maintaining accountability and professional responsibility.
At present there is no formal legislation in the UK governing the use of artificial intelligence, and there remains a risk that improper or unsupervised use of AI tools could expose legal practitioners to professional negligence claims. It is therefore essential that practitioners adhere to the relevant guidance issued by regulatory bodies and exercise appropriate oversight when implementing AI tools.
Law Society Guidance on the use of AI in Legal Practice
The Law Society has issued tailored guidance to help solicitors understand how to use artificial intelligence responsibly and mitigate associated risks. Some of the key takeaways from the guidance are:
- Accountability remains with the solicitor: The use of generative AI does not absolve solicitors of their duty to verify the accuracy of any content produced. Solicitors remain fully responsible for all work, including work assisted by AI.
- Accuracy in legal documents: Solicitors are professionally accountable for the factual accuracy of documents such as witness statements and expert reports.
- Consequences of misuse: Improper use of AI, such as citing fictitious cases or failing to verify AI-generated content, can have serious consequences, including regulatory referrals and contempt of court proceedings.
- Verification and supervision: All AI-generated content must be carefully reviewed. Citations should be checked for authenticity, and AI-assisted work must be appropriately supervised.
- Professional obligations: Solicitors must ensure their use of AI aligns with the SRA Code of Conduct. Presenting inaccurate information due to AI misuse constitutes a breach of professional standards.
- Clear communication: Firms should educate staff on the capabilities, limitations, and risks of AI tools, and ensure clients are informed when AI is used in their matters, where appropriate.
Courts and Tribunals Judicial Guidance on AI
In April 2025, the Courts and Tribunals Judiciary also published guidance on the use of AI by legal professionals and court staff. The key takeaways from their guidance are:
- AI is not authoritative: Output from AI tools can reflect non-UK legal systems (e.g. US law) and must be verified and interrogated for relevance and accuracy.
- Team awareness: Open discussion within legal teams is encouraged to manage risks and ensure responsible use.
- Understanding the tools: Before using AI, legal professionals should have a basic understanding of its capabilities, limitations, and potential risks.
- Confidentiality: Private or sensitive information must never be entered into public AI platforms or chatbots.
- Responsibility for submissions: All legal representatives are responsible for the material they put before the court/tribunal.
Key cases
The Ayinde case, in which judgment was handed down earlier in 2025, has drawn significant attention across the legal sector for its implications regarding the misuse of generative AI in legal drafting. In brief, the barrister instructed by the claimant’s solicitors cited five non-existent cases in submissions for the claimant. When questioned about those cases by the defendant’s solicitors, the claimant’s solicitors failed to provide any justification. The case serves as a stark reminder of a solicitor’s non-delegable professional duty to check the accuracy of AI-generated content against authoritative sources.
In handing down judgment in the Ayinde case, Dame Victoria Sharp P stated at paragraphs 7 and 8:
7. “Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).”
8. “This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.”
You can read more about this case in our comment dated July 2025, which is available here.
The Future of AI
The future role of artificial intelligence in litigation is expected to be transformative, and its use in the legal sphere can only be expected to grow.
AI is already proving particularly helpful in streamlining phases such as e-discovery and disclosure, acting as a useful tool for managing large volumes of electronic data during legal proceedings and reducing the time and costs associated with manual review. With the above guidance in mind, lawyers should therefore be receptive to the use of AI at work (to the extent appropriate).
However, the growing use of artificial intelligence will inevitably generate more AI-related disputes in the coming years. We anticipate that such disputes will concern data protection and privacy, intellectual property and copyright, cybersecurity, and discrimination.
For now, AI is here to stay and, used well, should deliver efficiency and cost savings for parties. However, when deploying artificial intelligence tools, human oversight should be applied to interrogate and verify the accuracy of AI-generated content. Further, within commercial organisations, regular training and transparency amongst colleagues regarding the use of AI and internal policies can help manage its risks more effectively.
Webinar
In November 2025, Joanna Wilkinson and Ross Condie presented a webinar on AI and Litigation and how AI is moulding the future of disputes; the webinar can be viewed by following this link.
