The Brief You Didn’t Know You Needed: AI in a Small Legal Practice

The UK’s small law firms are in an odd position with AI. The technology is genuinely useful, the pressure to adopt it is real, and the regulatory consequences of getting it wrong are severe in a way they are not in most other small businesses. A PR firm that botches an AI rollout loses some productivity. A solicitors’ practice that does the same risks referral to the Solicitors Regulation Authority (SRA), wasted costs orders, and clients whose matters have been prejudiced by fabricated case law appearing in their submissions.

None of this means small firms should ignore AI. It means they should understand what the risks actually are before deciding which ones to accept.

What happened in the Ayinde case

The clearest illustration of what goes wrong appeared in the High Court in June 2025. Two cases were heard together under what is known as the Hamid jurisdiction, the court’s inherent power to enforce the duties lawyers owe to the court and to refer abuses of its procedures for scrutiny. In the first, Ayinde v London Borough of Haringey, a barrister had submitted grounds for judicial review that cited five cases which did not exist. When asked to produce them, she could not. The court stopped short of a formal finding that AI had been used, but noted that the descriptions of the non-existent cases read exactly like model hallucinations, and that the barrister had been unable to offer any other coherent explanation. In the second case, Al-Haroun v Qatar National Bank, 45 authorities were cited; 18 of them did not exist. One fabricated decision was attributed to the very judge hearing the matter.

The barrister in Ayinde received a wasted costs order. Both practitioners were referred to their regulators. Dame Victoria Sharp, President of the King’s Bench Division, was direct: generative AI tools built on large language models are not capable of conducting reliable legal research. They produce confident outputs that may be entirely wrong. They cite sources that do not exist. They quote passages from real cases that do not appear in those cases.

This is not a problem confined to careless or inexperienced practitioners. The barrister in Ayinde said she had inadvertently relied on AI-generated responses surfaced through Google searches during her research, a workflow that most practitioners would recognise and that would not obviously announce itself as dangerous.

The regulatory position

The SRA has not, as of early 2026, issued substantive guidance specifically addressing how the duty of competence applies to AI tools; an August 2025 commentary in Legal Futures described this regulatory silence as unsustainable. The Law Society’s guidance, Generative AI: the essentials, published in November 2023, remains the primary reference document, and it is worth reading carefully.

The core regulatory position is straightforward: the SRA Code of Conduct applies regardless of what technology you used to produce the work. Principle 7 — act in the best interests of each client — does not acquire an AI exemption. Paragraph 3.2 requires maintaining competence and professional knowledge. Paragraph 3.5 requires effective supervision of work done for clients. An AI tool is not a supervised person; it is a tool you are responsible for. If it produces something wrong and you submit it, the error is yours.

The SRA has said it expects COLPs — Compliance Officers for Legal Practice — to take responsibility for regulatory compliance when new technology is introduced. For many small practices, the COLP is the senior partner. The practical implication is that adopting AI without a policy and without defined oversight is a compliance gap, not just an operational choice.

Where the data risk sits

Most solicitors understand client confidentiality in its traditional form. The AI version of that risk is less intuitive.

Consumer AI tools — the free or low-cost tier of ChatGPT, Gemini, and others — have, at various points, used conversation data to improve their models by default. The position varies by product and changes with terms updates, but the general principle holds: when you paste client information into a consumer tool, you are sending that data to a third-party server in a jurisdiction you have not assessed, under terms you may not have read, with retention policies you have not verified. That is a UK GDPR problem before you have even considered whether the output was accurate.

The Society for Computers & Law noted in June 2025 that attorney-client privilege can be waived when communications are shared with third-party AI providers. Some commentators have warned that privilege could potentially be challenged if confidential material is shared with AI providers.

The practical answer is not to avoid AI; it is to use tools where you have read the data processing terms and confirmed that your data is not used for training, where the processing location is known and documented, and where you have a data processing agreement in place with the provider if required. Enterprise tiers of most major tools offer these conditions. Consumer tiers often do not.

Where AI is actually useful in a small legal practice

The distinction that matters most is between AI generating legal content and AI handling administrative or structural work around legal content.

Research drafts and case summaries produced by a general AI model need to be verified against actual sources before anything reaches a client or a court. This is not optional, and it is not a light touch: it means checking that the cases cited exist, that they say what the summary claims they say, and that they have not been subsequently overturned. Purpose-built legal research platforms restricted to verified sources (Lexis+ AI and Thomson Reuters CoCounsel are examples, though neither is cheap) carry a lower risk of fabricated citations than general models, but even these require verification. The court’s position in Ayinde was explicit: no AI tool can be used as a shield from professional accountability.

Where AI carries substantially less risk is in drafting work that does not involve legal citation and that will be reviewed before it leaves the firm. Client care letters, first drafts of standard-form correspondence, meeting notes turned into file notes, and boilerplate clauses in documents where the solicitor reviews and edits the output are all workable use cases. The human review step is not a formality — it is the point at which the solicitor takes professional responsibility for the content — but the risk of hallucinated case law destroying a client’s position simply does not apply when you are drafting a completion notice.

Administrative tasks carry the least risk and offer some of the most consistent time savings: scheduling summaries, internal document organisation, drafting staff communications, producing first-pass responses to standard enquiries that a solicitor then personalises and approves. None of this involves confidential client data if done with discipline, and none of it reaches a court.

The policy question

Another risk small firms sometimes overlook is the one created by trying to prohibit AI entirely. In practice, blanket bans often lead to fee earners quietly using consumer tools outside firm systems to save time. That creates a different kind of exposure: no oversight, no audit trail, and no control over where client information ends up. From a compliance perspective, unmanaged “shadow AI” can be riskier than controlled, policy-driven use.

A small legal practice that wants to use AI without unnecessary exposure needs to answer four questions, in writing, before any fee earner uses any AI tool on a matter:

Which tools are approved, and on what basis? That basis should include confirmation of where data is processed, whether conversation data is used for training, and what the contractual position is.

What categories of work may AI be used on? A blanket approval is not a policy. Drafting correspondence is different from conducting legal research is different from producing documents for court.

Who reviews AI output before it goes anywhere — to a client, to the other side, to a court — and what does that review consist of? “Someone checks it” is not an answer.

What do clients know? Informing clients that AI tools are used in their matter is not currently a hard regulatory requirement in England and Wales, but the Law Society’s guidance treats transparency as consistent with acting in a client’s best interests. Some clients will have concerns, particularly around confidentiality. Updating the client care letter to address this in general terms costs very little.
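
By way of illustration only, written answers to these four questions for a hypothetical small firm might look something like the sketch below. The tool names are deliberately left as placeholders, and the specific rules are assumptions for the example rather than recommendations.

   AI use policy (excerpt), [firm name], reviewed quarterly by the COLP

   Approved tools: [tool name], enterprise tier only. Data processing terms reviewed on
   [date]; use of firm data for model training disabled; processing location documented;
   data processing agreement in place.

   Permitted work: first drafts of client care letters and standard correspondence;
   meeting notes converted into file notes; internal communications and administrative
   summaries. Not permitted: legal research on any general-purpose model; any document
   filed with a court without source verification; entering client-identifying
   information into unapproved tools.

   Review: the fee earner responsible for the matter reviews and edits every
   AI-assisted output before it leaves the firm; anything citing authority is checked
   against primary sources by [named reviewer].

   Clients: the client care letter describes the firm’s use of AI in general terms;
   any client concerns are recorded on the matter file.

Every element of the sketch maps onto one of the four questions; the point is that each answer is specific enough to be audited, not that these particular rules are right for every practice.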

The practical bottom line

AI is not going away from legal practice, and as noted above, attempting to prohibit it outright tends to drive use underground rather than eliminate it. The Ayinde cases are useful precisely because they are concrete: they show what happens when a practitioner uses AI for legal research and submits the output without verification. A wasted costs order. A referral to the regulator. Public judicial criticism. A client’s judicial review prejudiced by citations that did not exist.

The technology is not the problem. The workflow — using it as though it were a verified legal database when it is nothing of the kind — is the problem. Fixing the workflow is not complicated. It requires a policy, defined review steps, appropriate tool selection for the task, and a clear-eyed understanding of where the professional responsibility actually sits.

It sits where it always has.


Note: The regulatory position described here is current as of early 2026. How the duty of competence applies to AI remains an active area, and further specific SRA guidance is expected. The Law Society’s guidance on generative AI (November 2023) and its updated materials remain the primary reference point for practitioners.