19th May 2025 | Paul Marmor | International, AI, Law
This article has been written by humans to explore the effect of AI on the legal profession.
The only part we asked AI to rewrite was the heading, and ChatGPT came back with something a little more dramatic: “The AI reckoning: global legal battles, regulatory chaos, and the high-stakes transformation of the legal profession.” Having jazzed up the title, AI then offered us a more dramatic sub-title: “As artificial intelligence disrupts borders and courtrooms alike, the future of law hangs in the balance” – gulp!
That is a bit of a bleak perspective, so our team have given this some thought and carried out a review of some of the leading jurisdictions, namely the USA, China, Europe and, of course, the UK, where we are based. This is what we have come up with.
“Our byline: from novelty to norm in society …”
AI has moved from novelty to norm in society, playing a central role in the everyday lives of many people. Its use has increased rapidly, with weekly users now estimated at well over 500 million. Commercially, it is no different: AI has the potential to reshape legal practice as we know it, and many professionals are already embracing it in some shape or form.
The 2024 Thomson Reuters Future of Professionals Report found that 85% of those surveyed believe that AI will create new roles and demand new skills. This stands in contrast to the common alarmist narrative that AI will render humans and their jobs obsolete. Significantly, 72% of legal professionals surveyed view AI as a force for good. The report also predicted that AI could save lawyers as many as four hours a week, creating more room for business development and for chargeable work that remains beyond the reach of this rapidly developing technology.
However, as with any revolution, technological or otherwise, AI brings considerable new risks. As its global user base has expanded, the need for regulation has never been more pressing. Particularly where professional and legal responsibility is at stake, a framework must be in place to ensure transparent and accountable use of AI, and that framework must be flexible enough to evolve at the same rate as the technology. Clear and proactive regulation is needed to ensure AI is indeed a force for good in the legal profession, rather than a destabilising one. So, how have regulators begun to address these emerging issues?
The United States
The approach to AI regulation in the US is centred on privacy. Several states, such as California and Colorado, have enacted AI regulations primarily focused on this point. At the federal level, there is no comprehensive legislation: in January 2025, President Trump revoked President Biden’s Executive Order on AI, with the stated aims of removing barriers to innovation, encouraging competition in the economy and making the US the world capital of AI. There is also a proposal from a federal judicial panel to regulate AI-generated evidence in trials, which aims to tackle concerns about the reliability of AI in drawing inferences from data, much as expert witness testimony is scrutinised. The proposal has been received positively so far, but it remains to be seen whether it will ultimately be implemented.
This patchwork creates a fragmented landscape, especially for companies operating across multiple jurisdictions. AI is markedly less regulated in the US than in some of its counterparts, creating wider scope for misuse and ethical harm. Trump’s Executive Order 14179 does, however, provide for an AI Action Plan intended to secure ‘America’s global AI dominance’, and tech leaders such as Google and OpenAI have already submitted their proposals for the plan.
China
China’s regulation of AI has focused on content control and national security, and its framework for governing AI is one of the tightest, if not the tightest, in the world. Rather than addressing systems by risk, China is introducing ‘Measures for the Management of Generative Artificial Intelligence Services’, which focus on specific applications. Coming into effect in September 2025, the rules specify that AI-generated content must be labelled and that algorithms must be registered with the authorities. The effects on the legal profession are likely to become visible in the future, with the Supreme People’s Court aiming to prioritise AI protections in 2025 following the rise in intellectual property disputes caused by AI-generated content in China. Like the US, China has announced its ambition to be a global AI leader by 2030, albeit under far more stringent regulation than the US currently applies.
The European Union
The EU AI Act entered into force in August 2024. First proposed by the European Commission in 2021, the Act introduces a risk-based tier system, imposing compliance obligations on what it deems ‘high-risk’ AI systems. The aim of the Act was to impose restrictions mainly on developers rather than users of these systems, while leaving room for innovation – although AI developers have criticised it for hampering the deployment of their systems. The Act has also drawn concern over potential loopholes and over how its regulations will be enforced. A dedicated European AI Office oversees enforcement in collaboration with national market surveillance authorities. Although the Act mainly affects developers, legal professionals using high-risk AI systems should take note: deployers of such systems, i.e. the firms employing them, also face restrictions and will have to comply with the Act’s requirements.
The United Kingdom
In January 2025, Prime Minister Keir Starmer announced the AI Opportunities Action Plan. Part of the plan involves streamlining regulation while taking a pro-innovation stance and working with the private sector. The approach in the UK appears to sit somewhere between the EU and the US: prioritising innovation like the US, but also highlighting the importance of regulation. The UK government has displayed a more cautious approach to regulation than some other leading nations, demonstrated by its refusal to sign the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the AI Action Summit in February 2025. Despite the statement being endorsed by over 60 other countries, the UK government cited national security concerns as a barrier to endorsement. The UK is clearly striving to remain globally competitive; however, its reluctance to implement comprehensive legislation or join international declarations risks regulatory fragmentation, and the lack of enforceable safeguards could leave ethical and judicial issues inadequately addressed.
The Legal Profession and Professional Indemnity (PI)
What, then, do lawyers and those working in or around the legal profession need to be aware of when navigating current or impending regulations? One clear concern is the tension between professional use of AI and lawyers’ ethical duties. Professional regulatory bodies advise that AI-generated work must be checked in order to maintain confidentiality and accuracy; AI should not be responsible for giving clients legal advice, and lawyers remain responsible for the outputs of any AI tools they use. A lawyer who relies on faulty AI output is exposed, at the very least, to negligence claims or professional complaints. The well-known case of Steven Schwartz is a prime early example. Schwartz, a US attorney, used AI to prepare a brief that cited six fictitious court decisions, and he and his firm faced serious sanctions from the US District Court for the Southern District of New York as a result. More recently in the UK, a barrister and her instructing solicitors were both fined after the barrister included multiple fake cases in her pleading. Although the judge was hesitant to attribute the error to AI, he noted that it would have been negligent if AI had been used and its output not checked.
Cases such as these are likely to become more prevalent as the use of AI in the legal profession increases. Both law firms and their insurers will have to review their professional indemnity insurance policies to assess how the capabilities of AI, and of the employees using it, could affect them. In England and Wales, the Solicitors Regulation Authority (SRA) requires firms’ PI policies to comply with its ‘Minimum Terms and Conditions’, which cover ‘civil liability’. Under this wording, if an error arises from AI analysis and the firm is liable, the liability should be covered even though AI is not explicitly named. The SRA’s standards and terms do not currently address the use of AI tools, and AI exclusions are not widely seen in PI policies at present. The effect is that if an error occurs, particularly through a lack of human oversight of AI use, it may still fall within the scope of coverage. Insurers would nonetheless be wise to consider AI exclusions in future, both to avoid unintended coverage and to spare themselves the lengthy process of examining the policy and the parties’ actions after an AI-related error. Although uncommon now, more specific wording and exclusions are likely to creep into policies as AI use continues to rise.
Written by Amanda Newman and edited by Paul Marmor, Sherrards Solicitors LLP.
For more information about the deployment of AI within the legal profession and generally, please reach out to Tom Clark, Amanda Newman and Max Marmor, who are part of Sherrards’ AI group: +44 (0) 207 478 9010.