As organisations increasingly operate across borders, the regulatory and legal risks surrounding AI are no longer confined to one jurisdiction. Diverging approaches in the UK, EU, US and Asia mean legal teams must assess AI issues through an international lens.
This article looks at how recent cases and regulatory developments are shaping AI governance in the legal and wider professional services sectors.
Getty Images v Stability AI: international implications of the High Court’s AI judgment
Key issues considered by the High Court
On 4 November 2025 the High Court of England and Wales handed down its judgment in the landmark case of Getty Images (US) Inc and others v Stability AI Limited [2025] EWHC 2863 (Ch).
A significant part of the claim had centred on the alleged use of Getty Images’ portfolio in training Stability AI’s diffusion model, but this element was ultimately withdrawn during the trial.
For a broader summary of the background to the dispute, you can read our earlier analysis of the High Court proceedings.
Questions remaining before judgment
The remaining issues included whether synthetic images generated by the model could infringe trade marks under the Trade Marks Act 1994, and whether the imported model could amount to secondary copyright infringement under the Copyright, Designs and Patents Act 1988 (CDPA).
Even with a central part of its case withdrawn at the last minute, important questions remained. Could the diffusion model, by generating synthetic images bearing Getty Images’ own trade marks (or signs appearing very close to them), infringe sections 10(1), 10(2) and 10(3) of the Trade Marks Act 1994 (the ‘TMA’)?
Questions also remained around whether a generative AI model, once imported into the UK, could amount to secondary copyright infringement contrary to sections 22 and 23 of the CDPA, even if the imported model did not itself contain copies of the materials.
Why the case attracted significant interest
The judgment therefore remained keenly awaited, arising from one of the most interesting cases heard in the High Court of England and Wales in 2025.
As Mrs Justice Joanna Smith DBE herself observed in the judgment:
“Both sides emphasise the significance of this case to the different industries they represent: the creative industry on one side and the AI industry and innovators on the other. Where the balance should be struck between the interests of these opposing factions is of very real societal importance.”
The judgment was therefore anticipated to be of vital importance, not only to those in the creative and AI development industries, but also to anyone interested in the future development, and in some cases the moral implications, of AI.
If the case were decided in Getty Images’ favour, what restrictions might be placed on companies developing generative AI, and how would this affect pre-existing generative AI models trained on such materials? Conversely, if Getty Images’ case failed, what protections would the creative industry have?
What the judgment found
Mrs Justice Joanna Smith DBE found double identity infringement under section 10(1) TMA of one Getty Images trade mark by one version of Stability AI’s diffusion models. She also found infringement under section 10(2) TMA of two Getty Images trade marks by two different versions of the diffusion models.
However, she found no infringement under section 10(3) TMA, no passing off, and no secondary infringement of copyright contrary to sections 22 and 23 CDPA.
The judgment also commented that, while Getty Images partially succeeded in its case for trade mark infringement, the infringements identified were “both historic and extremely limited in scope”.
Permission to appeal and ongoing uncertainty
Interestingly, in January 2026 the High Court granted Getty Images permission to appeal the findings made against it on its secondary copyright infringement claim. Stability AI also applied for permission to appeal the trade mark infringement findings, but permission was refused.
The appeal hearing will therefore focus on whether a model that does not itself contain a copy of any works can be an ‘infringing copy’ under section 22 of the CDPA, a question that is academically fascinating and, if the appeal succeeds, has the potential for a much wider impact on the industry.
What the judgment means for the future of AI development
In the meantime, where does this very mixed judgment leave the future of AI development and the creative industries? It is notable that some of Getty Images’ arguments failed on technical points, including the very point now under appeal.
In addition, part of the reason Getty Images’ case on primary copyright infringement was withdrawn was that there was no evidence the training of the diffusion model took place in England and Wales, a point that would have been very difficult to prove given that so many servers are now hosted in offshore jurisdictions.
It is also important to note that this decision rests partly on statutes drafted long before most of us had contemplated generative AI being used at this scale. While the UK government has been slow off the mark compared with its EU counterparts in legislating on AI, it is now taking steps towards progress, such as launching a consultation on AI and copyright in late 2024.
In light of this judgment, the government is likely to come under further pressure to address the risks of AI being adopted at such scale, particularly from content creators concerned that they are being left behind.
International relevance of the case
Although this judgment was issued by the High Court of England and Wales, similar questions around training data, copyright and trade mark reproduction are arising globally. Courts in the US and EU are facing comparable challenges, showing that disputes involving generative AI models are inherently cross‑border.
For international legal and professional services teams, this highlights a wider issue. As AI models are often developed, trained and deployed in multiple jurisdictions, compliance and risk management cannot rely solely on domestic legal principles.
For law firms and in‑house teams, this means AI procurement, vendor contracts and internal guidance should all explicitly address where models are trained and how IP risks are managed across jurisdictions.
How AI is being adopted within legal practice
AI is not only a hot topic in the High Court; it also continues to shape the behaviour of the legal sector itself.
In November 2024 the Law Society of England and Wales published an Artificial intelligence (AI) strategy which established three “long-term outcomes” to work towards:
- Innovation – AI being used across the legal sector in a way that benefits firms and clients in legal service delivery.
- Impact – an effective AI regulatory landscape that has been informed and influenced by the legal sector.
- Integrity – AI being used responsibly and ethically to support the rule of law and access to justice.
Similar debates and policy work are taking place globally. Bar associations and regulators in Europe, the United States, Canada, Australia and Singapore are issuing guidance on the responsible use of AI in legal practice. Multinational firms therefore need AI policies that work across borders, not just within one regulatory environment.
The second outcome links into the decision in Getty Images (US) Inc and others v Stability AI Limited discussed above, but what about the other two? What are firms currently doing to integrate AI into their practice, and how do we ensure it is used with integrity?
How firms are beginning to use AI in practice
As to the first question, the landscape has developed significantly in 2025. In May 2025 the SRA approved its first AI-driven law firm, which describes itself as “the first law firm in the world authorised and regulated to deliver legal services entirely through AI”.
It assists with small debt claims, and the primary goal of its founders and CEO appears to be improving access to justice, in particular for debt claims which, the CEO says, often go uncollected “because time and cost were prohibitive”.
With costs starting from as little as £2 to draft a “polite chaser”, many consumers will see the appeal of these models and wonder why they would ever need to pay an hourly rate again.
The first answer is, of course, that this service is designed for simpler, lower-value claims; anything more complex will inevitably need an expert eye to understand the nuances and commercial goals of the potential claimant and to advise on overall strategy.
For professional services organisations, the key question is not whether AI will be used, but how it is governed so that efficiency gains do not undermine client care, regulatory compliance or professional duties.
In addition to the benefits of building a relationship with a real person, there are also risks in relying too heavily on AI to draft legal letters and proceedings, even for lower-value claims. This year there have already been several instances of litigants in person and professional lawyers being caught out presenting citations to the court that turned out to be entirely false, reportedly as a result of using generative AI.
Risks of using generative AI in legal drafting
The issue of fabricated authorities is not limited to the UK. Courts in the US, Canada and other jurisdictions have publicly reprimanded legal representatives for filing submissions that included AI‑generated but entirely fictional case law.
In April 2026 one of the US’s most prestigious law firms sent a letter of apology to a New York judge after acknowledging that a recent filing contained AI hallucinations. A universal expectation is emerging: no matter where an organisation operates, AI output must be verified.
In the case of The King (on the application of Frederick Ayinde) v The London Borough of Haringey [2025] EWHC 1040 (Admin) heard in April 2025, the Honourable Mr Justice Ritchie stated that he was “not in a position to determine whether [the barrister] did use AI”, but that “it is the responsibility of the legal team, including the solicitors, to see that the statement of facts and grounds are correct”.
The judge also commented that both the barrister and the solicitors instructing her should have referred themselves to the Bar Council and the Solicitors Regulation Authority for professional misconduct arising from the false citations.
It is therefore clear that, when AI is used, it must be used with the utmost care and will still require professional input. Does this mean AI should be avoided entirely to remove the risk? Not necessarily. Those who do not invest in the technology risk being outpriced by more tech-savvy, more efficient competitors, so there is a careful balance to be struck.
Use of AI in litigation and disclosure
In dispute resolution alone there are clear advantages to using AI in a way that maintains quality while saving costs for the client. As an example, large-scale disclosure can benefit significantly from AI, and CPR Practice Direction 57AD specifically permits the use of “technology assisted review”.
Using AI techniques such as predictive coding for this resource-heavy task can reduce time and costs for the client, freeing up the legal representatives’ time to focus on key decisions and on reviewing the documents most likely to be relevant.
However, as with all technology, this must be done carefully and with adequate checks, in a way that would stand up to scrutiny should another party or the court question how the review was undertaken and the results it yielded.
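To make the predictive coding point more concrete, the sketch below shows, in Python with scikit-learn, how a technology assisted review workflow might rank an unreviewed document pool using a small, human-labelled seed set. The documents, labels and model choice are illustrative assumptions only and do not describe any particular e-discovery platform.

```python
# A minimal sketch of predictive coding (technology assisted review).
# Assumption: a lawyer has already reviewed and labelled a small seed set
# (1 = relevant, 0 = not relevant); a classifier trained on those labels
# then ranks the unreviewed pool so likely-relevant documents are read first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "invoice overdue payment demand letter",
    "board minutes approving the disputed contract",
    "office party catering arrangements",
    "weekly canteen menu",
]
seed_labels = [1, 1, 0, 0]  # hypothetical human review decisions

pool_docs = [
    "email chasing payment of the outstanding invoice",
    "quarterly fire alarm test schedule",
]

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(seed_docs), seed_labels)

# Rank the unreviewed pool by predicted probability of relevance.
scores = model.predict_proba(vectoriser.transform(pool_docs))[:, 1]
for score, doc in sorted(zip(scores, pool_docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

In real technology assisted review the seed set is typically expanded iteratively as reviewers confirm or correct the model’s suggestions, and it is that documented feedback loop that helps the process stand up to scrutiny.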
As with most emerging technologies, the advice seems to be to proceed but with sufficient caution and understanding to ensure your duties to the court and to the client are still being fulfilled.
In practice, this might include:
- documenting when and how AI tools are used in drafting;
- requiring manual verification of all citations, authorities and factual statements (a simple tooling sketch follows this list);
- limiting AI use to certain tasks, such as summarising documents or suggesting structures; or
- training teams on how to recognise and correct AI‑generated errors.
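On the manual verification point above, the sketch below shows one way a team might pull candidate citations out of a draft so that each can be checked by hand against an authoritative source. The regular expression is a deliberately simplified assumption covering only UK neutral citations; it flags text for a human reviewer rather than validating anything itself.

```python
import re

# Simplified pattern for UK neutral citations, e.g. "[2025] EWHC 2863 (Ch)".
# This is an illustrative assumption: real citation formats are far more
# varied, and a production tool would need a much richer grammar.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+(?:UKSC|UKPC|EWCA|EWHC)\s+\d+(?:\s+\([A-Za-z]+\))?"
)

draft = (
    "The claimant relies on Getty Images v Stability AI [2025] EWHC 2863 (Ch) "
    "and on R (Ayinde) v London Borough of Haringey [2025] EWHC 1040 (Admin)."
)

# Every candidate citation is listed for manual checking against an
# authoritative database; nothing here confirms a citation is genuine.
for citation in NEUTRAL_CITATION.findall(draft):
    print("Verify manually:", citation)
```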
The environmental impact of AI
Another angle on the AI discussion is its reported environmental impact: significant electricity demand, water consumption and increased carbon dioxide emissions.
In January 2025, MIT published an article stating that the growing number of data centres needed to train and run deep learning models (such as the type ChatGPT uses) more than doubled power requirements in North America between 2022 and 2023.
In addition, the International Energy Agency has estimated that a request made through an AI assistant consumes around ten times more electricity than a simple web search. The equipment must then be cooled with chilled water, and generating the required power produces further carbon dioxide emissions.
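As a rough back-of-the-envelope illustration of that estimate, the figures below assume a conventional web search uses about 0.3 Wh and apply the “ten times” multiplier to a hypothetical team making 1,000 AI queries a day; the per-query figure is an assumed ballpark, not a measured value.

```python
# Back-of-the-envelope comparison of web-search vs AI-assistant energy use.
# Per-query figures are illustrative assumptions, not measured values.
SEARCH_WH = 0.3          # assumed energy per conventional web search (Wh)
AI_MULTIPLIER = 10       # the IEA's rough "ten times a web search" estimate
QUERIES_PER_DAY = 1_000  # hypothetical usage for a mid-sized team

ai_wh = SEARCH_WH * AI_MULTIPLIER
daily_kwh = ai_wh * QUERIES_PER_DAY / 1_000
print(f"Per AI query: {ai_wh:.1f} Wh; daily total: {daily_kwh:.1f} kWh")
# Prints: Per AI query: 3.0 Wh; daily total: 3.0 kWh
```

Small per-query differences therefore compound quickly at organisational scale, which is why usage patterns matter as much as the technology itself.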
Does this mean we should all immediately stop using AI? Again, not necessarily. AI can help with many problems, including tracking environmental and ecological threats. One research team at Google, for example, is focusing on using AI to address the challenge of climate change and to improve the forecasting of extreme weather events such as flooding.
It should also be kept in mind that, while AI may draw power more intensively, if its use makes the underlying process more efficient and less resource-hungry, that positive impact needs to be weighed in the balance too.
The environmental footprint of AI also varies heavily by region. Data centres powered by coal‑heavy grids have far greater carbon intensity than those using renewable energy. For international organisations, understanding where AI workloads are processed is now an important ESG consideration.
International ESG considerations when using AI:
- assess whether vendors disclose where their data centres are located;
- check if processing occurs in high-emission regions; and
- consider whether alternative, lower-impact AI models could achieve the same outcome.
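Returning to the point about regional carbon intensity, a rough worked example shows why the location of processing matters so much. The grid intensity figures below are illustrative assumptions only; real values vary by country, year and even time of day.

```python
# Illustration: the same AI workload has very different carbon footprints
# depending on the grid powering the data centre. Intensities (gCO2/kWh)
# are rough, illustrative assumptions, not published figures.
WORKLOAD_KWH = 100.0  # hypothetical monthly AI workload

GRID_INTENSITY = {
    "coal-heavy grid": 800,       # assumed gCO2 per kWh
    "mixed grid": 300,            # assumed gCO2 per kWh
    "renewable-heavy grid": 50,   # assumed gCO2 per kWh
}

for region, intensity in GRID_INTENSITY.items():
    kg_co2 = WORKLOAD_KWH * intensity / 1_000
    print(f"{region}: {kg_co2:.0f} kg CO2 for {WORKLOAD_KWH:.0f} kWh")
```

On these assumptions the identical workload emits sixteen times more carbon on the coal-heavy grid than on the renewable-heavy one, which is why vendor data centre location belongs on an ESG checklist.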
What’s next for organisations using AI?
The rapid adoption of AI, combined with differing global legal approaches, means organisations must strike a careful balance. Efficiency gains are clear, but so too are the legal, regulatory and environmental risks, particularly for multinational teams.
A practical middle ground involves monitoring global regulatory developments, implementing robust internal safeguards and regularly reviewing AI governance policies to ensure they work internationally.
Practical next steps for organisations operating across borders:
Using AI across several jurisdictions can feel complex, but a few practical measures can make a big difference. These steps offer a helpful starting point for teams who want to stay on top of risks while still getting the benefits of the technology.
- Review cross‑border AI governance frameworks.
- Monitor major developments, including the EU AI Act and US federal and state initiatives.
- Assess sustainability impacts based on where data processing occurs.
- Ensure human oversight, transparency and traceability across all jurisdictions.
These steps, together with the drafting safeguards outlined earlier, help organisations strike a balance between efficiency and risk when integrating AI into legal processes. Documenting AI use ensures transparency if questions arise later, particularly in litigation or regulatory reviews. Manual verification acts as a safeguard against inaccuracies, such as fabricated citations or misstatements produced by generative models.
Limiting AI to appropriate tasks preserves professional judgement for higher‑value work where nuance is required, while still enabling efficiency.
Finally, training teams to identify and correct AI‑generated errors helps create a culture of responsible use across global operations and reduces the risk of inconsistent or unreliable outputs.
This content is provided for general informational purposes only and does not constitute legal advice. It is not intended to address the circumstances of any individual or entity, nor should it be relied upon as a substitute for specific advice from a qualified solicitor. The information reflects the legal position as at the date specified and may be subject to change. If you require advice on a specific matter, please contact us directly.