Author

Stewart Argo

Published
6th August 2025

How to manage supplier risks when using AI and protect your data

AI is becoming increasingly prevalent within many businesses and organisations. It is used not only by suppliers of AI tools – and of other software and SaaS products marketed as using or being ‘powered by’ AI – but also by many providers of products and services in supplying their goods and services to their customers.

Given this increasingly wide use of AI, businesses should consider whether their suppliers are using AI and take appropriate measures to address and manage the risks this use may present.

What are the risks of suppliers using AI?

Such risks may include:

  • misuse of confidential information;
  • intellectual property rights ownership and infringement;
  • data security and personal data issues;
  • bias and discrimination; and
  • other ethical concerns.

In addition to the above, there is often a misconception that the use of AI will always be precise and free from the risk of human error. However, there is no guarantee that any results produced by AI will be wholly accurate, and any such results should be treated with caution.

How to manage the risks

There are a number of steps that businesses and organisations can take to manage the risks posed by a supplier’s use of AI.

1. Implement policies and procedures

Having an AI policy in place that sets out clear guidelines on how your organisation uses artificial intelligence in a responsible and ethical way:

  • sets clear expectations for employees in relation to the use of AI;
  • provides direction on the use and procurement of AI tools and on data protection; and
  • identifies potential risks, which will help guide your organisation’s approach to AI and its attitude to the use of AI by its suppliers and service providers.

An AI policy will also help inform what controls your organisation should place on suppliers’ use of AI to mitigate the risks of data leakage, misuse of confidential information, intellectual property rights infringement, and errors and hallucinations arising from use of the AI solution.

Read more on AI policies and how to manage the risks of employees’ use of AI.

2. Understand the scope of use of AI by suppliers

Having and implementing an AI policy is important. However, the policy in isolation will not sufficiently address a supplier’s use of AI. It is essential for your organisation to identify and understand what use a supplier is making of AI, how data and prompts input into the AI system are treated, what information and datasets are used to generate outputs, and what level of human intervention is present.

Although the use of AI by suppliers can help drive efficiencies and has many benefits, it may expose your organisation to potential areas of vulnerability and risk – which can only be assessed once your organisation is aware of whether AI is being used by the supplier and how it is being used.

When procuring products and services, your organisation should ask suppliers to disclose their use of AI, whether in your supplier onboarding processes, supplier due diligence questionnaires or tender documentation.

3. Conduct risk assessments

Once your organisation understands how your suppliers use AI, the risk posed by that use can be assessed.

This risk assessment should analyse and evaluate the potential risks associated with the supplier’s particular use of AI, as outlined above. Where personal data is involved, a data protection impact assessment should also be carried out before deciding whether to proceed with the procurement of an AI solution or with a supplier or service provider who uses AI to provide its products or services.

4. Establish contractual controls

Where a supplier is using AI, your organisation can include controls on the supplier’s use of AI in its contractual terms with the supplier. Crucial clauses for agreements with suppliers who use AI include:

Restrictions on use of AI

If your organisation is not comfortable with the supplier using AI to provide its products and services, a prohibition on the supplier using AI should be included in the agreement.

Customer data

The agreement should deal with whether the supplier can make use of customer data to train the AI system. If so, the agreement should state for what purposes the supplier can use customer data to train the AI system and the basis on which it can do so.

If any customer data (for example, personal data) should not be input into the AI system at all, then the agreement should prohibit the supplier from inputting that customer data into the AI system.

Confidential information

It is important that the agreement has appropriate mechanisms in place to protect the confidentiality of customer data, and the definition of ‘confidential information’ should be wide enough to cover all customer data that is transmitted through and collected by any AI systems used.

To offer greater protection against the risk of any breaches of confidentiality, it is also helpful to obtain an indemnity from the supplier in respect of any data breaches.

Security

The agreement should set out appropriate security standards with which the supplier must comply, obligations on the supplier to notify your organisation of security breaches, and a right for your organisation to require the supplier to cease its use of the relevant AI systems in the event of a security breach or in other specified circumstances.

Intellectual property rights ownership and infringement

It is important that the agreement makes clear who will own the intellectual property rights in the outputs generated by the AI system and what rights either party has to use such outputs.

The agreement should include appropriate warranties from the supplier that your organisation’s use of the outputs will not infringe the intellectual property rights of any third party, and an equivalent indemnity should be given by the supplier.

Supplier liability

The agreement should clearly set out what the supplier will be responsible for in relation to the use of AI. For example, a supplier may accept responsibility for preventing any failures of the AI system but may not be willing to accept responsibility or liability for any inaccurate or biased outputs produced by the AI system. Liability for any confidentiality or data breaches should also be explicitly provided for in the agreement.

It is important for customers to ensure that any limits or exclusions on liability in relation to any loss or damage caused by use of the AI system are appropriate. Any liability caps will likely need to be higher than those for other damage or losses, and it will be important to ensure that suppliers have the required level of insurance cover in place.

Supplier warranties

It is also useful for a customer to request warranties from the supplier in relation to the use of AI systems. These may include warranties that the supplier will carry out regular monitoring of the results produced by the AI system, as well as regular training of the employees who will be interacting with it.

Human oversight

The agreement should specify what level of human oversight is required in relation to the AI system and what measures the supplier will put in place to ensure that such human oversight is effective.

5. Provide training and implement operational controls

The contractual controls mentioned above should be used in conjunction with operational and technical controls.

Your organisation should provide training to your employees to ensure that, where appropriate, they avoid inputting confidential information and personal data into the AI system.

Technical measures that prevent inappropriate content from being input into the AI system by employees should also be considered, as well as technical measures that attribute to the outputs of the AI system the same permissions and restrictions as are attached to the prompts.

6. Ensure ongoing monitoring

Suppliers’ AI systems and processes should be regularly evaluated and tested to ensure that the outputs produced are accurate and precise. One way of doing this is by carrying out a variety of ‘spot checks’ on any outputs that are produced.

Your organisation should consider what circuit breakers should be implemented to prevent harmful outputs from being generated by the AI system, whether due to inaccuracies or bias in the AI system and models.

Suppliers should also be required to regularly assess how data is accessed and stored within their AI systems. There should be effective measures in place to ensure that all data is stored securely, to avoid any breaches of confidentiality, and that data is not stored for any longer than is necessary.

Conclusion

The use of AI by suppliers and service providers can be extremely beneficial to all parties involved; however, it should be approached with caution, and the risks of suppliers using AI should not be underestimated.

Organisations that procure products and services from suppliers who use AI should actively prioritise the security and confidentiality of data and seek to impose measures on the supplier to ensure that outputs generated by the AI system are accurate and precise.

This can be achieved by implementing both contractual controls and operational and technical controls on the supplier’s use of AI. Once the supply or service provision commences, and the AI system is implemented or used by the supplier, its use should be regularly monitored, and employees should be required to undertake regular training to ensure they are aware of any potential risks that may arise through using AI.

If you need any support with reviewing or implementing policies to protect your customer data, then our team of experienced commercial solicitors can help.



About the Author

Stewart Argo

Legal Director

Stewart's work includes advising on supply of goods and services agreements, outsourcing, software licensing, SaaS and cloud agreements, software development agreements, software support and maintenance agreements and other agreements for IT services, data protection, assignment and licensing of intellectual property rights, e-commerce, distribution agreements and standard terms of business. Stewart has experience advising clients in the technology, energy, education, manufacturing, and logistics sectors on both business-to-business and business-to-consumer arrangements.