Procuring AI – Commercial Considerations Checklist

Published on 04 October 2024

Many companies will no doubt be considering using AI within their business to take advantage of the significant productivity gains and cost efficiencies it promises. In this section, we set out the key issues a company will need to consider before procuring an AI-powered solution from a provider, assuming this is a relatively complex solution which requires customisation and deployment by the provider. Please see 'AI-as-a-Service – Key Issues' if you are using simple off-the-shelf AI solutions.

1. Create and comply with company AI use policies

It is advisable to develop clear policies and training around the development and use of AI across the business. For example, do you have strong data governance processes around the access, storage, and transfer of the data sets used to ensure data quality, integrity and accuracy? And do you have a robust use case testing procedure to ensure that AI is only used where appropriate, given the potential harms and risks associated with its use?  Of course, policies only work to the extent you monitor compliance with them regularly.

2. Define your strategy and budget

A focused strategy is particularly important when procuring an AI solution. AI is impressive but many AI-powered solutions have not been tested on a large scale. Any AI solution you procure will likely need significant on-boarding and fine-tuning for it to work as planned. 

For this reason, consider how exactly you intend to integrate the AI solution in your business and where there are likely to be opportunities to maximise benefits in the short term, and how the AI can be leveraged in the longer term. Be clear on your available budget for this project and factor in sufficient time, money and human resource for the procurement, deployment and on-going training and maintenance of the solution. If the AI solution works as planned, is the anticipated return on investment acceptable?  

3. Scope your requirements

Unlike a standard IT procurement, you may not be able to define waterfall-style specifications for your intended solution. Many AI solutions will need a period of iterative experimentation, training and tweaking before the functionality is properly developed. Furthermore, you would not want to blinker yourself into working off narrow requirements when there might be greater opportunities available. Instead, develop problem statements, challenges, opportunities and use cases that you intend the AI solution to address. 

You should also assess the relevant data sets that you intend to use for the project. Where is the data sourced from? Do you have a licence to use the data for the project or might you be breaching confidentiality restrictions? Do the data sets include personal data (see 'AI and Privacy – 10 Questions to Ask' for further guidance)? Are there any limitations (e.g. quality) to this data that need to be addressed first?  On what basis will you share data with the provider for use on the AI system (if any)? Can you use synthetic or anonymised data for training purposes to avoid issues with the data or to fill data gaps?

Conduct an initial impact assessment to determine the key risks of the solution to your business and whether these can be mitigated. Will the AI be used with other software, and if so, do you have the appropriate rights and licences to do so from the relevant third party software suppliers? Are the use and/or commercial parameters of the other software still appropriate when it is used with the AI, given, amongst other things, automated processing? Will your insurance cover apply where AI is used?

4. Upskill your teams

Any successful AI project requires that customer and provider teams work cooperatively to develop the solution. You will need to create your own multidisciplinary team of experts to advise you through the procurement lifecycle including legal and commercial experts, technologists, data and systems engineers, and ethics advisors. Diversity is essential to minimise blind spots and unintended bias. 

5. Choose a provider

It is critical to do your homework on the provider and the AI product in question and not simply be swayed by the sales pitch. Over the next months and years we will see AI solutions and providers consolidate, and some others fail, so it's important not to back the wrong horse. Review any potential provider's project team to ensure they are diverse, multidisciplinary, and have the right skills and qualifications. Alternatively, is the best way to source the AI solution through a combination of providers and, if so, have you considered the integration risk between these various systems?

6. Key contract issues

Consider the following key issues when contracting with your chosen provider. 

Contract structure. Consider structuring the contract in phases to allow for discovery and development. A 3-6 month pilot phase (longer or shorter depending on the complexity of the project) may be appropriate. Ensure that the overall contract is geared towards flexibility, the ability to change, and iterative product development.

Use of data. The contract should specify the customer data sets that the provider will have access to, and what the provider can do with such data. Do you need the provider to remedy any limitations with the data before using it? Will the solution have access to the internet or will it be a "walled garden"? Will your data be used to train the model generally for the benefit of the provider's other customers? Will you benefit from any new fine-tuning or user feedback data the provider applies to its model generally? 

Training the model. How will the parties train the AI solution? Consider a governance framework that sets out the parties' responsibilities for each aspect of the training. How long will the solution need to be trained before it can go live? Will the provider continue to train the solution on a regular basis post go-live?

Intellectual Property. General purpose LLMs will have been trained using data obtained from web scraping, the IP implications of which are still being debated (see Generative AI - Addressing Copyright). The provider may also carry out further training based on owned or licensed data sets. Seek warranty and indemnity protection that your use of the solution and any output does not infringe third party IP rights. Consider also if you need to own the IP in any output and if the provider should be required to assign the IP in such material to you. Note, however, that the legal position on copyright in AI-generated works is still unclear (see Generative AI - Addressing Copyright).   

Testing and acceptance. Prior to go-live, the provider should be required to test the AI solution's functionality within the customer's IT environment. Seek to include specific metrics to determine when an AI system is ready to go live. Consider what tools are available to confirm that, under the bonnet, the AI is also working as intended. This might be through audit rights, verification tools offered by your provider, a right to an independent review, or the use of other third party tools to make that assessment. In any case, repeated testing and validation will need to be an ongoing process that continues through implementation.
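To make "ready to go live" objective, the contract can express acceptance criteria as measurable thresholds. The sketch below is illustrative only: the metric names and threshold values are assumptions and would need to be agreed between the parties and tailored to the particular solution.

```python
# Illustrative only: a hypothetical go-live acceptance gate. The metric
# names and thresholds below are assumptions, not an industry standard.

ACCEPTANCE_THRESHOLDS = {
    "answer_accuracy": 0.95,     # minimum fraction of test prompts answered correctly
    "hallucination_rate": 0.02,  # maximum fraction of outputs with unsupported claims
    "p95_latency_seconds": 2.0,  # maximum 95th-percentile response time
}

def acceptance_gate(measured: dict) -> tuple[bool, list[str]]:
    """Compare measured test results against the contractual thresholds."""
    failures = []
    for metric, threshold in ACCEPTANCE_THRESHOLDS.items():
        value = measured[metric]
        # Higher is better for accuracy; lower is better for the other metrics.
        ok = value >= threshold if metric == "answer_accuracy" else value <= threshold
        if not ok:
            failures.append(f"{metric}: measured {value}, required {threshold}")
    return (not failures, failures)

# Example: a solution that is accurate and fast but hallucinates too often
# would fail acceptance on that metric alone.
passed, issues = acceptance_gate(
    {"answer_accuracy": 0.97, "hallucination_rate": 0.05, "p95_latency_seconds": 1.4}
)
```

Framing acceptance this way also supports the ongoing testing described above: the same gate can be re-run periodically post go-live to detect degradation.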

Performance of the solution. Consider the service standards and service levels you require of the AI solution and endeavour to make these objective and quantifiable. Providers should be required to comply with internationally-recognised standards on AI systems, for example, ISO/IEC 42001 that provides a certifiable AI management system framework focused on strong AI governance (see also 'Part 5 - AI Regulation Globally' for more on international standards).  The contract should also include an agreed process for the provider to investigate and fix errors and hallucinations. Output should be tested for discrimination and unfairness, and to ensure that the tool and outputs comply with ethical principles – see 'The Ethics of AI – The Digital Dilemma'. 

Over time, service levels and standards should increase as the model improves. Similarly, any concept of "Good Industry Practice" (and an obligation on the provider to comply with it) will evolve as the industry adapts to the new technology.  Benchmarking by third parties will be important to periodically confirm that the AI solution remains comparable to other models. Considering the breakneck speed at which AI is developing, the provider should be required to continually improve its solution and ensure it is state of the art. Consider how new updates will be applied to your solution.   

Collaboration and governance. The success of the project will depend on whether the parties are able to work openly and collaboratively whilst understanding their respective roles and responsibilities. AI solutions learn from humans (see 'What is a Foundational Model?' for more on reinforcement learning from human feedback) so there should be a process for the provider to gather feedback from the end users so that the solution improves. Regular governance meetings (more frequent during the development phase) are also crucial to ensuring that the project stays on track and that issues are dealt with early and swiftly.

Explainability. You must be able to explain how your AI system works as you will need to demonstrate that AI is used responsibly and appropriately (see 'The Ethics of AI - The Digital Dilemma' for more on explainability). For an AI system to be explainable, you will need the provider to provide you with detailed information on: 

  • how the AI works 
  • how it was tested
  • how it has been designed to be fair
  • the logic behind the output of the AI. 

Records should be produced of how the AI system was developed including the parameters that represent the model's learnt 'knowledge' used to produce the intended output. The solution should also be designed to log information regarding how decisions are made, so that these can be verified and explained if necessary.
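The logging requirement above can be made concrete in the contract by specifying the fields each decision record must capture. The sketch below is a minimal illustration, assuming a JSON-lines style audit log; the field names are hypothetical and the actual schema would be agreed with the provider.

```python
# Illustrative only: a minimal, auditable record of a single AI decision.
# Field names are hypothetical; the real schema would be contractually agreed.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(prompt: str, output: str, model_version: str, audit_log: list) -> dict:
    """Append one decision record so outputs can later be verified and explained."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store the raw prompt where it may contain personal data.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    audit_log.append(json.dumps(record))
    return record

audit_log: list = []
record = log_decision("Assess claim #1234", "Claim approved", "v2.1", audit_log)
```

Recording the model version alongside each decision matters because, as the model is retrained, the same input may produce different outputs; without versioning, past decisions cannot be reliably explained.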

Risk allocation. At this stage, the market is still getting to grips with AI and has yet to develop established positions on risk allocation. AI providers are building their customer base and working to make their offering more attractive. For this reason, customers who contract early may be in a stronger position when negotiating liability clauses under their contracts. In any case, risk allocation will depend on the commercials and each party's role and responsibilities in relation to the project. For example, a customer is unlikely to get blanket indemnities from the supplier for third party IP infringement claims if a significant portion of the training is carried out by the customer using the customer's own data.  

Security. Aside from the standard security issues that would arise in any tech procurement (e.g. data encryption, user controls etc), there are certain security threats which are novel to AI systems. These include prompt injection (bad actors instructing the model to perform actions you don't intend), prompt stealing (accessing a user's prompt), model inversion attacks (attempting to recover the dataset used to train a model), membership inference attacks (attempting to determine if certain information was used to train a model), and data poisoning (tampering with data sets). The provider must ensure that its solution is appropriately secure against known threats and have robust processes to address future threats. At the very least, solutions should comply with generally-accepted standards on cyber security, for example, the National Cyber Security Centre's Guidelines for Secure AI System Development (see also 'Part 5 – AI Regulation Globally' for more on standards). There are also specialist escrow providers with whom you may store input data, algorithms, and AI applications to ensure that the solution remains available should the provider unexpectedly cease operations.
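To illustrate why prompt injection is hard to eliminate, the sketch below shows a naive input pre-screen of the kind a provider might deploy as one layer of defence. It is illustrative only: simple pattern matching is easily circumvented, which is why robust solutions layer input filtering with output checks, privilege separation and monitoring, and why contractual obligations should refer to evolving standards rather than any single technique.

```python
# Illustrative only: a naive pre-screen for common prompt-injection phrasing.
# Pattern matching alone is NOT a robust defence; it is one layer among several.
import re

# Hypothetical patterns covering common instruction-override attempts.
SUSPECT_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (your )?(system )?prompt",
    r"you are now",  # common role-override phrasing
]

def flag_prompt(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPECT_PATTERNS)
```

The limits of this approach are the commercial point: because no filter is complete, customers should seek ongoing obligations on the provider to monitor for and respond to new attack techniques, not just compliance at the point of deployment.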

Compliance. Consider who bears the risk and the cost of compliance with AI regulations as they change from time to time. New regulations may result in significant changes needing to be made to the solution, potentially down to the underlying infrastructure level. A regulatory change control procedure should also be included to set out a process by which the parties may agree and implement required changes and allocate the costs of the same.  

Exit. Although not particular to AI solutions, vendor lock-in is a risk that is heightened when procuring AI due to its complexity. You minimise this risk if you and your potential replacement providers are able to understand how the solution works. The incumbent provider should be required to train your personnel and ensure knowledge transfer over the lifecycle of the project. At the outset, consider the interoperability of the solution you procure with other suppliers' models, software, and systems. Consider also the data you will need to migrate the solution to a replacement provider upon exit from the contract.
