Understanding and managing the risks in artificial intelligence (AI) technology projects
Introduction
In the Autumn 2022 edition of RPC's Retail Compass, Tania Williams wrote about what you need to know to procure AI successfully. This article addresses further considerations to bear in mind when making decisions about using AI. But having worked through those considerations and successfully procured the right AI solution, how do you go about managing the risks and challenges that might arise during the deployment of the technology? The first step is to identify those risks and challenges, and then to develop strategies for their mitigation and management.
Risks and challenges in AI technology projects
AI technology projects are similar to other technology projects, in that they are technically challenging, require extensive collaboration between the customer and the provider, and often evolve and change as the project develops. Given the costs involved and the potential impact on core business functions, there are significant risks for all parties concerned. There is also often no guarantee that any project will be successful or achieve all its aims. In addition to these general risks associated with technology projects, there are certain special risks and challenges that might arise during an AI technology project.
The AI workflow
Developing software systems that incorporate AI technologies often requires an 'AI workflow' to be integrated into the project plan. This workflow generally includes (amongst other things) data collection, data preparation, model design, model training, and model evaluation. Because these stages are specific to AI technology projects, it can be difficult to predict in advance how long each stage will take, what resources will be required at each stage, and when it is appropriate to move on to the next stage. Perhaps more importantly, traditional project management techniques and software development methods (such as Agile methods) may not be suitable for properly planning and managing AI technology projects and their outcomes.
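To make the workflow concrete, the stages described above can be sketched in code. This is a deliberately simplified, hypothetical illustration: the function names, toy dataset and least-squares model are all assumptions made for the example, not part of any particular project or library.

```python
# Illustrative sketch of the AI workflow stages: data collection,
# data preparation, model training and model evaluation.
# All names and figures here are hypothetical.

def collect_data():
    # Data collection: in practice this would pull records from
    # the customer's own systems.
    return [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8)]

def prepare_data(raw):
    # Data preparation: filter out unusable records (here, non-positive x).
    return [(x, y) for x, y in raw if x > 0]

def train_model(data):
    # Model training: fit y ~ w * x by least squares, standing in for
    # whatever model design the project actually calls for.
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

def evaluate_model(w, data):
    # Model evaluation: mean absolute error on held-out data.
    return sum(abs(y - w * x) for x, y in data) / len(data)

data = prepare_data(collect_data())
train, holdout = data[:4], data[4:]   # hold back the last record for evaluation
w = train_model(train)
mae = evaluate_model(w, holdout)
```

Even in this toy form, the point made above is visible: each stage depends on the output of the one before it, so delay or rework at any stage (for example, discovering during preparation that the collected data is unusable) pushes back every later stage.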
Specialist AI hardware
AI workloads are often performed using specialist or adapted hardware. Graphics processing units (GPUs) are frequently used for AI workloads, as are application-specific integrated circuits (ASICs) and a range of other devices and architectures designed to support, execute and accelerate AI workloads. The specialist nature of certain AI hardware means that its availability is more susceptible to supply chain issues, which can cause delay and potentially require a change in approach.
The AI system's use of third-party software
Software licensed by third parties may not be suitable for use in conjunction with an AI solution. For example, such licences may restrict the number of API calls that can be made or the amount of data that can be used. Both of these may increase substantially following the integration of an automated AI system. In addition, there may be issues over whether software usage by an AI system constitutes a new 'user' for the purposes of the licence.
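The API call problem can be illustrated with a short sketch. An automated AI system can issue calls far faster than the human users a licence's allowance was priced for, so operators sometimes add a client-side guard that refuses calls once a budget is exhausted. The class below is a hypothetical example of such a guard; the limit and time window are illustrative, and in practice the real figures would come from the licence terms.

```python
# Hypothetical client-side guard against exceeding a third-party
# licence's API call allowance. Limit and window are illustrative.
import time

class CallBudget:
    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = []  # timestamps of calls made within the window

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Forget timestamps that have fallen outside the current window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

# Five attempted calls at the same instant against a budget of three:
budget = CallBudget(max_calls=3, window_seconds=60)
results = [budget.allow(now=0) for _ in range(5)]
```

A guard like this manages the technical symptom, but it does not resolve the underlying licensing question of whether the automated usage is permitted at all; that remains a matter for the licence terms.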
Data accessibility issues
Many AI technology solutions are predicated on the analysis of a substantial amount of data. That data is often owned, or at least controlled, by the customer. Depending on the customer's systems, practices and data management team, its data can be difficult to access and work effectively with. Moreover, some of the most challenging data accessibility and suitability issues may only arise after the project has already started and substantial preparatory work has been undertaken. This can cause delay and disruption and, depending on the severity of the issues, may require a reconsideration of the project approach and objectives.
Testing and evaluation
Testing is a fundamental part of most technology projects. In very simple terms, the goal of testing is to ensure that the technology is working correctly and is properly integrated with the customer's other systems before it is fully deployed.
AI technologies have their own characteristics which mean that traditional software testing approaches may not be suitable. For example, even if the code is error free, and the system is properly integrated, that does not guarantee that the AI solution is delivering the intended results. Further, AI models often require ongoing monitoring, testing and evaluation, even after they have been deployed, to ensure that they are continuing to perform as intended.
Management and mitigation
The following is a starting point for managing and mitigating the risks discussed above:
- When developing and agreeing the project timetable, be conscious of the AI workflow and what impact this might have, particularly if things do not go as planned.
- Build flexibility into the project timetable and agree on processes at the outset that allow the project to evolve as necessary.
- The project objectives should be kept under review and, if necessary, changes should be made to ensure that the project remains viable. This is particularly relevant in cases where an AI model is intended to continuously learn and evolve, even after deployment.
- Review your own data management practices and make necessary improvements to maximise the quality and accessibility of your data. If data management issues arise during the course of a project, consider engaging external assistance to ensure that the problems can be worked through with limited disruption to the AI technology project.
- Ensure that a rigorous testing approach is devised and implemented. Consider what further support, such as Machine Learning Operations (ML Ops) support, may be required after deployment, and whether this will be provided by the same or a different supplier.
- Informal dispute resolution processes can provide a means for the parties to resolve issues relating to delay and variations to the project's specifications without having to resort to more formal (and time-consuming) processes.
- If the AI solution will be dependent on, or work in conjunction with, software licensed by third parties, consider whether the terms of those licences permit such use. Parties should also consider whether any other third-party consent is required, and whether any use of data is in accordance with regulatory requirements.
Conclusion
An AI technology project carries with it many of the same risks and challenges as any other technology project. As such, many of the same strategies can be employed to manage and mitigate these risks. These include taking care at the contract formation stage to ensure that risks are properly identified and allocated, that there are appropriate disincentives for delay and non-performance, and that there are robust and practical mechanisms for resolving disputes, should they arise.
However, as set out above, there are also special risks associated with AI technology projects that should be borne in mind. These risks arise in relation to a range of matters, including hardware, project management, licensing, data management and the nature of AI technologies. Ultimately, each project is different, and each will carry its own risks. It is therefore important to take proper advice on these matters, both at the outset of any project, and as matters progress.