Part 6 – Practical Considerations

Published on 04 October 2024

This is Part 6 of 'Regulation of AI'

AI providers have been focusing on their forthcoming AI obligations and on governance for some time, but it is now prudent for most organisations to assess how their use of AI will come within the scope of regulation in key territories, become familiar with each regime, and devise a means of keeping up with anticipated changes. The approach to AI regulation currently varies so widely across jurisdictions that organisations need to factor the costs of compliance into their strategy for each jurisdiction in which they plan to provide or deploy AI. Where compliance costs are expected, these need to be built into current business planning together with: (i) AI governance (including systems and procedures for data retention and record keeping); (ii) building internal AI expertise; and (iii) identifying trusted advisors from the "noise" of what is being offered externally.

Providing training to allow individuals to perform their roles and/or use the AI system in a way that is consistent with related policies and procedures will help businesses to clarify roles, demonstrate accountability and minimise risks. See here for our recommendations on training your staff on AI.

All providers should establish written policies, procedures, and instructions for various aspects of the AI system (including oversight of the system) and produce documentation explaining the technicalities of their AI model and its output. They should assess and document the likelihood and impact of any risks associated with the AI system, including in relation to privacy and security.

Some uses of AI will fall under prohibited or high-risk classifications, for example in the EU, and businesses should consider removing or adjusting their products and services to eliminate or limit these risks.

Where appropriate, businesses might consider making voluntary commitments relevant to their industry sector. In December 2023, 28 US healthcare companies agreed voluntary commitments on the use and purchase of safe, secure and trustworthy AI.

As discussed in Part 5 – AI regulation globally, AI, IT and cyber ISO/IEC standards (such as ISO/IEC 23894 or ISO/IEC 42001:2023) can be used as tools to support the safety, security and resilience of AI systems and solutions, together with research and development programmes addressing key technical challenges, the development of metrics, and risk assessments to measure and evaluate AI. Under these standards, organisations should be prepared to provide regulators with information on AI system decision-making and the sources of their training data.

 
