What 7 key regulatory changes lie ahead in 2024?
AI impacts, green claims and product safety among issues explored in Regulatory radar
Will 2024 prove to be the year AI outpaces regulators – and what should businesses be doing to prepare employees? What does industry need to know now about green claims and risks, or about preventing economic crime? And what does the future hold for product safety?
These are just some of the insightful questions explored in the latest edition of Regulatory radar from international law firm RPC.
Regulatory radar scans the horizon for the biggest regulatory changes in store for 2024 and pairs them with insight from RPC's market-leading lawyers.
Highlights include:
1. Greater corporate liability for economic crime
The Economic Crime and Corporate Transparency Act has introduced a new corporate criminal offence of failure to prevent fraud and expanded the reach of the identification doctrine for a wide range of financial crimes, meaning that the actions of "senior managers", as well as directors and officers, can create criminal liability for a company. Businesses are advised to act now and assess whether their current fraud controls amount to reasonable procedures to prevent the company and third parties from committing an offence.
2. AI regulation here and abroad
The majority of AI regulatory regimes are in their nascent stages; however, the outlines of domestic and international frameworks are beginning to emerge. These include the G7's international guiding principles on AI, together with a voluntary code of conduct for AI developers aimed at promoting safe, secure and trustworthy AI; the "Bletchley Declaration" agreed at the UK-hosted AI Safety Summit, which emphasises international collaboration on identifying AI safety risks and creating risk-based policies to counter them; and the Artificial Intelligence (Regulation) Bill introduced in the House of Lords, which includes the creation of an AI Authority to construct regulatory sandboxes. Authentication and provenance mechanisms, the use of copyright-protected material and the protection of personal data are among the most important points under consideration.
3. The impact of AI on employees
Increasingly, organisations are looking to embed machine learning (including generative AI) into business operations. However, there are concerns around privacy, confidentiality, bias, IP (if open source), accuracy (with inaccurate outputs colloquially referred to as "hallucinations") and explainability. In addition, AI-supported programmes that facilitate or augment decisions directly affecting individuals' workplace opportunities, benefits or progression, eg recruitment, performance assessments, monitoring and the allocation of tasks, could have a direct impact on employees.
4. AI, competition and consumer protection
The Competition and Markets Authority (CMA) is in the second phase of its review into how AI could impact markets from a competition and consumer protection perspective. With the Digital Markets, Competition and Consumers Bill (DMCC) on the horizon in 2024, the CMA is gearing up for the new pro-competition digital markets regime overseen by the regulator's Digital Markets Unit (DMU). Businesses will be expected to take responsibility for effective oversight of their AI, machine-learning and broader algorithmic systems, which should include robust governance, holistic impact assessments, monitoring and evaluation.
5. Greater ESG responsibilities
The Corporate Sustainability Due Diligence Directive (Due Diligence Directive) will mandate that companies conduct ESG due diligence as a means of identifying, preventing and mitigating specific ESG-related risks and impacts in their business activities and supply chains. This latest development reflects growing momentum to hold corporations more accountable for managing environmental and human rights risks.
6. Product safety revamp
Earlier this year, the Smarter Regulation: UK Product Safety Review was opened as part of the government's programme of regulatory reform. This followed a call for evidence issued by the Office for Product Safety and Standards in March 2021 to look into the UK's system of product safety regulation. The new regime seeks to modernise the way in which product safety is regulated in the UK and is likely to have significant implications for product safety considerations for online marketplaces, AI and the ESG agenda.
7. Going green
The CMA recently published its Green Agreements Guidance, which applies to agreements between actual or potential competitors aimed at preventing, reducing or mitigating the adverse impact of their activities on the environment, or at assessing that impact. Examples include an agreement between fashion manufacturers to stop using certain fabrics that contribute to microplastic pollution, or delivery companies switching to electric vehicles. The CMA has made clear that it considers sustainability issues hugely important given the UK's binding net zero obligations under the Climate Change Act 2008, and it has taken a leading position on investigating green claims.
Gavin Reese, Partner and Head of Regulatory at RPC, says: “Businesses are increasingly having to up their game when it comes to preventing financial crime or falling foul of green claims and risks, but now they are also having to grapple with the lightning-paced changes that artificial intelligence brings. Regulators are also finding themselves having to move two steps at a time to keep up with innovation. With a raft of regulations in the pipeline for 2024, businesses would do well to take steps now to ensure they are able to face these changes head on.”
For more insights and to read Regulatory radar, click here.