Generating competition: What is driving competition regulators to focus on AI?
It would be an understatement to say that AI has grown in popularity for businesses and consumers alike, and this evolving technology is now expected to contribute an eye-watering $15.7 trillion to the global economy by 2030.
Unsurprisingly, regulators across a variety of market sectors and jurisdictions are paying attention to this growth. In particular, competition regulators have started setting out their expectations for AI and, in their latest signal to the market, regulators from the EU, UK and US recently issued a joint statement on competition in generative AI foundation models and AI products.
But why are competition regulators so concerned with AI?
Tech targeted by competition regulators
To help answer this question, we can start by acknowledging that competition regulators have recently focused not just on AI but on the tech industry as a whole.
The tech industry has seen scrutiny from competition regulators on some of its largest mergers and acquisitions. Last year, the UK's Competition and Markets Authority (CMA) initially blocked Microsoft's $75bn acquisition of Activision before accepting a revised structure for the deal. To ensure that video game consumers still benefited from a competitive market after the deal, Microsoft was required to divest Activision's cloud streaming rights outside the European Economic Area to French rival Ubisoft. Similarly, Adobe's planned $20bn acquisition of collaborative design software developer Figma was abandoned after opposition from the CMA and the EU's competition regulator, the European Commission.
The CMA also recently responded to online shopping becoming mainstream by issuing guidance designed to protect consumers across a variety of digital retailers, including dos and don'ts for DIY recommendation sites and principles for the discount pricing models often used by online mattress sellers.
This attention to the tech industry is understandable, as new legislation has been introduced to protect consumers in the digital age. For example, the CMA has been granted expanded powers under the Digital Markets, Competition and Consumers Act 2024 (DMCCA), which received Royal Assent on 24 May 2024. As explored in detail by RPC, the DMCCA grants the CMA direct consumer law enforcement powers, the ability to impose higher penalties on those who fail to cooperate with its investigations, and the power to designate the largest tech businesses operating in the UK with "Strategic Market Status", which carries increased expectations to abide by specific conduct requirements.
Why the focus on AI?
Against this backdrop, competition regulators have been taking a very close look at the growing AI industry and, as set out in their joint statement, at the risks they feel the industry presents to consumers. Competition rules are often used to regulate perceived market power, and possible monopolies, in the absence of other effective legislation. This may explain why competition regulators are keen to act whilst governments try to keep pace with legislating in response to AI's rapid development.
Although the risks identified in the joint statement largely concern anti-competitive behaviour, they also address key aspects of the growing industry. These include the risk that existing AI firms may attempt to restrict the role of others in the development of new AI technologies; the risk that firms with existing market power in the broader tech industry, not just in AI, could entrench their position to limit competition in the sector; and the risk that partnerships and investment structures could be used by large tech companies to limit AI competition and to "steer market outcomes in their favour" at consumers' expense. EU, UK and US regulators believe that if these risks materialise they will do so in a way "that does not respect international borders", and a joint approach to managing them is therefore required.
The anti-competitive risk posed by partnership and investment structures is an area on which the CMA has particularly focused, exploring it through a series of reports on Foundation Models (FMs) in the AI sector. FMs are the large machine learning models which are trained on vast amounts of data and developed into the AI tools now used by many businesses and consumers. More than 500 FMs are publicly known to exist, and this number is increasing as developers build, train and deploy new FMs into the market. Many of these new models are being developed by start-ups and small tech businesses, but the CMA has begun tracking how large tech companies consistently appear in the partnership and investment arrangements for the FMs that seem most promising. The CMA is therefore concerned that if FM development is steered by a limited number of large tech companies, and their investors, there is a risk to consumers, and to developers throughout the AI supply chain, that access to the market will be limited and prices driven higher than necessary. If this risk materialised, the impact would detract from the efficiencies, cost reductions and disruption that AI is predicted to bring.
Action already underway
Not content with simply expressing their concerns about the AI industry, some regulators have already begun to act. In the US, the Federal Trade Commission and Department of Justice have both started to investigate possible violations of competition law by Microsoft, OpenAI and Nvidia, as well as the data collection techniques used by consumer-facing AI tools. The CMA has started similar examinations into possible anti-competitive behaviour arising when large tech companies hire former employees of smaller AI start-ups.
Regulators will take further action as AI-specific legislation is introduced. For example, the European Artificial Intelligence Act (AI Act) came into force on 1 August 2024 with the goal of fostering "responsible artificial intelligence development and deployment in the EU." The AI Act introduces a risk-based approach for the EU to assess and act upon developments in the AI market. As a first step towards implementing the AI Act, the European Commission has launched a consultation on a proposed Code of Practice for providers of general-purpose AI models, which it hopes will address issues such as transparency and copyright rules that, if left unchecked, may contribute to anti-competitive behaviour. The UK's new Labour government has also announced a proposal for a similar AI Bill, but promised that it would focus on governing the most advanced AI products in circulation to date rather than becoming a "Christmas tree bill" which imposes wide-ranging new regulations and risks stifling innovation in the sector.
What can businesses do to prepare?
It's clear that scrutiny from competition regulators is here to stay and may well intensify as the AI industry matures and grows. This means that businesses already involved in the sector, or even those just planning to use AI tools in their day-to-day work, need to understand and stay up to date with competition regulators' powers, areas of interest and the outcomes of their investigations and enquiries. A good starting point would be to read RPC's AI guide, especially its sections on AI regulation.
Businesses should also be aware of how other regulatory regimes may require approval of products and transactions involving AI. For example, AI is one of the 17 categories of business activity now scrutinised for security risks under the UK's National Security and Investment Act 2021 (NSI). The NSI requires mandatory notification of transactions involving targets that research, develop or produce AI tools, regardless of the target's turnover (whereas minimum turnover thresholds are a common feature of competition regulations). This means that very small transactions involving the AI industry may face regulatory delays under the NSI even if they raise no competition concerns.
UK AI developers can also start to make use of the AI and Digital Hub, which is hosted by the Digital Regulation Cooperation Forum and is a joint initiative of the CMA, Ofcom, the Information Commissioner's Office and the Financial Conduct Authority. Aimed at speeding up the processes through which UK tech firms bring products to market, the Hub allows AI start-ups to engage directly with regulators and receive informal advice to help them navigate an already complex, and likely to expand, AI regulatory landscape.