Part 4 – AI Regulation in Asia

Published on 06 August 2024

This is Part 4 of our 'Regulation of AI' series.

While much of Asia takes a light touch and voluntary approach to AI regulation, some jurisdictions like China have taken a more prescriptive approach. This section provides a flavour of the diverse regulatory approaches across Asia.

Singapore

While existing laws such as the Personal Data Protection Act 2012 govern specific aspects of AI in Singapore, there is currently no overarching legislation regulating AI. Instead, a series of frameworks have been launched which provide general guidance to interested parties on the subject but have no legally binding effect. This soft touch approach is intended to encourage the use of AI in accordance with Singapore's National AI Strategy, first published in 2019 and updated in 2023.

Singapore first launched its Model AI Governance Framework in 2019 and updated it in 2020. The framework follows two fundamental principles: that the use of AI in decision-making should be explainable, transparent and fair; and that AI systems should be human-centric.

In 2022, after the EU announced the then draft EU AI Act, Singapore launched AI Verify, an open source AI governance testing framework and software toolkit that validates the performance of AI systems against a set of eleven internationally recognised AI ethics and governance principles through standardised tests, consistent with AI governance frameworks such as those of the EU, the OECD and Singapore. The principle of transparency, for example, requires that appropriate information be provided to individuals impacted by AI systems; compliance is assessed through process checks of documentary evidence (e.g. company policies and communication collateral). Such information might include the fact that AI is used in the system, its intended use, its limitations, and relevant risk assessments. In mid 2023, Singapore also published Advisory Guidelines on the Use of Personal Data for AI.

In February 2024, Singapore also unveiled a new draft framework specifically targeted at generative AI. The concerns that the framework seeks to address include hallucinations, copyright infringement and value alignment. While the draft generative AI framework does not provide direct solutions to these issues, it recognises the need for stakeholders at all levels to collaborate throughout the process of AI model development, implementation and deployment, and proposes nine dimensions involving the use of both ex ante and ex post measures to foster a trusted ecosystem at both governmental and organisational levels. The nine dimensions proposed are accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment research and development, and AI for public good.

In addition to the broader national frameworks, sector-specific regulators have been developing frameworks applicable to narrower audiences. The Monetary Authority of Singapore has launched a project with industry partners to develop a risk framework for the use of generative AI in the financial sector, in addition to its guidance on principles of fairness, ethics, accountability and transparency in the use of AI and data analytics in Singapore's financial sector, released in late 2018. More recently, on 1 March 2024, the Singapore Personal Data Protection Commission (PDPC) issued a set of Advisory Guidelines on the use of personal data in AI recommendation and decision systems, applicable to third party developers of bespoke AI systems. The guidelines clarify how Singapore's data protection laws apply when organisations use personal data to develop and train AI systems, and set out baseline guidance and best practices for organisations to adopt. While the guidelines do not themselves have legal effect, they indicate the manner in which the PDPC will interpret provisions of Singapore's personal data protection laws.

Hong Kong

In Hong Kong, there is similarly no overarching regulation specifically governing AI, although aspects of AI may be regulated by existing laws. A number of guidance notes have been published by government bodies to govern and facilitate the ethical use of AI technology. For example, to assist organisations in complying with the Personal Data (Privacy) Ordinance to protect personal data, the Office of the Privacy Commissioner for Personal Data published the Artificial Intelligence: Model Personal Data Protection Framework (Model Framework) in June 2024, and the Guidance on the Ethical Development of AI in Hong Kong (the 2021 Guidance) in 2021. The Model Framework provides organisations with practical recommendations and best practices for the procurement, implementation and use of AI, while the 2021 Guidance focuses more on broad ethical principles for organisations to consider when developing and deploying AI involving personal data.

In August 2023, the Office of the Government Chief Information Officer published the Ethical Artificial Intelligence Framework (Ethical AI Framework). The Ethical AI Framework was originally developed for internal adoption within the Hong Kong government to assist with planning, designing and implementing AI and big data analytics in IT projects or services. It is now also available for general reference by organisations when adopting AI and big data analytics in IT projects.

In July 2024, the Hong Kong Intellectual Property Department released a consultation paper on modernising the Hong Kong Copyright Ordinance to keep pace with the rapid development and prevalence of AI, and to ensure that Hong Kong's copyright regime remains robust and competitive.

China

China is a frontrunner in the regulation of AI in Asia. While there is currently no general or overarching AI law in China, the regulators (the Cyberspace Administration of China in particular) have in recent years introduced mandatory technology-specific regulations and measures to address the risks associated with different aspects of AI. Unlike the frameworks in Singapore, these regulations and measures have legal effect. They include but are not limited to the following:

  • Provisions on the Administration of Algorithmic Recommendations for Internet Information Services (effective 1 March 2022): These provisions apply to services that push content or make recommendations to users via algorithms, such as Douyin (known internationally as TikTok). Under these provisions, users must be given the option not to be targeted based on their individual characteristics, such as demographic or location information. Algorithmic recommendation service providers are also prohibited from pushing content to minors that may harm their health or violate social morality, such as content promoting alcohol or tobacco, and from setting up algorithmic models that induce users into addiction or excessive consumption;
  • Provisions on the Administration of Deep Synthesis of Internet Information Services (effective 10 January 2023): These provisions regulate deep synthesis technology, such as deep machine learning algorithms capable of producing generated content like deepfakes, and apply to providers and users of such technology as well as platforms distributing such applications. Key obligations include requirements imposed on providers relating to security assessments and user verification, as well as the requirement to report any use of the technology by users to create harmful or undesirable content. The provisions also require providers of deep synthesis technology to label AI-generated or edited content (such as images and videos) with a noticeable mark to inform users and the public of the nature and origin of such generated content.
  • Interim Measures for the Management of Generative AI Services (effective 15 August 2023): These measures apply to generative AI technology and services in China that have the ability to generate content such as text, images, audio, or video, and have extraterritorial effect in respect of the provision of generative AI services that originate outside of China. The measures place strong emphasis on the transparency, quality and legitimacy of both the training data and the generated content. Generative AI service providers are required to respect intellectual property rights and only use training data and foundational models that have lawful sources. Service providers are also subject to content moderation requirements to address illegal content and illegal use of their services, such as the removal of such illegal content and suspension of services to users in violation, and are required to report such illegal content or use to the relevant authorities. Service providers must also employ effective measures to increase the quality, accuracy and reliability of both training data and AI-generated content, and establish convenient and transparent portals for complaints and reports from the public. Other measures to protect minors and the confidentiality of users' input data are also imposed. On 29 February 2024, the National Information Security Standardization Technical Committee (TC260), the leading standards body for digital technologies, issued TC260-003 Basic Security Requirements for Generative AI Service to provide organisations with practical guidance on compliance with these interim measures.

Certain higher-risk providers of AI systems are also subject to heightened regulatory compliance obligations (including the carrying out of security assessments and algorithm filings) as a result of their ability to disseminate information to large groups of individuals. The current AI governance regime in China appears to target specific issues arising from AI technology while still promoting the development of AI in all industries and fields, with the burden of regulatory compliance placed largely on AI service providers as the gatekeepers for the security and quality of their AI services. This is especially evident from TC260-003 Basic Security Requirements for Generative AI Service, which sets out comprehensive requirements for service providers to follow when conducting security assessments.

The current regulatory approach in China can be contrasted with that of the EU. The EU AI Act, conceived as a prescriptive and overarching piece of legislation, prescribes risk classifications for AI systems and imposes maximum fines for different aspects of non-compliance. In contrast, the existing regulations and measures in China target specific AI technologies rather than introducing risk-based classification and regulation of AI services, and provide that violations may be prosecuted in accordance with public security and criminal laws.

Looking ahead, more comprehensive legislation to regulate AI is expected to be introduced in mainland China. AI service providers active in mainland China should take steps to comply with current regulations where applicable and to keep abreast of further AI regulatory developments.

Vietnam

On 2 July 2024, the Vietnam Ministry of Information and Communications released for public consultation a draft digital technology industry law to regulate digital technology products and services, including AI. Under the draft law, AI is proposed to be regulated in the following manner:

  • Ethical principles for the development, deployment, and application of AI will be issued by the ministry;
  • Digital technology products created by AI must be labelled for identification to ensure that the output of the AI systems can be recognised as artificially created or manipulated; and
  • AI systems will be classified according to risk levels based on their impact on health, the rights and lawful interests of organisations and individuals, human safety or property, the safety of national critical information systems, and critical infrastructure. These classifications will be used to implement regulatory measures in accordance with their risk levels.

The public consultation is due to end in September 2024. It comes shortly after the ministry's press conference in May 2024, at which it acknowledged the significant revenue generated by Vietnamese digital technology enterprises providing services and products to foreign markets, and the need to accelerate the drafting and implementation of the digital technology industry law to encourage domestic digital technology firms to do business abroad.

Taiwan

In Taiwan, a draft AI basic law has been proposed by a private Taiwanese research foundation, the International Artificial Intelligence and Law Research Foundation. The draft basic law sets out fundamental principles concerning the research and use of AI, emphasises the need to protect privacy and personal data in the development and application of AI, and proposes the regulation of AI based on level of risk, similar to the draft EU AI Act and the draft US Algorithmic Accountability Act of 2022. It is expected that the draft law will be reviewed by Taiwan's legislature, the Legislative Yuan.

In the meantime, the Financial Supervisory Commission of Taiwan has released guidelines for the use of AI in the finance industry. The guidelines, which do not have legal effect, contain provisions for the management and mitigation of risks in using AI technology, and for the establishment of a review and evaluation mechanism based on financial institutions' own professionalism and resource levels, including reviews by independent third parties with AI expertise.

South Korea

In February 2023, the Science, ICT, Broadcasting and Communications Committee of the South Korean National Assembly passed proposed legislation to enact the Act on Promotion of the AI Industry and Framework for Establishing Trustworthy AI (AI Bill). If the AI Bill is subsequently passed into law following final votes in the National Assembly, it will be the first piece of legislation to comprehensively govern and regulate the AI industry in South Korea. The AI Bill consolidates seven AI-related bills introduced since 2022 and seeks not only to promote the AI industry but also to protect users of AI-based services, fostering a more secure ecosystem through the imposition of stringent notice and certification requirements for high-risk AI services used in direct connection with human life and safety. South Korea appears to have taken a supportive approach towards AI by making it a general principle in the AI Bill that AI regulations must allow anyone to develop new AI technology without obtaining government pre-approval.

Japan

It is reported that on 7 November 2023, the government of Japan set out 10 principles in draft guidelines for organisations involved with AI. The principles are based on rules on generative AI and other matters agreed by the G7 (of which Japan is a member) through the Hiroshima AI Process. Japan has taken a highly permissive approach to the use of copyright materials for machine learning, and it will be interesting to see whether it maintains this stance in the medium to long term.

Japan and the Association of Southeast Asian Nations (ASEAN) adopted a joint statement on 17 December 2023 that included a commitment to greater cooperation between the two jurisdictions on AI governance, including support for the recently published ASEAN Guide on AI Governance and Ethics. Japan has also launched a new AI Safety Institute, which will among other things implement standards for the development of generative AI.

ASEAN

There have also been developments in AI regulation at a regional level. The Association of Southeast Asian Nations ("ASEAN"), which comprises the 10 member states of Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Vietnam, issued the ASEAN Guide on AI Governance and Ethics in February 2024, covering AI design, development and deployment by organisations as well as policy formulation by governments in the region. It maps out a voluntary and light touch approach to regulating AI.

The ASEAN Guide focuses on traditional AI technologies, excluding generative AI, and is similar to Singapore's Model AI Governance Framework. It offers both national-level recommendations for the 10 member states and ASEAN regional-level recommendations. Among other things, it asks companies to take countries' cultural differences into consideration and, unlike the EU AI Act, does not prescribe unacceptable risk categories.

With the exception of China, most countries in Asia have thus far adopted a light touch and voluntary approach towards AI regulation, with a clear intention by most Asian governments to support the development of AI industry and tools. Nevertheless, some countries, including Vietnam and South Korea, appear to be moving towards a more prescriptive regulatory approach to AI. It is likely that more countries will look to implement AI regulatory laws once the effects of the EU AI Act are felt and assessed.
