AI Regulation: Different Regional Approaches and a Glimpse of the Future

As artificial intelligence (AI) becomes more ubiquitous, the need for regulation and legislation becomes more pronounced. Several regions around the world have already begun developing governance frameworks, with the European Union (EU), the Gulf states, and Southeast Asia emerging as frontrunners. Here’s how their approaches vary.
Three Approaches to AI Regulation
Regional approaches to AI regulation reflect divergent priorities, legal cultures, and stages of digital maturity. In Europe, the regulatory landscape is firmly rights-based, exemplified by the EU AI Act, which classifies AI systems by risk and imposes strict obligations on high-risk applications. This reflects a foundational aim of the European project: the protection of fundamental human rights and democratic integrity. The EU has experienced several high-profile data privacy and AI misuse scandals that have threatened democratic processes and public trust, and these experiences have produced a regulatory model that embeds human dignity, transparency, and accountability into AI governance. Efforts to use data and AI applications to erode European democracy and sow social division have also pushed the EU to be far more proactive about data than it was before the GDPR. Most recently, the EU has published its forward-looking General-Purpose AI Code of Practice, which focuses on transparency and copyright as well as safety and security.
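To make the Act's tiered logic concrete, here is a minimal illustrative sketch in Python. The four tier names follow the Act's published risk categories, but the example use-case mappings, the obligation summaries, and the `obligations_for` helper are hypothetical illustrations for this article, not legal classifications.

```python
from enum import Enum

# Risk tiers named in the EU AI Act; the obligation summaries are paraphrased.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, logging, human oversight"
    LIMITED = "transparency duties, e.g., disclosing that users face an AI system"
    MINIMAL = "no new obligations"

# Hypothetical use-case-to-tier mapping, for illustration only.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI triage in a medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Summarize the obligations attached to a known example use case."""
    tier = EXAMPLE_CLASSIFICATIONS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_CLASSIFICATIONS:
    print(obligations_for(case))
```

The structural point is that obligations scale with the tier, concentrating regulatory attention on the highest-risk categories.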
The Gulf states are developing AI and data governance legislation appropriate to their needs and respectful of their long-standing expectations of personal privacy. The legal and policy frameworks under development prioritize digital transformation and economic diversification while gradually layering in ethical standards, sectoral codes, and capacity-building. Governments in the Gulf are supporting AI through national visions, investment zones, and ethical charters rather than binding regulations, and they have actively participated in developing international principles such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence.
Southeast Asia has tended toward a balanced model that blends industry co-regulation with practical governance guidance. Frameworks are designed to be adaptive and agile, promoting innovation while embedding explainability and oversight. These trends highlight the absence of a one-size-fits-all approach. We are, however, seeing increasing convergence on common principles such as fairness, transparency, and accountability, and these shared values offer a foundation for future interoperability even if national enforcement mechanisms remain diverse.
Where Do We Go From Here?
Now that core principles of AI governance, namely accountability, human control, and transparency, are generally accepted, organizations must begin embedding those standards locally. Adoption of AI management system standards such as ISO/IEC 42001, which formalize responsible risk management, will likely continue to become the norm both internationally and within organizations. This is not to say that such standards are perfect. Indeed, I’ve argued in recent research that ISO/IEC 42001 needs amending to require continuous rather than periodic audit, since the threats AI poses to organizations go far beyond what ISO/IEC 27001, a standard designed for risk management of non-AI technology, was built to address.
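The difference between periodic and continuous audit is easiest to see in a sketch. The Python below is a hypothetical simulation, not anything drawn from ISO/IEC 42001 itself: `check_controls` stands in for whatever control checks an organization runs, and the event stream stands in for the AI lifecycle (retraining, new data sources, prompt changes).

```python
from typing import Callable, Iterable

def periodic_audit(events: Iterable[str],
                   check_controls: Callable[[str], bool],
                   every_n_events: int = 90) -> list[str]:
    """Periodic model: controls are checked only at fixed intervals, so a
    transient failure can pass unnoticed between scheduled audits."""
    findings = []
    for i, event in enumerate(events, start=1):
        if i % every_n_events == 0 and not check_controls(event):
            findings.append(f"found at scheduled audit, event {i}: {event}")
    return findings

def continuous_audit(events: Iterable[str],
                     check_controls: Callable[[str], bool]) -> list[str]:
    """Continuous model: every lifecycle event triggers an immediate check."""
    return [f"found immediately, event {i}: {event}"
            for i, event in enumerate(events, start=1)
            if not check_controls(event)]

def is_safe(event: str) -> bool:
    """Hypothetical control check used only for this simulation."""
    return "unvetted" not in event

# Simulated lifecycle: one risky retraining event buried among routine ones.
events = ["routine"] * 10 + ["model retrained on unvetted data"] + ["routine"] * 100

print(continuous_audit(events, is_safe))  # caught at event 11
print(periodic_audit(events, is_safe))    # missed: the scheduled check lands on a routine event
```

In this simplified model, the periodic auditor inspects only the state at the scheduled moment, which is precisely the gap the amendment argument targets: AI systems change between audits in ways that periodic review was never designed to catch.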
Larger organizations will see more AI governance professionals and ethics boards emerge, while smaller organizations will need more training and a better understanding of AI adoption to adhere to local and global requirements and ensure their own responsible use. Organizations and regulators don’t have unlimited resources, and their focus will naturally be on ensuring transparency and accountability for critical AI. High-risk AI uses, such as in healthcare, will require stricter and more detailed oversight, while lower-risk innovations can face lighter-touch rules.
**********
About the Author:

Jon Truby, Ph.D., is a Visiting Research Associate Professor in AI and Technology Law at the Centre for International Law (CIL), National University of Singapore (NUS), where he leads research in AI and technology law. Dr. Truby is Chair of the ILA Committee on AI and Technology Law, a member of the OECD.AI Expert Group on AI Compute and Climate, and a member of the CIL Peace Project. He has served on several AI policy and advisory committees, including the UNESCO Group of Friends on the Ethics of AI. He has also presented his research on digital decarbonization to a panel at the UN General Assembly and has led various grant-funded research projects and publications on technology law.