Regulating AI through Law: The Global Landscape

Presenter Information

Onur Bakiner, Seattle University

Location

Wolff Auditorium, Jepson Center

Start Date

4-4-2025 1:15 PM

End Date

4-4-2025 2:00 PM

Description

Chaired by Darian Spearman, Ph.D. (Gonzaga University)

The enduring trope about contemporary technologies is that they have developed in a regulatory vacuum. Cheerleaders and skeptics alike paint a picture of lawlessness to describe the current state of affairs in the tech sector. Cheerleaders rejoice in the perceived absence of impediments to the endless innovation that has given us the Internet, social media, AI, and more, while skeptics say that the absence of democratic control over these innovations is precisely why we endure their harms.

The characterization of today’s technology landscape as devoid of regulation is misleading. The law permeates every sphere of life in modern society, technology being no exception. A more accurate description of the relationship between law and technology today is that the specific content of laws, directives, court decisions, and policies since the early 1990s has been exceptionally permissive towards the technology sector, especially in the United States, where much of the pioneering technology research and development has taken place. The law has enabled and directed technological developments, though not in ways that protect vulnerable persons and populations by mitigating or eliminating harms. The development of contemporary AI has thus begun not in the absence of law, but rather in the presence of business-friendly and rights-indifferent law.

That landscape is beginning to change, even if slowly. Legislative proposals to regulate AI have proliferated around the world since 2019. The European Union (EU) adopted its AI Act in early 2024. In Brazil, Canada, Chile, China, Colombia, India, Mexico, the United Kingdom (UK), the United States (US), and Venezuela, politicians have voiced their willingness to put guardrails in place to direct the future development of AI through law. However, as of late 2024, none of these bills are in effect, and only those in Canada and China have a realistic chance of being adopted within a year or two.

Even if the adoption of AI-centric laws, i.e., laws that take AI as their only or main subject matter, is slow, AI-relevant laws are already shaping the landscape of AI. Discrimination on the basis of protected categories is banned under the International Covenant on Civil and Political Rights and other international treaties, as well as under the constitutions and statutes of numerous states. The right to privacy is recognized as a constitutional norm in many countries. Laws regulating digital privacy, consumer rights, content moderation, and the market power of online platforms have brought protections for citizens, and along with them, controversies.

This presentation documents AI-centric and AI-relevant bills and laws around the world. What is more, the failure of national governments to pass relevant laws has pushed subnational decision-makers to step in. Most notably, states and cities in the US have enacted a number of laws that regulate data collection and analysis through AI within their jurisdictions. Documenting national and subnational bills and laws is necessarily limited, as AI-relevant norms are found in diverse subfields of the law, and as the number of AI-centric and AI-relevant laws is likely to increase dramatically in the next decade or so. Nonetheless, this presentation’s goal is to provide as complete a global picture of AI-centric and AI-relevant laws and bills as possible.

I identify a number of patterns emerging in global AI regulation. First and foremost, legislative attempts have been increasing in number. The legal regulation of AI is likely to look very different in the 2030s than it did in the 2010s. Second, two separate models appear to have emerged: the EU’s risk-based model offers a cross-industry, technology-neutral logic of regulation, while China prefers industry-by-industry rules. Third, laws and bills tend to seek a balance between sanctions for offending businesses and breathing space for businesses to self-regulate. In fact, even some of the more restrictive laws, such as the EU’s AI Act, ban very few AI applications. Even emotion recognition systems, which have drawn extensive criticism from academia and civil society, are listed as high-risk rather than prohibited: to be precise, the Act bans inferring individuals’ internal emotional states but does not disallow identifying outward expressions of emotion. Fourth, military AI remains unregulated despite vocal calls to ban lethal autonomous weapons systems and to keep a close watch on other military uses. All in all, AI laws and bills envision a rather light-touch model of technology regulation.

Legal regulation is neither a deadweight on innovation nor a cure-all for AI risks and harms. AI laws will push businesses to reassess their conduct. However, it is worth acknowledging that laws do not always function in the ways, and to the extent, their sponsors envision. Given the light-touch approach described above, especially when it comes to military AI, legal regulation is likely to serve as a necessary, but not sufficient, mechanism for addressing AI risks and harms in the future.
