Soft Law Approaches to AI Governance

July 7, 2021

In this Artificial Intelligence & Equality podcast, Senior Fellow Anja Kaspersen speaks with Gary Marchant and Carlos Ignacio Gutiérrez of Arizona State University about their work characterizing soft law programs for AI governance. What role do these programs play in managing AI applications?

ANJA KASPERSEN: Welcome to this online conversation/podcast, which today will explore the role of soft law in governing emerging technologies. I am very pleased to be joined by Gary Marchant and Carlos Ignacio Gutierrez from Arizona State University, who will share insights from their work leading scholars in law, governance, and artificial intelligence (AI) in investigating soft law governance for AI as an alternative to traditional legal and regulatory frameworks, which are often perceived to hinder innovation and can quickly become outdated.

Before we get started with today's conversation, a few words about our guests. Gary Marchant is a Regents Professor of Law at Arizona State University and director of the Center for Law, Science, and Innovation. His research interests include legal aspects of genomics and personalized medicine, the use of genetic information in environmental regulation, and risks and the precautionary principle in the governance of emerging technologies such as nanotechnology, neuroscience, biotechnology, and artificial intelligence. Prior to joining Arizona State University, Gary was a partner at the Washington, DC office of Kirkland & Ellis, where his practice focused on environmental and administrative law and regulation. He is an elected lifetime member of the American Law Institute and a fellow of the American Association for the Advancement of Science.

Carlos Ignacio Gutierrez is a governance of artificial intelligence fellow in the Center for Law, Science, and Innovation at Arizona State University. His research interests focus on examining the soft and hard law approaches employed by government and non-government stakeholders in the management of AI and its effects on society. Gutierrez was formerly with RAND.

Gary, first of all, the work that both you and Carlos have been involved in for some time now—going through various phases, and you will speak more of that—focuses on this concept of soft law. For all of our listeners who may not have insight into what that is, what is soft law, and why is it so important in the context of regulating emerging technologies, including artificial intelligence and machine learning systems?

GARY MARCHANT: Soft law is some kind of substantive commitment or obligation that is not directly enforceable by government, so it can include a very broad range of different types of things. It can be self-regulation by a company; it can be a code of conduct agreed to by a professional association, a network of companies, or a trade association; it can be partnership programs with various types of non-governmental organizations (NGOs); and it can be other types of voluntary programs that involve government. We will see when we present our data that government is a very important player in soft law. Even though we traditionally think of governments as hard law actors, they actually play a big role in the soft law of a lot of different emerging technologies, recognizing that it is an essential part of the governance spectrum.

We are not saying soft law is the be-all and end-all, but it is a part of how we deal with technologies, and the reason is that it is so quick to adopt. We can put it in place very quickly. Government regulation is a longer process and has to go through a lot of steps. Soft law governance can be adopted and revised on the go, which is very important for a fast-moving technology like AI.

ANJA KASPERSEN: Thank you, Gary, for outlining what soft law means. I thought it would be a good way to start, since we are going to be referring to that concept a lot throughout this conversation.

You have together been working on this rather big project, outlining or reviewing 634 different soft law "programs," as you call them, or soft law initiatives to regulate AI. Can you tell us more about this project? Going to you, Carlos.

CARLOS IGNACIO GUTIERREZ: Yes. This project was born out of an initiative that Gary began about two years ago studying soft law in three contexts—past, present, and future. This output is the present. What we wanted to do was to identify all the places where soft law was being used to govern artificial intelligence, its methods, and its applications regardless of the issue, the institution, or the jurisdiction.

What we did was compile this database. There are other incredibly good sources of information on tools to govern AI. The Organisation for Economic Co-operation and Development (OECD) has its website, and many studies have been done by other institutions on principles of AI, but no other work has looked at all the types of soft law programs that Gary described: principles, guidelines, codes of conduct, and private-sector standards. We published this maybe a month or two ago, and the main objective is the following: to compile these programs and find the trends in soft law. How is it being used? Where is it being used? How are individuals creating these programs, and for what purpose?

Our hope is that with this database—which is readily available to anybody in the public—if any individual or organization wants to create their soft law or even amend it, they don't have to do the research that we already did to identify programs on ethics, meaningful control, or autonomous weapons systems. We have done all that for them. They can take the best practices and create these programs in a more informed manner.

ANJA KASPERSEN: Reading your reports and articles outlining the findings of this work you were referring to, Carlos, I was really struck by what you said about the role of governments in soft law programs across all the areas reviewed. Soft law is often perceived to be the opposite of government regulation, but in fact you found that governments were the primary participants in most of these programs, followed by industry and nonprofits. What is this telling us?

CARLOS IGNACIO GUTIERREZ: As Gary mentioned earlier, soft law is generally viewed by some individuals or organizations as private-sector self-regulation. What we found is that government entities at all levels—national, provincial, state, or local—use soft law programs to guide stakeholders or to inform them about their expectations. There are several reasons why governments use this: either to anticipate hard law, to let stakeholders know that hard law is coming, or to provide guidance as to the expectations that an entity has.

About 40-something percent of programs are created by government institutions, but it is really interesting to find that there are regional differences. The most interesting one is the difference between the United States and Europe. In the United States you can see that there is equity in participation between government, private sector, and non-profits. It is about 20-something percent for each sector, and then the rest is multistakeholder programs.

In Europe that is completely different. What we find is that the vast majority of programs are government-led, and we explain this by the philosophy of how these two regions work. Europe works a little bit more on a precautionary principle approach, whereas our government takes a more reactive approach. We have seen this with previous issues—for example, with data protection and the General Data Protection Regulation—where the United States is more hands-off.

The United States, as you know, does not have many federal statutes on issues where Europe does, such as data privacy. That is left to particular sectors—such as fair housing or other areas where data has to be used in particular ways, or concerning individuals below the age of 13—and several states, including Illinois, Texas, and California, have created biometric laws. But the United States is more hands-off than Europe in that respect, and that is what we see in the regional differences. And, yes, you are right—as Gary mentioned—government is the largest participant of all entities in the creation of soft law, which also surprised us.

ANJA KASPERSEN: Gary, from your side?

GARY MARCHANT: Yes. Again, it's a really diverse mix of programs. Some are industry programs, some are individual companies, some are NGOs leading the way, but for many of them the government is playing a key role.

I got tuned into this when I was working on nanotech about 15 years ago. We had an off-the-record meeting under the Chatham House Rule with all the federal agencies of the United States to talk about nanotech regulation and the role of hard versus soft law, and I was really surprised that at that meeting every single agency—I think we had several federal agencies—said: "In the short term we are going to have to rely on soft law. That is the only thing we are going to have. For us to adopt regulations we have to have definitions, we have to have measurement methods, we have to have all of these different things in place, and it's going to take us ten years to get there. So for the next decade we are going to be primarily relying on soft law with existing regulation to govern this technology."

That was a wake-up call to me because it made me realize that this is a really important part of the ecosystem for governance of emerging technologies. We can't just let it go on its own. It is going to be carrying a lot of the water for ensuring that technologies are used in a responsible way.

Now we get to AI, and we see the exact same thing. It has come very quickly, it has a lot of different applications, a lot of players are developing it, and we are already starting to see concerns and issues arise about biased algorithms, unsafe systems, unreliable systems, and so on. How do we govern those? Eventually we are going to have more and more hard law government regulation. We are not against that; it is an important part of the ecosystem again, but it takes time, and it is usually going to be adopted incrementally, a step at a time. Maybe in the United States we would start with facial recognition, but it is unlikely we are going to have the "AI Regulatory Statute" or the "AI Regulatory Agency." People propose those types of things, but they never actually get adopted. We have seen through a whole series of technologies now that same pattern: more of an evolution rather than some revolutionary new regulatory system.

Europe is a little bit different. They have put in place proposed regulations, but it is going to be two or three years before those are binding. There is a lead time, it has to be finalized, and even when it is, it does incorporate a lot of soft law within it. So even in the European system, which does have a greater emphasis on government regulation, we are still going to see soft law playing a key role.

So one of the issues is: How do these two things dance together? How do soft law and hard law interact? When is the right time to rely on soft law? When is the right time to rely on hard law? How does that change over time? Can soft law morph into hard law with experience? The interaction between these two types of governance is really critical, but they are both essential, I think, for responsible governance of a technology.

ANJA KASPERSEN: Building on what you just said, Gary, I think you were suggesting that treating government regulation as a panacea to mitigate unintended harm coming out of these technologies is a failed approach in this case. But as we look at planned comprehensive regulation versus what you are referring to as a more incremental approach, what will be the impact of the two? You spoke about evolution. Is technology going to evolve differently depending on which approach we go for, and what will be the impact of those two approaches, the incremental versus the comprehensive?

GARY MARCHANT: It's going to be an interesting experiment to see how these different systems play out because they are dealing with a lot of the same technologies in different parts of the world. It is not just the United States and Europe obviously. We also have a lot of other countries around the world that are active in this space—China, Korea, and many in Asia; India; even in Latin America there are countries now being quite active in this area. So each of them is going to be trying their own national systems of how to regulate or govern this.

It is very hard to regulate these technologies at the international level. Something that Wendell Wallach, who you work with a lot, and you have touched on a lot is the difficulty of international regulation. So we really do have national regulation leading the way, with different approaches based on the different political, economic, and social systems in these countries. So we will see how it plays out.

We see something like biotechnology, where the Europeans took a very heavy-handed approach, and they basically have no biotechnology industry. They are very conscious of that, and they don't want to do that with AI. Their vision is to use their regulations as something they hope will advance the technology, because if it is responsible and seen that way, it may be more successful. Other places think the regulations are going to repress the technology and that entities will move out of Europe. This is an experiment that remains to be seen. The point is: against the background of these different hard law systems and different jurisdictions, how do we fill in the interstices with soft law? So soft law may play somewhat of a different role in different jurisdictions.

One of the great things about soft law, though, is that it is inherently international. Because it is not bound to a specific regulatory agency or jurisdiction, it can apply internationally. It can be an International Organization for Standardization (ISO) or an Institute of Electrical and Electronics Engineers (IEEE) standard, for example, applied at an international level. So those kinds of things might be very helpful in providing a level playing field, a little bit of a baseline of what we would expect an entity that is developing or using AI to follow, regardless of what their national regulations are. The national regulations might be at the fore, but there might be other things that should be done in addition to that.

CARLOS IGNACIO GUTIERREZ: Also to add, we found soft law in our database in about 65 jurisdictions, ranging from Argentina to the Vatican, meaning that even though the main players are higher-income countries, any institution or government in any part of the world can create soft law with ease, and that is the main benefit of it.

ANJA KASPERSEN: Building on what you both just alluded to—the role of industry, standards in industry, and standards organizations in particular—there seems to be a growing backlash against industry-led initiatives, often perceived to be ethics-washing efforts rather than genuine attempts to regulate what I think we can agree is a highly unregulated and fluid industry. This undermines confidence in self-regulation, something that industry itself has often pioneered—look at chemical companies or the biological industries, where there has been a very high degree of self-regulation. But around issues such as AI, this seems to be less the case. What are we looking at here? Is there a lot of ethics washing going on, what is this backlash rooted in, and where do you see it going?

GARY MARCHANT: I think we are at a time where we have seen a lot of examples of corporate malfeasance, error, or just sloppiness—obviously the Boeing example, Facebook and the Cambridge Analytica thing. We have seen a whole series of corporations, like Theranos, acting irresponsibly or outside of what they say they are doing, and so there is a lot of distrust of industry today, and that has unfortunately been earned because of some of these actions by some companies. Even Google firing some of their ethicists creates a lot of distrust, I think, in companies.

For a company then to say, "Here are our principles of ethical AI," and expect people to be satisfied by that just isn't going to work. So that is really the whole focus of our project. We recognize that soft law is an essential part of the governance sphere. How can we make it more credible and effective?

One of the things Carlos has done—he has done the heavy lifting on this huge database that he has now made available—was to look at what is in these 600-something different measures in terms of implementation. Is there some way to create an implementation and enforcement mechanism, even if it is not government enforcement, to make these more effective and more credible? We found that only about a third of the programs in our database had anything at all. Of course, the ones that do are not all equally effective. So what we really want to do is look at those and figure out: What are the things you can add to a soft law program, beyond just principles, to create more effectiveness and credibility? What are the tools?

So that is what we are doing in the third phase of our project, looking at soft law 2.0. It can't just be a company unilaterally saying: "Here are our principles of ethical guidance." It is good that they do that, but that is not going to carry the weight. That is not going to convince and persuade people to trust that company. So what are the other things that could be put in place to help reinforce those commitments and make sure they are actually applied?

There are a number of mechanisms we are exploring to do that, including things like: If you sign onto a standard and don't comply with it, you could potentially be liable; it includes insurance companies enforcing those standards; it includes professional associations and funders enforcing those guidelines; and it includes bringing in other stakeholders—not just the companies but NGOs, think tanks, and so on—that have a more independent perspective and who weigh in, contribute, and have an equal say in adopting those principles, standards, or whatever they are.

So there is a whole list of mechanisms we are looking at in the final stage of the project, which we are on right now: What are the tools that could be used to go beyond ethics washing? Because you are right, there is this wide perception that that is what these are, and there is no question that some of these programs have no net effect because they have no follow-through.

I should also say that the first part of our project, which has now been published, was a series of papers we commissioned from scholars to look at the history of soft law in other technologies—the biosciences or life sciences, nanotech, and the information and communication technology (ICT) environment. Those case studies are really interesting because they all sort of come out the same way: a really mixed record, with some examples of success but also many examples of failure. So we can learn a lot from both the failures and the successes about what makes a soft law program work.

CARLOS IGNACIO GUTIERREZ: To add to what Gary said, one of the things that we found in the last article in that Jurimetrics special issue is that there are essentially three reasons why the successful programs work, and it is because incentives align in one of three ways:

The first one is that government creates this soft law to set expectations, and organizations follow because if they don't, there are consequences. Hard law will come.

The second is that entities create soft law to avoid hard law. There are various cases of this—for example, the Entertainment Software Rating Board, a labeling system that the gaming industry created to avoid congressionally mandated labeling of video games. That is the second one.

The third one is self-interest. Organizations create these soft law programs and implement them because they know it will benefit them in some way or another: it will provide them reputational benefits, or it will provide them protection against litigation—as Gary said—if they implement these standards. It is always something that benefits them. These three incentives need to be thought about when creating or designing soft law programs, because if the incentives don't align, the programs won't be implemented.

ANJA KASPERSEN: Building on what you said, Carlos, that some might have an appetite for soft law to circumvent or avoid hard law, is there a risk then that hard law falls behind, lags, and is never finessed enough to be able to govern these technologies? You looked at 634 programs up to, I think, December of 2019—of course quite a few have come out since—and that is a lot of programs and initiatives that have been launched. You could say that there is a very strong appetite for these soft law instruments. Is that because of the void in hard law? Is it because the incentives are not there? What do you think is the main reason why we are at this soft law stage of how we govern new and emerging technologies, particularly AI?

CARLOS IGNACIO GUTIERREZ: The way Gary and I study governance is that we are cognizant of the pacing problem, and the pacing problem is that government always takes more time than technology to react, and that is just because technology requires money and ideas to happen. Hard law governance requires much more. It requires that legislators agree, that the executive agrees and signs, and that the interests that are out there also agree. It takes so many individuals for legislation to occur that by definition it needs to be slower.

The problem with AI is that it is so encompassing in the methods and applications it touches upon. Right now we are ending the learning stage of AI. A couple of years ago, when you talked about AI, people would usually say: "I don't really understand it yet," or "Where is it?" or "Where have I seen it?" We are ending that phase already. Everybody and their parents know about AI because they have a smart speaker in their house that talks to them, or they use it intuitively, or their kids have shown it to them. Now we are going into the phase where there is more public interest and legislators are paying more attention, so this pacing problem will always exist.

What soft law allows us to do is find ways to substitute for or complement hard law that is not there—and it is not there not because there is no appetite for it but because it is very difficult for it to exist. The reason soft law has been so prevalent is that increasing gap.

About 90 percent of soft law programs were created between 2016 and 2019. Mind you, we found the first soft law program in our database, created by a foreign government, in 2001, but between 2001 and 2015 or 2016 maybe 20 soft law programs were identified by our study. Our study has plenty of limitations. We focused mainly on English-language programs, though we did use Google Translate for programs we found that were not in English. But the things that we found were very recent, which means that starting in 2014 or 2015 these programs began to take shape in the minds of people, and we do not foresee any stopping in the creation of this soft law. I would not be surprised if 200 new programs were created between December of 2019 and now, but unfortunately governance of it all through hard law will be very slow, and, as Gary mentioned, the European Union will take a couple of years to finish their draft AI regulation, which is completely normal.

ANJA KASPERSEN: Do you think, Gary, that the backlash we just talked about and the prevalence of soft law instruments and initiatives—which, as Carlos was saying, have mostly appeared from 2016 on—do you think there is a link there? What has prompted this massive evolution in standard-of-care-type initiatives?

GARY MARCHANT: AI has come forward very quickly, as Carlos is saying, and we are already seeing examples of sloppy, uninsightful, and unthinking uses of it that do present real problems to people, whether they are used in hiring and turn out to have biases built into them, used in a medical context and turn out to be unreliable and give false results, or start to displace people from their jobs. That's one way to really get people fired up and concerned about a technology.

So we just have so many different applications, and because it has moved so fast without any regulation essentially, we are seeing a lot of things out there that are not well thought through, that are not properly tested and validated, and that are not accurate or not objective. Because of that, we are seeing this quick backlash develop. I think that's healthy. It's good that people are concerned and are identifying problems with the technology.

I was a geneticist and was very involved in the biotechnology debates. The objections resulted in a kneejerk rejection of the technology altogether, and to me that was very unconstructive on both sides. The industry was not being cooperative, but the opponents were very extreme and wanted to reject the technology. The good thing about AI is I don't see that happening. I see companies are starting to realize they have to be more careful.

Obviously there is a range in how they are responding, some better than others, but there is this growing recognition, I think, among the developers and users of AI that they have to be more thoughtful and responsive, and soft law can help them do that and direct them to some pathways for how to do that. And the critics, I think, are being much more constructive with this technology, recognizing legitimate concerns and problems but also coming up with solutions or things that should not be done, and trying to provide that kind of input.

The question is: How do these forces interact? What are the forums or the opportunities for these issues to be debated and discussed? It's great to have webinars like this, and I know the Carnegie Council initiative that you and Wendell are leading is trying to think through how to create those kinds of forums, but there are a lot of different things going on. There are so many conferences, webinars, meetings, and committees, which is all good, but it is hard to keep track of it all. Still, I think it is a positive, vigorous response, as there should be, to this technology early on—to put in some flags of where companies should not go, or the processes or assurances they need to provide before they go there.

I think it's actually quite a healthy ecosystem right now because there is so much focus and concern. It is better than letting this technology totally prevail everywhere, like social media and some of these other technologies that have just proliferated without thinking through some of the bigger issues. We are addressing them early on, which I think is really healthy. We don't have perfect mechanisms to solve them, obviously, but it's an experiment, and there are a lot of different experiments going on.

ANJA KASPERSEN: I like the upbeat angle too—that we are ahead of time—because some of us, of course, feel that we may be chasing after the bus a little bit when we talk about AI governance. As you said, any discussion around any type of technology is also a discussion about ethics and tradeoffs. Tradeoffs are an issue that my colleague Wendell has been focusing on a lot as well. Based on your work, is there a consonance between public good issues and commercial incentives? I am asking this also because public good is of course a very wide topic: it differs whether you are looking at AI from the perspective of gaming and strategic advantage in the military domain or at applications in the health domain. There is a very wide set of applications in the government domain as well. Are you seeing these sorts of tradeoff discussions happening in your work?

GARY MARCHANT: There definitely are these tradeoffs that are very prevalent and very important. How they are being addressed and whether they are being addressed, I think, varies across these various spectrums as you identify where AI is being adopted.

For example, it might be quite profitable for a company to use various types of surveillance technologies because it can sell them to a lot of governments or other entities in ways that are basically not in the public interest. So there are these clear differences between private economic benefit and the public good, and that is where there needs to be governance, whether it is government regulation eventually, for sure, or something in the interim to fill that gap. That is what creates the distrust and the problems: when we find that our data is being collected in ways we didn't think it was and being used against us in ways we didn't anticipate—for our employment, our health, and so on.

There are other areas where there is more of an alignment—certainly things like better detection of cancer. We all benefit from that. I had my own life saved by an AI system last summer when I had a sudden heart issue I didn't even know about that my Apple Watch alerted me to, and I got to the emergency room, and the doctor said: "This saved your life. You would have died tonight." And I had no idea. So I am here today because of AI, and we can all benefit from those kinds of applications. It is clearly a public good, but there are also questions around data—Who owns the data? What is the data used for?—because this technology is so driven by data. There are still tensions, no question, but there is more of an alignment.

Then, when you get to the military applications, those become even more difficult to deal with because you have national security interests and the community focusing on that, and they don't want to be second to anybody, whereas I think a lot of people—people like Wendell have led the way on this—are concerned about where we are going if we are going to have machines starting to kill people. We don't want to live in that kind of world, but are we inevitably going down that pathway because of these national security tensions and rivalries, and how do we stop that? Those are really difficult issues. We have not done a lot on the military context, but those issues I think are really troubling for a lot of people.

ANJA KASPERSEN: Thank you, Gary.

Going to you, Carlos: much of your research, both the report and related articles—and I love this notion of jurimetrics; I think that's the name of one of the journals that you edited a special issue of, so probably we need more jurimetrics going forward—examined past soft law approaches to other technologies, as Gary was referring to, to see what lessons can be drawn for governing current or future deployments of AI-based technologies and systems. Can you share with us what those lessons are and why they are important, especially keeping in mind that fundamentally AI is what we call a "feature" technology that allows us to converge other traditional and cutting-edge technologies in new and sometimes unexpected ways? What can we learn from these other domains and fields that you have looked at, and how did that play into the 634 programs you reviewed?

CARLOS IGNACIO GUTIERREZ: That's a good question. These other emerging technologies are technologies that came up in some cases several decades ago. What we were trying to do is understand how soft law was used with those particular cases. We focused on nanotechnology, biological sciences, ICT, and medical technologies, and we found, as Gary mentioned, many successes and many failures. The main learning from that was that soft law is not a panacea but that there has to be alignment of incentives. The final article in that Jurimetrics issue is about these incentives and trying to foresee how that would play out in AI.

One of the more important things we did with the database, based on lessons from the Jurimetrics issue, was to add this question: Can we identify publicly disclosed implementation or enforcement mechanisms? We found those in only 31 percent of programs.

That is a very interesting finding. These are only publicly disclosed mechanisms. It might be the case that some programs have enforcement mechanisms that are not on their websites, that we are not privy to because nobody bothered to publish them, but we only found them in 31 percent, and we divided those into two areas: they are either levers or roles, and they use either internal resources or external resources.

That gives a grid of four quadrants, and what it basically says is that a lever is an instrument that can be used to implement soft law. Is it a committee of individuals that gets together and decides something? Is it the use of indicators—indicators used to say, "Hey, this was successful, and because this was successful it is going to have repercussions throughout the company"? Is it an AI champion, an individual who is responsible for making sure that AI is used in accordance with the company's commitments?

Then the roles are the human resources needed to make these levers happen. We are not management consultants—we don't claim to know how organizations work—but these two dimensions of how soft law is being used are useful for institutions when they are trying to use soft law 2.0 and create their own program: How do we do this? How do we make it effective and credible, so that when outsiders look at these instruments they don't say: "Oh, these are just principles. They are ten ideas that maybe will be used, and maybe they won't, in the foreseeable future"?

I think we have Wendell's hand up with a question.

ANJA KASPERSEN: Good. Wendell, join us.

WENDELL WALLACH: Thank you very much. This has been great. I am wondering if we can focus a little bit more on the weaknesses and the ways to strengthen soft law.

First of all, as you both know so well, if you bring up soft law in a context of European policy planners, they are very quickly dismissive. Perhaps you can speak a little bit to what you think they misunderstand or do understand in the dismissals they are putting forward?

The second question is: I am wondering whether soft law can be subverted, whether the introduction or the illusion of soft law initiatives might be used by industry players who want to thwart any effective form of regulation.

The third part of my question: You have already given some examples of effective initiatives that have enforcement behind them, and I am wondering, if you had the capacity to structure or influence this, what kind of new mechanisms or incentives would you like to promote to make soft law initiatives more effective?

GARY MARCHANT: Great questions. It sort of covers our whole project in three questions.

The first one: there is definitely a philosophical difference between Europe and the United States, as you mention. If you suggest that industry should govern this in Europe, you get huge pushback that that goes against democratic legitimacy, and yet Europe is a huge user and applier of soft law.

Part of the problem is this perception that soft law means industry self-regulation. That is one type of soft law, but there are many others as well. In fact, the EU regulations that were just proposed do involve things like codes of conduct, particularly for the lower-risk technology applications, so they are incorporating soft law into their governance scheme just like they did for nanotech and for biotech. It's part of the system; it just plays a somewhat lower role relative to government regulation than it does in the United States.

The United States is very strongly adhering to a soft law approach. Under the last administration it was expressly adopted, and under the Obama administration, for example, it was as well. In their reports they recognized that hard law would be premature.

A lot has changed in the years since that Obama administration white paper, but you are still going to see a lot of reliance on soft law in the United States just because of how bureaucratic our government process is. Adopting a regulation takes years. Our statutes don't cover a lot of the issues, so maybe no one even has jurisdiction. There are just a ton of issues in terms of: Are we going to have any type of comprehensive regulation any time soon? I don't think we will. I think that is just a fact.

We will have progressively more pieces of regulation at both the state and federal levels over time, and 15 years from now it is going to look quite different than it does now because there will be a lot of regulatory programs in place, but again they won't be perfect. Regulation has never completely solved any problem. It's not a panacea, just like soft law is not a panacea.

I was a government regulatory lawyer for ten years in DC, and every program I worked on had all kinds of flaws. It was used to keep out smaller entities, or it had huge noncompliance that the government couldn't possibly police. There were a ton of issues and problems. Programs got outdated; they were regulating something that no longer existed rather than the new applications. There are just huge problems with regulatory systems. So there is no perfect answer here. It is just a matter of the mix, and it may be different in different jurisdictions.

In terms of whether entities can misuse or exploit soft law—absolutely. That is the ethics washing that Anja was talking about, and there is no question that it occurs. We saw it in the other technologies we looked at, and we see it in AI as well: "If we just put out this set of great principles, we can now say we're ethical and responsible with AI and go on our way." Obviously that is just ethics washing. That is just greenwashing; it is saying something but not following through.

One of the things we are trying to emphasize in our project is that that is not enough. Soft law cannot just be the principles. It has to be: How do you implement them? What are the implementation mechanisms? How can you provide assurance to entities—the public, governments, and so on—that these are being applied? So that's really what we're looking at.

The third part of the question is: What are the tools out there, and what can we learn about them? We have a bunch of ideas, but we are really interested in talking with other people who have ideas—things like the supply chain. When I was a regulatory lawyer I saw some companies that got really concerned about their products being misused and started to enforce soft law mechanisms through their supply chain. Could we start doing that with AI? Could some of the developers insist that the users follow through on this?

I think the issue of standards is very promising in that way. Carlos and I—and Wendell, I know you are too—are working on the IEEE standard on governance of AI by entities that develop and use AI. Having a standard like that could be very effective because it can sometimes be legally enforced as a standard of care, at least in the United States.

There is another mechanism. The Federal Trade Commission (FTC) can enforce against companies that commit to doing something and then don't follow through on it. Even though it wasn't legally required by hard law—it may have been a soft law program—they said they were going to do it, and they didn't. That is a "deceptive business practice," and the FTC can take enforcement action against it.

Insurers played a huge role in nanotechnology in helping to fill that gap, because government was too slow to regulate—it had to get over hurdles and through hoops in terms of defining things and test methods and all the things that take years for a government regulatory agency to put in place. Insurers were able to put requirements in place instantly, and that had a huge impact on how companies actually handled the technology in many cases.

So there is no one perfect tool, but there is a whole set of them that I think should be analyzed and put together as a package: "These are the things, if you are going to rely on soft law, that have to be part of the concept and not just the principles."

ANJA KASPERSEN: Thank you, Gary. You alluded to it already and it builds on what Wendell also included in his question to you. I come from an arms-control background myself with big treaty law of course being the dominant feature. We all know that compliance and liability measures are difficult even in the best of times.

Based on your experiences—and you mentioned standards as one very important path to go—what are the characteristics that make a soft law program succeed, and what have you found as a common denominator among those that are less likely to succeed? Carlos referred to some of them earlier—lack of implementation mechanisms and these types of things—but what is popping out at you as main findings in this domain?

GARY MARCHANT: Again, they fall into those categories that Carlos identified of levers and champions. If a company doesn't have an incentive to do the right thing, then they are probably not going to. They are going to be motivated by the profit motive, and if they can make more money by selling a surveillance technology—what can push back against that? What can cause them to pause and think more ethically and responsibly and holistically about their technology? What are the mechanisms to do that?

Well, it can be government regulation, but it can also be soft law government influences, the government putting out guidelines or suggestions or codes of conduct. The European Union did that with nanotechnology: "Here is a code of conduct that we think companies should follow." That gave companies something to target, and if they didn't follow that, they were clearly outside of the bounds.

Funders—if they are getting government funding, small business loans or research funding—can put in place restrictions.

Journals can put in place requirements to follow certain things. This is happening, for example, in stem cells. You have to follow the International Society for Stem Cell Research's guidelines, which are soft law, if you want to publish anything to do with stem cells in any of the Nature journals. That has a huge impact on the scientists and the technologists.

Insurance coverage: "We won't cover your liability unless you do this." The potential for liability. Creating an FTC-enforceable requirement that you basically set up that way. These are all levers.

The other side of it is the champions, of having people who have an interest in this. One of the examples we found is that when a soft law program does not have the buy-in of the top of the entity—the CEOs or whatever—it is much less effective, but when it does have that buy-in it is more effective.

Is there worker input? Do the employees of the entity have a fair and safe way to have a say in how the technology is being developed? That also helps to promote compliance and more responsible implementation. Sometimes an ethics committee may be an effective tool.

Who are the people who can really push this to work out? So you have the people, and then the policy levers that combine with them to provide the incentives. When they are not there, you see—predictably—soft law programs fail. As many of the examples in our Jurimetrics series showed, because of these deficiencies it was better for the company to avoid or ignore the soft law, even if it was publicly saying it was going to follow it; in reality it didn't, because it didn't have those incentives. So we have to think about what the incentives of an actor are and what things are going to drive it in a more responsible direction, and try to put those in place upfront in the soft law program. One of our findings is that most soft law programs don't have that. This is why there is a crisis, I think, in soft law: there are just too many principles and not enough implementation.

CARLOS IGNACIO GUTIERREZ: An example of what Gary says is that completely autonomous commercial vehicles do not exist today; they are not commercially available. Many companies are testing them, but what we found in the soft law database is that about 14 or 15 countries have guidelines for developing, testing, and launching completely autonomous vehicles, and these are not regulations. These are governments telling firms: "Hey, this is what we think you should be doing. This is the kind of information you should be reporting to us." But there is no specific regulation at the national level in these countries that tells firms exactly what to do. They are guidelines suggesting actions.

ANJA KASPERSEN: We need a very different level of soft law fluency at the top leadership level.

GARY MARCHANT: Exactly, yes. It is not just the principles. It is much more beyond that. It is a much more multidimensional thing.

ANJA KASPERSEN: I would be curious—both of you referred to standards earlier, and what I learned about standards came through engagement with some of the institutions you mentioned. There is, of course, a lot of competition between different standards bodies as well. I have been doing a lot of thinking about how you make sure standards become more interoperable, especially around feature technologies that are being used to converge others. You don't have the same clear-cut domains that we regulated and created standards for in the past.

How do we move standards forward faster as complements or as ways of supporting soft law programs, and how do we make sure that there is a level of interoperability that may not exist currently, or maybe we don't need it? Will that help us with enforcement and compliance that you spoke of earlier?

GARY MARCHANT: Yes, I think so. It is interesting to see the change in standards over the last couple of decades. Most of the standards in place a decade or so ago—we used to have an annual workshop on standards—were very technical, on things like interoperability, this device fitting that device, and being able to plug into every computer. They were not really about policy or governance so much.

There has been a real shift, and I attribute the shift to nanotechnology, where standard-setting organizations realized that there was a huge gap in the management and risk management of this technology, and we started to see this proliferation of standards on the governance of nanotechnology and on risk management of nanotechnology, things that standard-setting organizations had not historically done. So they have moved into this different role of sort of quasi-regulators when there is this gap, when there is this pacing problem of government regulations, and that is an interesting step.

But one of the problems with standard-setting organizations is who is at the table and who is adopting these standards. Again, though, there has been some interesting evolution in that. For example, I am involved in the IEEE P2863 standard. This is part of a bigger initiative, Ethically Aligned Design by IEEE, which has a whole series of P7000 standards looking at different aspects, and one of the interesting things about it is that you don't have to be a member of the organization to be a participant. So they have opened it up.

Then, one of the few blessings of COVID-19 is that these types of standard-setting meetings used to always be done in person, which really limited who could afford to go. People from developing countries usually couldn't afford it; academics and NGOs couldn't afford it, so participation was very limited. Now, with these meetings open to anybody, online and therefore free to attend, we have philosophy grad students there together with an AI expert from a big company talking about these issues. So it has very much enriched and broadened participation. The unfortunate reality of COVID-19 is that so much is online, but that has actually had the beneficial effect of allowing other voices to come in and be part of the conversation, which I think is really critical to having standards that are really useful.

There is this issue of competition between standards. In nanotech, for example, if you are an entity, you face about eight different risk management standards from different organizations and from the countries or jurisdictions you operate in. How do you deal with eight different standards on risk management? To the extent that they can be made somewhat consistent, that would be great. I know in our process we are trying to talk with the ISO people, Australian standards bodies, and other organizations to share ideas, concepts, definitions, and approaches and to try to get some consistency, because they are probably going to be fairly similar in how you responsibly manage a technology like AI.

But, yes, it is a problem with soft law that you have a proliferation. If you are a company operating in this space and you look at Carlos' data set of 600-something programs, how the heck are you supposed to decide which of these to use? You can't follow 600-something programs; that takes a lot of energy and resources. So how do you pick and choose? One of the benefits of soft law is that anybody can adopt it, but one of the downsides is the flip side of that: you have this proliferation of mechanisms, and that is a real issue.

ANJA KASPERSEN: I do think it is a real issue. In conflict domains we speak about the geography of conflicts and how conflicts can sometimes occur where boundaries and borders were drawn in different historical epochs, leaving spaces in between—no man's lands—because countries and places were created at different times. Sometimes when I listen to some of the debates in AI governance, it makes me ask: Are we at risk of creating varying soft law programs, instruments, hard law, and standards that will essentially create no man's lands in technology?

GARY MARCHANT: Right.

ANJA KASPERSEN: I don't have an answer to it. That's for you two to answer, but it is something that comes up in the back of my mind sometimes when I am listening to these conversations.

Gary, just one more quick question for you, given your background, and then over to Carlos. You mentioned earlier, when you were explaining what soft law is, that it is also an instrument or approach that originated very much out of international law. Given that, what is the role of multilateral organizations? And—this is the question for Carlos—how much did that feature in the broader review that you did: the role of multilaterals and the programs that have come out of multilateral institutions, not all of them hard law but more in the soft law domain?

GARY MARCHANT: As you know, Anja, as someone who has been working in the international sphere for a long time with a lot of experience, international law is hard because you don't have a police force, you don't have a Congress, and you don't have a regulator who can just impose a regulation on the world. We have the United Nations, but it has very limited powers, and countries are not bound by anything unless they agree to it. It is a very different legal context than the domestic one, where our various designated entities can adopt a rule and enforce it against all of us. You can't do that at the international level. That's why soft law really evolved there—to fill that void. We had treaties, but again countries had to agree to those, and a tremendous amount of effort goes into that. So soft law evolved to fill this gap in international governance because it was the only thing we could do. But we can learn a lot from that.

So international organizations, multilateral organizations, are really critical then, because they can take the soft law approaches they have used for years, try to make them work for AI, and try to deal with some of the regional and jurisdictional differences. So I think there is a lot of potential in the question of what the role for international governance is beyond national and soft law governance, and Carlos, I know, has looked specifically at how much of this is international versus national.

Do you want to weigh in on that, Carlos?

CARLOS IGNACIO GUTIERREZ: In terms of multistakeholder programs, about 20 percent of the programs that we found are led jointly by the three sectors—government, non-profits, and for-profits—so there is a sizable amount of activity there, and it is led by institutions such as the OECD and others that have created instruments that have really had an impact on how other organizations view soft law.

The OECD principles, for example, are just that—principles—but they have been adopted by 42 countries. The OECD works with non-profit and for-profit entities to disperse this idea of governance of AI into their corporate governance, and they have done a great job distributing those ideas, being a good node for it.

ANJA KASPERSEN: Mindful that our time is up, I would like to spend the last two minutes asking both of you—some of our listeners are still trying to wrap their minds around the whole soft law approach, what to do, and especially what you both were referring to: how to get implementation right. I think both of you said that without a good implementation mechanism and a demonstrated willingness to enforce it, there are no real approaches to speak of. What would you advise them? What would be one or two key takeaways that our listeners can take from this conversation and your work?

Carlos, maybe to you first.

CARLOS IGNACIO GUTIERREZ: Yes, of course. I think if any of our listeners are trying to create or change their soft law, they should take the time to look at the variety of things that are out there, and they should start with our database. Our database—we didn't mention this before—has up to 107 variables or themes. That means we not only describe the characteristics of a program through the variables; with the themes we also label the contents of each soft law program and classify it in 78 different ways. So if somebody is interested in autonomous weapons systems, transparency, explainability, or any of the other subgenres of information, these are all identified in our database. That is why it is so powerful: you can get very specific really fast. If institutions are looking to improve their processes or their soft law, this is the place to do it.

Gary?

GARY MARCHANT: Soft law is here. It is going to be part of the governance ecosystem for AI for quite some time. Its role will vary over time as more government regulation comes forward, but soft law will remain a critical part of this for the next decade at least. So it is incumbent on all of us to try to make it work better, so that it is not just ethics washing and really does have some impact and some teeth.

There are a lot of creative ideas out there for how to make that happen. Whether it is some kind of multistakeholder initiative, some kind of auditing scheme, or another type of mechanism through insurance or whatever, there is a lot of ingenuity and creativity going on, and even more is needed to figure out how we can make these instruments actually have some practical effect and do what we all want them to do.

ANJA KASPERSEN: It is my understanding that the next leg of your project will focus specifically on the characteristics that make soft law successful and on where industries in the past have failed, so that will be a very useful resource for everyone listening and tuning in.

Super. A huge thank-you to both of you for sharing your time and your knowledge. I think we can conclude that we need better soft law fluency at all levels—government, industry, non-profits, and for-profits—and to get started on building that fluency, go to Carlos' and Gary's database and stay tuned to their work, because more will come on what we need to do to get enforcement, compliance, and implementation right.

With that, a special thanks also to my co-host Wendell Wallach and the team at the Carnegie Council for Ethics in International Affairs for hosting and producing this podcast. My name is Anja Kaspersen, and I hope I earned the privilege of your time. Thank you, everyone.
