Ethics, Governance, and Emerging Technologies: A Conversation with the Carnegie Climate Governance Initiative (C2G) and the Artificial Intelligence & Equality Initiative (AIEI)

December 9, 2021

Emerging technologies with global impact are creating new ungoverned spaces at a dizzying pace. The leaders of Carnegie Council's C2G and AIEI initiatives explain how they work to educate and activate communities around these governance questions.

JOEL ROSENTHAL: Good morning. Welcome, everybody. Welcome to the program on "Ethics, Governance, and Emerging Technologies." Our discussion this morning is prompted by the presence of our fellows from two of Carnegie Council's impact initiatives, the Carnegie Climate Governance Initiative, affectionately referred to as C2G, and the Artificial Intelligence & Equality Initiative, equally affectionately referred to as AIEI. Thank you for being here.

I want to give a special thanks to Janos and Cynthia from the C2G team for being here in person, to Wendell Wallach, who is here in person, and to Anja Kaspersen from the AIEI team. She is joining us via Zoom from her home in Norway.

I am not going to read the impressive bios of each of our fellows. That information is available to you on our website, but I will mention that each of these individuals has been a leader at the highest levels of the academic and policy communities, and they have shown us the best aspects of applied research as well as multiprofessional approaches to global-scale policy challenges, and we are very grateful for their work with us here at Carnegie Council.

This crossover discussion between these two initiatives and these four fellows gives us the opportunity to highlight the Council's distinct mission and purpose. Carnegie Council is distinct by virtue of our Carnegie legacy, our focus on ethics, and our tagline, featuring the word "empowering." Just a quick word on each of these signature features.

The Carnegie name signifies our standing as a civic institution working in the public interest. Independent, nonpartisan, research-driven, and public-facing, Carnegie Council is a good-faith broker in encouraging global cooperation. We believe our focus on ethics is a key leverage point for dialogue, enabling reflections on risks, tradeoffs, ends, and means. By facilitating inquiries that are inclusive, open, and rigorous we believe we can discover common values and interests that will lead to a better future.

What do we mean by "empowering" ethics? We believe empowerment derives from:

(1) Identifying the most critical issues of the day, naming them, and framing them amidst the crowded, noisy, and fractured public debate;

(2) Convening thought leadership to bring intellectual and practical resources to these issues;

(3) Creating new inclusive communities and constituencies around these issues; and

(4) Creating new educational resources through multimedia publishing, social media, and public engagement.

These features explain why the Council has organized its work around impact initiatives like C2G and AIEI. These initiatives are proactive and goal-oriented. They are interdisciplinary and multiprofessional. They seek change by putting issues on the global agenda, creating competent stakeholders to address them, and producing essential resources for educational purposes.

Finally, another way we empower ethics is by working with young professionals through our Carnegie New Leaders program. Far beyond a traditional professional development program, Carnegie New Leaders provides an applied ethics leadership curriculum as well as fellowship opportunities for its members. We believe we are at a generational moment in terms of ethics and governance, and so we are turning to our New Leaders to partner with us in all our work moving forward.

To that end, we have one of our most accomplished New Leaders with us today to moderate the discussion. Ronnie Saha is head of risk and strategy at Facebook. He was previously with the Public Markets and Emerging Markets practice at Deloitte, and Ronnie has been a key person in developing Carnegie Council's strategy and vision plan, so it's most appropriate for me to turn it over to Ronnie to take it from here.

Thank you all for coming. Ronnie, over to you.

RONNIE SAHA: Thanks very much, Joel. Thanks for this opportunity to moderate this discussion. I will also say it's very nice to be back here live for our first live event in some time.

I will provide a quick overview of how this session is going to go. We are going to hear from leaders of our two initiatives that are featured today, C2G and AIEI. We will then have a moderated discussion, and then there will be time for Q&A at the end. We will have microphones for the live audience, and we will be able to field questions online.

Without further ado, we would like to have an overview by the C2G team, Cynthia and Janos.

JANOS PASZTOR: Thank you, Ronnie, and thank you, Joel, for inviting us for this event. It is really a pleasure to be back. Cynthia and I will share a few thoughts about our work, and let's see how it goes.

C2G, the Carnegie Climate Governance Initiative, indeed is associated with Carnegie Council for Ethics in International Affairs. We have a very simple mission. Our mission is to contribute to the development and strengthening of governance frameworks for new emerging techniques like large-scale carbon dioxide removal and solar radiation modification.

We are not here to promote these technologies; we are not here to be against them. In fact, impartiality about whether or not they should be used is a very important factor in our work. It has allowed us to bring all the different stakeholders to the table. We are funded entirely from philanthropy, and we have set ourselves a time limit of working until the end of 2023 because we believe strongly that we will be able to achieve our mission by then.

A simple way to express our mission is to get these issues on the agenda. Once they are on the agenda, then governments will take care of them, and civil society advocacy groups will push or pull in different directions. It is not our business to solve all of the governance issues.

When we talk about governance it's very important to recognize that we don't just mean governments doing something, that is, regulation and things like that. That is included, of course, but governance is much broader. It involves different stakeholders engaging in the process and contributing to decision making, to finding out more information, more knowledge, and so on. Governance is a broader concept in our work.

Very importantly, we work in a catalytic manner. We don't need to build ourselves up, especially because in two years we will be out of business, so our main objective is not to get things done ourselves; rather, we encourage and work with others in a catalytic manner so that they do the work.

What that means in practice is that, for example, a little over two years ago we started working on the idea that the UN Environment Programme and its governing body, the UN Environment Assembly, based in Nairobi, would have an important role to play in the governance of these issues. So we talked to governments, we talked to the Secretariat, the executive director, and out of these discussions came strengthened ideas of what potentially the UN Environment Assembly could decide on this matter, and before we knew it one of the countries said: "Oh, yes, we like this idea. We'll take it on."

The moment the country took it on, it became a government-led process. In that case, it was Switzerland. They developed a resolution, they got partners, and they submitted it to the UN Environment Assembly, and the resolution was negotiated.

It didn't pass, but that's a different story. Our objective was not to get a specific resolution. No, no. Our objective was that it should be discussed, and it was.

What was most interesting was that our work generated a substantial intra- and intergovernmental discussion at that time, and it has continued even without any further action by us. Governments are now talking about what resolution they would wish to bring to the next session of the UN Environment Assembly in February of next year. That's a good example of how we work.

What we now plan—and this is our strategic objective for the remaining two years of our life—is to bring the issue of the governance of solar radiation modification, particularly the challenging one of stratospheric aerosol injection, to be considered in the UN General Assembly (GA). In a similar way to the UN Environment Assembly, we are working with governments and engaging with them to start thinking and eventually take this on.

So we come to this question: Why are we bringing this issue to the UN General Assembly? First of all, if stratospheric aerosol injection were ever to be done, it would be the most global endeavor humanity has ever undertaken. It would have an impact on everybody, on the one global atmosphere that we have, and it requires some level of global consideration. But it is not just a climate issue, not just a health issue, not just a security issue—and my colleague will talk a little bit about that—it is an issue that covers all sectors, and the UN General Assembly is not only the place where every country has a seat to talk, but also a place which can undertake cross-sectoral discussions.

What we are trying to achieve now is to engage with governments. We are here in New York because we are organizing an event Thursday morning where ambassadors and representatives will come together for the first time in an informal way, at a breakfast like this one, to discuss how they might work on this issue, how they might consider it, and how this process could go forward.

I want to mention two things, and then I'd like to ask Cynthia to add just briefly something.

First of all, stratospheric aerosol injection is considered to be extremely challenging, difficult, controversial, etc. One of the challenges we face is that people look at the risks of this technology and say: "Oh, my god. This is very risky." But what we have to do is look at the risks of this technology versus the risks of not using it in a world that now looks to be heading toward 2–3 °C above the historical average, which could be a catastrophe. Therefore, one has to compare the very risky situation of a high-temperature world with that of a world where one would use this technique. Will the world be better off or worse off? That is the kind of question one needs to ask, and at this point it is not being asked.

The other point—and then I will stop—is the question of moral hazard. There has been a great deal of discussion: "You can't talk about this. We have to do emission reductions." And it is true, we need massive emission reductions. The IPCC, the Intergovernmental Panel on Climate Change, has been very clear about that—"transformative" emission reductions, which means more than just electric cars running on power from solar panels. That's easy. We have to change how we make cement, how we farm, how we fly, our whole economy, our whole life. That's what we mean by transformative emission reductions.

But it's going to be very difficult. We also need to remove carbon to get to net zero and eventually to net negative because there is already too much carbon in the atmosphere. All of this will be difficult, and the IPCC is saying that even if we make every effort to do those two things, which we must do, it may still not be enough to meet our temperature goals, and that is why solar radiation modification might have to be considered.

The moral hazard goes both ways. If we don't talk about it, we may end up in a situation where we need something that we haven't worked on, but we must be absolutely sure that we do the things that we must do—emission reductions and carbon removal—because without those, solar radiation modification makes no sense. It's like a Band-Aid during the time that we are decarbonizing.

So here we are. Cynthia, if you could add a couple of thoughts about two sets of issues. Over to you.

CYNTHIA SCHARF: Thank you. It's a pleasure to be here this morning. I'm going to address very quickly the security implications of these emerging techniques and then the ethical components, perhaps most fitting for this audience.

The type of technique that Janos just described could have several implications for geopolitical security.

First of all, there is the question of who is actually involved in the decision-making process of whether to use this technique or not. Does it involve just the P5, the great powers? Does it involve climate-vulnerable countries? Who is sitting at that table and helping to make that decision?

Then, from what we understand right now from the models, there likely would be global winners and losers. How do we calibrate that kind of inequity in terms of our thinking about geopolitics? How would one address causality? How would one attribute impacts of this technique if it were ever used?

One could imagine Country A thinking that they need to use this technique to lower their outside temperature so their population can work outside, and this inadvertently causes a drought in neighboring Country B. That was not the intention, but neighboring Country B might easily feel that that was deliberate, a casus belli, and therefore you can see the potential for this kind of a technique sparking conflict. In that sense, it is very destabilizing.

There is a growing community of people who look at climate and security. It comprises not only think tanks and non-governmental organizations but also many former military officials, in many countries really. The idea is that climate change is destabilizing, so this community looks at areas that are already fragile, like the Sahel or parts of the Himalayas, which are conflict prone, and at how climate change itself could be contributing to that destabilization. Add into that mix an emerging technique whose impacts are not yet known. We are studying them on computers, not even supercomputers yet, so we are entering new territory for the climate and security conversation.

That's just a word on the security implications of these new techniques.

Last but not least for Carnegie Council, there are many ethical considerations. What do I mean by that? Well, one of the most prominent ethical issues at the very center of the climate change conversation is equity. The countries that have done the least to cause climate change are the ones that are suffering the most from its effects. That has been with us from day one in terms of looking at how the conversation on global climate change has played out in the international community.

We saw in Glasgow [at the Conference of the Parties (COP26)] that that issue is far from resolved. In fact, there is increasing anger on the part of developing countries that equity is not being addressed: equity in finance, equity in terms of compensation, loss and damage, and equity in terms of who cuts emissions, how much, and who goes first. That issue is also at the center of thinking about some of these new techniques.

For example, who is deciding if this technique might be used? Are the most vulnerable, the most climate-impacted countries sitting at that table and helping make that decision? Whose data are they using? Who has the authority and who has the legitimacy globally to be making these decisions? That in part is an important reason why, as Janos mentioned, we are looking to the UN General Assembly for the first truly international discussion of this technique in its broadest impact.

The UN General Assembly is the world's most representative body. It is considered therefore the most legitimate body, and we feel that the discussion on a technique that affects every single country in the world for better or for worse needs to at least start in the world's most representative body. Then, whatever the GA decides to do is their business, but that's why we have focused on the UN General Assembly as that starting point.

There are also issues of intergenerational equity. Without getting into too much detail, once one starts this technique it is very hard to stop. If one stops suddenly, there is what I call a "boomerang" or "termination effect," whereby the temperature that has been artificially lowered rises back to its previous level, and that variation in temperature would be devastating for biodiversity. Species, flora or fauna, simply cannot adapt to that great a temperature change in that short a time. So one would have to wean off this technology very carefully over time. What that means is that if we start, we're effectively locking in a certain pathway for the future.

Of course, one can also argue that in our current pathway, what we're doing right now, heading to 2–3 °C, is also locking in a future. So, as Janos said, one has to look at this in the context of what are we doing right now and what might be the implications of doing something differently using techniques that are still being researched, but there are a great many unknowns involved with them.

Lastly, there is a question of legitimacy: Does humanity actually have the right to deliberately change the climate? Of course, we are changing the climate right now. That's why we have the problem of climate change. But it is not intentional. It is an aftereffect, a consequence of a fossil fuel economy.

These techniques would be deliberately trying to engineer the climate. Is that just human hubris on steroids? Do we really think we have the knowledge, the wisdom to be able to do that and to do it not in a one-time manner but over many, many decades? Are we just "playing god," as some people say? Do we have the legitimacy, the right as a species, to actually try to alter the climate?

These questions are swirling around some of the harder science questions and the geopolitical questions, but they are very much at the center of the discussion about these techniques. I will close there, and over to you.

RONNIE SAHA: Thanks, Cynthia and Janos. Wendell, would you like to speak a little bit about the AIEI project?

WENDELL WALLACH: Thank you very much. It's wonderful to be here.

There is very little that Janos and Cynthia said about climate change and engineering solutions to climate change that couldn't be said about biotech, about nanotech, and about infotech, particularly the issues about equity underscored by Cynthia at the end of her comments.

Reflecting upon that, Joel and I decided in conversation almost two years ago now that, to approach the broader impact of emerging technologies, we needed a leverage point, one a little different from what other initiatives out there were using. We felt that if we named our initiative "AI & Equality," we would have equity as a leverage point through which we could talk about the tradeoffs coming to bear as differing approaches to emerging technologies came to the fore.

I think it's no secret to anyone that we are at what appears to be an inflection point in human history, and the two major forces giving shape to that inflection point are climate change and the impact of emerging technologies, whether beneficial or harmful. We appreciate all the beneficial impacts, particularly when we are talking about climate change, about solar radiation management, about some of the ways in which there can hopefully be benefits of introducing technology into solving the climate problem. But I think we all recognize that sometimes our technological interventions can create as many hazards or undesirable societal consequences as good ones.

So we are in this strange period of, on the one hand, great benefits of emerging technology but also techno-solutionism that does not necessarily look very closely at the downsides, at the risks and undesirable societal consequences. As I said, equity and equality is really a topic that gets directly at the fact that at least half of the planet's citizens are either not fully included in the decision-making process or may not even have access to the Internet, yet are going to be the ones most impacted by the decisions being made, often by a small elite.

So we also jumped into this in terms of how it can be addressed through ethics, through governance, and in particular through international governance.

We focused on artificial intelligence, largely because AI is the field that is at the forefront of a lot of this discussion but also because AI is so entangled in everything else. It's entangled in policy for climate change, but as I think most of you will know it is entangled in health care, it is entangled in public health, and it is entangled in nearly every facet of modern life.

What has taken place over the last two to three years has been an explosion of interest in AI ethics. It is perhaps one of the most-discussed topics out there at the moment. But it is a very diversified topic in terms of the breadth of different considerations coming to bear. We are trying to give that a bit of form, making sure that the tradeoffs are looked at as we engage in various considerations and also how we might impact or help give shape to this unfolding process.

This all began with something called the International Congress for the Governance of AI, which was expected to be a major event hosted by the city of Prague in May 2020. You all know what happened. In fact, Joel and I were just mentioning how our last planning meeting for that took place in this room in early March, about a week to ten days before everything shut down, and that was actually the last onsite, in-person meeting that Carnegie had.

It began with that. It began with the governance question, but we realized a number of different things. We brought in Anja Kaspersen, who I am glad to see has joined us. I am going to turn this over to her in just a minute or so. We realized we were getting such an explosion of initiatives that it wasn't like we needed to reinvent the wheel. In fact, we don't want to be reinventing the wheel. If other groups will take responsibility for differing facets of the way in which AI, AI ethics, and governance are impacting societies, then more power to them.

But what we thought we could do as an initial step was to highlight topics that were not getting due consideration and to create a network of advisers who, in one way or another, either had their fingers, their histories, or their activities in nearly every major initiative out there on the AI ethics and governance front or had deep experience in international affairs and international diplomacy.

Though the pandemic has interfered with our ability to get together and work concertedly on some of these topics, we still have our antennae out for when this community of practice might actually be able to make a fundamental difference. I think that moment is coming soon, as we move beyond AI principles and the operationalizing of those principles to actual international treaties and the issues that will arise in getting international cooperation as we move toward those treaties.

I will come back to the equality aspect later, but why don't I turn this over to Anja?

ANJA KASPERSEN: Thank you, Wendell. I think you addressed many of the issues already, and I just wanted to build on what Cynthia was saying. There are three points in particular where I can see a strong correlation between our two projects. Of course there are the geopolitical aspects: how these systems are being deployed and embedded, and by whom. Ethics is very much about grappling with the decisions made but also with the outcomes of the options not chosen, and these will obviously be very important issues both for climate science and climate technology and for the broader deployment and embedment of technology that we address under AIEI.

Also there is the issue of what we are doing right now and how we might do it differently: how AI will impact the decisions we make and how international relations will evolve. There is also the point that it is not easy to wean off these technologies once they have been deployed and embedded. That is definitely one of the issues we are discussing within our initiative with some concern and some trepidation. Immature systems are often deployed and embedded, and once that happens it is very difficult, as Cynthia said, to come off them.

Another point. Cynthia was mentioning the military and security aspects, and she is of course right. You have initiatives such as the International Military Council on Climate and Security, consisting of many former and current military leaders who are at the frontline addressing the real-time impacts both of climate change and of the broader applications of technologies that we look at under AIEI. In my experience, military leaders are often very honest and very closely connected to impacts that we don't often discuss in other formats, especially how this affects decision-making structures, military culture, and mission command structures, all of which are often absent from broader discussions around technology deployments. So these are important for us.

One thing Wendell didn't mention that is very important within our initiative is the notion of how to have an honest scientific discourse. There is a worrying trend within the AI research community of vilifying or even defaming people who speak to the limitations of these technologies, especially AI systems, which are often immature systems, flawed by our own sometimes misguided desires and biases. So creating that platform to bring out the diversity of use but also to speak about the limitations is essential if you are going to guide deployment and embedment responsibly, and ethics is also about embracing that diversity, which is very important to us.

RONNIE SAHA: Thanks to both teams for highlighting your work. I think one thing that came out of these presentations is clearly the importance of ethics and how it provides an approach to discuss tradeoffs between communities. I think that's a very common thing between these two initiatives.

Maybe just pivoting a little bit: we are obviously the Carnegie Council, and we focus on international affairs. Janos and Cynthia, we are about one month out from COP26. I would be curious to hear your perspective on the commitments that were made, or maybe not made, by the international community at that conference. One headline I read in the Financial Times was "COP26 Achieved More than Expected, but Less than Hoped," which was typical of the headlines I kept seeing. I am just curious, now that we are a few weeks out, what your impressions were, particularly for the work that you guys do.

JANOS PASZTOR: Whether or to what extent COP26 was a success depends very much on who is doing the analysis, so let's be very clear about that.

I think for the purposes of this discussion what we need to say is that the Conference of the Parties is a process. There is a treaty that needs regular meetings, and things have to be done. Reports have to be considered and signed off on.

So there is that process that was quite successful. In that sense, COP26 has moved the issue forward because they sorted out one part, for example, of the Paris Agreement that was not agreed before, the so-called Article 6. We don't need to go into details on that, but still progress has been made. They also agreed, for example, on reviewing their national plans not every five years but every year because of the urgency. These are all good things.

Where there is a big challenge is that the goal of the United Nations Framework Convention on Climate Change process is to keep warming to less than 1.5 °C over the historical average, and, in spite of the plethora of pledges and commitments that were made, the world is not going to meet that. If all—and I mean all—the pledges are implemented, then the world will be heading toward something like 2 °C, approximately, and it is very unlikely that all the pledges will be met. So the world is heading more toward 2–3 °C. In that sense there isn't enough success yet from the point of view of the atmosphere. It's as simple as that.

Second, and I will stop there: Cynthia mentioned the issue of justice, and there is an increasing feeling that the whole climate negotiation process is more and more failing on the issue of climate justice vis-à-vis the Global South, because the fundamental basis of the original climate treaty in 1992 was that the developed countries would go first to reduce their emissions and would provide resources to the Global South so that they could also do things better. Some bits and pieces were done over the years, but on the whole, when you look at it from a 30-year perspective, that is not happening.

RONNIE SAHA: Cynthia, did you want to add anything?

CYNTHIA SCHARF: No. I just echo everything Janos said. It's too little too late, and now we're in a situation where we have no risk-free options. The risk of our current path is very clear, as you said, 2–3 °C. The risk of potentially using some of these emerging approaches is that we don't fully know what the implications might be, in some ways opening a Pandora's box. So we have risks on both sides, and how do we assess those risks?

RONNIE SAHA: Thanks.

Wendell and Anja, we will keep you on the theme of how your initiative intersects with international relations and politics. Let's dive into a little bit of that, how AI and equality intersect with these larger global trends. I know in the discussion we had last week we talked a little bit about the United States and China and how they are approaching this issue. Could we dive into that a little bit more?

WENDELL WALLACH: Sure. I think we all see that we're at a precipice where the world is either going to divide into two major spheres or we're going to find some way to cooperate. AI, for example, has a significant role in all that. We can be weaponizing AI. We can be using it for cyber insecurity. One of the big questions is: How will this proceed? How will the geopolitical forces give it shape?

I think for many of us within AIEI it's very important that we keep the international dialogue going, not least of which for international treaties over climate change and the use of the technologies for that. Therefore, some of us have been deeply engaged with China.

When we were putting together the International Congress for the Governance of AI, which did go ahead in a rather diminished form through the Carnegie Council and is available on the web, we went out of our way to make sure that there was Chinese participation. I for one went back and forth to China every two months or so before the pandemic and have been on a good number of podcasts since then. So I think we are hopeful that in our small way we can use ethics and AI ethics to help nudge a dialogue that is international in scope over AI, and not just over issues such as climate change or geopolitical tension points.

Anja, do you have any thoughts on that?

ANJA KASPERSEN: I think you covered the geopolitical aspects quite well. As you said, the different geopolitical forces will definitely give it shape. I think what we can expect is that it won't evolve linearly. It will take very different forms depending on what region of the world we are looking at.

I also think there is a separate aspect to the geopolitical one just described, which is how COVID-19 and the outcome of the pandemic will impact how AI evolves. Obviously we now have greater access to a new type of data, biodata, and the combination of the behavioral data that a lot of these systems have been trained and built on with very in-depth biodata will also give AI its shape in years to come. Where that will go is still uncertain, but it is definitely going to be instrumental and important to watch.

JANOS PASZTOR: May I just continue on this line, because, Wendell, you started it, and I find it very helpful. I think we need to recognize that all of these issues—not just artificial intelligence, climate change, and eventually removal of carbon from the atmosphere—cannot be addressed in isolation. They are not just related to one another; they are related to lots of other, equally big problems—poverty eradication, health, and food security. One of the main carbon removal techniques people talk about would require a lot of land. If it takes a lot of land, that land may produce less food, which means there will be a food security problem for some people.

We are very good at solving one problem and creating five others in the process. I think we need to learn how to think big, and, yes, sometimes we need to focus on specific solutions and specific challenges, but we have to be able to think big so that these connections are made clear.

RONNIE SAHA: Using systems-thinking models.

One other thing that comes to my mind, Wendell and Anja, is automation. Obviously we are going to see more and more automation, and that is going to have an impact on the global economy. We will probably see some eradication of low-wage, labor-intensive jobs. Many developing countries have really grown since the 1990s through exports, manufacturing, and textiles. I could see some rebalancing occurring, and there is a good question, an ethical question, of which jobs get automated and which don't, and how that will impact the global economy. I think that's another area where I see some potential overlap with the work your team is doing.

WENDELL WALLACH: Yes. That's a key area. Every technology for 200 years has created fears that it's going to steal more jobs than it creates. None has done so yet, though there is often a lag between the destruction of some jobs and the creation of new ones. AI is likely to be the same in the sense that it will create many high-end jobs, but that's not going to be helpful to the truck drivers who now have been replaced by self-driving trucks. There are certainly regions of the world that will be devastated in that process. That's where we become particularly concerned about the tradeoffs that are taking place and the choices that we are making.

Perhaps one of the wonderful things about Carnegie Council for Ethics in International Affairs is the way in which it is trying to rethink what ethics means in this geopolitical world that we find ourselves in. I think one of the key elements in all of that is not to look at ethics in some binary sense but to truly appreciate that ethics is largely about tradeoffs, that any choice we make will have tradeoffs, and therefore to ask, if we are going to make a choice, how we are going to address or ameliorate the downsides of the choice we have made.

I think all of us know that most of the choices that do get made are impacting the same citizens. It's the same citizens in all of our countries that get hurt over and over again. That's where equality comes into the equation.

But it's not just about addressing that; it's also about understanding that, simultaneously, those of us who end up on the right side of the choices are getting rich. Think of all the money made by those who invested in tech firms during the pandemic while millions and billions of lives were being devastated.

So the question, in all of these subjects but particularly in climate and AI, since those are the two subjects we are addressing here today, is how we can make sure that there is an awareness of what the tradeoffs are in the various choices we need to make.

RONNIE SAHA: That's hopeful.

Let me pivot a little bit. Where is the leadership going to come from on these issues? I know, Cynthia, you referred to the importance of the UN General Assembly in getting this on the docket, but I would be curious if you are seeing any countries leading in the way that they should be, I guess on both of these issues. Are we seeing leadership from nation-states that we would expect to be seeing, and who else needs to be in the conversation?

CYNTHIA SCHARF: It's an emerging topic, and thus far I would say that attention to this issue is deepening, but it is still very sensitive for governments to talk about this publicly, very, very sensitive. Janos mentioned the issue of moral hazard, meaning if you talk about A, does that weaken the will to do B? So the fear is, if you talk about some new technique that looks like a silver bullet, does that tend to make people think they don't have to actually reduce emissions, that they don't have to do the hard stuff?

The answer to that is no, but there is a fear that this could be the tradeoff, and therefore governments have not really been willing to go out and talk about governance of geoengineering in any kind of detail publicly. We are having private conversations with pretty much everyone we've approached. There is a great deal of interest, and I can say that over the last five years interest is deepening. That is for sure.

I think, Janos, you are best suited to talk about diplomatic leadership, where you might see that coming from.

I would just add one note: it is very important for young people to learn about these techniques and to try to get involved in these discussions, because this is the time when influence counts. If you're in on the early stages of governance, you have an opportunity to make your voice heard. Otherwise, the train might leave the station, and as these technologies and techniques would impact the future for decades and potentially even longer, we believe it is very important for young people to enter this debate.

JANOS PASZTOR: Just a couple of thoughts. First, I actually would like to recognize my other senior colleague sitting there, Kai-Uwe Schmidt, who is actually coordinating our work in relation to governments, so he may have a lot to say about this too.

I think leadership, particularly when it comes to the solar radiation modification part, is very challenging and very difficult.

Let's just take another technology, carbon removal. There is money to be made there by companies, so there is lots of interest. There is research, there are experiments going on, and companies see that there is business eventually, so they are engaged. There is a push from companies.

There is also a push from non-state actors and civil society organizations, and there are governments who recognize that whatever they do now in order to reach net zero emissions by mid-century and net negative after, they will need these technologies. So there are different forces on the system, and therefore leadership is sort of emerging because of those forces.

In solar radiation modification you don't have this. First of all, there isn't much money to be made. Of course, if the world ever decides to do something like that, somebody will have to make the kind of airplane that can reach the stratosphere, although we have recently found out that even some current airplanes can do the job. But there isn't much money in it.

One of the problems with solar radiation modification is that the direct costs would be quite low, a few billion dollars a year, less than ten, to run a global system. So there isn't the pressure from the private sector. Non-state actors are confused because of some of the issues we have discussed already, and the politics is challenging, so how do you get this going? It is kind of a chicken-and-egg problem. You have to get the discussion going, and people are interested, but they don't want to stand up front, they don't want to talk openly, publicly. But eventually one day it is going to move, and then things might happen.

This is the ideal scenario, but let's also talk about the non-ideal scenario, where maybe a country takes matters into its own hands and decides to go ahead unilaterally. What do we do then? Potentially it could even be a non-state actor who takes things into their own hands, because it's relatively cheap. You could imagine a billionaire saying, "I'm going to save the world." That doesn't mean a billionaire can just do it while the world stands by, but still the world needs to be ready to say, "Okay, how does one deal with these issues?"

It's a complex situation right now in terms of leadership, and what we are hoping for with our initiative is to catalyze these conversations by and between governments and to some extent by non-state actors so that at one moment more and more of them emerge and say, "Okay, let's start now talking seriously."

WENDELL WALLACH: I think Anja has given this question a lot of thought too.

RONNIE SAHA: That would be great. That's what I was going to pivot to next.

Your thoughts, Anja, on leadership.

ANJA KASPERSEN: Wendell will comment on this as well. I would just say something broad, which builds on what Janos just said, and also Cynthia. We could probably spend another hour just speaking about what leadership looks like in any of these respective fields, especially given the connections that Janos was referring to earlier.

But what I will say about the AI space is that there is still a lack of understanding of what the technology, or some would say scientific methodology, is really about, and there are a lot of vested interests. There are a lot of vested interests, and a lot of perceptions of capability. Because of all those vested interests, different people are thinking, What's in it for us? Are we talking about industrial competitiveness, is it a national competitiveness issue, is it a national security issue, or is it just about getting ahead, what do others have, and what do we have? That is definitely diminishing, or at least complicating, the issue of leadership in this space.

I will leave it with that, and Wendell can fill in on the leadership issue.

WENDELL WALLACH: I'm particularly concerned about the leadership issue with the younger generation, with your generation and those younger than you. My concern is that these are pretty complex subject areas requiring a kind of transdisciplinary or multidisciplinary facility, and there are no incentives in your lives to develop that.

Young people are largely rewarded for being able to do some narrow thing very well. So the big concern for me is: Where is this generation of young leaders going to come from? How can those of us who are slowly starting to move off the stage catalyze the development of a significant enough cadre of young leaders, with the transdisciplinary, interdisciplinary expertise, experience, and skills to feel comfortable roaming over these different territories, where so many concerns and so many tradeoffs are in play, so that they are capable of being aware of them and making the decisions for their generation, rather than just allowing those of us who are older to make those decisions for them?

RONNIE SAHA: The last question I had drafted was around what can the younger generation do, and both the teams have very much addressed that, so thank you, Cynthia, I appreciate the call to arms in a way for the younger generation to get involved in shaping governance discussions.

Wendell, to your question, I think the Carnegie Council is one of those rare institutions that does promote interdisciplinary thinking on these large ethical and governance issues, so I think maybe this is a good place for us to start that dialogue and conversation. So, thank you for that.

Maybe we could pivot now to Q&A. We have a few questions in the online portal, as well as if there is anyone in the room who would like to raise a question, please feel free to grab a microphone.

I am going to just read a question. This is from Mathilde da Rui: "What role if any do you see in reining in potentially difficult-to-control technological innovation, while accelerating valuable technological tools for both regulation and incentives with regards to corporate R&D, production, and allocation of public research funds?" To summarize, I think the question is how you incentivize technological innovation while minimizing the downside costs, a hard question for these large-scale problems.

Janos, you spoke a little bit to this, that there are some areas where corporate R&D and production could be ramped up and incentivized, and other areas where there are not necessarily incentives for the private sector. Any thoughts you might have on that?

JANOS PASZTOR: Let's look at the two sides here. One needs to incentivize the private and public sectors to develop certain techniques that we don't have, or don't have enough of. Clearly. In the area of climate, which is what we are working on, there is a very substantial need for more research, development, and innovation on emission reductions. There is plenty more to do. We know lots of things, but we still have to figure out how to do things better.

More importantly, there is this whole new area of carbon removal. We need lots of incentives. I said that things are happening. Yes, they are, but not enough. So again, we need governments to step in—and different governments can do it in different ways: through legislation, through pricing, and in lots of different areas. That is to push things forward.

But then there is the part of governance which is about providing guardrails and limits and making sure that things happen within certain boundaries. For that to happen properly you need societal conversations, and these societal conversations will be very different in different countries because different countries organize themselves differently. That's okay, but the different stakeholders still need to be engaged, and that is again something we very much promote as part of our governance processes: one needs to converse about the issues. It is so important, particularly when it comes to these new techniques, like solar radiation modification. The interpretation of the risks posed by using it, or by not using it, will be very different depending on your cultural, geographic, and economic position in society. That's why it's so important to listen.

The second part is that it is not going to be solved by a silver bullet, but one needs to have conversations and engagement from different stakeholders so that we can develop systems that respond.

RONNIE SAHA: Wendell and Anja, do you have any thoughts on this question? I think in a way that much of AI is being driven by the private sector, but do you have any thoughts on how we might think about a regulatory framework that allows us to incentivize R&D in the right ways but also minimize any of the risks and what the role between the public and private sectors should be in this space?

WENDELL WALLACH: Anja, why don't you open on that?

ANJA KASPERSEN: Sure. What I'll mention builds on what Janos was just speaking about. Too often we speak about driving growth and innovation versus doing the opposite, as if these were binary choices. I think we need to try not to make it a one-or-the-other type of approach and to embrace that each would require very specific responses.

As I said in my introductory remarks, one of the concerns in the AI research community is that speaking about the limitations of these systems is often seen as a threat to R&D and innovation processes, and people are, as I said, vilified or even pushed out of those processes, which in the end leads to poorer innovation and products that are not properly tested.

Rather than speaking about incentivizing, I think we should speak about taking the time to do proper testing, certification, validation, and verification, to actually see that you are getting a real return on the R&D investment you are putting to use, and to discuss the societal impact, which will be very important for the return on investment at the end of the day.

One of our eminent advisors, Gary Marchant, did a review of 634 different policy guidelines that were meant to provide these types of incentives or guidelines for responsible AI development. Interestingly, after reviewing those 634 instruments, carrying different types of standards and weight, the weakness with all of them was that none really had an implementation framework. What's the benchmark for ensuring that what you're doing delivers on purpose, and delivers particularly on the ethical and societal impact considerations? I think that is instrumental to any discussion of responsible R&D and responsible science.

I also just want to add a point about governance, since we speak a lot about it. We were speaking about having younger people involved; we also need a new type of dialogue: how do we hold these types of dialogues, and how do we discuss governance? When you speak about governance in the technology space, the discussion often fails to recognize that we need different types of governance depending on where we are in the lifecycle of a technology. The initial development phase looks very different from the deployment and embedment phase, and different again once the technology starts becoming part of the critical systems into which various AI systems are being embedded.

We also need to avoid the temptation of treating governance as one kind of bucket concept and instead look at what governance looks like across the different aspects of deployment. That is very important. For that, I think we also need to be ready and honest that some of the current formats for having that dialogue may not be appropriate to where we need to go. This has to do with the multistakeholder aspect that Janos was referring to, but these stakeholders are sometimes also part of the vested interests we mentioned earlier, so we must make sure that we are not just replacing one group with another without actually tackling the interests driving this innovation and R&D.

Over to you, Wendell.

WENDELL WALLACH: One of the things we encounter, particularly in the AI conversation, is the extent to which it is dominated by a cult of innovation, the idea that innovation is good at any cost and that therefore governance can be bad; by naiveté about what the technology can really do and not do, which becomes very problematic as technologies get deployed and the people working with them have absolutely no idea what they can and cannot rely upon; by hype, suggesting that these technologies are much more advanced than they are or will be more advanced by tomorrow, and it is usually "tomorrow," treated as very soon, even though some of the promises will not be realized for a long time; and by techno-solutionism, which suggests that everything can be solved technologically, and if you just have a little patience, we engineers will solve it for you. All of that has created a narrative which is not only bizarre but gets deeply in the way of thinking through the real, serious challenges and decisions that need to be made today or within the next few years.

JANOS PASZTOR: May I add just a very small comment, because what Anja and Wendell are saying underscores the concept I tried to allude to earlier: governance is not just regulation but the totality, and even the thinking about what kind of dialogue to have, whom to involve, and whom not to involve is part of governance as well. We need to look at this in that broader framework. Otherwise, it just ends up being a regulatory function of government, and then we think we're done, and we're not.

RONNIE SAHA: We did have a question specifically around governance of solar radiation modification. I'm not sure if we are in a position to answer it, but the question is: "Which countries would support considering governance of solar radiation modification at the UN General Assembly? Are there obvious ones who might oppose it?" Is this something we have a position on?

JANOS PASZTOR: It's not yet known, because the issue has not yet surfaced in any kind of formal process. Also, we have to be clear that we still know little about this new emerging technique. We know certain things. Scientists seem to think that it would work. But there are questions, there are risks, there are new risks, and things like that.

In most cases as we talk to government representatives, what we find is that there is curiosity to talk about it. People want to engage in conversations because they don't know, and they're not sure where this is going. At that level you can say that, in our experience, in most interactions we have had, government representatives want to continue the conversation.

Then there is the other issue. There are people who have a certain notion that they like or don't like these techniques, or that they do or don't want to make use of them, for many of the reasons we have outlined here together, including: "Can we play god? No, we can't," or "Techno-solutions will not help us." There are all kinds of positions, but the reality is that most representatives don't know enough yet. So it's too early to answer that question. We'll see how it evolves.

RONNIE SAHA: We are almost at time. I will pose one last question, which is from an online contributor: "Have we seen any efforts around lessons learned in regards to what has happened with technology?" The commentator is saying this in terms of "innovation run wild with unfortunate consequences for society."

Do we feel like there are lessons learned in this realm that are being addressed by governments or by leaders in the industry? Are we learning enough, and have we learned enough? In a sense technology has taken over so much of our lives, and are we ingesting these lessons in any meaningful way?

I guess, Wendell, this might be a good question for you. It kind of plays off the last remark you made.

WENDELL WALLACH: I was actually interested in what Cynthia might say about this. Do you have anything to throw in on that?

CYNTHIA SCHARF: I think this is a lesson not learned. I think there is a tendency for us as a society, particularly this society, to look at things from a techno-rationalist perspective, when actually, with some of these new emerging technologies, whether in AI or on the climate side, the matter of values is absolutely central to the conversation. That often gets dropped or is seen as unimportant, but values frame how we look at things and how much we might support or not support innovation, as you were mentioning, values of: "How does this affect my life? What does it mean to me? What does it mean to my culture? What does it mean to my religious traditions? How do I think about the future of my children?"—all those things that are not put into numbers so easily and neatly.

I think that's something that we really haven't learned but that keeps coming up again and again, and we see it in the political polarization of this country, which breaks down not so much on who wins the debate on X or Y based on facts, but we're talking about different ways of seeing the world.

JANOS PASZTOR: Just to follow on this, I very much agree with you, Cynthia, on the importance of values, and I think one of the values that has lacked attention—and we see the results—is concern not just about what something does to me, but what it does to my neighbor, and not just my neighbor next door but my neighbor far away. I spoke earlier about how the climate negotiations process over the last 30 years has rather missed the climate justice issue in terms of North and South. I see that happening in all these different areas, whether it's AI or whether it's—

Look at how the world is learning from the pandemic, or rather how it is not. It's terrible. It's a current crisis, an issue, and we are not able to deal properly with each other in this world. I think we have a long way to go. We have to move in that direction, but it is a long way to get these things going in the right direction.

We have instruments, imperfect as they may be in this rather imperfect world, and we just have to try to make them work better. We have the United Nations, we have other bodies and entities, and yes, they should be able to work, and I think we have to put our energies in there to make them work.

WENDELL WALLACH: There's probably no shortage of incidents we could point to where lessons should have been learned. I think the question is whether society has a memory anymore, whether those lessons are going to get applied.

My hope—and I don't know yet; I think this is what we're going to witness, probably in a very serious way, over the next five to ten years—is tied to the fact that much of the future of humanity is going to be laid out by the decisions that are or are not made in that period. But I think this emphasis on values, this emphasis on ethics, is not a small thing.

I think what we are seeing in the AI world, at least, is the question of whether this present obsession with ethics is going to be just politics by other means or whether it indicates recognition of a broad area of concern, starting with data ethics, how we use data, how data affects privacy and can be biased, and exploding into an array of issues up to the weaponization of AI. Is ethics going to be an empowered tool, just as we at the Carnegie Council are trying to empower it over the next five to ten years, or will business as usual, politics as usual, or the subverting of ethics for political goals prevail?

ANJA KASPERSEN: If I may add, what Wendell just said about the issue of memory is very important. The thing with memory is that we also remember things very differently. I think how we have experienced being in a pandemic is a case in point, and more and more we also see that the memories we hold and the memories we curate are increasingly being shaped by algorithmic technologies that process and feed those memories.

That is obviously where technology plays into this notion of memory and also into our sense of urgency. A lot of people have been asking why we don't have the sense of urgency to do something about these issues, be it on climate or on mitigating the potential negative impacts of technology. Surprisingly, that shared sense of urgency is not there; the shared memory that would guide us to take heed of previous lessons is not there. That is obviously a much bigger discussion.

I really like Gillian Tett, the FT journalist. She has this fantastic phrase: instead of saying AI, she speaks about "anthropological intelligence." I certainly am one of those who believe that we need a lot more of that type of intelligence—anthropological intelligence, climate intelligence, whatever intelligence you talk about—to really inform the choices we make and also to learn from those lessons.

We know that AI will impact equality. It already is. We know it will impact equity. It already is. But we still seem to struggle a little with the fact that we are in charge of the how and the who, and that is where we need to act.

RONNIE SAHA: Thank you. I think we are nearing the end of this. I would maybe end with a comment that was made by one of our members of the AIEI Board of Advisors, Cordel Green, on the online portal, and this kind of gets back to the conversation that values do matter.

Cordel is saying: "If we're going to talk about values, we need to also be thinking about who is included in that conversation, Janos, to your point. Are members of the Global South included in the conversation around values? Are members of the younger generation included in that conversation on values?"

To me that has been a striking theme of this conversation, that we really do feel like that values point needs to be on top of a lot of this, but that the conversation needs to be inclusive of members of various parts of the community.

With that, we will draw this session to a close. Thanks, everybody, for participating, thanks for the audience, and thanks for the online community.
