Guest
Jimena Viveros
IQuilibriumAI; UN Secretary-General's High-Level Advisory Body on AI
Hosted by
Kevin Maloney
Director of Communications, Carnegie Council
About the Series
The Values & Interests podcast delves into the ethical tensions and trade-offs in decision making across geopolitics, technology, philosophy, and business.
What does it take to build an inclusive technological future? Jimena Viveros, an international lawyer and AI expert, joins Values & Interests to explore why closing the digital divide between the Global South and the Global North is both a moral and an economic imperative. Viveros shares her vision of a grassroots movement in the Global South that empowers communities to address today's needs while building an innovation ecosystem for the future.
KEVIN MALONEY: Hello, everyone. Welcome to the latest episode of the Values & Interests podcast. I am your host, Kevin Maloney, director of communications at Carnegie Council. I am very much looking forward to today's conversation with Jimena Viveros, a leading expert on the emerging impact of artificial intelligence (AI), particularly in the Global South.
Jimena wears many hats in the AI world and has been at the center of some of the most consequential conversations shaping the future of AI governance, including serving as a member of the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence. She is also the founder of IQuilibriumAI and the HumAIne Foundation, which works to build communities of AI developers and practitioners in the Global South.
Jimena, I am so glad we have this chance to reconnect and chat today.
JIMENA VIVEROS: Thank you so much for having me, Kevin. It is an honor to be here to talk with you about such relevant topics.
KEVIN MALONEY: We met last year when you visited Carnegie Council during the UN General Assembly, and I dare say we hit it off in our conversation, not only about the technical aspects of AI and governance but also about questions of values and ethics.
I think there is a lot to dig into on the technical side of AI right now, as well as the broader policy and governance questions. But, as we always do on the Values & Interests podcast, I want to start by having you walk listeners through your own values formation and journey. Maybe we can begin there and then dive a bit into the world of AI.
JIMENA VIVEROS: That is a very complex question, so I will try to explain it as best I can. I am Mexican, but I grew up in different places and moved around the world a lot. I think that gives you something invaluable: perspective. It helps you see that problems which seem big or small in your own context can have a very different impact on people elsewhere, and that what looks like a small or big thing to you can change someone's life significantly. That gives you a lot of compassion for others and an understanding of different circumstances.
I think that has been the driver of the work I have tried to do with international law and now with AI, which is obviously revolutionizing the landscape everywhere, but it does so in very differentiated ways across the world. It has different impacts, especially for the Global South, so shedding light on all of those different aspects has been critical across my different experiences.
KEVIN MALONEY: I think it is interesting. There is so much variability and nuance in the world. Even when you look at wealthier countries there are subsets of groups in those countries. So you have people who are maybe from poorer countries but still able to travel internationally, and you have people from richer countries who stay in one town their entire life. I think exposure is so important—exposure to other cultures in order to reshape ideas, form empathy, and start to interrogate the universal side of being human.
In terms of being a young woman from Mexico, what was the spark to look outward, to look internationally, what were those early foreign experiences when you traveled, and what eventually pulled you in the direction of working in the international relations space?
JIMENA VIVEROS: Well, everything, really. I was raised that way, but also I guess at a very early age I understood that the world was so big that it was hard to stay put and be content with one reality and one sphere of impact. You could amplify that by going places and bringing, moving, and shifting around knowledge and whatever you can do.
KEVIN MALONEY: To push a little bit more on the exposure and the travel front, was there a moment where you said, “Wow”—and this could be a good or bad reaction in terms of being an eye-opening moment for you during travel—that might be connected to your work today?
JIMENA VIVEROS: I guess professionally my first wow moment was Palestine. I was with a nongovernmental organization there. As a young international lawyer you think you have the world figured out, and then you show up at a place where literally every single law is broken and where the realities of what you thought about how the world operated are completely upside down, no one does anything about it, and there is complete impunity. That shifts the bedrock of your foundation, and it motivates you to do something. That ignited me to participate actively in making a difference and not just being a bystander or shrugging it off. At some point that did not seem like an option anymore.
KEVIN MALONEY: Thank you for sharing that. For many of us it is very jarring to have that first in-your-face experience with extreme violence or abject poverty, especially if you have had a more sheltered or privileged existence.
At the Council we think a lot about the moral reflection after that experience. How do you move forward in a reflective way? How can you be anti-fragile? Do you embrace this moral resilience and think, Okay, how do I address this issue? or How do I move forward purposefully? or do you embrace this surrender or this form of nihilism? Maybe we could explore that dynamic a bit in terms of your own experience.
JIMENA VIVEROS: Obviously it puts things into perspective—what are the exact and just dimensions of what you can do?—but it also motivates you in terms of what you could do or what you should be doing.
One thing we all have is a voice, so using it in the right way and not being afraid of speaking up or being congruent with what we believe just because we have this defeatist sense of, “Oh, well, the problem is too big”—if it is a bunch of us having the same view and marching in the right direction, we will make a difference sooner or later. Every one of us in our sphere of influence can definitely have an impact, and while you are there you can do little or big things that can significantly change people’s lives, and that is what I did.
KEVIN MALONEY: I found one of the great things about working at Carnegie Council is this ability to have a literal physical space where a good-faith community can form to interrogate these issues. I think especially now, when society seems more polarized, more fractured, and civic spaces are disappearing, there is so much value in finding spaces to create community. The in-person community, the values community that you can create does fend off this nihilism because it allows you to feel that you can make a difference as a group and you don’t feel like you are on this ethical or moral island.
There is a bit of an irony right now in that issues such as AI, climate, and migration are global issues that require collective action, and yet simultaneously people are embracing only their own tribes or are siloed off in a digital ecosystem. It is a moment when we need to lean into a good-faith community.
JIMENA VIVEROS: I feel like being in the field helps a lot with that because once you are removed from it—you know it is easy to get distracted or to actually feel far removed because you are. The problems change and the people you interact with and the conversations you have are different. That is why I love being in the field.
Back then I was literally turning 20 and everything was so big and so distant, but you forge relations that you can keep, relations that anchor you within the cause. If you do that at different stages of your life, your view of life matures, and the things you can do and the impact you can have become bigger and bigger. Harnessing all of those things, not keeping them as fractals but integrating them into a coherent course of action that is aligned with your morals, values, and integrity, is I think crucial for all of us. Maybe not all of us can be out in the field all of the time, but making an effort to put yourself out there every once in a while keeps you grounded, so I strongly recommend it.
KEVIN MALONEY: Another dimension of good faith in reacting to your comments is that good faith is not just finding a community that agrees with you 100 percent from a values perspective but is willing to engage with you in a respectful way or give you an honest opinion that is not performative or disingenuous. That is more and more difficult to come by, but it is increasingly important to pressure-test your values, interests, and ideas, especially within the international relations sphere doing that in good faith. That is a two-way street. That is finding someone who is willing to engage and then opening up your mind in order to be convinced or to be exposed.
JIMENA VIVEROS: Yes. Again within what you do you are very aware that doing it in an open space is different than doing it behind closed doors.
KEVIN MALONEY: Now that we have discussed the importance of this pluralistic approach to the world and literal exposure to different experiences, I want to pivot a bit to the AI space. You have had a front row seat to the most important conversations shaping AI governance over the last few years. Not only are you a commissioner with the Global Commission on Responsible AI in the Military Domain (GC-REAIM) looking at the intersection of artificial intelligence and war and the escalating ethical questions there, but from a high-level international governance perspective you served on the UN Secretary General’s High-Level AI Advisory Body.
That must have been quite the experience. This was a cross-sectoral, cross-country community that came together to attempt to understand the acceleration of AI, its impact geopolitically, and what we might do in a responsible way to address that. I would love to hear about the experience as a member of the Advisory Body, what worked, what didn’t, and what gives you hope. Maybe we can start there.
I don't want to lead your response too much, but an issue I am particularly interested in is whether things like the UN and these meetings are still fit for purpose in a world where AI technology is moving at such an exponential pace. Maybe we can touch on that to start.
JIMENA VIVEROS: That goes back to the fact that this is the most disruptive and transformative technology we have encountered and will encounter for the foreseeable future.
The Body that the secretary general had the vision to create was the first of its kind and quite significant. We did our best to do it as fast as we could because we understood that this was a fast-moving environment and that the goalpost was ever changing and ever moving.
We came up with an interim and then a final report within the span of less than a year. At the end we came up with seven recommendations that were adopted at the Summit of the Future within the context of the UN General Assembly.
The first thing we agreed on as a group was that we needed a common understanding of things because we were a multidisciplinary group coming from all ends of the world with representatives from all regions, all disciplines, and all age groups. It had a 50-50 gender balance, which was great. For that we recommended the creation of an international scientific panel on AI, which is still being discussed with the great support of Spain and Costa Rica. The same goes for the recommendation to create a policy dialogue on AI governance.
We also recommended an AI standards exchange, a global AI capacity development network, a global AI fund, a global AI data framework, and the creation of a small AI office. The office has been in place since January 1 of this year, headed by Amandeep Singh Gill, whose leadership is phenomenal. We expect great things to happen from all of this.
However, after this report, and in the moving landscape of AI, there are obviously humongous problems and challenges in the implementation phase. This is supposed to be global governance, but the 193—unfortunately not 194—Member States of the United Nations are not all starting at the same level, and there are big gaps in infrastructure, compute, data, capacity building, and so many other things that need to be addressed before anything else can happen. That is what we are trying to do in order to level out the digital divide in the best way, so that the Global South, disproportionately affected as it is right now, does not get left out of this technological revolution altogether.
KEVIN MALONEY: Let’s pull on this digital divide thread a bit more. As our name states, the Council is focused on these ethical dimensions of international relations, but from a very realist perspective. One of these ethical realism principles that underpins our work is the critical importance of international cooperation, but I think what we are seeing now with the militarization and politicization of AI and the creation of stateless actors that wield significant power is that we have a geopolitical perfect storm around AI. You have two of the most powerful countries in the world, China and the United States, who are in their own ways attempting to undermine and reshape the multilateral system in a moment where international cooperation around things such as AI, migration, and climate are more necessary than ever.
With that realist framing—which you can agree or disagree with—how does the Global South fit into that equation? I am also cognizant that we do not want to start these conversations with these top-down Western framings, but from a realist perspective. Again I think there is no denying that China and the United States, at least from a computing power perspective, are in the driver’s seat right now in terms of AI development. Maybe I can get your reaction to that framing.
JIMENA VIVEROS: First of all, it is important to understand what the Global South is. We represent 80-plus percent of the world’s population. That is over 6 billion people, an incredible amount of people, out of which 2.6 billion people, one-third of the world’s population, do not have a steady connection to the Internet, and 750 million people, 10 percent of the world’s population, are without electricity.
Against these numbers, whether we are talking about AI or artificial general intelligence (AGI), quantum, and all these things, for all of these people these things are completely outside their spectrum of interests, needs, or priorities. It is important to think about that when we are talking about implementation and how to actually close the digital divide in a meaningful way.
It is not only about participation, which of course is where it starts, because we can all understand the different problems that exist out there in all these different regions. But we can also see the commonalities among the problems of all of those regions, creating this global identity of the Global South and understanding how we all need to stick together in order to push back against the powers you mentioned, so that we stop the techno-dependency that is really happening, or techno-colonization, or even the rush for minerals that is going on right now and all of the armed conflicts that arise from it, moving from blood diamonds to "blood chips." This is not the way it is intended to go. As minerals, other resources, and even water are consumed, we cause more scars, because everything that is needed to maintain and sustain this AI race, which is what it effectively is, impacts the Global South.
Also, the data on which the models are being trained is not representative of the people of the Global South in terms of all of the different languages and all of the different cultural nuances and all of the different areas that are void spaces if you look at a map. That is a risk of AI in itself because that just increases bias and discrimination and so on.
Another problem we have is infrastructure. As I said, in some places there is not even electricity or the infrastructure to implement any of these things. Compute goes hand in hand with this, as does capacity building. None of these things are missing for lack of will; they are missing because of a lack of opportunity and because of how things have been structurally and systematically sustained for a long time, historically. Now this just makes it all the more pressing, because with this technological revolution these 6 billion people are going to suffer, get displaced from industries, and completely fall out of what the new world will look like. Is that something we want to allow or be complicit in, or do we want to participate in redressing it?
KEVIN MALONEY: I think you hit the nail on the head there for a moral question I have been struggling with, which is around this inevitability-of-AI narrative that is being thrown at us constantly from OpenAI and Meta, in that, “If we don’t get there first, we are going to be last.” They basically couch this in the moral argument of what we have to lose if we don’t go full steam ahead on AI.
There is this concept that international relations scholar Hans Morgenthau speaks about, which is when somebody assumes a “moral mask” for their argument. We are seeing that a lot lately from the technology space in terms of again the moral stakes, whether it’s national security or your daily life, if we don’t win this AI race. This brings up many questions around the irresponsibility of having your moral framework basically be that there is an inevitability around AI and we have to get there first. However, there are tons of tradeoffs to consider, and to me—again, this is very antithetical to our approach at the Council—it seems like this inevitability-of-AI argument is very zero-sum in nature, even if you are making a moral argument for it.
In that scenario where does this leave the Global South? This narrative, this moral case, this moral mask that many technology companies are assuming right now, feels to me very sour from an ethics perspective. Maybe we could explore that a bit.
JIMENA VIVEROS: I completely agree with that. I feel like at the end of the day it is about solidarity and priorities. As you said, the priorities are in the hands of very few, and what they are thinking, their hyper-vision, is focused on one goal alone, one that serves economic and power interests and not at all the common good, and that is wrong on so many levels.
What is the outcome? What is the world going to look like after this materializes, after the Global South sinks, after the people who live under the poverty line sink further below it? Even in economic terms, the more people you include in the labor market and the workforce, the more the economy will benefit all of the countries and communities in the Global South, which can then participate in the global economy in a better way. Even if you think about it in those terms, investing in the Global South is going to pay off beyond what people might think the priorities are in the long run, apart from the humanitarian perspective, which is already very clear.
KEVIN MALONEY: As you said, there is certainly this individual, values-driven case to be made as to why we should be supporting the Global South and why we should be building communities of connection within the Global South around AI and around emerging technologies. I think a lot about Nobel Laureate Amartya Sen’s work in terms of human security and how that redefined the work of the United Nations not just around peace and security but in thinking about holistic development and how economic development is a rising tide that lifts all boats. I think we want to see that in the AI space.
What is the solution? How do we get there? How do we elevate more voices? How do we make AI more equitable moving forward?
JIMENA VIVEROS: I think what we need are more grassroots champions, the unsung heroes, so to speak. One of the things I am doing is building a platform, the Global South Synergies and Resilience Platform.
The point of this is to connect not only students but researchers, scholars, startups, entrepreneurs: all of the people who did not go to the Massachusetts Institute of Technology or one of those big universities, who don't have big names, and who literally want to solve problems from their day-to-day lives. The technology is there, and they learn on their own, not in official or traditional educational systems as we understand them. They are surprisingly self-driven, and they come up with fantastic solutions that can be replicable and scalable and that can help not only their communities and adjacent communities but communities across the Global South.
Understanding the Global South as a shared identity and helping each other from a peer-to-peer standpoint is what is going to help us and tilt the needle from the grassroots level, not depending on governments and not depending on shifting priorities but literally solving problems. Getting basic needs met is what so many people want and are actively trying to do; they lack resources or opportunities, but everything else is there. The capacity is there, the will is there, and the technology is there. You just need to put it all together with empowerment, ownership, and agency at the community level.
Once all of that is met, then you can move into other types of innovation, other types of sustainable development. With other approaches, as you know, you move one step forward but two back, so it does not add up to much on the global scale. What we need are concrete efforts to empower these communities, not with the traditional handout dynamic or any of the models that we have seen that do not necessarily foster ownership or agency among communities, because that is what builds the resilience we really need. The only way to create resilience at this moment is through synergies and alliances: common problems, common solutions, and contextualizing them in different areas and communities across the Global South, which share many of the same problems.
KEVIN MALONEY: Reacting quickly to that, it was very enlightening. I think we have gotten past the geopolitical grand strategy, top-down AI framing, which is what I wanted to get to: focusing on what the solutions are for making AI more equitable, based on what we have discussed. From an applied ethics perspective, ethics sits on a spectrum between individual behavior and collective governance, the collective policies that affect multiple people.
You talked about how the technology is affecting individuals, how it is affecting their day-to-day lives, and how technology can be harnessed to improve those day-to-day lives: "I use technology X to solve problem Y, which makes my life better."
I think thinking about AI from that perspective is an important piece of the puzzle which you laid out. I am not saying that support for those types of things is going to happen in a non-manipulative way or purely altruistic way, but I think it is important, as you are doing, to build communities in a good-faith way that can support each other in harnessing the technology for their purposes and not being provided technology as charity or not just accepting a data set that runs a large language model that is not reflective of them.
It is important. We are not going to solve this today, but I am happy that we are touching on these points.
JIMENA VIVEROS: Great. The platform that I am building, which is precisely about that, is free. Everyone can join and enter this community, which is completely bottom-up, and that is the point: to not depend on anyone and to help improve your life, your community, and the Global South in general by solving common problems that can then lead to common development. The point, ultimately, is to create this alternative bloc that does not come with any antagonistic or politically charged baggage, because it does not come from a government or any political movement; it is literally about resilience, survival, and development. That's it.
KEVIN MALONEY: Another kind of issue, narrative, or moral framework that has dominated the AI conversation in the last few years is effective altruism and this idea of longtermism. Maybe I can get your reaction to thinking about AI from a long-term perspective. The long-term perspective is informing the race to AGI, the race to superintelligence. What are your thoughts on balancing the now versus the long term?
JIMENA VIVEROS: One of the problems is that everyone is in "tomorrow world," so to speak. Not that that is bad at all, but we live in today's world, which already has so many problems, and they are only going to get exacerbated in the world of tomorrow if we don't do the right thing today. The pressing priority right now is to solve today's problems, and doing that at scale, for the 6 billion people of the Global South, is the clearest way I can see to have a real impact.
I guess all of those scenarios might or might not happen, but at the same time these people might not even be alive if things just keep going this way. My takeaway from this would be let’s try to focus on the problems of today today.
KEVIN MALONEY: Definitely. I echo your call for focusing on the problems of today. We need to have that printed on a billboard in Silicon Valley.
When I hear heads of AI companies—on the podcast, on the stage at South by Southwest or another global conference—it just seems like there are a lot of snake oil salesmen pitches in terms of the sunny future combined with the risks of failing at AI. From an ethics perspective, it seems that there is something off about it.
JIMENA VIVEROS: I don’t think anyone is promising rainbows and sunshine. On the contrary, they are promising apocalypse and destruction in order to fuel the arms race, in order to get permission to allocate resources to that instead of everything else that could actually bring sunshine and rainbows.
KEVIN MALONEY: It seems there is almost a grocery list of upsides that serve as the narrative talking points, whether it is national security, boosting the techno-military-industrial complex, or the promise that AI will cure every form of cancer.
JIMENA VIVEROS: Yes, but we are going to cure cancer for the people who can come to whatever European country or the United States. That is not everyone in the world. The moral relativism in that is very noticeable.
KEVIN MALONEY: Moral relativism, longtermism, effective altruism, all these –isms, as I said before, and I used the term “moral mask,” seem to be deployed by very smart people in a very strategic way as an ethical car wash in terms of their ultimate goals. Of course, we will see what the future holds, and obviously people who are doing the work that you are doing provide us with some hope.
I always like to try to end things on a positive, “non-apocalyptic” note. I want to maybe pivot back to a more positive point in the interview, where you talked about the grassroots energy that is required to be cultivated and supported in an equitable way within the Global South. We talked a lot about the negative potential of AI technology, but maybe we could pivot to what you are seeing as the positive potential of community growth around AI in the Global South and where you get your inspiration and energy. Where would you recommend others put their energy and resources?
JIMENA VIVEROS: What gives me energy is the work that I do at the HumAIne Foundation and the Global South Synergies and Resilience Platform because I see people hungry for engagement and hungry to reap the benefits of technology to solve their day-to-day problems, to improve their lives and their communities, and to see their country, their area, or the region in better shape and not to dive deeper into the global divide. That gives me energy and hope as well as just speaking about it in forums like this, so thank you very much for the opportunity.
Every contribution is more than welcome, and I am happy to reach out and, as the name says, explore synergies to get resilience. That is what gives me energy and hope.
KEVIN MALONEY: I think that is a great place to end, Jimena. I appreciate you joining us today. I know we will be in touch in the future. Please visit us at Carnegie Council the next time you’re in New York.
JIMENA VIVEROS: Thank you so much for the work that you do in putting all of these issues out there on the table and spreading the word as well, Kevin. Great job to you and the Council.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit organization. The views expressed in this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.