Tales from the Hype Beat: A Conversation with AI Journalist Will Knight

October 26, 2023 - 43 min listen

In this discussion with Arthur Holland Michel, Will Knight of Wired reflects on a decade packed with reporting on artificial intelligence. Setting the hype aside (and taking a deep breath), Knight and Holland Michel discuss whether we are really witnessing a true AI revolution, examine whether or not the technology is governable, and talk about the experience of coming face to face with a military robot.

To learn more about Knight, check out his Wired archive.


ARTHUR HOLLAND MICHEL: Hi. My name is Arthur Holland Michel, and I am a senior fellow at the Carnegie Council for Ethics in International Affairs. This episode of the Carnegie Council podcast is presented in partnership with the Peace Research Institute Oslo as part of its RegulAIR project. RegulAIR is a multiyear research initiative on the integration of drones and other emerging technologies into everyday life.

It is my great pleasure to be joined today by Will Knight. Will is a senior writer at Wired, where he covers artificial intelligence (AI) in all its forms. His reporting on AI is meticulous, prescient, and, most important these days, level-headed. Today he joins us from Cambridge, Massachusetts.

Will, welcome to the show.

WILL KNIGHT: Thanks for having me, Arthur. I am delighted to be here.

ARTHUR HOLLAND MICHEL: Why don’t you tell us a little more about what you actually do?

WILL KNIGHT: That is a great question. It gets a bit at the question of what AI is. My beat is AI. I have actually been writing about it for more than a decade, I suppose since long before it was as interesting as it is now, because it fascinated me as a scientific endeavor, this idea of advancing machine intelligence, which is bound up with the history of computing. More recently AI has become an enormous phenomenon in the tech industry. I think it is similar to software itself in the sense that it is transforming the entire tech industry and any industry touched by technology.

You asked what the thesis of my beat is, and I try to focus as much as possible on the biggest questions when it comes to AI, the most important ramifications of the technology and how it affects the most important things I can think of, which I think often come down to this idea of the power of particular companies, international relations, and people’s rights. It feels like this is a moment when the technology is bound up with those things. It is such a pervasive and important technology, but I think it is deeply tied to questions of power, influence, and rights. That matters.

There are a lot of people doing great work reporting on these things. I suppose I balance my time between watching some of the fundamental advances and trying to be strategic about which impacts matter, because I think it is important to try to understand, especially now, the technology itself and how it works. That is one of the biggest challenges right now, in 2023, because we are seeing some crazy things unfold.

ARTHUR HOLLAND MICHEL: When you say you have been writing about AI for ten years, that makes you old hat in this space. It makes you a veteran in the truest sense of the word. I was wondering if you could share what the landscape of AI was like when you first started writing about it.

WILL KNIGHT: When I first started writing about it was when I came out of college and joined a magazine called New Scientist in the United Kingdom, which, as you would imagine, is very focused on developments in science. I was part of the technology team there and keen to write about AI partly because it was something that always fascinated me, and it was kind of in the doldrums in those days. It was one of these “AI winters,” where I remember buying a textbook that people would have in grad school or undergrad on AI, and neural networks were a small chapter. It was run over very quickly.

At the same time, even going back then, there were starting to be interesting things happening because of advances in computing and the Internet, so you were starting to see some early machine-learning stuff like Bayesian machine learning, transforming things like spam filtering, which was an amazing phenomenon. We take it for granted now, the idea of how machines actually try to quite capably understand what is going on in an email and filter out the ones that are bad or spam. That was something that people would try to do by hand, and then they started to use machine learning. It was a bit of a backwater, although there was this big Internet and technology company boom happening.
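For readers who want to see the idea concretely: the Bayesian spam filtering Knight describes scores each word by how often it shows up in spam versus legitimate mail and combines that evidence with Bayes’ rule. Here is a minimal sketch using scikit-learn’s naive Bayes classifier; the four-message training set is purely illustrative.

```python
# Minimal naive Bayes spam filter: learn per-class word probabilities
# from labeled emails, then score new messages with Bayes' rule.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",        # spam
    "cheap meds limited offer",    # spam
    "meeting notes attached",      # ham
    "lunch tomorrow at noon",      # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)       # bag-of-words counts
model = MultinomialNB().fit(X, labels)     # estimates P(word | class)

test = vectorizer.transform(["claim your free prize"])
print(model.predict_proba(test))           # [[P(ham), P(spam)]]
```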

ARTHUR HOLLAND MICHEL: Would you say there are any continuities between then and now? What has stayed the same about the technology, the way people talk about it, or the way it is presented to the world?

WILL KNIGHT: That is a great question. The truth is that AI is tied up in advances in computer science, which are often indistinguishable from what people might call AI, and you definitely had moments—I remember Deep Blue happening. This is the chess computer that beat Kasparov. I remember talking to the people who built it and I interviewed Kasparov, which was great fun. They had this custom silicon to try to do this very old-fashioned way of looking ahead as much as possible but doing some more clever heuristics around it.

I think one of the things you will see if you go back then and even way before is that the understanding of AI is often a misunderstanding. It is often the case that when people talk about AI they talk about it as if it is something that is becoming more generally like a person and more generally capable. It has always been the case that you have carved up these small capabilities. Same thing with IBM Watson taking on Jeopardy! or AlphaGo and AlphaZero.

It can do these specific things, but when people see that, we are very hardwired as a species to see intelligence in other things, so we naturally say, “Oh.” Look at the reporting around Deep Blue and go back to many previous generations before I was working on the beat, and you will see the way people talk about AI as if it is some giant brain that is going to take over everything.

It is the same right now, even with this generative AI stuff like ChatGPT. People are extrapolating. It is understandable at each instance, perhaps particularly understandable with ChatGPT and so on, but it is often taking something that you see and then extrapolating in your mind what it is actually capable of and missing what the limitations and the problems of the technology are, which are often manifold.
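As an aside, the “very old-fashioned way of looking ahead” Knight describes in Deep Blue is essentially minimax search: explore the game tree as deep as resources allow, then score the frontier positions with a heuristic. A toy sketch follows, on a trivial stand-in game rather than chess; the real system used chess-specific evaluation and custom hardware.

```python
# Toy minimax: search ahead a fixed depth, then fall back on a
# heuristic score at the frontier. The "game" here is a trivial
# stand-in: from n you may move to n - 1 or n // 2.
def moves(n):
    return [n - 1, n // 2] if n > 1 else []

def minimax(n, depth, maximizing):
    if depth == 0 or not moves(n):
        return n  # heuristic: just score the position by its value
    scores = [minimax(m, depth - 1, not maximizing) for m in moves(n)]
    return max(scores) if maximizing else min(scores)

print(minimax(10, depth=3, maximizing=True))
```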

ARTHUR HOLLAND MICHEL: The reason I ask is because I have been covering similar things—drones, AI, and other emerging technologies—for about the same amount of time, and I have noted something very recursive about the way these technologies are discussed. An example of that is that as long as you or I have been working in this space people have been talking about how the technology is moving at unprecedented speeds or that we are in a moment of unprecedented technological transformation, that things are accelerating in an unprecedented way, as though there is no precedent, but we have been talking about these unprecedented happenings for what feels like an unusually long time.

I wonder if, given what you have just pointed out, you feel like there is or is not anything specifically different about these past 12 or so months. Has something changed?

WILL KNIGHT: That is a wonderful question, and I think that is at the heart of what so many people are trying to figure out. To some degree I do not know because I think that is the reality. People do not know, and that is what is unsettling a lot of people.

Probably because I have been around like you, writing about it for a long time, often from an outsider perspective, I do have a hunch that there is a lot more that has to be achieved and that it is not just going to suddenly fall into everybody’s laps in terms of full—whatever you want to call it—“human-level intelligence.” I think when you look carefully at the technology, at what this is, where it is predicting the next word, it is important to go back to the original idea of artificial intelligence as a discipline, and that was creating human-like intelligence.

If you look at human intelligence, whatever people who are slavish about their computer science approach might say, it is the only model we have, and if you look at evidence from cognitive science, linguistics, neuroscience, and all these different fields, there is so much we do not know and there is so much these models cannot and do not do, so there is a lot that is missing. There is a ton that is missing and there is a ton that is problematic, and there is more and more coming out. As I do this reporting and chip away at these models, you see they are weird and quirky in some very fascinating and problematic ways if you are trying to do something that is going to be so generally useful.

At the same time, I think it is fair to say that what happened in the last year blew everybody’s socks off because there were these things that we thought for a long time, We don’t know how to do that, and this technology, specifically this machine-learning, deep-learning approach, does not lead to that, so it was very surprising to a lot of people and unsettling to say, “Oh, you just increase the volume of data and the amount of computing and, lo and behold, some of these things happen.”

I think, again going back to the extrapolations, people even within the field are extrapolating massively—and this is the phenomenon—they will point to this and say, “Well, there has been this progress so it is going to continue and reach human and superhuman levels.”

I do not know if that really follows. Even OpenAI has been saying that there is maybe not much more performance we can get out of just scaling up, so we have to look at other things, which would suggest that it is not going to continue like that. Also, there are things missing that do not seem like they will simply shake out if you do more and more, with the system suddenly able to deal with this stuff, but I would not bet on that. It is fundamentally appealing to believe that we are on the cusp of this once-in-forever moment when we are building something that is going to become superhuman or human-level. I do suspect there may be a lot more twists in that and that it is not quite as straightforward as we are being led to believe or as people are in some cases worrying.
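The “predicting the next word” mechanic Knight refers to is easy to observe directly. A sketch using the small open GPT-2 model via the Hugging Face transformers library; GPT-2 here is an illustrative stand-in, since systems like ChatGPT apply the same next-token principle at far larger scale.

```python
# Ask a small language model for its probability distribution over
# the next token, which is what such models fundamentally compute.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tok("The chess computer that beat Kasparov was", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]      # scores for every possible next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p.item():.3f}")  # five likeliest continuations
```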

ARTHUR HOLLAND MICHEL: It is undeniable that something that has changed in the last 12 or so months is just the number of people who are directly interacting with AI. Would you say that is fair? If so, is that a significant change in the history?

WILL KNIGHT: It is an interesting question. There are a lot of people interacting with these language models, and that is a different modality, a different way of interacting, that is incredibly powerful and affecting. Using language is fundamental to how we communicate, so the idea of communicating with machines in a more advanced way of doing that is a big deal. That is kind of new.

People have been using AI and machine learning going back for many years. It has been increasingly creeping into products and services and so on. I think it is this idea of having something that can actually seemingly converse in languages and everybody being aware of it, like everyone is talking about ChatGPT.

I do not want to downplay the importance because that is extraordinary. We did not think that it was possible. Literally some of the winters of AI were realizing that language was too difficult, so being able to do this much is pretty incredible, but I do also think it is interesting if you look at the perception of AI and the way language works, language relies on us, you and me, having this idea of an intelligence behind the screen, behind the other person’s eyes. We do not have proof of that, but you have this interaction, and language works because we have this mental model of another intelligence and it feeds into this feeling of that.

The other thing is that, compared to, say, Deep Blue or AlphaZero or something, ChatGPT is much more affecting. In terms of that cycle I think it feeds that idea that there is something very alive here or something seemingly intelligent, even more than in previous instances. It is a challenging time to try to make sense of it because there are big developments. It is undeniable. At the same time, it is tricky when those developments are being portrayed as the brink of artificial general intelligence (AGI).

ARTHUR HOLLAND MICHEL: The artificial general intelligence or the artificial superintelligence (ASI) discourse that you are referring to has been, as many people have pointed out, a major distraction.

Part of the reason it is so fascinating to hear you talk about the affecting nature of this new generation of AI tools and the new scale at which people are being affected by their direct interactions with these tools is that that would suggest that there could be a major impact in the way AI ethics is framed, discussed, and popularized as an integral piece of AI more broadly. With that in mind, I was wondering if you have noticed any change or evolution in the way AI ethics or regulations are being discussed, the way people talk about it, if the vocabulary has changed or the mindset of AI ethics is different compared to, say, a few years ago.

WILL KNIGHT: Oh, yes. I think it has completely been flipped upside down in the last year because all of a sudden you have a lot of people—there are always some people talking about superintelligence and existential risk, but now you have a lot of people talking about that and talking to governments about it.

I feel you have this almost split in the AI ethics field, where you have people who have been worried about bias and the way these tools can be used for influence campaigns or misinformation suddenly being pushed aside by people who are talking about long-term risks, which are predicated on this idea of AGI and ASI. I feel that has been quite a disruptive thing, and I think we are in the process of figuring out how that works and where that goes.

The British government is doing this big international summit, and much of the focus is going to be on the existential long-term risks, and I think a lot of people are worrying that the short-term issues and maybe some of the things you could actually hold companies to account on are being pushed to one side. I think that is a real problem potentially.

Talking about how common it is for people to interact with this AI in the form of things like ChatGPT, there are emerging issues which may be related to this new wave of technology that are not long-term existential ones but that we may be missing. Just the very fact that interacting with a language model can influence the way people think—when you talk to another person your views, if you test someone, can be slightly shifted by that conversation, and if people are holding very similar conversations with machines it is possible to sway them too. I think that is an important thing we are not seeing discussed very much.

You have these models out there conversing with people, and it is all a bit of a Wild West right now, but you could see how companies might have an interest in using those to put forward a particular position or to subtly adjust people’s views. Governments, of course, are performing very subtle misinformation campaigns that do not even feel like that. Alexa tells you about this product, but it is really convincing because it has been programmed to know how to be very convincing. I think that will be a big thing. I do think some of those short-term risks are not very clear because you have this shouting about long-term existential dangers, which obviously people are going to focus on most because you would.

ARTHUR HOLLAND MICHEL: I have been thinking, for example, about these new AI celebrity influencer avatars that the company Meta has developed. Now you have someone who looks and speaks like the model Kendall Jenner doling out dating advice and potentially advice on other aspects of the closest, most intimate, and most human parts of our lives. In that sense there are some pretty immediate concerns that do not have anything to do with whether that chatbot will some day, I don’t know, get access to the nuclear codes.

WILL KNIGHT: Exactly. One of the things that has made ChatGPT so popular is that they did this reinforcement learning with human feedback, this process of having people use it and then say, “Well, that seemed like a good answer,” or, “That seemed like a convincing answer.” There is no reason why you cannot train models through the same process to be convincing about all sorts of things, if you wanted to present a particular position or sell a certain product.

We are holding a conversation here because this is so fundamental to how humans communicate, interact, and think: our use of language and expression through language. If you have machines that start to do that in an engaging way, it can definitely mess with people a lot. I think it is interesting seeing ChatGPT get this audio voice and vision capability. There was some blow-up on Twitter because people were saying, “It is just like talking to a therapist”—I think it was someone within OpenAI—but therapists are trained for a particular reason. They are not just language models trained on god knows what on Reddit. We are just playing with stuff that could be quite powerful, as you say, in ways that do not have anything to do with existential dangers like getting the nuclear codes. Nobody can disprove that that is going to happen. That is a worry for sure, I think.
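To make the reinforcement learning with human feedback (RLHF) process Knight mentions a bit more concrete: the first step is typically a reward model trained on human preferences between pairs of answers, which then steers the language model via reinforcement learning. A toy sketch of that preference-learning step follows, with random vectors standing in for real answer representations; everything here is an illustrative assumption, not OpenAI’s actual implementation.

```python
# Toy reward-model training: raters pick the better of two answers,
# and we fit a scorer so preferred answers outrank rejected ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Linear(8, 1)  # stand-in for a real neural network
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Fake data: feature vectors of rater-preferred vs. rejected answers.
preferred = torch.randn(64, 8)
rejected = torch.randn(64, 8)

for _ in range(100):
    # Bradley-Terry style loss: push preferred scores above rejected.
    loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained scorer can then reward "convincing" answers during RL
# fine-tuning (e.g., with PPO), for whatever "convincing" was rated as.
```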

ARTHUR HOLLAND MICHEL: Something else that has been very repetitive about the AI space for all of these years we have been in it—and personally it feels like a lot more than a decade; maybe you feel that way too—is this notion of needing regulations and being on the cusp of having regulations or this urgent push for rules and guardrails. That has always been a couple of years on the horizon. I wonder if you feel like we are actually likely to see rules with teeth anytime soon, and if not, why? What are these obstacles that keep AI regulations on this infinite horizon?

WILL KNIGHT: That is interesting. I do not feel like we are going to see hugely meaningful regulations. The European Union is proposing slightly more stringent ones. So much of what we are seeing around regulations to me just feels like theater; it is like people want to say, “I’m doing something about it because it is so important.”

You cannot forget that the U.S. government will see this technology and see potentially something that could transform the economy and provide an enormous advantage to the economy, to their different industries, and to their military. They are not going to be very keen to regulate it. They are reacting to the public reaction in saying, “We are going to get people in and have them agree to these voluntary rules” and whatnot, but I don’t think they have very much interest in regulating it at all. It is the opposite. They want this to take off.

You can see similar things happening with autonomous driving. They have been very, very reluctant to regulate that much in the United States because they want to see that industry take off. From a government policy and capitalist objective perspective it is understandable: it does not make sense for them to want to regulate it heavily, so I am not super-optimistic that we are going to see very meaningful regulations, and I think that is probably the reason.

ARTHUR HOLLAND MICHEL: In that sense would you say that AI is different from other spaces and industries that have been regulated like aviation or the motor industry? On the tail end of that, often what we hear is that it is incredibly complicated to govern AI, that AI is just far too complex. I wonder if there is actually an evidentiary basis for that given that we have in the past succeeded in regulating some fairly sophisticated, complex, and multifaceted technologies.

WILL KNIGHT: That is true to some degree. It is probably closer to those industries, and you could certainly come up with much more stringent and much more meaningful regulation.

I am not an expert on regulating different industries. I do not doubt that there are challenges to doing it with AI that are unique, but as you say there are pretty significant challenges doing it for biotechnology and other complex, fast-moving industries. But you do have a moment where governments are being told by a ton of experts that this technology is a generational shift and is going to change everything. The last thing they want to do is pump the brakes on it and put too many controls on it, especially in the United States, so I think that affects it.

ARTHUR HOLLAND MICHEL: There is something to be asked there about whether that need to balance strategic needs and the safety of one’s citizens is a unique balance in AI or whether perhaps there may be other forces that are driving this notion that regulating AI is going to preclude benefiting from its possibilities.

WILL KNIGHT: That is the narrative, isn’t it, that it is somehow unique. I do not know that it is not unique, but I also do not see that it necessarily is either. There are certainly huge numbers of lobbying forces at play here. Very, very rich and powerful companies are trying to preempt regulation.

Touching on your own work on drones and military technology, it also does feel like there is this kind of unusual climate of feeling that this technology has an enormous strategic potential, whether in intelligence or military technology. Even if it is not out there in public discussions, I think that weighs a lot on the way the government is thinking about it.

ARTHUR HOLLAND MICHEL: I am glad you bring that up because the military space is perhaps where this ethical tension is the most fraught. You published a phenomenal feature a couple of months ago on autonomous military technologies. I would recommend everyone go out and read that story. Just for our purposes today can you tell us a little bit about that story and what were your main conclusions? Part of what I want to ask is also what was it like to actually come face to face with these technologies?

WILL KNIGHT: Thank you for saying that about the story. I was interested in this topic a couple of years ago because I felt that a lot of the reporting around it, mostly about Maven, this Google project, was very knee-jerk, and I thought that technology is not that black and white when it comes to its use in defense or military, so I wanted to learn more about that.

I spent a lot of time trying to build connections and learn more and more. I became very interested in this Navy application of the technology because it had not had much attention. It was also moving quite quickly because there is now an idea of using cheap autonomous systems to increase the visibility and responsiveness of forces, this idea of “maritime awareness,” and it was actually being tested in the Gulf of Oman by the U.S. Fifth Fleet because they had been given license to test some of these technologies.

Diving into it was fascinating because there are a lot of different forces at play often within military or defense-related circles. There are a lot of different views on what technology is going to be important and is not, and there are plenty of people who think AI is not as important as a lot of people who believe it is going to be transformational.

If you look at the history of military conflict and technology related to that, there are these enormously important moments. Technology is so fundamental to military capabilities and power and success and has been over history, so there is a strong incentive to try to be on top of the latest technologies that are going to be meaningfully important. It does not mean that the most exciting thing out of the tech industry is going to be important, but there is also this moment where technologies that have been private sector commercial technologies are suddenly becoming more applicable to the military sphere.

You can see this in Ukraine. From your writings on drones you know this very well. We have seen the cost of drones drop enormously over the last several years, and it is changing the nature of a lot of conflicts. It is massively important. That is not AI specifically, but it is related to AI.

There are a lot of forces at play, and there is this idea gaining currency, I think, that is quite appealing to people: that autonomy and AI are going to be a way to have a military edge.

There is also a strong incentive in some parts of the Pentagon to put this idea out there that they are racing to adopt AI, especially in a maritime situation, so that you create some doubts in the minds of America’s near-peer adversaries. The obvious one is China. Everyone is kind of obsessed with that and obsessed with the idea of some potential conflict. It is alarming to me how hawkish I feel Washington has become about China.

Reading military history you see how the race to adopt technologies can sometimes become a self-fulfilling prophecy. I am no expert. I am trying to learn—a lot of people are experts on military history far more than I am, but I am trying to understand it, and it does strike you that if you try to race to adopt this stuff and deploy it, it almost feels like an end unto itself.

There is a lot more complexity to the question of AI and its use in military domains. You have written excellent stuff on this. It is not a slam dunk that it works or is useful at all. Often it just is not.

At the same time, there is this big momentum shift in that direction which I think is going to meaningfully—along with things like much cheaper autonomous systems—transform the makeup of different militaries. You see a lot of the investments that different countries are making, and it is a response to what happened in Ukraine, when you saw these cheap systems changing the nature of how one might expect that conflict to go, when you had those cheap drones at the beginning. That changed after a while, but it is interesting. I still feel like I am learning a lot about that. I feel like it is still an open question how useful AI really is and how quickly it will be used.

One of the key questions, I think, is this: if we look at AI being deployed anywhere there are autonomous systems or these chatbots, when you do it in critical situations, what is the engineering around that? You cannot just put these machine-learning systems out there and see what happens, because they do not always behave predictably. That is just fundamentally their nature, so you have to engineer to try to deal with that.

I think it should be a real worry that that is an issue because it is an emerging form of engineering. It is not well-known. I would actually expect that the United States might be very, very good at developing a way to do that relatively reliably, but I would worry about a lot of other countries that maybe are not as well-resourced and are racing to try to find parity. That picture is quite concerning.

ARTHUR HOLLAND MICHEL: Something that I have always appreciated about your reporting is that you pick up on questions and perspectives that maybe are not picked up by, shall we say, the discursive mean of the AI space. For the purposes of our listeners, could you share what you think are some of the areas or questions in this space that you feel we should be paying more attention to, and in particular whether you see any ethical questions here that perhaps have not yet found a satisfying answer?

WILL KNIGHT: I am glad you asked that. One thing that has come up, and that I find interesting—and I do not know the answer to this—is that I have heard people who I very much respect in the AI space who are surprised by the capabilities of some of these models but point out limitations: things like they do not mimic a sense of self, they do not have a consistent one, and they do not have any goals. They do not set their own goals and make their own objectives.

I have heard people ask the question, well, maybe do we want to do that? If you think about what we are doing, you are trying to build something that is not purely an abstract intellect; the intelligence we are trying to recreate, especially learning from human behavior, is human-like intelligence. I think we have a lot of problems with very intelligent humans.

Look at the world now. It is really dismaying. When there is all this discussion about existential risk I wonder about the mechanics of how we are building this stuff. Is that the smartest way of doing it? I guess we do not know another way to try to make things very smart, and as I said before we do not have a model that is not human intelligence, but it does not feel like we are asking much about the basics of that when it comes to developing systems that might behave in ways that we don’t like or that are problematic. That is not an existential risk; that is just asking, “How do you avoid mimicking some of these things that maybe we don’t want to have in systems?”

It is a funny question because I do not want to feel like a Luddite who is saying, “Maybe we don’t do this or that with the technology.” A lot of the discussion around this has become a bit performative where people are talking about existential risks. I want to get more into what does that mean. There are some papers where they look at models where they try to deceive people. Is that interesting? Is that actually really a problem? Is that something you can easily fix? Some of the detail of the misbehavior of these systems I think is going to be very interesting and should be focused on a lot more.

Those are some of the main things. I do think the way these models can influence people subtly matters—developing a system that is not intelligent as I would describe intelligence but that can mimic conversation relentlessly and effectively toward a particular end feels like it could be a pretty unsettling thing to build more and more of. I think that needs a bit more exploration and discussion.

One other thing. When you have companies going around saying, “I have got potentially the most powerful technology in history,” but they will not make it possible for people to examine it and look at it, and there is not much transparency, that feels very weird to me. That does not feel like science; that feels problematic.

I think there are a lot of brilliant people who could try to understand some of the behaviors here that people are worrying about. I feel that is a real mistake. When you have companies driving the discussion around regulation and saying, “Well, we think there should be a national register and that you have to have permission to do it, but it is only going to include us and then we will not reveal how this stuff works,” that feels wrong. I think as a society, especially if it is so powerful and fundamental a technology, we should be trying to understand it more. It feels like people are just throwing things out there for the profit motive.

ARTHUR HOLLAND MICHEL: I would imagine that many of our listeners might find this barrage of questions, considerations, and uncertainties to be pretty daunting. I know that from my perspective I find it pretty daunting as well, as I am sure you do too perhaps at the best or worst of times.

I wanted to ask finally, what is it that gets you out of bed and through the day? What is it that motivates you? What are you optimistic about? What gives you a joyful sense of the future, if anything?

WILL KNIGHT: It is a bit overwhelming at the moment because there are so many questions that it feels sort of unsettling.

I think it is important to remember, going back to the first question, covering this beat and looking at it early on, that this is an incredible moment honestly. Having a technology that can do something as general-seeming as these models can do is something to be celebrated. I feel lucky to be witnessing that quite close up. I do feel that is amazing.

One thing that is interesting is talking with people who are embedded in the field, they often say, “It is amazing to think my kids are going to grow up where there are these sorts of tools,” whether you think of them as intelligent and whether they might in time be indistinguishable from some kind of intelligence. It is crazy to think about that being something they will grow up with, and that is interesting, to have something that can converse with people so convincingly and often usefully. It is a fundamentally new technology. It is pretty striking to be at that moment in history.

I try to stay optimistic when it comes to everything. I think that is probably the thing that keeps me most optimistic, just thinking that we are at this pretty incredible moment. The hope is that there are a lot of positive outcomes that can be wrung from it, even if it is going to be a slightly unnerving and unsettling period for a while.

ARTHUR HOLLAND MICHEL: I might add that one of the things that keeps me optimistic is that there are folks like yourself who are on top of this, holding those who create and use these technologies to account on these sorts of questions, and who will continue to do so while hopefully practicing an enormous amount of self-care.

I would say that is a great note to end on, so I will just finish by saying that, Will, this has been a phenomenally fascinating conversation for me. I am very grateful for your time today.

WILL KNIGHT: You are very welcome. Thank you for having me. It has been fun.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit organization. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
