AI for Information Accessibility: The Ethics of "Intelligence Augmentation," with László Z. Karvalics

Sep 20, 2022 - 28 min listen

In this episode of the AI for Information Accessibility podcast, host Ayushi Khemka discusses the deep history behind artificial intelligence with László Z. Karvalics, founding director of the BME-UNESCO Information Society and Trend Research Institute. Their conversation touches on the Google AI sentience debate, information preservation, social media, and the concept of "intelligence augmentation."

The AI4IA podcast series is presented in conjunction with the Artificial Intelligence for Information Accessibility 2022 Conference, to be held on September 28 in commemoration of the International Day for Universal Access to Information. The AI4IA Conference and the podcast series are also organized in collaboration with AI4Society and the Kule Institute for Advanced Studies, both at the University of Alberta; the Centre for New Economic Diplomacy at the Observer Research Foundation in India; and the Broadcasting Commission of Jamaica.

To register for the conference, click here.


CORDEL GREEN: Hello and welcome. My name is Cordel Green, chairman of the UNESCO Information for All Programme Working Group on Information Accessibility. Welcome to the AI for Information Accessibility podcast, organized by Carnegie Council for Ethics in International Affairs. Your host is Ayushi Khemka, a Ph.D. student at the University of Alberta.

AYUSHI KHEMKA: Today we have with us Dr. László Karvalics. Dr. Karvalics is the founding director of the BME-UNESCO Information Society and Trend Research Institute. He is also an associate professor and former head of the Department of Cultural Heritage and Human Information Science at the University of Szeged.

Thank you for joining us, László. We are so happy to have you here today. Why don't you tell us a bit about how you came to be involved in AI and ethics and what you are working on at the moment?

LÁSZLÓ KARVALICS: I am a historian, and specifically I started to deal with information history. I tried to compose a historical reconstruction of information technology solutions and their ecology, and that was the first time I ran into the problems of computing, automatization, and intelligence augmentation as part of my historical research. It was extremely interesting to discover that the basic questions of the artificial intelligence (AI) narrative are far older than anybody ever expected.

The story starts in the Paleolithic Age, when hunter-gatherers made their first traps with a "release and retarder mechanism" which is exactly equivalent to the if/then gate of the programmers. Artificial intelligence in this approach is a very, very long story, and it was obvious to finish the information technology typology with the hot potatoes of artificial intelligence from this atypical perspective.
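As a minimal illustration of this analogy, a trap's release-and-retarder mechanism can be sketched as the same conditional a programmer would write; all names below are hypothetical, chosen only to make the point concrete.

```python
# Illustrative sketch of the analogy above: a Paleolithic trap's
# release-and-retarder mechanism behaves like a programmer's if/then
# gate. All names are hypothetical, chosen only for illustration.

def trap_step(trigger_pressed: bool, retarder_armed: bool) -> str:
    """One 'cycle' of the trap: release only if the trigger fires
    while the retarder still holds the mechanism armed."""
    if trigger_pressed and retarder_armed:   # the if/then gate
        return "release"                     # the stored action fires
    return "hold"                            # otherwise nothing happens

# Usage: the trap 'computes' the same conditional a program would.
print(trap_step(trigger_pressed=True, retarder_armed=True))   # release
print(trap_step(trigger_pressed=False, retarder_armed=True))  # hold
```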

Let me mention one very important point. Information researchers define the different information technologies along the natural or original information cycle, the stages of information activity, which is described most popularly by the so-called "OODA" loop: observe, orient, decide, act. We can build the categories of mind-augmenting technologies along this typology.

If we try to find the place of artificial intelligence in this typology, it is not the observation stage; that is the world of sensors. It is not the orientation stage, because that is the world of semantics, of meaning generation: the human brain, the human mind. It is not the decision stage, because it is the human mind which makes decisions. It is in the act stage, the last stage of this information cycle, because there is a very special act, the objectivation of information, which means that we make signs from the information in our brains.

It is extremely important to understand that coding, decoding, and recombining these signs is a very narrow field of our information behavior, and when we automatize this type of information behavior and support our numeric skills, we construct machines which are able to compute at an enormous capacity. The place of artificial intelligence in our information behavior is in this very special angle.
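As a minimal sketch of this placement, the OODA stages and the agents Karvalics assigns to them can be laid out as follows; the stage names come from the loop itself, while the mapping structure is only an illustrative assumption.

```python
# Hedged sketch of the OODA information cycle as described above,
# marking where Karvalics locates AI: only the sign-manipulating part
# of the "act" stage. The agent descriptions paraphrase his argument.

OODA_STAGES = ["observe", "orient", "decide", "act"]

# Who (or what) carries each stage, per the argument above.
STAGE_AGENTS = {
    "observe": "sensors",                      # data capture
    "orient": "human mind (meaning generation, semantics)",
    "decide": "human mind (decisions)",
    "act": "machines: coding/decoding/recombining signs at speed",
}

for stage in OODA_STAGES:
    print(f"{stage:>8} -> {STAGE_AGENTS[stage]}")
```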

When I approach the artificial intelligence field I always highlight this important fact, that we are talking about a very special supportive technology, and all the nightmares and moral panic scenarios about artificial intelligence are just a consequence of ignoring this very basic and important aspect: artificial intelligence entities do nothing other than compute at extreme velocity.

All the metaphors for its performance, like learning, decision-making, or sensing, are misleading verbs and wrong metaphors. I think it is very important to clarify in this first step of our conversation that there is nothing dangerous in artificial intelligence itself. The danger comes from structures and institutions where people use old-school thinking and badly designed institutions, not from our beautiful technologies.

AYUSHI KHEMKA: That is so informative, László. I am also thinking about your point that AI is not as dangerous as it is sometimes made out to be. How are those issues related to information accessibility? Since our AI for IA Conference is also coming up, do you expect more such conversations from that conference? What are your expectations for it?

LÁSZLÓ KARVALICS: In my vocabulary IA is intelligence augmentation, which is the key expression. We have a lot of technologies, tools, and cultural weapons to augment our intelligence. The so-called "artificial intelligence technologies" are just one cluster or one set of these solutions. I hope in this conference we can talk more about the human brain and about supporting mental work and decision-making competency with and by machine systems. I think it can be a very innovative subtopic: how to use artificial intelligence for intelligence augmentation in a new way or with new technology.

AYUSHI KHEMKA: I want to go back to the point where you were talking about AI and the dangers that institutions bring into it. I am thinking about how we approach AI through a technological lens, in terms of its engineering and structures, or "institutions," if you want to call them that. Do you think we bring in a social consciousness? Can we imagine AI conversations in these binaries of the technological and the social, or do you think there are overlaps?

LÁSZLÓ KARVALICS: The technology development, the production part, is always within the economic and political dimension, but the usage and transformation of these technologies into parts of our everyday lives is always social. The gap is that a business approach or business-driven decision-making is always [concerned] with possible users and profit margins, and this approach doesn't [take into account] civilizational challenges, the needs of local communities, cultural aspects, or other important issues.

That is why the big mystery is how to generate a new kind of loop to somehow influence business development from a social perspective, trying to identify profit from another angle or perspective, not only in monetary terms but as a social good. This AI-for-social-good narrative can be a good starting point to extend the discourse in this direction, because there is no business activity without well-designed [input] and [output]. If we can share specific social needs with developers and we can convince the political elite to allocate resources for these goals, maybe a new trilateral conversation among business, politics, and society can lead to a lot of new developments in this AI-for-social-good arena.

AYUSHI KHEMKA: I am also thinking right now of the Google AI debate. There has definitely been a lot of conjecture around Google's AI becoming sentient. Do you have any thoughts on that? I am also trying to make meaning for people who are not necessarily working in the fields of AI. How do these debates affect regular people's lives?

LÁSZLÓ KARVALICS: I have a very extreme position on this debate, if we are talking about the Google developer who started to talk about sentient and superintelligent AI. That is why I started with my position that AI doesn't think, doesn't feel, and doesn't decide but simply computes. There is no such entity as "sensing" AI, "thinking" AI, or "learning" AI. There are algorithms, engineers, and computing machines. That is why this statement is a foolish statement. From a Google perspective it was a very logical step to kick this guy out. I am with Google in this situation.

However, don't forget that this dumb machine which I am talking about is a supertool to overcome our shortcomings in several fields. In the medical field, in processing scientific data, and in a lot of other domains we are not able to solve problems without this support, without this machine, but we do not need the overemphasized tabloid approach, which focuses only on the potential danger of AI for us.

AYUSHI KHEMKA: That's quite insightful.

When we are talking about algorithms we cannot not talk about social media at large as well, because oftentimes the two go hand in hand these days. A hot topic of the conference is information accessibility, not just AI, and we often consider social media a platform through which a lot of information is accessible to a large number of people, available at their fingertips. What do you think about the concerns over privacy of data that come along with this?

LÁSZLÓ KARVALICS: I think that the privacy of data is a valid discourse in a social media context, but the usage, and the patterns of usage, of useful information through social media is a far more important context than the privacy problem. It is very easy to deal with privacy problems, yet 90 percent of the discourse is about this privacy problem, and we do not talk enough about the benefits of these enormous information streams through social media.

When we are talking about fake news or a post-news-era context, it is easy to forget that the information cycle only starts with the input of information, news, or data. This is the observe stage, but the orient stage is the empire of brains. The meaning is always a result or an outcome of a meeting of the data and the brain. So it is not about the data but about the brains and the usage of this information.

Of course there is a lot of bad usage of information, but it is easy to forget the other side, that the majority of usage is extremely useful. That is why the social media companies are successful and rich: they serve enormous information needs. Of course during this process there can be problems with privacy of data, but we have a lot of tools to overcome this situation.

Just as one example, we are talking about the infodemic in the context of a pandemic, an infodemic of poisonous content or fake information. What is the medicine in the majority of the literature? Critical media literacy, which means that first of all we have to adjust our brains to be able to clean this information stream. Just as we clean the water we drink, we can clean the information stream we consume. Our tools evolve around our brains and not around the machines. The fake-news-detecting pages are run by people who can understand meanings, who can check facts, and who can find misinformation, so again it is about human brains and not about the machine.

AYUSHI KHEMKA: You talk about misinformation and disinformation. Are there ways in which different stakeholders can approach this problem, not just regular people consuming this content but AI developers, government organizations, or the social media platforms themselves? Do you think there is a way in which we could tackle this situation all together?

LÁSZLÓ KARVALICS: First of all it is a proportionality issue. If you take the fake or poisonous information, with its quantity, strength, and capacity to reproduce, and compare it with the size of the non-fake, useful, and important information, the difference is enormous.

We overemphasize the importance of fake news and infodemic content in the giant stream of information, and we forget that even if somehow there is an act after a decision based on false input, it doesn't mean that there are not millions or trillions of successful information cycles on the other side. Fake news means nothing if no fake act follows from it, or if it is overridden by personal experience or by other relevant information. This field is important to analyze, to understand, and to cope with, but its importance is very limited compared to its very, very loud presence in our everyday debate.

AYUSHI KHEMKA: I want to circle back to our conversation on information accessibility. When we talk about information preservation, there is a huge body of work that engages with the whole idea, whether from researchers, industries, policymakers, and so on. Do you think preservation automatically implies information accessibility, or are those two different concepts which get conflated?

LÁSZLÓ KARVALICS: No, it doesn't automatically imply accessibility. These are two separate empires of the information ecosystem. It is extremely important not to lose information before its digitization. That is why endangered cultural heritage objects, not only textual but visual or artifactual ones, are extremely important to digitize, because once they are part of this endless digital ecosystem they are potentially accessible.

Of course actual accessibility depends on a lot of other issues: findability, generating metadata, transcending the language problems, deciphering a digitized manuscript, producing a transcription of that manuscript, and so on. There are a lot of problems, but what is important is to make all this digitized information potentially accessible. If this information is potentially accessible and there is a way to find it, and the information professionals can help different target groups find the relevant information, I think this is the brave new world of the information ecosystem. But the first step is always preservation through digitization.
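As a minimal illustration of why accessibility requires more than digitization, here is a sketch of the kind of descriptive metadata record that makes a digitized object findable; the field names loosely follow Dublin Core, and all values are invented placeholders.

```python
# Illustrative only: a minimal metadata record of the kind that turns a
# digitized heritage object from "preserved" into "findable." Field
# names loosely follow Dublin Core; values are invented placeholders.

record = {
    "title": "Example manuscript",
    "type": "digitized manuscript",
    "language": "la",               # language tagging aids cross-language search
    "description": "Diplomatic transcription attached",
    "transcription_available": True,
    "rights": "public domain",
}

def is_findable(rec: dict) -> bool:
    """A crude proxy: an object is discoverable only if the basic
    descriptive fields exist alongside the digitized file itself."""
    return all(rec.get(k) for k in ("title", "type", "language"))

print(is_findable(record))  # True
```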

AYUSHI KHEMKA: Perfect. I think that brings us to the end of our episode with you, László. Thank you so much. You have definitely given us some interesting insights. One of my favorites was your strong position on the Google AI debate. I love it when people take one. Thank you so much for joining us on the podcast, László.

LÁSZLÓ KARVALICS: Thank you very much, Ayushi. All the best for you.

CORDEL GREEN: The AI4IA Conference and the podcast series are being hosted in collaboration with AI4Society and the Kule Institute for Advanced Studies, both at the University of Alberta; the Centre for New Economic Diplomacy at the Observer Research Foundation in India; and the Broadcasting Commission of Jamaica.

