Artificial Intelligence & Equality Initiative: Think Before You Code

June 24, 2021

ThinkTech is an independent nonprofit association, created by and for students, young technologists, and professionals working to shape the impact of artificial intelligence and other digital technologies on people and society. Under the motto "Think before you code," it serves as a platform for creating guidance for responsible technology development. In this podcast, Senior Fellow Anja Kaspersen speaks with Lukas D. Pöhler, Eva Charlotte Mayer, and Agnes Gierulski of ThinkTech about their projects.

ANJA KASPERSEN: This is Anja Kaspersen with Carnegie Council for Ethics in International Affairs and the Artificial Intelligence & Equality Initiative (AIEI). I will be speaking today with three members of the artificial intelligence (AI) ethics initiative ThinkTech about how to engage young professionals in responsible AI development and societal outreach.

Lukas, Agnes, and Eva, thank you so much for being with us here today. Tell us about this initiative.

LUKAS PÖHLER: Let me go back to the start. In 2017 I realized as a student of electrical engineering that there was no community where I could discuss the implications of the technology I was creating. At that time I was working in a robotics lab, developing algorithms for humanoid robots. I realized that there were several challenging implications of this technology, so I discussed it with a couple of students and friends of mine, and we said, yes, why not find out if there are other students in the community with whom we can discuss these topics.

A lot of students got interested. We held a one-day workshop with more than 40 applicants, and we discussed topics such as the misuse of AI algorithms and AI in health care. We realized that there was a big need for such a group, a platform where people from the tech fields and developers could come together and openly discuss their fears and their thoughts about the technology they are building.

During this time our current slogan, "Think before you code," came up, because we are building the technology, but we cannot act responsibly if we don't know what our role is in all of this. We don't want to hinder innovation, but rather to innovate responsibly.

Our core goal is to have responsible technology within a fair and just society. This builds on three pillars:

(1) That we are a platform to coordinate developers to initiate and host discussion;

(2) That we enable individuals and end users to understand and use the technology, and digital technology in particular;

(3) That we shape the development and deployment of the technology towards responsibility. As a number of our members are developers, this is a core goal: that we discuss and think through what our role is and what we can contribute to this responsibility.

ANJA KASPERSEN: Thank you for that, Lukas.

Agnes, can you elaborate on some of the points that Lukas just shared with us about the origin of the initiative, and also where you see this initiative playing a particularly important role?

AGNES GIERULSKI: I got a social science degree at the Technical University of Munich, a program that was basically trying to do just that: bridge the gap between society and technology and discuss the tensions that arise there. I was searching for somewhere I could actually put something into practice, and that is where I found this group.

So this was my opportunity to actually go in a more practical direction: what it actually means to develop responsible code, and how this could be approached in a real-life environment. This has been the goal so far, and the approach we are taking is to raise awareness of potential pitfalls. For us this is the foundation of coding responsibly, because with all the disasters and mishaps that have happened, I don't believe they were done on purpose. Yet even though they were not done on purpose, they still happened, so something else must be done. In our understanding, raising awareness of where potential mishaps can happen and educating people in that area is what this AI ethics governance group is about.

ANJA KASPERSEN: Thank you, Agnes.

Eva, you are also a founding member of ThinkTech. Would you like to elaborate a bit more on your involvement in the initiative and where you see its value?

EVA CHARLOTTE MAYER: I think the amount of technology that relies on artificial intelligence increases every day, and so many of these technologies are used by millions of people daily.

To illustrate my point with an example: in social media we have something called a recommender system, where a machine learning model predicts which posts are interesting for a certain user, and the social media platform then shows the user only the posts that receive a very high ranking from the recommender system. This is a great invention because it keeps users interested. It keeps them engaged, and that is what the software wants. That is a really cool thing.
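A minimal sketch of the ranking mechanism Eva describes, with a hypothetical topic-affinity score standing in for the platform's learned model (all names and numbers here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    topic: str

def predicted_interest(user_topic_affinity: dict, post: Post) -> float:
    # Hypothetical stand-in for the learned model: score a post by the
    # user's past engagement with its topic.
    return user_topic_affinity.get(post.topic, 0.0)

def rank_feed(user_topic_affinity: dict, posts: list, k: int = 3) -> list:
    # The platform shows the user only the k highest-scoring posts.
    ranked = sorted(posts, key=lambda p: predicted_interest(user_topic_affinity, p), reverse=True)
    return ranked[:k]

posts = [Post(1, "sports"), Post(2, "politics"), Post(3, "cooking"), Post(4, "politics")]
user_history = {"politics": 0.9, "sports": 0.2}  # engagement so far, mostly politics
print([p.post_id for p in rank_feed(user_history, posts)])  # [2, 4, 1]
```

Everything below the score cutoff is simply never shown, which is the opacity Eva goes on to describe.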

However, I think something we also need to keep in mind is that it is problematic because the rules that the machine learning system is using to promote content are not very transparent. So, in the past, for example, if one newspaper was publishing a controversial opinion, then a different newspaper could state a different perspective. Nowadays, if you have a user posting some kind of conspiracy theory or something like this, then this might be shown to a lot of other users without anybody who has a different opinion noticing that. So it is not very transparent anymore.

In my opinion this is a very critical change in how we form our opinions and how people might be influenced in their particular leanings, and that is why I think we need to discuss certain questions here: Do we want some kind of control or oversight in this field? And if we do, how do we implement it?

Coming back to your question, I think everybody should engage in the discussion. This is something that concerns everybody because everybody uses this technology. Of course it concerns young professionals in the tech industry because they need to implement it, but it also concerns politicians, journalists, and honestly everybody, because the issue is so diverse.

LUKAS PÖHLER: The example Eva mentioned is a great example of how these systems enter our private lives. They have a very strong impact at a big scale, and the question I had at that time, and still have, is: Which rules are in place for this? What is the framework I could navigate by?

I did not find very good answers, to be frank. There were no clear guidelines on what you do and what you don't do and how you do things. You develop an algorithm, you optimize the algorithm, and then you publish it and it is out there. This is how a lot of things work in the science world.

My question is, or was: Who is then responsible for this algorithm? To put it more optimistically: Who is responsible for making responsible algorithms? This led to ConsciousCoders and ThinkTech, but it is basically also the people who develop the algorithms, the developers or scientists working on AI systems. They take the first steps. They know the technology. They know the pitfalls and the shortcomings, so it is their responsibility what they do and what they don't do. After we started ConsciousCoders, we thought: Okay, let's just think about it, and then we will do it well. Or at least we won't do it badly.

But it is not that simple, because we realized that if you are just talking with colleagues or friends from university, you run into quandaries: What is in the end good, and what is bad? How should we do it responsibly? This is really challenging. This is what we learned.

This is why, at the beginning, we also started to get more people involved from different spheres and different study courses, when we said: "Okay, we want to really understand both sides: understanding the technology that is out there, but also seeing how the world we want to live in would be described. How do we define good technology, technology that is responsible or 'socially aligned,' as some initiatives call it?" This is hard work. You can't do it only within your own sphere or your initiative; we learned we needed some outreach. We had to talk to people outside to get the perspective of users and of society, and then take that back and think about: Okay, how do we translate these expectations and hopes, but also this gap in understanding of the technology, back into how we develop it, what we put into code, what has to be right, and so on?

ANJA KASPERSEN: Is there a receptiveness to this type of thinking?

EVA CHARLOTTE MAYER: Yes. Actually I do think there are lots of people who are very receptive to these topics, and it is also because these topics are becoming more and more popular. You can see that. If you go on Netflix now there are two or three different movies about these kinds of things like The Social Dilemma, and Coded Bias is coming out. That shows that this is becoming a hip thing to talk about. Maybe that is why, but I hope actually also because people do realize that they have a responsibility and that is why they are interested in hearing about it.

Actually I was doing an internship before I started my current job. I was with a company that is criticized a lot in Germany, and I was kind of expecting that people wouldn't like to talk about the things they are criticized for. But actually my manager sat down with me one day and was like: "You know, I heard that you were in this AI ethics initiative. Tell me more about it." We got talking about these topics and also about the criticism from that company. It was really great to have somebody to talk to about this and see that they are interested. So, yes, I do think people are becoming more and more eager to take on responsibility.

ANJA KASPERSEN: Would you say that is because of public pressure or because we are becoming better at demystifying what ethics is in this context?

EVA CHARLOTTE MAYER: I would say the latter.

AGNES GIERULSKI: I would like to refer back to Eva and agree with her point that people are becoming more aware of it and that companies are interested in it as well. However, I would say the challenge is what that actually translates to, because it is really hard to have deterministic rules for a variety of cases, that is, generalized rules that developers or decision makers can always adhere to. Within the companies I have talked to there is general agreement that this is an important topic and should be taken up.

It becomes much harder and much more complicated once the question arises of what this actually means, because, as I said, there are few deterministic rules, so what else is there to say? I would argue that the way to approach this is rather complicated. It is really not deterministic, because the people shaping these algorithms, and the way they are implemented within society, have a certain perspective, and even with the best intentions they may not see all the challenges and pitfalls that can occur. As a consequence, raising awareness of where potential pitfalls may occur is a first step, to build this moral awareness so that individual developers and decision makers take a closer look at potentially critical decisions.

But the second step in my opinion is to move towards more inter- and trans-disciplinary collaboration, because when people with diverging perspectives come together, whether their differences are educational, geographical, professional, or simply different life experiences, their perspectives may complement each other and pick up on potential issues that a single perspective would not have caught. Translating this into a professional workflow, however, is much more expensive and much more complicated, and that is where I currently see the largest challenge in realizing these ideas.

ANJA KASPERSEN: So translating this into a process for addressing pitfalls, bringing trans-disciplinary expertise together, and also ensuring the right kind of diversity, if I am summarizing you correctly.

AGNES GIERULSKI: Yes, absolutely.

I mean, obviously it would be great if this moral awareness was already talked about and worked on in school. However, I think it is a topic that you are never done with. I don't think it's something that you can gain and keep your whole life. I think it is continuous work. That's how I see it, just constantly working on showing potential pitfalls or making people realize how they come to certain conclusions and whether there are any implicit, unconscious biases there that they may not be aware of.

LUKAS PÖHLER: So it is still largely in the hands of every developer or individual to find the right tools. There is a big gap between the aspirations of legislation or of the guidelines that exist, which are often very high-level, and the last step of turning these high-level goals on fairness, accountability, and explainability into practice: you need to deliver a product that has certain requirements and fulfill them, but then also go a step further and say, okay, we also want to ensure that it is not biased, that it is fair, and that it is robust against different input data. I think the discussion has matured somewhat, but it is still far from the goal, for sure. This is also where the discussion is heading now that more of our members are growing out of university and moving into the professional world.

ANJA KASPERSEN: This is a discussion that has been going on in the AI ethics field for quite some time, this issue of translating policy to practice, or, as you were saying, aspirations to practice, which is a nice way of looking at it.

LUKAS PÖHLER: It is in the small decisions that you need to make on a daily basis. You really need to bridge this gap, often by yourself. You read the law and think of cases: Is this an edge case, or is it still within the legal framework, or are we going one step too far?

It is not like you learn it once and then you are able to do it. It is not a single skill. I think you also need experience to navigate through it. It is not like learning to swim; it is more like learning to navigate difficult terrain. Every day you try your best to build good models and do the necessary tests, but you also have to be frank, discuss with your colleagues, be open to feedback, actively seek feedback, and bring things to the table where you think, okay, here we might need to discuss the limitations of the model we are building.

ANJA KASPERSEN: So individual responsibility is something that weighs on you and others in your field on a daily basis, this question of how to constantly manage the tradeoffs?

LUKAS PÖHLER: Exactly. It involves a lot of tradeoffs, because you have some final deliverable and you try to make the path to it as good as possible, but in the end it is on you. You build the code, you make the decisions, and you try to make them as good as possible.

EVA CHARLOTTE MAYER: I think the hardest part is often that most of the technology we are working on has two sides. Sometimes you would like to create a new piece of software, and you are very eager about it because you think it will do a lot of good, but you always need to think about it twice because it might have a flip side, and that flip side might not be good.

ANJA KASPERSEN: So, Agnes, over to you. Is there a generational gap?

AGNES GIERULSKI: I think the basics of machine learning, for example, are fairly accessible these days. Maybe within my generation there is a higher affinity for accessing this knowledge and for putting into context the question of whether I would trust a machine like that.

It's nice to see that the younger generation is really involved in this and has a high level of technological understanding, but I wish that my parents' generation, for example, would also be more open to it. There are many horror stories around artificial intelligence, and that is often the first thing they know about it. So I sometimes have a hard time explaining the other side: there is a lot of potential, great potential, and the technology is not necessarily bad in itself. It really depends on how it is created and how it is implemented. Calling it all evil is not going to get us anywhere. Having this awareness and making this differentiation about how these technologies are created and implemented may get us further in the discussion of how to deal with them.

ANJA KASPERSEN: Four years have passed since you initiated ThinkTech or ConsciousCoders, as it was called initially. What has happened since you started? What has happened in those four years, both in the technology domain and also in the way you see that the initiative has evolved?

LUKAS PÖHLER: Four years ago there were hardly any initiatives working on AI ethics, but in the last four years a lot of initiatives have come up, from industry consortia but also from universities, where new chairs and study programs have been created. Other non-governmental organizations have appeared, and we see a lot of initiatives from governments.

Definitely a lot of things have changed. Technology has progressed rapidly. We see one thing called AutoML, where you basically don't need to program everything yourself because a lot has already been preprogrammed. People can now use it as a toolbox and apply very sophisticated algorithms, for classification tasks, for example, which are just out there in the cloud, and you can use them as soon as you have access to the cloud.

ANJA KASPERSEN: What was originally referred to as "open-source" software.

LUKAS PÖHLER: Definitely open-source software, but also tools from software providers that simply give you a classification tool you can use yourself. This has spread both among professionals, the developers you mentioned, who can basically access every open-source component they want and build up their own AI system, and among end users with access to the cloud through business intelligence tools, who can just use ready-made AI tools for their work. This has progressed a lot in the last four years.
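As one minimal sketch of this "toolbox" point, assuming the open-source scikit-learn library and one of its built-in toy datasets, a fairly sophisticated classifier takes only a few lines and no hand-written algorithm:

```python
# A preprogrammed classifier from an open-source toolbox: the algorithm is
# only configured and applied, not implemented by the user.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Hosted cloud services package the same idea behind an API, so end users may not even need these few lines.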

ANJA KASPERSEN: Because of this availability, is there a temptation to do more that may take us in a bad direction? Is there awareness around responsible use of everything that is made available, as you said, for free, and possibly without knowing how to use it responsibly?

LUKAS PÖHLER: That is an interesting point. There are two things I would add here. One is definitely open-source technology, but also open data. What we have seen in the last years is a lot of data simply being open. You can use the data, the data is high quality, and there is enough volume to build many models. Besides all the open-source models, AI is highly driven by open source. We see a high degree of openness.

The second twist I tend to see in the scientific world concerns the temptation to release everything. We saw it with the GPT-2 model, which is a language-generation model. There was hesitation at first to release the pre-trained model fully. The model is able to generate long texts out of a few sentences. The fear was that from just a few sentences you could create human-level text that a reader could not detect as coming from a machine, and that this would enable the spread of misinformation and fake news online. This is why the researchers initially hesitated to release the full model.

I see it from both sides. Some researchers and developers are increasingly aware of this and tend not to release everything, but on the other hand there are so many open-source communities out there growing rapidly, with so much code and data available, so it is a bit two-sided.
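The GPT-2 model Lukas mentions has since been released in full. A minimal sketch of prompting it, assuming the open-source Hugging Face transformers library (the prompt and generation settings are illustrative):

```python
# Generating a continuation from a short prompt with the openly released GPT-2 weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The city council announced today that", max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```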

ANJA KASPERSEN: Agnes, building on what Lukas was just alluding to, and obviously there are some red lines in terms of applications and deployment of technologies, what are your thoughts on this?

AGNES GIERULSKI: I think it is really challenging. The language model that Lukas just talked about, if I am not mistaken, has word embeddings that show patterns, similar to those we see in society, that may be regarded as unfair, because the data really just represents society to a certain degree, and society may not necessarily be the way we would like it to be.

I think it really is a discussion of how we alter or use the data, and whether we aim to replicate the data and the existing societal state or want to actively change it to reach a certain level of equality, for example in gender, as we easily could in language models and word embeddings.
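A minimal sketch of the pattern Agnes describes, assuming the gensim library and a small pretrained GloVe embedding (the word choices are illustrative):

```python
# Probing gendered associations that word embeddings absorb from their training text.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained word embeddings
for word in ["nurse", "engineer"]:
    she = float(vectors.similarity(word, "she"))
    he = float(vectors.similarity(word, "he"))
    print(f"{word}: similarity to 'she' = {she:.3f}, to 'he' = {he:.3f}")
```

Typically "nurse" lands closer to "she" and "engineer" closer to "he," reflecting patterns in the training text rather than any deliberate design choice.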

EVA CHARLOTTE MAYER: Maybe I can add something to what Agnes said, which also shows why I am so motivated. She mentioned that there are so many challenges. The fact that some language models discriminate and are not equally good for, for example, different genders is obviously something that we should work on and eliminate.

But then the question always comes up: If we equalize the data sets, if we say we are going to take as many male recordings as female recordings into the data set, for example, to make sure it is balanced, then maybe the data set will shrink and the overall performance of the system will be worse. That is something we need to discuss. What do we want? Do we want to just move forward with innovation, or are we willing to do something that, as Lukas says, might "hinder innovation" in order to do it right? That is something I think is a huge challenge, something I don't know the answer to myself, and that is what keeps me motivated, because it is still an open challenge, an open question, and something I would like to work on in the future.
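A minimal sketch of the balancing step Eva describes and its cost, using the pandas library and made-up group sizes:

```python
# Downsampling every group to the size of the smallest group equalizes
# representation but shrinks the data available for training.
import pandas as pd

recordings = pd.DataFrame({
    "speaker_gender": ["female"] * 200 + ["male"] * 800,
    "clip_id": range(1000),
})
smallest_group = recordings["speaker_gender"].value_counts().min()
balanced = (
    recordings.groupby("speaker_gender", group_keys=False)
    .apply(lambda group: group.sample(smallest_group, random_state=0))
)
print(len(recordings), "clips before balancing,", len(balanced), "after")  # 1000 before, 400 after
```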

LUKAS PÖHLER: I see that there are a lot of challenges within our society that are historically [inaudible] and have simply been made manifest through technology these days, as we have these data sets and build models from historic data, but also that when we deploy these models in our societies, new challenges can arise from the models themselves. My goal is that we don't frame the question as: Do we want technology with all its consequences, or do we not deploy these technologies at all? I still have the belief and the hope that we are able to build these technologies in a responsible way, use them, and harness all the benefits that technology offers.

AGNES GIERULSKI: I would also add that, as we have talked about before, it has become a much better-known topic. Many people have heard of AI ethics, which shows we are working in the right direction, or at least a direction that is in demand. At the same time, the discussions are becoming more specific and we are working on more concrete projects. Some of them are not in the mainstream discourse yet, which shows that we still have work to do to shine a light on those questions and how to approach them.

ANJA KASPERSEN: For those who are interested in how they can replicate conscious coding in their own environment, what are the key takeaways you would like to share with our listeners?

LUKAS PÖHLER: I think the key lesson I learned while forming this initiative is to make your voice heard. For all the listeners: you have an important voice, and your role is important. So everyone out there, no matter what age you are, you have a stake, and you should really make use of your voice. Search for like-minded people and peers, engage with them, discuss your thoughts. Just start. I think that is the hardest step.

EVA CHARLOTTE MAYER: That actually is exactly one of the reasons why I am happy and thankful that we were invited by Carnegie Council because I feel like this is a platform that gives us a way to be heard, and that is what we are working for, so thank you.

ANJA KASPERSEN: You are most welcome. This is Anja Kaspersen. To listen again or to share this show, go to the Carnegie Council website. On it you can also subscribe to any news updates about AIEI and related activities by the Carnegie Council.

I would like to thank the great team at the Carnegie Council responsible for producing this podcast. This podcast is supported by Carnegie Council for Ethics in International Affairs.
