Can Instinct Be Coded? with Francesca Rossi

February 23, 2022

Dr. Francesca Rossi, IBM's global leader for AI ethics, joins Anja Kaspersen for a fascinating "Artificial Intelligence & Equality" podcast. Rossi discusses her ethics-focused role at a multinational company and the importance of lateral expertise and multidisciplinarity in addressing ethical considerations and tensions in AI research. How do we embed human values in AI systems? Can AI transform and strengthen human decision making?

ANJA KASPERSEN: I am thrilled to be joined today by Francesca Rossi, who is a dear friend and someone who has really inspired and driven efforts toward ethics in the field of AI research.

For those of our listeners who are not familiar with who you are, Francesca, allow me a quick introduction. Francesca is a former professor of computer science at the University of Padua in Italy and is currently the AI Ethics Global Leader for the International Business Machines Corporation, better known as IBM.

I am really excited to do this deep dive with you, Francesca, on the current state of artificial intelligence (AI) research, but before we get started on your research interests I would like to ask you: What sparked your interest as a young girl growing up in Italy to pursue what has essentially become a lifetime adventure in computer science and more recently artificial intelligence?

FRANCESCA ROSSI: Thanks, Anja. Thanks for the invitation and for the opportunity.

What convinced me to go on this journey? At that time I was not really thinking about AI but thinking about computer science in general. I decided to study computer science, which at that time was a very new kind of study. I'm not sure what really convinced me, but I was always attracted, even during my high school, to technical studies more than the humanities or the other sciences, so I guess I was attracted to some study in that space. Also I was intrigued by the fact that this was a very new curriculum.

Then I continued and decided to stay in academia, even after my studies and my Ph.D., so I stayed for 25 years in academia. Then, more and more, I developed a focus on AI rather than general computer science.

Then, in 2015 I went on sabbatical to Harvard at the Radcliffe Institute, where every year there are 50 fellows from all over the world who cover all the disciplines. I was the only computer scientist, and there were people covering all the other sciences, all the humanities, all the arts. What they do in the year you are there is force these fellows to work together, which is not something that I was used to doing because usually I was talking to my peer computer scientists and not to all these other people.

These other people, when I worked with them and spent time with them, were asking me questions that were very different from the usual ones that I was used to considering. They did not care about my latest theorem, experiment, or result in my research, but they cared about the big picture, about the impact of this technology, what I was doing, and in general of AI. That is where I started thinking more about the societal considerations, about the pervasive use of AI in society and the impact on people, the way we live, the way we work, and so on.

That year was really important because again I started thinking about these issues. It was also the year that many other people started thinking about these issues, and AI ethics really took a big jump forward, around 2015 I think. I joined the Future of Life Institute and all the other initiatives that started. Now the space is very crowded with initiatives, but at that time, which is not many years ago, it was not.

Then when I joined IBM, I decided to move from academia to a company with a significant research environment because I thought I could have an impact in research like I always had, even being at the university, but also an impact on the responsible use of this technology in practice. This is something that academia does not give you, this possibility of being impactful in that way, while being in a company allowed me to have that kind of an impact.

Now I spend some of my time doing research, publishing, and conferences, but also doing internal policies, governance, and also working with external communities, policymakers, and so on. That kind of impact was provided to me as an option by this corporate environment where research and business go together.

ANJA KASPERSEN: I find this very interesting, Francesca. If I hear you correctly, the transdisciplinarity that this fellowship offered was really one of the driving forces that sparked your interest in the broader field of ethics in computer science.

FRANCESCA ROSSI: Yes. Until then, I was mostly advancing AI capabilities, but I was not really asking myself many society-related questions about this technology. In that year I really changed because these people were asking me questions that were not about technical issues but about social-technical issues. That's where I started. It was a coincidence that it was also the year, I think, that many other people started to think about these issues and these various initiatives started being put together.

It was very instrumental for me because AI ethics is about multidisciplinarity. You cannot talk about or discuss AI ethics if you just discuss among technologists or among only philosophers or among only sociologists. You need to have a multidisciplinary environment. So for me—that year spent with people who were not my peers but were very different from me was very important.

ANJA KASPERSEN: Based on what you have shared with us already, do you find that in the corporate environment there is more of this lateral expertise questioning the societal impact than in the academic environment?

FRANCESCA ROSSI: I think now in academia also computer scientists and AI researchers also ask themselves these social-technical questions. For example, at all the major AI conferences when you submit a paper they ask you to write something to discuss the societal impact and possible negative impacts of the work you are doing. When I was in academia, it was not like that—computer scientists were thinking about computer science, sociologists were thinking about sociology, and so on. So things have changed I think also in academia.

But definitely, in a company producing AI and delivering AI to other companies—in the case of IBM we deliver AI to other companies for their operations or their services—you cannot just be looking internally. You have to listen to what clients need, but also to what all the other stakeholders are thinking. You have to listen to what society, clients, consumer rights advocates, and policymakers are doing in order to fit well and to bring the vision of the company into an environment which is made by external actors. Definitely in the company I find myself discussing much more about all these other stakeholders and with these other stakeholders than when I was in academia.

ANJA KASPERSEN: Francesca, you are now the Global AI Ethics Leader for IBM. What does this role entail and what does your workday look like?

FRANCESCA ROSSI: First of all, I co-chair the internal IBM AI Ethics Board. This is the centralized governance within the company that makes all the decisions about AI ethics. I am one of the chairs. The other one is the chief privacy officer of IBM. So, privacy and ethics together.

The members of this board represent all the business divisions—from research to marketing to products to AI solutions to legal, to everything else—and not just nominally: the members are the top people of these divisions, the people who, once the board makes a decision, can go back and implement that decision. It is not just one generic person in each business unit.

What the board decides is internal policies and processes—for example, building a methodology so that all of our developer teams can understand how to detect and mitigate bias in the AI models that they build: How do you build this methodology? Which tools do you need to provide them? How do you make this methodology as easy as possible to adopt and as integrated as possible with the other processes that they are already following? That is one of the work streams that was put together by the AI Ethics Board.

Another thing is to educate all the IBMers around AI ethics and the various issues. What is AI ethics? How does it relate to my role in the company? We have specific, very deep educational modules for the developers.

But then we also have generic modules for everybody else. For example, in the yearly business conduct guidelines course that we all have to take, there is also a part on AI ethics that everybody has to complete.

Another thing that the board does is to examine all the offerings that the company has toward any client in any part of the world that may have some risks related to AI ethics. There is a whole process in place. The team that wants to deliver something to a client has to talk to a focal point, the person representing the board in every business unit. This focal point acts as the first filter, and then, if they think there is enough risk, the whole board discusses the offering.

The result of this discussion can be: "Yes, it's fine the way it is, we like it, you can go on." Or: "We don't like it as it is, but there may be some way to add constraints in the contractual agreement, for example usage restrictions, and if you add these restrictions, then we are fine with providing this to our clients." Or in some cases we even say: "No, I'm sorry, we don't like it because the technology is not good, because the use is not good, or because the client is not aligned with our values. We don't see any way to make this align with our values, and so we don't deliver it." That is another big work stream of the AI Ethics Board.

At the beginning we even did things like putting together a glossary of terms around AI ethics. Because IBM is such a big company and distributed all over the world, we realized very soon that many people were using the same terms with different meanings or different terms with the same meanings, so there was a lot of confusion. We couldn't really talk to each other and understand well. So we said: "Okay, let's write down a glossary. What do we mean by 'AI ethics,' what do we mean by 'AI fairness,' 'AI explainability,' this and that?" So that everybody has someplace to say, "Okay, when we talk about that, we mean that."

Again, all these initiatives exist because we started with very high-level principles—as you may remember, everybody was publishing principles a few years ago—such as: "AI must augment human intelligence, data belongs to whoever provides the data, and the technology has to be explainable and fair."

Then we said: "Okay, but our developers cannot use these principles to understand what to do in their everyday jobs, so we really need this governance and all these work streams of the board to operationalize the principles. These are the principles, but now we need to go much deeper into concrete actions that enact them. Otherwise, we are not going to change our operations just by telling people our principles."

That is what the board is about: operationalizing the principles in a coherent way in such a big company—which is not easy—and also in a way that is integrated with what the various people in the company are already doing.

Other activities also supervised by the board are all our partnerships with external multistakeholder organizations and policymakers—the United Nations, the World Economic Forum, the European Commission. This connects to what you said earlier: listening to what everybody is saying and doing, learning from that listening, but also bringing our own experience and lessons learned into these places and working together.

For example, IBM does not have an external advisory board on AI ethics. We consciously decided that the best way to engage with everybody else is not to have a fixed set of a few people who advise us from outside but to work within these multistakeholder communities; to us that is a much better way to really work together with everybody else.

In recent years I would say that the focus was on operationalizing the principles, making them concrete, transforming them into policies and processes in a coherent way within the company. That is my role as the AI Ethics Leader. Of course, I am not doing everything by myself; there are so many people involved.

Actually, what we learned doing this is that in any company it doesn't make sense to appoint somebody as the AI Ethics Global Leader or chief AI ethics officer, whatever you want to call it, and say, "I appointed that person, and now I'm done, that person will take care of everything." It has to be a company-wide approach in order to operationalize this. Everybody has to be involved, everybody has to be aware of what is done by the board, all the initiatives and so on, so it is not just one person or a team that can address these AI ethics issues.

Sometimes journalists ask: "Tell me, what is your AI ethics team?"

We say, "Well, we don't have an AI ethics team. We have the board, the focal points, the advocacy network, this and that, and it's a company-wide approach." That is a very important lesson.

The other important lesson was that at the beginning the board was less able to make decisions. It was more a way to discuss and be aware of what each other was doing, and that was reasonable, because at the beginning people just needed to understand what AI ethics is, how it is relevant to their operations, and so on. But after a while we understood that we needed a much more powerful board in terms of making decisions and having those decisions adopted, so we restructured the board to achieve that.

ANJA KASPERSEN: Francesca, allow me just to follow up on the point you made on explainability, which is obviously a big one when talking about oversight and governance of AI and is key to any discussion on ethics. Do you think we are in a satisfactory place with regard to explainability? For our listeners who may not fully grasp the issues that exist around explainability, what are the big hurdles and are we making any headway in addressing them?

FRANCESCA ROSSI: Explainability is still ongoing work—also fairness and others, but explainability especially—because it's a technical issue. Explainability means that the AI system itself, not the human being, needs to be able to explain to a human being why it is making that recommendation. It tells you, "Oh, I think you should choose that thing and not the other one," but the AI system itself should be able to explain why.

That means that the AI system must have some notion of causality. It says, "Okay, you have to do this because there is this other thing that causes me to say that you should do that." That is not that easy, especially when you are using some of the more successful approaches, which are based on data, machine learning, and deep learning. You may have different kinds of data—a system may take images as input, or text, or structured data—and each of them needs a different technical solution to allow the system to provide explanations. There are some partial solutions, like explanations based on correlations with other input data similar to the one that was just received, and so on.

Another challenge is that you may have to provide different explanations for different users—for example, in the health care domain you may want to provide an explanation to a doctor, to a nurse, to the patient, to the auditor, or to whoever—so for each one of the recipients of the explanation you have to put in place a different explanation, not contradicting the other ones of course, but with different terminology and different levels of detail. So it is not just one explanation that is fine for all the possible people who are going to use that explanation.

ANJA KASPERSEN: I think that is a concern many of us have, that we increasingly and often mistakenly treat these systems as providing, as you said, causation and consequence, but very few of the systems are actually set up and programmed to do so, to provide correlation, and then you need the human component to really be on top of it to provide a wider context.

FRANCESCA ROSSI: In fact, one other challenge, I would say, of explainability is that whatever you put in place and provide as an explanation, the user of that AI system or whoever needs to be trained, needs to be able to interpret what the system is telling this person. You cannot mistake, as you say, correlation for causation. You should be able to understand what that means and not make inferences that maybe are not appropriate. So you should be able to understand the capabilities of the AI system and the framework of the explanation that you are receiving. So the training of the operators, of the human beings receiving the explanation and using these AI systems, is very important.

ANJA KASPERSEN: So essentially making sure that there is optimal maturity in the system before it gets embedded, so that we avoid a situation where we display overconfidence in the system and under-invest in the people who become instrumental to operating these systems in a safe manner.

FRANCESCA ROSSI: Yes, definitely. We are just talking about people using an AI system, but of course there are also the people building the AI system who need to be aware of the possible, for example, unconscious bias that they put in whatever they do in their everyday job. And they need to be able to work in a possibly diverse environment. They need to be able to talk and understand each other with people who are not technical, to consult with them. There are all these actors around an AI life cycle who need to be trained, educated, and enter into this frame of mind of dealing not with the technology but with the social-technical environment.

It is important that all these actors don't think that everything can be solved by a purely technical solution. The service that you are putting in place probably cannot be completely automated, especially when it involves high-stakes decisions; it needs to be an interaction.

Also, even the things that you do while you are building a solution, even bias testing, cannot be completely automated. It has to be something that the human being thinks about: Okay, what could be the impact here on some protected variables? Do I see them in my model—race, gender, age—or, if I don't see them in my model, maybe there are other variables that are proxies for them, so how do I understand whether there really is risk or not? There is no technical tool that can automate that, so the role of technical and AI-based tools is very important, but it should be understood that they are not the complete solution.
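The kind of human-driven bias check described here can be sketched in a few lines. The following is a toy illustration with invented data, where `zip_bucket` stands in for a possible proxy variable; it is not IBM's actual methodology, and a human still has to decide which features might be proxies and what the numbers mean:

```python
from collections import Counter

# Toy records: (zip_bucket, gender, approved). Invented data for illustration.
records = [
    ("A", "F", 0), ("A", "F", 0), ("A", "M", 1), ("A", "F", 0),
    ("B", "M", 1), ("B", "M", 1), ("B", "F", 1), ("B", "M", 1),
]

def approval_rate(group):
    rows = [r for r in records if r[1] == group]
    return sum(r[2] for r in rows) / len(rows)

# Disparate impact ratio: approval rate of one group over the other.
di_ratio = approval_rate("F") / approval_rate("M")
print(f"disparate impact ratio: {di_ratio:.2f}")  # 0.25, well below the common 0.8 rule of thumb

# Proxy check: does zip bucket effectively reveal gender?
by_zip = {z: Counter(r[1] for r in records if r[0] == z) for z in ("A", "B")}
print(by_zip)  # bucket "A" is mostly F, "B" mostly M: a likely proxy
```

Even when the protected variable is dropped from the model, the zip bucket here would let the model reconstruct it, which is exactly the risk a human reviewer has to reason about.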

The biggest project that I have at IBM, of course in collaboration with many external academics, is about trying to leverage cognitive theories of how humans make decisions—in particular, the Kahneman theory of "thinking fast and slow"—to be inspired by that theory in trying to advance AI capabilities, given the observation that there are many things that humans know how to do and machines still don't do.

For example, recently we put together an architecture inspired by Thinking, Fast and Slow where there is a machine that has some fast solvers defined in some way which are similar to the features of our "thinking fast," meaning our intuitive way of making decisions without thinking too much, without reasoning, and just based on past experience and then replicating what we know how to do well. Thinking fast is what we call System 1, which is our intuitive, very fast thinking, almost unconscious thinking, that we are not even aware of. Ninety-five percent of the time we make decisions that way.

Then, "thinking slow" is when we really decide that we need to put a lot of attention on whatever we need to decide on, so with all our attention, we are very careful, we follow a procedure, we reason about the problem, and get to a solution, and usually we do that when we are not familiar enough with the problem to be able to use our intuition. That "thinking slow" is also called System 2.

In this machine that we put together we have some processes that are similar to our "thinking fast," so whenever a problem comes in, they always activate and come up with a solution which sometimes is not really well thought out, and then it has also some processes that are similar to our "thinking slow" processes or System 2, so they are very careful and they take more time, but they usually get a better solution.

We saw that in this machine, by putting together these solvers, the "thinking fast" and "thinking slow," and a metacognitive part that decides which one is best to use in every scenario, we achieved a behavior similar to what we see in human beings, also called "skill-learning behavior." When we are not familiar with a problem, we initially tackle it with our System 2, "thinking slow," but after we have used this "thinking slow" many times we can pass to System 1, "thinking fast," because we have accumulated enough experience that at that point System 1 can take over, producing high-quality decisions just like System 2 but in much less time, almost instantly. We saw that behavior in this machine as well: over time, in a simplified decision environment, the machine first uses mostly System 2 and then later mostly System 1. We saw a skill-learning mechanism similar to the one in human beings.
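The arbitration just described, a metacognitive module that routes each problem to a fast or a slow solver and lets the fast one take over with experience, can be sketched roughly as follows. The names, the threshold, and the stand-in solver are invented for illustration; this is not the actual IBM architecture:

```python
CONFIDENCE_THRESHOLD = 3  # deliberate solves before System 1 may take over

experience = {}  # problem -> (times solved deliberately, remembered answer)

def system2(problem):
    # Deliberate, costly reasoning; here just a stand-in computation.
    return sum(ord(c) for c in problem) % 10

def solve(problem):
    count, answer = experience.get(problem, (0, None))
    if count >= CONFIDENCE_THRESHOLD:
        return "system1", answer          # fast: replay the learned solution
    answer = system2(problem)             # slow: reason it out
    experience[problem] = (count + 1, answer)
    return "system2", answer

# Skill learning: the same problem migrates from slow to fast handling.
modes = [solve("route-to-work")[0] for _ in range(5)]
print(modes)  # first three calls are deliberate, then intuition takes over
```

The metacognitive decision here is a simple experience counter; the point is only to show the shape of the mechanism, with early deliberate solving giving way to fast replay once enough experience has accumulated.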

That is one example of what this project wants to do. The project's overall goal is to build machines inspired by these cognitive theories of how humans make decisions that can really exhibit some human behavior by the machine.

There is another important goal, which is to build machines that know how human beings would make that decision, and because of that they can help human beings avoid our pitfalls and our fallacies in making decisions. They can alert, they can nudge, they can tell us, "Look, I think you are going to use System 1 here, but I think you should not do that, you should use System 2, and I can help you in doing that."

Sometimes we don't use the right modality and we can make mistakes. We have bias, as Kahneman said in his book, we have noise, and we have many other fallacies in making inferences and making decisions. The ultimate goal is to build machines that can actually help us recognize our possible fallacies, especially when we have to make decisions that can have a significant impact on our lives or on other people's lives or on society in general.

ANJA KASPERSEN: I find this fascinating. We are essentially trying to code a machine to have a gut feeling and then to overcome that gut feeling or to back it up with some rational thinking. It is interesting that in neuroscience more recently there is an increasing acceptance that at the end of the day humans make decisions based on our gut feelings and then we use our brains, so "slow thinking," as a way of rationalizing the decisions we already made with our gut.

Is that what you are saying? You are trying to test that the machine can have a gut feeling and then back it up with post-rationalizing its thoughts, or it will help us distinguish between our guts and our more cerebral processes?

FRANCESCA ROSSI: The idea is to investigate this space. Again, one of the long-term goals is to help humans recognize when they probably should not use their gut feeling but should use something else, more careful analysis of the situation, in order to make a better decision.

Usually we use System 1 when we feel that the problem is cognitively easy—very familiar, easy, let's not think about it and just make a decision. When we feel that the problem goes beyond a threshold of cognitive easiness, we say, "Oh, okay, better be careful here, I am going to use System 2."

But when the problem becomes overwhelmingly difficult for us, in some sense we go back to System 1, because even with System 2 we think, "Oh, my god, there is too much data, too many criteria," so we just simplify the problem until it can be handled, sometimes even with gut feelings.

That is one of the typical scenarios where, especially again if the decision is a high-stakes decision, machines could help us. They say, "Okay, I understand that this is too overwhelming for you, but let me help with handling this amount of data and so on so that we can still use a System 2-like solution or process even if by yourself you would not be able to do it," and that can achieve a better solution. The idea is to have a machine that can help human beings, interact with them, by knowing how humans would behave in various scenarios.

Also, as I said, in these first experiments we did, we wanted to have machines that exhibit behaviors similar to human beings, like the skilled learning, and what that means in a machine. Of course, we had to make design decisions that are for machines, not necessarily replicating what happens—first of all, because we don't even know completely what happens in our mind, in our brain—but also because what is a good design choice for a human being is not necessarily one for a machine. A machine has different capabilities, different properties, so it's a completely different platform to build something.

That is still a very interesting project. The first part, where we really were just brainstorming about this relationship between cognitive theories and AI, was very, very interesting, and now it is even more interesting to see the first experiments and what they show.

It is very nice that even in a corporate environment there is some space for very long-term research projects, because this project is not going to bring anything to a product, a solution, or an offering that is useful for IBM with our clients this year or even next year. It is a long-term project; while doing it you may have some spinoffs that generate more concrete and short-term things, but it's nice that there is space for this long-term thinking as well. It is also very nice to work in a very multidisciplinary research environment.

ANJA KASPERSEN: Building on that, Francesca, could you elaborate for our listeners the difference between machine learning and the broader field of symbolic reasoning, which you have alluded to, which has often been described as more "traditional" AI?

FRANCESCA ROSSI: Now I work in both areas, and this project actually can be seen as a way to combine both areas, because in some sense, in a very imprecise way, one could say that System 1 could be implemented by using data-driven and machine-learning approaches and System 2 more by symbolic AI.

Supervised machine learning just means that the machine learns how to solve the problem by looking at examples of solutions. For example, if you want to build a machine that can decide whether to accept or reject a loan application, you give the machine a lot of examples of loan applications and for each loan application you tell the machine whether it should be accepted or rejected, so you give the solution for those loan applications. Then, the machine takes these examples as training data and adjusts its parameters in a way that hopefully it behaves correctly also for a loan application it has never seen before, not part of those examples. So it tries to learn from these examples to provide the correct solution also for other input data.
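The loan example can be made concrete with a tiny learner. Below is a minimal perceptron on invented `(income, debt)` data, purely illustrative; real systems use far richer models and far more data, and the point is only that the machine adjusts parameters from labeled examples rather than following human-written rules:

```python
training = [
    # (income, debt) -> 1 accept, 0 reject. Invented toy examples.
    ((80, 10), 1), ((60, 5), 1), ((30, 40), 0), ((20, 25), 0),
]

w = [0.0, 0.0]  # learned parameters
b = 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # repeated passes over the examples
    for x, label in training:
        error = label - predict(x)       # adjust only when the guess is wrong
        w[0] += 0.01 * error * x[0]
        w[1] += 0.01 * error * x[1]
        b += 0.01 * error

print(predict((70, 8)))   # -> 1: accepts an application never seen in training
```

Notice that nowhere did we write a rule like "accept if income is high"; the decision boundary emerged from the examples, which is exactly what makes the model's reasoning harder to inspect later.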

In some sense, supervised machine learning means that we—human beings—are not telling the machine exactly all the steps—step one, step two, and step three—to solve the problem, but we tell the machine: "These are the examples of problems and their solutions and learn from that."

Instead, in symbolic AI, we tell the machine how to solve a problem by communicating the steps to do in order to solve that problem, like you would do in a recipe: "Okay, you want to bake a cake? Step one, these are the inputs that you need to have, the ingredients; step two, you need to mix this and that; step three, you need to do that; step four, you need to boil that; and so on until you get the output." That is an example of symbolic and logic-based AI.
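The recipe analogy maps directly onto code: in a symbolic approach a human writes the steps explicitly, and the trace of rules that fired is an explanation for free. The rules and thresholds below are invented for illustration:

```python
def decide_loan(income, debt, credit_history_ok):
    steps = []                                   # traceability comes for free
    if not credit_history_ok:
        steps.append("rule 1: bad credit history -> reject")
        return "reject", steps
    steps.append("rule 1: credit history ok")
    if debt > income * 0.5:
        steps.append("rule 2: debt over half of income -> reject")
        return "reject", steps
    steps.append("rule 2: debt ratio acceptable")
    steps.append("rule 3: all checks passed -> accept")
    return "accept", steps

decision, trace = decide_loan(income=60, debt=20, credit_history_ok=True)
print(decision)  # -> "accept", with every rule that fired listed in trace
```

Contrast this with the learned model above: here the "why" of any rejection is simply the list of steps that led to it.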

For some problems, unfortunately, we don't have a way to tell the machine the recipe, the exact steps for how to solve the problem, because the problem is too vague and the input can be so diverse that it is not possible for us to give the exact set of steps. Using the same example, given a loan application, try to spell out: "If you see this, do that; if you see that, do something else." Writing down the exact steps is not easy, and no matter how careful you are, whatever you write down will not have the same correctness rate or the same accuracy as something that can learn from the data.

Another even more typical example of this need for a data-driven approach is when you build a machine to interpret what is in an image. In that case it is too complex to tell the machine exactly, "If you see this in this pixel, then you need to do that," and so on. You cannot do that with the symbolic approach, but you can do it with high accuracy with a machine-learning approach.

For many years researchers have tried to use symbolic AI to tackle many typical AI applications—for example, translation from one language to another one—but then machine learning showed that with a data-driven approach we could achieve much better accuracy on that task.

That does not mean that symbolic approaches are not useful anymore and that you can do everything with machine learning, because if you learn how to solve a problem from data, then in some sense you lose, for example, explainability and traceability. If you don't see the steps that were taken, then the system tells me that this loan application is rejected. Why is it rejected? I don't know; there are a million parameters in the neural net, who knows? So I need to build additional technology in order to explain that. If instead you have a sequence of steps, you see very clearly what generated the rejection at the output. So you lose something.

Also, there are some situations in which you don't have the huge amounts of data that are needed to have good accuracy with machine learning. There are some situations where you have a lot of domain knowledge but not a lot of raw data, so in those situations it is not easy to find enough data to have machine learning work well.

Also, we would like machines to learn general concepts and possibly to reuse what they have learned from one problem in another problem, so that in the other problem they don't need to start learning from scratch, like human beings do. Once we learn about horses—that's a typical example, what a horse is—then to learn about a zebra you just need to be told that a zebra is like "a horse with stripes." That's it. You don't need another huge amount of data with images of zebras and so on. You just need one piece of information, but this piece of information is expressed in an explicit way, in a symbolic way. That is why we would like machine learning also to learn symbols and not just to learn how to give a correct answer based on raw data.

It is obvious to many people working in machine learning approaches now that the trend is to try to embed symbols and symbol learning, concept learning, also in machine-learning approaches. It is obvious that you need to be able to leverage the data. If the data is there, you need to be able to leverage it, especially for some problems, but you also need to be able to work with symbols, especially if you want machines to interact with human beings. We talk in symbols, we don't talk in terms of raw data or parameters of a neural net.

It is more and more clear, I think, that the two approaches to AI, symbolic and data-driven, need to be combined in some way. How to combine them? It could be the approach that we are taking or it could be others, but it is clear that there are strengths and weaknesses in both. If we really want AI to significantly advance its capabilities, we need to take the good parts of both approaches.

ANJA KASPERSEN: How do you view efforts to embed more symbolic reasoning as a response to current toxic language models, which are clearly not working as intended and also demonstrate our overreliance on data-driven models, sometimes producing less than optimal and inaccurate outputs?

FRANCESCA ROSSI: When you rely only on data, it is not easy to bound the behavior of the system within explicit boundaries. There are some decisions that you may not want it to make, some values that you don't want it to violate, some constraints that you don't want it to violate, and so on. It is difficult to state these things and embed them into a machine if you just rely on data, even a huge amount of data.

It is also difficult to be robust when you rely only on data, and to be smooth, meaning that a small change in the input should not generate a big change in the output. Smoothness is related to the notion of robustness: when you change the input a little, you may see a small degradation, but not a big change.
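The smoothness idea can be illustrated with two toy functions, both invented for this example: one with a bounded slope, where a small input perturbation produces a proportionally small output change, and a step function, where the same tiny perturbation can flip the output entirely.

```python
# Toy illustration of smoothness vs. brittleness. Real robustness
# testing probes trained networks; these functions are stand-ins.

def smooth_model(x: float) -> float:
    # Bounded slope (Lipschitz constant 0.5): small input change,
    # proportionally small output change.
    return 0.5 * x + 1.0

def brittle_model(x: float) -> float:
    # Step function: a tiny nudge across the threshold flips the output.
    return 1.0 if x >= 0.0 else -1.0

def output_change(model, x: float, eps: float = 1e-3) -> float:
    """Output difference caused by a small perturbation eps of the input."""
    return abs(model(x + eps) - model(x))

print(output_change(smooth_model, 0.0))     # small, about 5e-4
print(output_change(brittle_model, -5e-4))  # large: 2.0
```

A model with the second kind of behavior near its decision boundary is exactly the kind that adversarial perturbations exploit.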

If you want machines that you can use and trust, following what they suggest and so on, robustness is an important feature and explainability is an important feature. If you rely only on data, yes, you may have large language models that are amazingly good at generating valid English (they write much better in English than I do), but sometimes they go off in a completely different direction from what you were expecting. They say things that are not relevant to what you asked. They may even say things that are ethically unacceptable. So how do you bound this behavior? It is not easy to see how to do that if you just rely on learning from data.

But again, there are so many approaches. It is amazing. The number of AI researchers around the world grows every minute, and there are so many at the conferences every year. I am most familiar with the conference of the Association for the Advancement of Artificial Intelligence (AAAI), but there are also Neural Information Processing Systems (NeurIPS) and other machine-learning conferences, the International Joint Conference on Artificial Intelligence (IJCAI), and so on. They are growing very, very rapidly, with all sorts of techniques.

What was a bit worrisome, though I don't think it is the case anymore, is that at some point these machine-learning approaches were so widely considered the solution to everything that even the new generation, the Ph.D. students, were taking courses only on machine learning and not on AI and its other techniques.

I don't think that is the case now, because I really think everybody understands that you need to have in hand all these techniques, the good parts of the various techniques in AI (and machine learning is definitely one of them), in order to significantly advance AI. Otherwise you get one more example, one more benchmark, one more application of another fine-tuned machine-learning technique, but you are not really advancing AI capabilities. I think people understand that these other symbolic techniques also need to be combined in some way.

ANJA KASPERSEN: This is a very nice segue, Francesca, to the broader issue of ethics in AI, which is a vast field, as you have alluded to, and also an increasingly crowded field. You mentioned when we both started working on this, now seven or eight years ago, hardly anyone was discussing this issue related to AI.

Are you concerned about the risk of ethics washing? I still think there is a lot of merit in ethics because it allows us to look at the options chosen but also at the options and roads not taken, and to deal with some of the tensions that come out of the tradeoffs you were alluding to. In your own research and in the work of your company and other companies there will be constant tradeoffs that we have to grapple with, must grapple with, and ethics is the tool that allows us to really explore and investigate them. Where do you see the ethics space, and where do you see it moving forward?

FRANCESCA ROSSI: At the beginning there was an awareness period. After that there was the principles period: more than a hundred sets of principles from all over the place, from companies, from governments, from multistakeholder organizations, from academia, and so on. You may remember a project at Harvard called the Principled Artificial Intelligence Project, with a circular visualization of all the sets of principles that had been published, and there were more than a hundred already in 2018.

So awareness, then principles. Then policies. That is to say, "Okay, how do we start operationalizing these principles? Let's define some policies."

Then from policies to actual processes, concrete processes that people use to stay aligned with those principles and implement the policies. We are now living in this era of AI ethics doing practical things. You can see standards being finalized, internal processes being used at corporations, and regulations being proposed, like the European Union AI Act, or even adopted, as in New York State and other states in the United States.

It was a really complex path that AI ethics went through in, I would say, just a few years, because, as you say, in 2015 there was none of this, just some initial discussion. Going from awareness to principles, to policies, and then to practice, I think we did a lot of work. It has had a concrete impact on the AI that is being used all over the place.

ANJA KASPERSEN: You mentioned the efforts of the European Union earlier, Francesca, and you served as a member of the European Commission High-Level Expert Group on Artificial Intelligence. Can you speak a bit more about that process and your key takeaways?

FRANCESCA ROSSI: The European Commission High-Level Expert Group on AI is a group that was nominated by the European Commission but worked for two years completely independent of the European Commission. What we published and what we wrote was not guided by the European Commission.

The group's 52 members were very diverse. Some were from academia, some from corporate environments. Of the 52, I think fewer than ten were AI experts; the rest were philosophers, psychologists, sociologists, consumer rights advocates, and so on, a very, very diverse group.

In the end we published several things, but I think the most impactful was a document with the ethics guidelines for trustworthy AI in Europe. We called "trustworthy AI" the kind of AI that has to satisfy seven requirements, related to fairness, explainability, robustness, and so on.

Based on that document, then the European Commission put together this proposal for regulation of AI in Europe where the notion of trustworthy AI that was developed by that group is included as the kind of AI that needs to be delivered in Europe.

On top of that, the European Commission's regulation proposal also added levels of risk, which we did not produce as a group. The four levels of risk (unacceptable, high, limited, and minimal) bring a lot of obligations for the provider and for the user, and also requirements for making sure that the AI delivered in high-risk applications is trustworthy: fair, explainable, subject to human oversight, and so on.

That is the combination that I see. We started with that as guidelines. We gave this idea of trustworthy AI, the seven requirements, and then the European Commission added the obligations and the levels of risk. That is a more homogeneous way for the European Union to deal with regulating a technology, like what was done with the General Data Protection Regulation (GDPR) for data privacy issues and now with AI.

In the United States it is a bit different. There are not many federal laws about AI but mostly state-level laws, so it can be a bit scattered. Some states have already passed laws related to AI, for example on bias in AI, while others have not. So it is a different approach, state by state rather than for the whole country.

ANJA KASPERSEN: Recently, Francesca, Margrethe Vestager, the European Commission's executive vice-president for A Europe Fit for the Digital Age, said in a statement: "On artificial intelligence, trust is a must, not a 'nice to have.'" Do you agree with her?

FRANCESCA ROSSI: I agree. In fact, that's why in the European Commission High-Level Expert Group on AI we used the term "trustworthy AI."

That is also the term we use at IBM. There is nothing more important, at least for a company like IBM, than the trust of the client, the society, and all the other stakeholders around it. If you lose that trust because you deliver something that has negative effects, then it is very difficult to regain it. The issue of trust is really very important.

I don't see trust as the goal, but I see it as functional to achieving the goal, which is a future where AI supports human values and improves ourselves, the way we work, the way we live, and so on. But in order to achieve that goal you need this ecosystem of trust around the technology. Of course it has to be justified trust. That's why explainability and transparency are also important: you need to trust something because there is a reason to trust that entity or that technology.

ANJA KASPERSEN: Evidence-based trust. Thank you for that, Francesca.

You have also been elected as the president of AAAI. Can you tell us what it is and why it is important?

FRANCESCA ROSSI: AAAI is a worldwide association, the Association for the Advancement of AI, whose members are AI researchers from all over the world.

It has many activities, but one of the most visible is an annual conference. The membership and participation are worldwide, but the conference is usually in North America. Papers are submitted and a large program committee selects some of them, and the papers presented are representative of all the techniques within AI. So you will find papers on machine learning (actually a growing percentage compared to previous years) but also papers on other techniques, more like what we said, symbolic and logic-based AI, as well as combinations of them. There are also papers on applications and all the aspects of AI. There are a lot of invited talks, workshops, and tutorials.

The conference used to be a way for these researchers to get together in the same room to be inspired about the future of AI, but also to learn: in the tutorials, people present consolidated fields, usually to the new generation, so attendees learn about current AI capabilities and techniques, and in the workshops people brainstorm more informally about new ideas and so on. It's a very, very nice event, not just technical but also inspirational in my view.

Overall, it is a way to help the AI research community evolve in its research but also in its awareness of the issues around AI. From my point of view, it is very important that AI researchers are good at improving AI capabilities but also use their brilliant minds to do it in a way that takes into consideration the impact of this technology on society. Whoever wants to submit a paper to AAAI already needs to discuss the possible impact of their work, including possible uses that may bring a negative impact, but I think we can do even more to help the community engage with these issues and also provide solutions.

I am not the president yet. My presidency starts in June of this year.

ANJA KASPERSEN: Francesca, it is clear that there is a lot happening in the AI ethics field and we have many more peaks to climb, to use a mountain metaphor. What is in store for us looking ahead? Are there things that concern you, things that inspire you?

Lastly, in your view do we have the right type of honest scientific discourse also around the limitations of AI, despite all of its transformative impacts, to be able to fully cater to the promise it holds?

FRANCESCA ROSSI: You are right that AI already has a lot of successful applications and is useful, but we are just at the beginning in my view.

There are many things that we can envision, but again we need AI to be more trustworthy, more robust, and more explainable, because if we cannot reach that level of capability, it will be difficult to achieve the other, much more important envisioned applications, like using AI for global problems such as the climate, health, and pandemics.

We really need to take a significant step forward. It is like we are using AI now for a few things here and there that are important applications, that improve the operations of companies and improve many other things, but again we can do much more with that.

For example, at IBM we recently started a new research area called accelerated discovery, where we try to understand how AI, with its current and possibly future capabilities, can help human beings to really accelerate the process of discovering new things in science, in health, and in many other disciplines.

I still think, maybe because I have been working at IBM for a while now and have gotten into this frame of mind, that many of my visions of the future are ones where AI works together with human beings, not just in a completely autonomous mode. That means we need to think about that context and, in that context, about what we need from AI if that is the vision we want to achieve. Some things are needed in that context, for example explainability, as we said, and other things may be less important there.

I still see for the future a place where AI should support human beings in improving their lives while being aligned to our values. That means that AI ethics, in my view, also has to include an understanding of how to embed human values into these machines, whatever they are doing. How do you embed human values? Can you learn values just from data? I don't think so. Can you embed values just from rules? I don't think so.

So again, it is another place where the two things need to be combined because rules are going to be too brittle and incomplete, and data is going to miss some rule that is obvious but doesn't show up in the data.

So, in my view, the questions are how to embed human values into machines and how to make machines really help human beings, remembering that advancing AI is not just for the sake of advancing AI. We advance AI because we want a future where technological progress helps us improve our lives, our planet, our society. Advancing a technology like AI is not a value per se; it is a value embedded in a society, in order to advance that society in a way that is aligned with our values.

That's why, for example, among the many AI ethics frameworks, I always like the ones that refer to the UN Sustainable Development Goals, because in some sense that gives you a vision of the future. That is my vision of the future, where all the goals have been achieved. How do I use the technology to get to that vision, so that now I have a better framing of what to do with AI? Otherwise, if I don't have a vision of the future, then I just say, "Oh, let's improve AI, doing this, doing that." Why? In which direction? Why this direction and not another one? I think it is important to take this reverse-engineering approach where you say: "Okay, that's where I want to go. I am here, so let's try to follow one of the trajectories toward that vision."

ANJA KASPERSEN: Building on your point, Francesca, on getting the vision right and reverse-engineering approaches to get a better sense of where you want to go but also answer "Why am I here?", I would like us to conclude where we started, which is essentially: What would you advise your younger self, reflecting on the experience that you have gained to date?

FRANCESCA ROSSI: The first advice is to do what you like because if you do something that you are not passionate about, since you will have to spend a reasonable amount of time in whatever you choose, then you are not going to do it well. You are not going to be happy, nobody else is going to be happy, because if you do something without being passionate about it, probably you will do it with medium quality or something. So do what you are passionate about. It may be working in AI. It may be working in a completely different field, but if you want to really have a significant impact, you have to be passionate about what you're doing. Don't worry that what you are passionate about is not maybe the trendiest thing, but follow your passion. That is one thing.

Second, I would say do not make decisions because of fear. Sometimes use your System 1, your "thinking fast," and just jump if you feel that is a good thing for you. Even if everybody else tells you, "That is not reasonable, why do you do that?" If you feel that's the right thing for you, just do it. Don't say, "Oh, I'm not making that decision because I have a fear of something bad happening" and so on.

When I moved from academia to IBM, it was a big decision for me. I moved to a completely different working environment, different continent, different environment also for my life. I could have said: "Okay, I know the environment already where I am, I like it, it's academia, I am very familiar with it, I am in Italy," and so on.

But I felt that I had the opportunity to have a different impact, one that at that point I was passionate about. After thinking about it (and everybody was against it; I don't think I know anybody in Italy who has resigned from being a full professor), in the end I said, "Okay, I will make that decision because I feel it is right for me."

So don't be too afraid of making decisions that seem—of course you don't have to be completely unreasonable, but sometimes you have to really follow what you think is right for you and don't be too scared about the possible things that can happen. If you are passionate about something, usually good things will happen. Be okay with the fact that sometimes you have to go out of your comfort zone and be open to being in unfamiliar territory.

ANJA KASPERSEN: Before we conclude, Francesca, there is one more thing I would like our listeners to know about you, and that is your creative output. You are also a painter.

FRANCESCA ROSSI: I like the creative process, like going from a white canvas to something that looks like it is alive. I think in some sense it is similar also to what researchers do: Researchers create the future, something that was not there before their work, and then at the end, when the paper is written, there is something that came alive in a different form—not a painting, but in some sense it is similar to the research endeavor to create something.

AI people create the future, and that is the way I have thought about AI since I started. In some sense, yes, I see both things as creative activities.

ANJA KASPERSEN: Amazing. Thank you so much, Francesca, for sharing your time and your immense expertise about AI and AI research with all of us. This has been a riveting conversation, which I hope those listening have enjoyed as much as I have.

Thank you to all of you for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast.

For the latest content on ethics in international affairs be sure to follow us on social media @carnegiecouncil.

My name is Anja Kaspersen, and I hope we earned the privilege of your time. Thank you.
