Inteligencia artificial, justicia y Estado de Derecho

29 de julio de 2021

In this episode of the "Artificial Intelligence & Equality" podcast, Senior Fellow Anja Kaspersen, Nicholas Davis of the Thunderbird School of Global Management, and Renée Cummings of the University of Virginia discuss the impact of AI-based technologies on justice, the rule of law, and policing operations.

ANJA KASPERSEN: In this episode I will be joined by two extraordinary practitioners in the technology and justice domain, Nicholas Davis and Renée Cummings, and we will discuss what advanced software and cyber-physical systems mean for the rule of law and how the use of artificial intelligence (AI) is changing how important decisions are made by government and the private sector.

Nicholas Davis is a lawyer and a professor of practice at Thunderbird School of Global Management. He is a special advisor on innovation at Gavi, the Vaccine Alliance, and a managing partner at SWIFT Partners, a Geneva-based consultancy focused on helping companies harness emerging technologies to create sustainable value. Previously he held several leadership positions at the World Economic Forum, where he also led the Forum's thought leadership on techno-social issues, and in 2018 he co-authored the book Shaping the Future of the Fourth Industrial Revolution.

Renée Cummings is a criminologist, a criminal psychologist, a therapeutic jurisprudence specialist, and a trained journalist in addition to working on AI ethics, and is also the historic first data activist in residence at the School of Data Science at the University of Virginia. She is on the frontline of how to fuse AI with criminal justice for ethical, real-time solutions to improve law enforcement accountability and transparency.

Renée, with this introduction—and you obviously have a very rich background in this field—I am very curious how your journey as a data activist in residence started, especially in the justice space.

RENÉE CUMMINGS: Thank you very much, and, of course, thank you for having me.

I think I got into the space of AI ethics given my work in criminology. I was looking at what was happening in the justice system, the deployment of algorithmic decision-making systems, and how the opacity of these systems was impacting the kinds of decisions that were being made in the criminal justice system.

What we were seeing was that the algorithms were overestimating risks and creating zombie predictions, and of course they were also codifying systemic racism and other biases and discrimination. Because we were not bringing an ethical perspective to the ways in which the algorithms were being deployed, or bringing that ethical understanding, or building that level of ethical resilience among justice practitioners, we really were seeing some algorithms behaving badly.

As a criminologist I started to question the ways in which we were using algorithms, and I also started to look at algorithms in policing and felt that the deployment of algorithms on the streets was very questionable. There were things that we needed to push back on in real time before we found ourselves in a space where we were creating these algorithmic chokeholds that would really undermine civil rights and human rights. I think my work is about bringing an ethical perspective to the ways in which we use new and emerging technologies in the justice system and ensuring that the individuals who are presenting themselves to the justice system are never denied due process, no matter how efficient and how effective we believe algorithms could make us.

ANJA KASPERSEN: Thank you, Renée. I have also heard you say that, "AI is a new language, and language needs literacy." What does literacy look like in your space of criminal justice?

RENÉE CUMMINGS: It really looks like an education, and building that sort of ethical resilience means bringing an understanding of accountability, transparency, and explainability into the justice space. One of the things that we have realized when it comes to law enforcement is that law enforcement historically has been challenged with issues such as accountability and transparency. Now, when we include data science in the mix, or when we are creating algorithmic tools for law enforcement or practitioners in the justice system to use, we have got to ensure that they understand the question of ethics; that they understand big data ethics; that they understand that data must be interrogated; that they understand that data has a history, has a politics, and is cultural; and that they understand how social factors can find their way into criminal justice data, where criminogenic and social factors become data points for risk, criminality, and crime. We have to be very cautious.

So that literacy includes due diligence. It also includes vigilance to ensure that we bring an ethical perspective that is critical, diverse, equitable, and inclusive, and that we are building a technology that we can be proud of.

ANJA KASPERSEN: Obviously there is a huge temptation in policing in particular, in law enforcement operations, to use predictive analytics. The use of large amounts of data to correlate patterns, and of databases, is increasing. We know that police forces around the world are working with data-mining companies in doing so. What is your take on that? Where do you see this going?

RENÉE CUMMINGS: I think data science is critical for the advancement of the criminal justice system. There are some perfect partnerships and relationships that we can create, but we need that ethical perspective because so far when it comes to something like predictive policing, what we have been seeing are systemic challenges. What we are seeing are social issues, systemic inequality, systemic racism, or structural inequality and how these things are now finding their ways into our data sets. So while I am also on the frontline ensuring that we create some excellent partnerships to advance the ways in which we think about criminal justice, justice in policing, sentencing, and corrections, and the ways in which we can include virtual reality and augmented reality, I'm very passionate about the ways in which we use AI.

I am also passionate about ensuring that we never do things to undermine civil rights and human rights. Where we bring this risk-based perspective we also bring a rights-based perspective to the work that we do because, for me, it's about ethical resilience. We will never get AI right all of the time, but we have got to have the knowledge and the understanding to realize that certain things—accountability, transparency, fairness, and ethics—are critical to the ways in which we deploy data and data's long-term impact on society.

ANJA KASPERSEN: Nick, this brings me to you. Building on the comment from Renée—"We will never get AI right all the time"—you have spent a lot of your career thinking around these big issues tied to the new data revolution, the Fourth Industrial Revolution, and also speaking to what do we need to do to make sure that AI remains humane. What has been your journey and your insights in this field, both relating to some of the points that Renée brought up but also relating to your own work in the larger technology and justice space?


NICHOLAS DAVIS: I think my story is similar to Renée's. I come from starting in quite a different area, which is looking at the intersection of law and policy and how we make policy, and coming to it from the perspective of realizing that increasingly we rely on technologies to order and perceive our world. Those technologies, as Renée said, can also become more opaque and less accessible as they become more sophisticated, and that creates new risks for our legal systems, for our justice system, but also for how we relate to one another and how we engage with the world. We have very different perspectives on similar events, mediated through technology, and when it comes to justice and the legal system, those people who are able to rely on or engage with technologies in a more sophisticated or in some cases less mindful way will have a completely different power relationship with that world, with the outcomes.

We know that in justice in particular, being able to efficiently, effectively, reliably, and consistently apply public, positive laws and regulations to that world is absolutely critical. So if we give that power over to technologies we don't understand, then we end up with many of the concerns that Renée has been working on for many years, which is really understanding how to restore that power balance in regard to AI ethics.

I will probably just mention that this question of algorithmic bias is not just tied to what we call artificial intelligence today. It also happens in many simple rules of thumb and heuristics that we have deployed for many years in policing systems, in corporate systems, and in hiring systems, etc. I think this is a great time not just to look at the bleeding edge of neural networks and what we would call the "sophisticated" end of AI today but actually use this question of ethics in the law and systems to really examine fundamentally what we mean by human rights and the legality, the participation, and the accountability of rights in our legal systems. That's what I'm most excited about here, less the technical aspect and more the fundamental questions that this is giving rise to.

Maybe just starting with the idea of power, we know from history and from studies and the work on topics like structural racism and discrimination that the power over what is perceived is an incredibly important lever and driver of what we would think of as both direct inequality of treatment and inequality of outcome or impact. The issue there is, of course, both that technology grants new sources of power—it grants power in terms of capabilities—but it also provides an opportunity for particular groups to control that technology in different ways. So it's two different things.

First of all, it says, "I have the power to see something that you don't see," which says to me: "Okay, I'm going before a judge"—in one of the classic cases, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) case that Renée can talk to you about much more than me—"and an algorithm is deciding whether or not I get bail and the conditions of that bail." If that technology is available to that justice system and yet is not really available to me or my legal defense, I am at the mercy of that system. So it increases that power shift. The power is already there. The judge has the ability to make that decision on my behalf. That's her role. But the fact that that judge then has the ability to call on the system shifts the power further in terms of being able to interrogate.

But then the question comes: it's not just the capability that this accords to policing departments or in hiring decisions—I tend to work more in the civil world. It is not just that human resources departments have the power to use AI and advanced algorithmic technologies to look into and classify people as to fitness for a particular job. It's also that it grants them the ability to make decisions about the use of that technology in different ways, which I have less power to appeal or to engage with. So we need both the transparency and the ability to see into the actual systems and the way they operate, but I think it's also important that the actual use of these systems in the first place is open to review and to human rights assessments, particularly when they are used in the public sphere, and that there is real choice in the human resources field on the part of employee rights as well.

This question of power is difficult because it shifts in multiple different directions, and I would also say that the point that Renée made earlier is incredibly important, which is the difficulty in engaging, the participatory aspect of advanced technologies the more sophisticated they become. That is a particularly big challenge in ensuring that those stakeholders affected by new technologies have the ability to participate, shape, and have meaningful dialogue about the impacts and consequences of those technologies.

RENÉE CUMMINGS: Nicholas spoke about power, and one of the things that we have seen baked into our data sets would be power. Data comes with a history, and that is the history of the powerful. When we see the data being deployed or the data being used to design what we call "new and emerging technologies," we are realizing that we are designing new technologies with old data sets, with old memories, and with old histories, so are we really creating new technology?

So what is often missing, for me, in the deployment of these technologies is that ethical perspective. It is the extraordinary use of discretion that our developers have, and when you relate this to law enforcement, law enforcement officers also have an extraordinary amount of discretion. Practitioners in our court system have an extraordinary amount of discretion, and when you have all these people just playing with discretion, what we are seeing is that the people who are already underserved, the people who are already in need of the resources, and the people who are already challenged by the system continue to be further marginalized or re-victimized.

These are the things that concern me: the ways in which technologies that are sometimes designed for entertainment and for fun are finding themselves placed in the hands of our law enforcement or criminal justice practitioners, who just need that education and that understanding of how these technologies can become very, very destructive if left unchecked. So it is really always for me about building the kind of awareness that is required and really empowering those practitioners with the knowledge that they need, because two of the easiest things to do with our data are to monetize it and then weaponize it against us.

NICHOLAS DAVIS: Absolutely. If I can build on your point there, Renée, in the health care space it was a couple of years ago now that a University of California, Berkeley, study revealed some really deep systemic bias in some of the largest algorithms that determined health care risk among Black and white populations and then assigned basically whether or not patients would get care. It was underserving Black populations by between about 20 and almost 50 percent. So, if two patients were exactly the same but one self-identified as Black and one self-identified as white, in that data set the Black patient was, I think, 20 to 46 percent less likely to be flagged to receive the care they needed.

I guess the thing that worries me, and probably worries you, Renée, is that if it hadn't been for that group of researchers between Massachusetts General Hospital and the University of California, Berkeley, going in to actually look at those algorithms, we wouldn't know now that that was an example that has to be remedied, one used by some estimates to classify more than 200 million patients across the United States. So this is an example not just of how power can shift and how the outcomes can be deeply unfair, but also of the work that has to be done to uncover all the other examples that are out there across the areas that we care about in the public and in the private sphere, in this case in terms of health insurance.

RENÉE CUMMINGS: Definitely. Because I think many times we see that when it comes to data we usually take the approach that it is data, it is science, it is neutral, it is objective, the numbers don't lie, and the numbers speak for themselves, but we realize that these are fallacies and we realize that we have got to disrupt the data. We've got to deconstruct it. We've got to de-bias the data.

It doesn't always require the technical, but there are times it requires technical approaches to de-biasing as well as changing our thinking. What we are seeing is that many of these challenges come with preconceived notions, presumptions, and stereotypic thinking. Some of my work in data activism is about building a new kind of consciousness in the ways in which we understand data. We know that it is political, it's powerful, it's economic, it's cultural, it's so many things, but we must also understand that in data there could be the transmission of intergenerational trauma and pain.

I speak a lot about vigilance as a data activist and AI ethicist, and that is what I do when I teach big data ethics. It's really about bringing this kind of understanding that says that you have to be courageous. You have to broaden your worldview. You have got to bring that multidisciplinary and interdisciplinary perspective because what is required is the intellectual confrontation early in the design process, and if we have that multistakeholder engagement and if we adhere to things like inclusive technology, equitable and diverse technology, then we could really get the best marriage of ethics and innovation to really design the things that we can be proud of, because I always say that an algorithm is not just a computation formula. It is a legacy that we are leaving with this technology.

NICHOLAS DAVIS: I think, Renée, what is really interesting, just to build on that, is that this is quite difficult for many companies to understand and comprehend because the private sector is providing so many of these algorithms and products into public and private sector uses, but traditionally a corporation, a venture, or a set of entrepreneurs who are working on a new product are pushed by their venture capitalists, but they are also pushed by their business training and the entrepreneurial advice they receive to focus on total addressable market, to focus on essentially a bell curve of players, of users of this technology. So they design for the common and profitable in the center of that bell curve.

When we are talking about justice, human rights, serving underserved populations, and ethics, we care particularly about one side of the distribution. We care about the excluded. We care about the communities that have less power, and we want to make sure that the technologically driven world we build is as participatory, ethical, and inclusive as possible. But when our highest-fidelity data comes from powerful populations, as Renée just said, then we are at a huge disadvantage in terms of serving those people who have already been excluded. That transmission of power through the data set is compounded by the fact that the choice of data set and the choice of algorithms is being determined by organizations that don't currently have an inherent incentive, or a history, of looking for more equitable and frankly more accountable ways of designing those algorithms.

So, as Renée said, this is a mindset shift and a perspective shift, but it does also often conflict with business models, so it really comes down to: "Well, how can we, through regulation, through training, through working with engineers, and through community programs, make sure that the very systems we are developing and adopting that are going to affect the future of our health care, our criminal justice systems, and the way we interact in democracy are designed to cater for all citizens rather than for the total addressable market of profitable citizens, where most data sets are focused at the moment?"

RENÉE CUMMINGS: Nicholas, I think that is just excellent. There are also enough test cases out there and lessons learned.

Let's look at something like facial recognition and the extraordinary amount of investment made in that technology, and now every day it is being banned in a different place or public space. So you always have to ask yourself: Who are you designing for?

We always, from the corporate level—and I spent a lot of time in that corporate space—think about the bottom line. So, yes, we want to know: What are the risks, and how can we mitigate those risks immediately?

But then there are also the rights. What we are seeing is that when you don't pay attention to things like diversity, equity, and inclusion across the lifecycle, from design into development into deployment, then the risk at the end is so great because you find yourself in a crisis. It could be financial. It could be reputational. You can become the topic of the next big news story or open yourself to an extraordinary exposé in the tech media. I think many companies are realizing now that they are having their bluff called. I always say that some of the companies that design some of the prettiest guidelines, frameworks, and codes of conduct when it comes to data ethics and AI ethics are sometimes the greatest perpetrators of unethical behavior.

So it is really trying to build an ethical organizational culture, and it is also letting companies understand that it is more than just the risk. It is about rights, and it is about the long-term impacts, things that you may not see just yet, but when they do reveal themselves it is going to be way too late for you to really address those situations. So I think more and more companies are realizing that they have to have that kind of stakeholder engagement, or they have just got to bring in the interdisciplinary teams because in the end it is way too costly. So do it right at the beginning.

NICHOLAS DAVIS: Absolutely. And I think, Renée—I don't know if you would agree with me on this—two or three years ago working in this field, people would throw up their hands and say: "Governments are not capable of doing anything. They don't have the technical expertise. They are so much slower than the private sector."

Yet in the last couple of years we have seen a huge push forward in governance, in thinking things through carefully—not perfectly in the case of the country where I am now based, Australia, with very rushed legislation and pretty knee-jerk reactions to a very complex issue—but nevertheless this idea that we do, in the governance community, in civil society, and in the public sector, but also through universities and community stakeholders, have a wealth of knowledge of how to do this. It is not at all a lost cause. It does just take that focus, and it starts, as you mentioned, with really identifying those unacceptable risks and saying there are certain areas—and the EU framework on artificial intelligence starts with this—like the use of facial recognition by law enforcement in public areas, where we can classify that as an unacceptable risk. We should be banning that. That should not be something that we bring into our societies because we can already see—we have more than enough data already to understand—the discrimination that it can cause and the challenges that it propagates through the system.

But then, for the other acceptable but still risky uses of AI in public systems, in administrative law or in criminal justice, we need to really be careful about where the boundaries are in terms of explainability, notification, and review. I am a big fan of mandatory human rights impact assessments for use in government, and of making sure that in focusing on this we don't forget that people's lives are also ordered more and more by the private sector. Whether or not you can buy insurance through an online platform can be hugely impactful on your life, and for many people whether you can get housing is also incredibly important, even if that is through private-sector providers, no matter how regulated they are. So we need to be able to apply those same principles of human rights in the private sector in ways that, as you say, Renée, show the private sector what the risks are and how this is a net benefit to everyone by frankly broadening the market.

RENÉE CUMMINGS: I totally agree with you. I think we have an extraordinary amount of talent when it comes to providing good governance for our technology. I think internationally we have seen and heard some amazing voices, and we are seeing that there are governments that are doing things, but I think the challenge always comes to the power of big tech and the fact that big tech really has its hands in so many governments, providing not only the infrastructure for the digital transformation that is happening internationally but also providing the talent as well. I think many people believe that we will never get the kind of robust and rigorous legislation and regulations that are required to really check what is going on.

I think there have been some major advancements. I think there are many things for us to be extraordinarily proud of, and there is great work happening in many corners of this universe, and we are going to see it coming together. I know with AI ethics there is a really vibrant and strong community of ethicists who are doing again extraordinary work, and of course our legal practitioners and just all of us.

But I think one of the biggest challenges remains: How do we get those stakeholder voices in there? How do we get the people who are impacted by the long-term negative effects of technology into the conversation? And how do we really get big business to pay serious attention to these long-term impacts? I know when I spend time in the boardroom or when I work with individuals who are seeking that ethical perspective, I always say simply: "Oversight doesn't begin with an O; it begins with U [you], and this is where we are going to start."

So I think it comes back to the kind of ethical resilience that we build as individuals, and it comes back to our own perspectives, our approaches, our own biases, the things that we are willing to challenge in ourselves, our limitations, and being aware of those limitations. It is really about our conversation. It is really about having those difficult conversations because when we don't have the difficult conversations, we end up with difficult technology.

At the beginning of facial recognition had we had a serious conversation about data and systemic racism, policing and systemic racism, and police violence and systemic racism, then we would not have had the technology that pretended that those things did not exist. So it's about having that difficult conversation because it is the intellectual confrontation in a respectful way and in a respectful space that really stretches the imagination of the data scientist or stretches the imagination of AI and AI ethics, and I think that is what we need to do. We need to reimagine the ways in which we design technology, the ways in which we think about each other in the design of the technology, and the kinds of decisions we want to use this technology to make for each other. I think it requires a reimagining of the ways in which we plan to exist together in a new technological consciousness.

ANJA KASPERSEN: If I may bring both of you back to the law enforcement space, there is obviously a great deal of temptation to use some of these technologies. What are the risks for law enforcement in general, for its credibility, and for the integrity of our law enforcement apparatus, in using technologies in which we don't necessarily train officers?

RENÉE CUMMINGS: What it does is undermine public confidence and public trust, and it undermines police legitimacy. I think at this moment everyone is calling for a reimagining of the way that we do policing, in particular in the United States, the ways in which police engage with communities of color, the ways in which police engage with the African American and the Latino communities, and the ways in which police engage with young men of color as well and young women of color.

What we are seeing is that we are creating all of these new and exciting tools, but these tools continue to over-promise and under-deliver because many of the vendors are taking the easy way there. So what we are doing is creating tools to re-police and over-police the communities we are already policing. We are arresting the same people; we may be arresting them earlier and at a different street corner, but it is the same people we are arresting. So it comes back to that whole question of marginalization and re-marginalizing or further marginalizing these individuals. It comes back to re-victimizing. What we want is to use technology in ways in which it can support building communities and providing the kinds of interventions that are required, which may not be law enforcement interventions.

We cannot continue to use criminogenic factors like social exclusion, spatial isolation, or poverty as data points for risk because what we are collecting would be data that says there are social challenges in this community, there are communities that are in need of resources, and these communities need psychosocial and educational interventions. Yet we are deploying police officers in those communities because the data says that crime is going to happen here.

My thing is when we start to design technologies for crimes in the suites and we start to design these technologies to arrest those who are involved in white-collar crime and corruption, then we are bringing more equity into the deployment of algorithms in law enforcement. At the moment, what we are doing is really again attaching systemic racism consciousness to the ways in which we are thinking about data-driven technology. This is where we have to bring that ethical approach and really also speak to the vendors who are designing these technologies and who are not paying attention to things like duty of care, due process, and the fact that what you are creating—your data points—are sociological, sociopsychological, and sociocultural challenges that cannot be addressed by deploying digital technology into that space.

NICHOLAS DAVIS: Anja, I think it's a really important question.

Just to riff then on what you are saying, Renée, part of the problem here is that within any complex context where you already have massive power disparities and then you just apply technologies within that framework, what you do is you tend to exacerbate those disparities and increase that power. So if you say, "Well, the problem in policing is catching criminals; let's design algorithms to catch criminals," then, as Renée said, you completely undermine the longer-term, big-picture, systemic reasons and inequalities that produce many of the disparities and make it seem as though that is the problem. You also create systems that sense and report on that problem, which only then further reinforces that way of thinking, and you don't do anything to actually solve any of the underlying human rights issues at the heart of it.

I guess one of the ways through this is to work with community organizations, justice organizations, and the police to think: Well, how can we use artificial intelligence technologies, advanced algorithms, and others to actually look at this community situation from a more rights-driven perspective, a sociopsychological perspective, a historical perspective, or through a systemic racism perspective, and really start to engage with communities using algorithms that are looking to try to uncover and give us better insight into some of those frankly more complex and more alien causes, compared to: "That's a bad person. We need to catch them."

That we have not invested in as much because there is a market, particularly in states where police are community funded or privately funded in different ways. You have a market for goods that reinforce their own power. That's a huge challenge here because as long as we use this idea of new technologies, digital technologies, in the "Fourth Industrial Revolution" to simply give us more power in the current structure, we won't actually move into a better space or better outcomes, even though the vendors and the people creating it might promise that. That is, I think, the real challenge we have here.

If the ultimate goal, Anja, is security, then we need to use these technologies as a window into viewing security in new ways, rather than simply thinking of security as catching bad people, because then we will inevitably err on the side of a criminal justice system that we already know doesn't work that way.

Renée, is that fair enough to say? Do you agree with that statement about trying to shift the way we deploy technologies from the very outset rather than just removing bias from apprehension technologies, if I can put it that way?

RENÉE CUMMINGS: Definitely, Nicholas, and I think it is not only about designing the technology to catch the criminals or designing the technology to safeguard security. It comes back to: How do we code fairness? How do we code security? Is it that we are deploying algorithms to catch criminals, or is it that we are designing algorithms to catch Black and Brown people? Because it comes back to the data sets that we are using, and it comes back to the fact that baked into these data sets in the United States context would be that history of systemic racism, that history of oppression, that history of colonization, that history of enslavement, and all of these are now part of our justice system. People of color and the criminal justice system in the United States have not had the most equitable relationship, and these past relationships are coded into our data sets.

This is why it is so critical to bring that ethical perspective to what we do, which begins with changing the thinking. And if the thinking cannot be changed, you bring the understanding that these are the things we need to check for in this data set: Who collected it? Why did they collect it? When did they collect it? What is the context of the collection? Who classified it? Who analyzed it? What about the communities impacted by this data? That is why I am saying those are the questions we need to ask each other, because at the moment, when it comes to the law enforcement technologies being designed, at the intersection of law enforcement and AI, we are bringing those past histories with us, and we keep giving them power. They keep undermining not only the criminal justice system, not only due process, not only the ways in which we think about our communities; we are also underdeveloping ourselves, underdeveloping the ways in which we could advance and enhance, and we are not looking at that.

So we continue to undermine the imagination of this extraordinary and powerful technology called artificial intelligence, about which I am very, very passionate. But as much as I am passionate about the things I can do in a positive way with this technology, I am also passionate about ensuring that it is deployed in ways that enhance and build up our communities, not in ways that destroy them. And while it is destroying, we are calling it "unintentional" consequences, because I always say: "For the communities impacted, the impact feels very intentional given the history that we are working with."

NICHOLAS DAVIS: I think the phrase "unintentional consequences" is a cop-out, Renée. I think we should not moderate it. Consequences are consequences, and in law the eggshell skull principle says you take your victim as you find them, and you can't say, "Well, that was unintentional; that doesn't matter." We need to look at recklessness, and we need to build those standards of care into the designers as well as the tools, of course.

ANJA KASPERSEN: It is very interesting. You both pointed out that rather than replicating, we need to reimagine what society we want and, as Nick was saying, what security looks like in this new digital context.

I am just curious, Nicholas. As a former lawyer now working in the technology space, what are some of the opportunities that we still haven't tapped into, and what are the things that concern you that we haven't quite discovered yet?

NICHOLAS DAVIS: One thing that is I think really interesting is the opportunity for technology to have outsized support for people with disabilities. I don't think we appreciate how important the same technologies we are talking about, particularly natural language processing, all the language side of AI, etc., how much power they can give to people who have physical, mental, psychological, or any kinds of disabilities. The access that this provides is huge, and yet we are not consciously often engaging in disability and access in these major algorithms. It seems to be almost a side gig in many ways. That is one area that I am super-excited about. If we could think about then that possibility as—and I hate to put it this way—if more and more companies and entrepreneurs realized that business case as well as the positive human impact, we would push that distribution of who can be served by these technologies further out and move away from this danger that we have touched on a little bit of thinking about all of this as risk. Yes, there is risk in the framework for regulation, but the way to deal with it, as Renée brought up earlier, is to actually look at it from an opportunity lens from the corporate perspective rather than the risk lens. So that's one thing that I think is really important.

The second thing is that I am really excited that the language is starting to change a little bit as we talk about AI ethics.

I would love, Renée, to quickly hear your take on whether or not AI ethics has now been overly co-opted by tech companies and panels and those who tend to entrench power more than distribute it. What I worry most about is that we are still sticking to the phrase "artificial intelligence," which to me as a metaphor really says, "I can do this task that I have already decided to do, better," whereas what we have been talking about in this discussion so far is actually needing more wisdom. It is "less about intelligence and more about the hard work of being wise," and I have to find out whose quote that is, but it is about getting back to the hard work of being wise in technology.

I think we should talk about advanced technologies and algorithms as systems designed to be wise in different ways, which means taking into account their goal, the community, their impacts, the consequences, and the wider context, but also their evolution through time, because wisdom is also knowing when you are wrong and being able to shift. Most AI technologies are quite brittle. They get worse over time rather than better, or they go further down the same track than ever. They reinforce their own assumptions about the world rather than broadening them.

That is the other aspect that I think is important and exciting in moving forward: thinking about "how we code for wisdom," as Renée kind of put it, and thinking about the metaphors we use, moving a little bit away from metaphors that served us well for the last 50 years, in order to open our minds, our business models, our government policies, and our community conversations to be more engaging and more inclusive.

ANJA KASPERSEN: I know, Renée, this is probably very close to your heart because I read an interview with you where you were quoting your dad saying, "Never miss an opportunity to do what is right," and that is very much what Nicholas is referring to now.

RENÉE CUMMINGS: Definitely, and I just want to say, Nicholas, you are quite correct when it comes to disability because, long before I became a criminologist, I worked in disability studies and disability rights, looking at the correlations between crime and mental illness and, of course, other forms of disability, including something like substance abuse. So it is something that I am very passionate about: really providing that sort of access and amplifying those voices.

You are also right about the metaphors that we are using and the images that we continue to see, which oftentimes seem so mechanical and so cold, as opposed to creating a relationship between society and data, because so much of data is human. Our thoughts, our behaviors, and our intimate thoughts are the things being collected by the data brokers and used to design the technologies that we are then going to buy.

When it comes to doing the right thing I am definitely always about acting and being "in the ethical moment," and you are only in the ethical moment when you are applying the requisite levels of due diligence and you are staying eternally vigilant, mindful, respectful, and understanding that communities are attached to data, families are attached to data, and legacies are attached to data. The futures of generations are attached to data. For me, I always say: "Our data is our DNA." Our data is everything that we are, and it is something we have got to think about when we share it and when we consider how it is being used. We have got to demand that ethical approach in the ways in which we use data-driven technologies, because if we do not, then we are putting our lives at extraordinary risk and putting our futures in the hands of individuals who may not be responsible enough to do the right thing.

NICHOLAS DAVIS: I think it is also really important and useful to look at human rights as a framework for supporting our broader discussion around AI ethics. Human rights are really useful because they are universal, because they are indivisible, and because they are agreements. Australia, for instance, is a signatory to seven international human rights treaties. So we already have an established basis of international and domestic law that says that our governments need to respect, protect, and fulfill a set of human rights, that we are born free and that we are equal in dignity.

This is a fantastic governance and legal framework, but also a perspective, for thinking about how our technologies operate. If I were to pick just five core principles for how you might think about a human rights approach to new technologies, and this is work that has recently been published here by the Australian Human Rights Commission, the five would be:

1) Participation; making sure that participation in decision-making of stakeholders affected by new technology is absolutely critical;

2) Accountability; making sure we have effective monitoring of compliance with standards by government and by non-state actors, and mechanisms to enforce those rights;

3) Nondiscrimination; what we have talked about today is really important: making sure that technologies are nondiscriminatory and equal in their application, and we can use a wealth of anti-discrimination law to realize the human rights principles around discrimination;

4) Critically, empowerment and capacity building; making sure that those stakeholders who have the most to say about their own situation, who are best placed to actually look at the challenges that are being experienced, that they themselves are fully empowered and engaged in these processes, not just as participants but as builders, to have the knowledge and have access to remedy and review;

5) Finally, legality; making sure that human rights, and these provisions as they relate to technologies, are enshrined in law. We have existing law that may be inconsistently applied, so we need to be really clear that in the uses of new technology human rights are legally enforceable, and that they are adapted and applied in case law or in regulation to the specific instances that we are talking about.

If you are building an AI system, think about participation, accountability, nondiscrimination, empowerment, and legality, and I think you will be in a far better place, and that can help ground some of the broader ethical principles in a preexisting legal framework.

RENÉE CUMMINGS: I think you are so correct, Nicholas, because when we think about human rights it brings us right into the black box of AI and that question about explainability and whether or not we can use such opaque technology to make these critical decisions in the criminal justice system. So it comes back to that conflict between proprietary rights, civil rights, and human rights, and it really comes to a space where we have to understand that data rights are human rights and where we understand that it is our right to understand the technologies that are being used, how they are being designed, how they are being deployed, and how these technologies are going to impact ourselves, our families, our communities, our countries, and our futures. Then we understand how critical human rights is as a lens to understanding AI ethics and why it is we need those robust and rigorous guardrails for protection and also for direction.

NICHOLAS DAVIS: Also, these are demanded by the public. The Australian Human Rights Commission did a big survey to look at who wants these kinds of rights, and 88 percent of respondents said that they want the AI systems used by government to be accountable; 87 percent said they wanted to be able to actually see the reasons for decisions; and another 80-odd percent said they wanted decisions to be reviewable by an independent body. So this is not some side-gig, techie, social-justice-warrior issue. The general public overwhelmingly says, "In the world we live in, I want these rights," so we should be delivering on that.

RENÉE CUMMINGS: And I think we deserve it.

ANJA KASPERSEN: There are quite a few people listening in to this conversation working on issues related to technology and justice. If you and Nicholas were to advise based on your experience in the varied domains that you have been working on for quite some time now, what would be three or four key insights or takeaways that you think people should take with them from this conversation that will guide their work, both in getting the opportunities right but also managing the risks in the best possible way?

NICHOLAS DAVIS: I think if you are in law enforcement and you are procuring and engaging with these kinds of systems, probably the first task is to step back and recognize that there are a whole bunch of different options for using what Renée is talking about, these rich data streams and new sources, to support your core goals as an organization.

So the first thing is, don't be guided just by what's on the market and who is the vendor standing in front of you at any point in time. Think carefully about what could be innovative, thoughtful, and disruptive in pursuing those deeper-level goals, rather than the narrow goals of whatever technological system happens to be in front of you, whether that is communication interception, identification, etc. We often miss this. If you are browsing Amazon late at night, you see what's on the screen and you buy it because it's on the screen, but what you are being served is not always exactly what you need, particularly late at night after a few drinks. So think about that in terms of how you frame the problem statement.

The second thing is, if you are going into these systems, almost all of them are very, very new. They are using relatively new methods. They are using data sets that are untested and poorly understood, as Renée pointed out. So one of the questions is: How can you make sure that you truly understand what is happening in the design phase of these technologies and in their deployment and implementation in your particular context?

As Renée said earlier, there is a lot of help available. You don't have to sit and just rely on representations by someone selling it to you. You can reach out to any number of people at universities, community groups, public authorities, and elsewhere who are excited to support your organization and will say, "Yes, we'd love to help you make a good decision on this." It really is more about the effort you make than about the opaqueness of the technology itself.

So the second thing I would say is to cast your net widely for reactions and understanding on the design, the implementation, and the deployment aspects of these technologies, because lots of organizations will have gone through this before, and lots of other mistakes are yet to be made that you can head off in advance if you think it through in the right way. That is the second big aspect.

The third big aspect is, don't be afraid to demand things that don't seem to be on the market, and actually reach out for solutions, because we are at the beginning of this stage of the revolution in capabilities, and there are really smart, committed people who want to create tailored solutions and use technologies to find new ways of making us all better off. But if you do go down that path, do keep Renée's suggestions in mind. Make sure that this is really playing into the ethical aspects, the impact aspects, the stakeholder engagement, and the inclusion, because that's the way to make a system that is more robust, reliable, secure, and, in the future, compliant.

Be very, very clear: This is a decade of technology regulation. What you do now will be heavily regulated in three to four years. I can guarantee that. So you want to be on the front foot, engaging with regulators but also working with vendors and technology partners to find solutions that will be compliant in the long term with the kind of privacy and discrimination law that we already know is out there. It exists. It just hasn't been fully coded in. You need to think about coding that in before you get into trouble in a few years' time.

RENÉE CUMMINGS: Definitely. I think for me it would be critical reflection, critical thinking, and critical design. I always say, "Design for others as you would design for yourself." Bring that level of mindfulness. There are lots of lessons learned. We know how we got here. Let's use the technology to take us away from this space and really design things that could build communities, things that could enhance our lives, and things that could ensure that we give equal opportunity and access to all communities, and really in whatever we do just use that opportunity to understand that this technology is going to leave a legacy, and what do we want that legacy to be?

ANJA KASPERSEN: Thank you so much, Nick and Renée. It has been an absolutely wonderful conversation. Thank you for sharing your time, your expertise, and your insights. I hope our listeners enjoyed it as much as I have.

A huge thank-you also to the Carnegie team recording this podcast. Wishing you still a very good day or good night, depending on where you are time zone-wise. Thank you.

NICHOLAS DAVIS: Thank you, Anja. Thank you, Renée.

RENÉE CUMMINGS: Thank both of you.
