Six Questions for New Surveillance Technology, with Arthur Holland Michel

Oct 18, 2022 - 7 minute watch

In honor of Global Ethics Day, Senior Fellow Arthur Holland Michel discusses questions related to emerging technologies and privacy.

How will these systems be used? How can we ensure that they are used transparently? Who is responsible when accidents happen? "The best way to address these questions is to have an honest, inclusive discourse where everyone has a voice," says Michel.

Michel also serves on the Board of Advisors of the Artificial Intelligence & Equality Initiative.

For more information about Global Ethics Day, click here.

These days, we are witnessing the emergence of a wide variety of new surveillance technologies. Things like facial recognition, drones, location databases, and data fusion. It’s all happening so fast that it can be hard to know where to start when thinking about and discussing each of these technologies’ ethical implications.

However, though these machines come in many different forms and do many different things, they all raise some of the same ethical concerns. So for Global Ethics Day 2022, I wanted to highlight six key ethical questions that are common to all emerging surveillance technologies.

First, does the technology actually work?
It's easy to think that these technologies are all super powerful. But sometimes—indeed, oftentimes—new surveillance technologies don't prove to be as effective or reliable as one imagines they'll be. A poorly performing surveillance system might be more likely to cause harm (say, by misidentifying a suspect in a crime), and its limited or inconsistent benefits won't outweigh its costs to privacy and freedom. Therefore, understanding a technology's real-world effectiveness is important for preventing unintended harm, as well as for deciding whether or not the technology should even be used in the first place.

Second, is it fair?
There's ample evidence to show that new surveillance technologies are disproportionately used against—and cause disproportionate harm to—non-white populations, as well as socially and economically marginalized groups. This has been a consistent pattern in the past, and it is likely to continue to be a pattern in the future. Therefore, new surveillance technologies require a robust assessment to determine their impact across different segments of society and, as needed, rules to forestall anticipated inequities.

Third, how will it be used?
Even when a new surveillance technology is at first only used for a seemingly noble purpose (say, for example, finding people who have been kidnapped), that doesn't mean it will only ever be used that way. New surveillance technologies often end up being used in ways that go far beyond their original purpose, for tasks that raise serious ethical concerns—for example, identifying protesters who are exercising freedom of speech. Therefore, when a new surveillance technology emerges, it is helpful to consider not just the ethics of its stated purpose but also the implications of all the other ways it might hypothetically be used.

Another question is what happens to the data?
New surveillance technologies tend to generate large amounts of detailed digital data. In the absence of clear standards for how—and for how long—surveillance data are stored and secured, as well as rules for how the data can and cannot be used, the data may be exploited for privacy intrusions that have nothing to do with the original reason that these data were collected.

Next, it’s important to ask: will it be accountable?
Even the most reliable, equitable surveillance technologies can fail, and there is always the chance that they will be intentionally used in ways that overstep ethical bounds. When this happens, a clear, transparent, standardized process of accountability is important for ensuring that those who were affected have recourse to justice and that those who are responsible for the harm face appropriate consequences. This will also help to dissuade authorities from using surveillance technologies in unethical ways, or in ways that go beyond their original stated purpose.

And finally, is it being rolled out and used transparently?
It is impossible to address any of these other questions if we don’t know about the surveillance technologies used in our communities. And yet unfortunately, new surveillance technologies are often deployed in the dark. This makes it impossible to have any kind of public scrutiny or discourse that addresses any of the other questions I’ve just mentioned. Transparency is also, in itself, an ethical principle. Privacy scholars and existing case law tend to agree that you have the right to know about the new surveillance technologies that are used either directly against you or your community as part of an investigation, or that may collect data about you incidentally in the course of their use. Oh, and transparency is also a good way of keeping our overwatchers accountable.

There are, of course, other questions. But this is a good place to start. The best way to address these questions is to have an honest, inclusive discourse where everyone has a voice. So I invite you all to share your own questions, concerns, and personal experiences using the hashtag #GlobalEthicsDay. Thanks for watching!
