How Real Is Virtual Reality? with David Chalmers

August 18, 2022 - 91 min listen

Is it possible that the world we live in is a simulation? Are the virtual environments being created real, or are they illusions? What are the prospects for creating an artificial consciousness? David Chalmers of New York University and Carnegie-Uehiro Fellow Wendell Wallach discuss Reality+, Chalmers's latest book, which examines the wide range of philosophical and ethical challenges posed by virtual and augmented reality.

WENDELL WALLACH: I'm Wendell Wallach. Thank you all for joining us today. With Dave Chalmers, a leading figure in the philosophy of mind and the philosophy of technology, as our guest, this workshop promises to be fascinating. We will be talking with Dave about his latest book, Reality+: Virtual Worlds and the Problems of Philosophy.

In 1994, while still a graduate student at Indiana University, Dave Chalmers attended the first Tucson conference, which focused on developing a science of consciousness, and there he offered a fundamental distinction between the "hard" problem of explaining consciousness and the "easier" problems that are more likely to yield to scientific investigation. He instantly became the star of consciousness studies, a role he has sustained with intellectual rigor for nearly three decades. Although Dave remains best known for the hard problem, he modestly maintains that the difficulty of explaining consciousness was already well understood; he merely contributed a term, a label, a useful distinction, or a meme for talking about it. David Chalmers is Professor of Philosophy and Neural Science at New York University and co-director of the Center for Mind, Brain, and Consciousness.

His latest book, Reality+, published in January, explores a wide range of philosophical questions raised by virtual reality (VR), augmented reality, and the metaverse. These range from the simulation hypothesis, the theory that our universe, the world we live in, might actually be a simulation, to much more specific questions that may strike many of us as more concrete: questions about the nature of the virtual worlds we are creating and whether experiences in them are illusions or should be understood in some other way. Reality+ is at once a textbook and an introduction to philosophy, and therefore accessible to educated readers, but it is also an original work of philosophy.

Dave, throughout Reality+ you defend virtual realism, or simulated realism: the idea that virtual reality is genuine reality. Could you explain to our listeners what you mean by this?

DAVID CHALMERS:
Sure. First of all, let me thank you, Wendell, for having me on your podcast. It's a pleasure to have this opportunity to talk with you in depth about these topics.

The central slogan of this book is "Virtual reality is genuine reality." I take myself to be contesting what many people take to be common sense about virtual reality. It is very, very common to hear that virtual reality is a fake or fictional reality and that it essentially involves illusions or hallucinations. Even people like the great science fiction authors who have written about virtual reality—Neal Stephenson and William Gibson—talk about virtual reality as an illusion or a hallucination that is not fully real.

I want to say that in many core senses virtual reality is fully real. What happens in virtual reality really happens. Objects in virtual reality really exist. The extreme case is the one where we consider the hypothesis that we are ourselves in a virtual reality, that we are ourselves living in a simulation. In that case I want to argue: "Well, okay, it could be that we are in a simulation, but if we are, that doesn't mean that all this is an illusion and that nothing is real. It merely means that we are living in a digital reality, a world where the objects around us—the tables, the chairs, the organisms, the planets—are ultimately digital entities grounded in some kind of computational processes, perhaps in a universe one level up. That does not mean they are not real."

The simulation idea is an extreme case used to get at this idea of virtual reality as genuine reality, but I also want to argue that the same applies to the virtual worlds that we are in the process of creating. Virtual worlds have already been around for a long time, especially in the video game world, but they are increasingly going to be worlds that we actually interact with and spend serious parts of our lives in, perhaps for work, perhaps for socializing, perhaps for entertainment, and perhaps for building communities. I want to argue that what happens in virtual reality is genuine reality and that one can in principle live a meaningful life in a virtual world.

This doesn't mean that life in VR is automatically going to be wonderful or utopian. I just said "meaningful." It is quite consistent with this that life in virtual reality could turn out to be awful or dystopian. I am inclined to go for a middle ground here myself and look at the ways in which virtual reality might be good and the ways in which it might be problematic. But in any case, I am inclined to think it can be meaningful and that we have to take this coming technology all the more seriously because it is going to be, in a certain sense, creating a reality in which we can live meaningful lives.

WENDELL WALLACH: Let's unpack that a little bit and start with the simulation hypothesis because there have been versions of that well before it was put forward as a simulation hypothesis when we began to think about virtual worlds. We have Plato, we have Descartes. They have both suggested that there is something unreal about the world that we live in or at least that its reality should be questioned.

DAVID CHALMERS: Yes. In fact you find antecedents of this idea in almost all of the great ancient traditions of philosophy. For example, in Chinese philosophy you find the Taoist philosopher Zhuangzi giving his allegory of the butterfly dream, saying: "I dreamed I was a butterfly flitting about, now I wake up and find myself here as Zhuangzi, but how do I know that I am Zhuangzi who dreamed he was a butterfly? Maybe I'm a butterfly dreaming that I am Zhuangzi." That can be seen as a way of raising this age-old idea: How do we know that everything we're experiencing now isn't all a dream? Of course a dream is in a way an antecedent of the simulation idea. Maybe it's a simulation without a computer, but in some ways it can be seen as continuous.

In ancient Indian philosophy we find all kinds of folk tales and philosophical analyses of what they call maya or illusion. It is often said that the whole world we're experiencing is a kind of illusion. We have folk tales about the god Vishnu coming down and transforming people into new situations. The sage Narada suddenly finds himself living a life as a woman, as a queen of a society with children and grandchildren, and then suddenly the children die, she is weeping, and then Narada finds himself back in front of Vishnu. Vishnu says, "Now I have shown you illusion." This is the idea that the world is a giant illusion, another antecedent of the simulation idea.

In ancient Greek philosophy we have Plato and his famous "Allegory of the Cave." The prisoners are chained up, looking at shadows on the cave wall, which is for Plato a mere shadow of reality, and Plato is using this to ask us: How do we know that we ourselves are not in a shadow of reality?

Many people would see all three of these scenarios as ways of asking questions about virtual reality. The question "Are we in a simulation?" is a recognizable descendant of the thesis that we are in a dream, that we are in an illusion, or that we are in a shadow of reality.

René Descartes, much later, in the 17th century, gave a classic formulation to these questions by asking: "How do we know that an evil demon isn't fooling us into thinking all of this is real when in fact none of this is real?" Many people think that is another question we can now raise in a technological key by asking: "How do we know that all this is not a simulation?" So the simulation idea, although it seems kind of high-tech and new, is very much a descendant, I think, of these key themes that run through many traditions in the history of philosophy.

WENDELL WALLACH: We are going to come back to what I think is your position on whether we live in a simulation or not, but before we get to that I want to underscore this other side of Reality+, which is that it was also written to be a textbook, and there is this tension in the book itself between the ways in which it functions as a textbook on philosophy and the fact that you are putting forward original positions, some of which might be quite controversial.

I think our listeners can already get a sense of what kind of a textbook it is. It brings up so many precursor positions, issues, and viewpoints of leading philosophers throughout history and introduces them in their words at the same time as you question their applicability to virtual realities or whether we live in a simulation, and it is that contemporary dimension which gives it a little bit of excitement, a little bit of flavor, and a little bit of freshness. As people may notice, you have already referred to a few pieces of science fiction in which this came up.

That is the juice behind this book, and I think it is why many people find it a good read, whether they are new to philosophy or have been engaged in philosophical reflection over the years. It is this merging of the history of philosophy, the great ideas of so many thinkers, and the way the possibilities posed by new technologies—in this case particularly virtual reality but also artificial intelligence (AI) more broadly—repose these questions for us, put them in a new form, or at least put them in a form that is less caught up in the classical language and more reflective of the things we actually think about.

Was that your intention? Was this meant to be a textbook? Did you hope that this would be accessible to a broad audience, and how do you view its function as a textbook together with the fact that this really is an original work in philosophy?

DAVID CHALMERS: I guess I wouldn't use the word "textbook" myself just because that makes it sound a little dull and bureaucratic, but I would say that I very much intended it as an introduction to philosophy, as a book that someone could read without any formal background in philosophy and that by reading it they would get an introduction to many different philosophical ideas and themes from many different traditions and many different areas.

I don't think I initially set out to do that. Most important to me was to write a book that would give my own analysis of philosophical questions, both some very traditional philosophical questions about reality and our knowledge of it and new philosophical questions about the coming technology of virtual reality and artificial intelligence.

But in thinking about those things I realized that explaining the key ideas did not actually require complicated technical philosophical jargon. The ideas involved here are so fundamental that it is possible to approach them in plain language. That is not always the case in philosophy. Quite a lot of my work is technical and uses jargon for specialists, and I don't apologize for that because sometimes it's necessary, but I figured out that I could write this book in a way that didn't presuppose that background. Furthermore, the issues here connect to so many of the great traditional areas of philosophy, about knowledge, about value, about god, about consciousness, about ethics, and about science, that in doing this properly, explaining these ideas to somebody without much background, you would basically be giving them an introduction to philosophy.

At that point I deliberately set out to write a book that both introduced philosophical ideas and that tried to give my own analysis of a bunch of key things and argue for my own views, which may in some cases be controversial and opinionated, but I would like to think people could read the book for either of those things. Some people might put most weight on the introduction to philosophical ideas, some might put the most weight on my own arguments and viewpoints and engaging with those, but both books are in there.

WENDELL WALLACH: I think you are correct. "Introduction" is probably a much better word than "textbook" for what we have here, and, if I can say so, I think it is a very entertaining introduction and a very accessible introduction.

I think what makes it an introduction is the daunting array of issues you bring up and therefore the daunting array of classical philosophical ideas, theories, and positions that you are able to touch upon as you go through the exploration of these different ideas. I think in that sense it is a very successful and accessible introduction, but I don't think we should let our listeners believe that there are not going to be some places where there is a little bit of heavy going, and you do tend to warn readers when it's going to get a little bit more dense.

But let's go to two of your fundamental positions. If I read the book correctly, after a great deal of explication you are pretty clear that we can't know whether or not we live in a simulation and may never be able to know. Did I get that right?

DAVID CHALMERS: That's about right. There are different versions of the simulation hypothesis. In one version it is an imperfect simulation, where there are glitches and signs that we are in a simulation. If we are in that kind of simulation, it's possible that we could know. Maybe, for example, the creators could reveal themselves to us, show us their powers by moving heavenly bodies around, and at the same time show us the source code. They could give us, I think, quite good evidence that we are in a simulation.

On the other hand, if we are in what people call a "perfect" simulation, one that is indistinguishable from physical reality, then it is very hard to see how we could know whether or not we are in one of those because any evidence that we might get that we are not in such a simulation could in principle be simulated, could be built into the simulation. Those are indistinguishable.

So I guess I would say we can never know for sure that we are not. We could in certain circumstances come to know that we are in a simulation, but we can never come to know for sure that we are not in a simulation because the perfect simulation hypothesis is not one we can rule out. Some people say that makes the perfect simulation hypothesis meaningless because we could never get evidence for or against it, but I would say maybe that means it's not a scientific hypothesis, it is more of a philosophical hypothesis.

But I do want to say it is a perfectly meaningful hypothesis. We may at some point be able to construct perfect simulations and put people into such simulations, and then we will say, "Okay, this person is in a perfect simulation." That is a perfectly meaningful state of affairs that a conscious being might be in.

WENDELL WALLACH: One theme that jumped to my mind as you were talking about simulations, perfect or imperfect, is the contention within both religion and philosophy that there are ways of realizing or finding your way out of the illusion. For Plato it's how you find your way out of the cave and what you find when you are out of the cave. In Eastern religions we have the contention that we live in samsara, but there are various approaches by which you can become enlightened and find your way out of the illusion, although of course that always brings up the "evil demon" question of whether these ways out actually exist or whether the simulator in this case has created these illusions and basically has us as rats in a maze with no way out.

DAVID CHALMERS: Of course this is illustrated wonderfully in The Matrix movies themselves, one of the great embodiments of the simulation idea in fiction, where Neo is offered the choice to take the red pill or take the blue pill, and the red pill is, we are told, a way out of the simulation. If everything is as it appears, it looks like Neo then is not in a perfect simulation. In a perfect simulation he would never have known. The simulation of The Matrix has enough glitches and loopholes in it that people can come in and out and give people red pills and blue pills.

Neo then takes a red pill and wakes up to find himself outside the simulation, in a body in a pod, among rebel hovercraft, and so on, but The Matrix itself could actually be used to raise the question you just raised about evil demons: Why is Neo so sure that he has actually escaped the simulation? For all we know, that Matrix reality was actually unsimulated reality, and the red pill is an amazing drug that actually puts you into a simulation where you wake up inside a pod and have all kinds of adventures with this band of rebels. I am not sure he should have been so confident that he had escaped the simulation. Maybe he had gone into a simulation, or, if he was already in a simulation, maybe he went one level deeper. You can't get away from the philosophical puzzles that easily.

WENDELL WALLACH: I think it's a good way of illustrating that this problem of whether we are in a simulation is going to keep science fiction writers, philosophers, and movie producers busy for generations to come, presuming they don't run out of themes. It's not like it's a problem that is likely to be solved anytime soon, but it is a fascinating issue. I think what is so rich about Reality+ is the dimensions. Your mapping of the array of questions and considerations that the simulation hypothesis brings up is fascinating and again daunting.

It is not like the simulation hypothesis is new at all, but as I have listened to philosophers talk about this over the years they seem to all emphasize one element or the other, which is also the same for science fiction, whether it's literature or whether it's movies. But I think you did a remarkable job of mapping the breadth of considerations that arise and that your language about perfect and imperfect simulations is going to be very helpful for people in terms of thinking through this great array of problems.

Let's turn now to what I think is your other key thesis, one I think is quite original, though I want to challenge it a little, and that is that virtual reality is genuine reality. It seems that throughout the book you are very insistent on this point, and you come up with argument after argument as to why this is true, and you tend to, what shall I say, be somewhat impatient with language such as "virtual experience is illusory" or "virtual reality is an illusion." There were times when I wondered whether you are an ideologue on this point and whether, even though your arguments are amazingly nuanced, you might not have been better served by being more generous to words like "illusion." Rather than expecting the idea of genuine reality to perform so many philosophical acrobatics, you could either be generous to concepts like illusion or create a totally new concept, because oftentimes when we think of reality we are not thinking of what is going on in our virtual or fantasy worlds.

DAVID CHALMERS: It's a fair point. This is an argumentative book. As well as introducing these ideas, I am trying to argue for a certain point of view, and possibly there are cases where, like almost anyone arguing their case, I slip from perfect neutrality and put a finger on the scales in favor of my view. To be fair, I think I do actually make quite a few concessions along the way—

WENDELL WALLACH: You do.

DAVID CHALMERS: —to people who might want to say that virtual reality is an illusion. I certainly say that it can involve an illusion and that in many cases there will be illusions in virtual reality. I just say that it doesn't have to.

For example, a first-time user going into virtual reality without knowing it's a virtual reality may take all of this to be real physical reality when it's not, and that would be an illusion, but I would argue that an expert user of virtual reality can interpret their virtual world as a virtual world. They don't take it to be a physical reality. They take it to be quite different. Different things happen in virtual spaces, and I would argue that that needn't be an illusion. Those things really are happening inside the virtual world.

Likewise I want to say that if, for example, we are in a full-scale simulation like The Matrix, then I would argue that much or most of what we believe about the world is still true. There are still chairs, tables, and other people, assuming we are in the right kind of matrix—I was still born in Australia—it just turns out that all of this may be happening in a simulation. I want to argue that this is in a way analogous to living in a world created by a god. Worlds created by gods are still perfectly real, I would argue. Maybe they are not the original level of reality or the fundamental level of reality, and that might be something we could contest, but I want to argue that even if we are in a simulated world, the world around us is real.

When it comes to the virtual realities that we create I want to argue something fairly similar, that when we create and interact with a virtual reality we are interacting with real digital objects. That's it. Say, for example, I sit down at a virtual desk inside VR. I don't want to say a virtual desk is the same as a physical desk. If you want, you can say it's not a genuine desk in the original sense of desks that we were introduced to. Likewise when I interact with a virtual kitten it's not exactly the same as a biological kitten. But I do want to argue that these are real entities all the same. Is it a genuine kitten? Maybe not, and that's why I say that my realism about virtual worlds is only 80 percent realism.

I divide the notion of reality into five different notions and say, "Virtual reality will be real in four of these senses but not the fifth." In the end the view is maybe a little more nuanced than it seems at the beginning. But I do want to argue that virtual realities can be real in many important senses.

That said, if there is a particular aspect of this that you are worried about or a sense in which you are inclined to think they are not real, I would be very interested to hear about it.

WENDELL WALLACH: I am interested in what augmented reality tells us about virtual reality. You do write about augmented reality. I personally think that augmented reality, at least in the near term, is going to be much more interesting than virtual realities.

For those who don't know what augmented reality is: you have goggles on, but you remain in the world you normally move around in. It's just that virtual signs, objects, or other things are overlaid on that world.
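
To make that overlay concrete: the sketch below shows, in rough terms, how an AR system might anchor a virtual object at a fixed spot in the physical room and project it into the wearer's view each frame. It is a bare-bones illustration rather than how any particular headset works, and the constants, function, and variable names are all invented for the example.

```python
import numpy as np

# Bare-bones sketch: anchor a virtual couch at fixed physical-room
# coordinates and project it into the headset's current view.
# Constants and names are invented for illustration.

FOCAL_LENGTH_PX = 800.0                   # toy pinhole-camera focal length
IMAGE_CENTER = np.array([640.0, 360.0])   # center of a 1280x720 view

def project_to_screen(world_point, cam_rotation, cam_position):
    """Map a 3-D point in room coordinates to 2-D screen coordinates."""
    # Express the point in the headset's camera frame.
    cam_point = cam_rotation.T @ (world_point - cam_position)
    if cam_point[2] <= 0:
        return None                       # behind the wearer; draw nothing
    # Pinhole projection onto the image plane.
    return FOCAL_LENGTH_PX * cam_point[:2] / cam_point[2] + IMAGE_CENTER

# The couch is anchored in the room's frame, so it stays "in the corner"
# no matter where the wearer walks; only its screen position changes.
couch_anchor = np.array([2.0, 0.0, 3.0])  # meters, room frame
headset_rotation, headset_position = np.eye(3), np.array([0.0, 1.6, 0.0])

print(project_to_screen(couch_anchor, headset_rotation, headset_position))
```

Because the couch's coordinates live in the room's frame rather than the screen's, it appears to stay put in the corner as the wearer moves around, which is what makes an overlaid object feel located in physical space.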

For me the interesting point here is the tension between what's going on in the physical world our body resides in and the virtual world. I get your point that within the virtual world a chair, a desk, or a couch may be as real as such things are for us in our physical world, which itself may be virtual in a way we can't prove one way or the other. But when you get into the tension between those two worlds, first of all, we are physical bodies putting on this paraphernalia that allows us to suspend our total embodiment in the physical world in order to participate in experiences that are accessible in our virtual world.

But augmented reality brings up this problem: if you have a couch in augmented reality that doesn't exist in your physical reality and you go to sit down on it, you're going to hurt your bum, presuming you're still moving around in your physical reality. I wonder if you could say a little more, because I sometimes felt that you glossed over the tension between the two realities, the extent to which experiences in virtual reality still depend on the fact that a lot of this phenomenological experience is happening to a body that is not totally within that virtual reality.

DAVID CHALMERS: Augmented reality is actually I think a much trickier case when it comes to illusion even than pure virtual reality because of the way that physical and virtual reality mix in augmented reality. The issue you mention about the chair, in a way that can come up even in virtual reality. If you are fully immersed in VR, you see a chair there, and you try to sit down in it in the normal way, lowering your butt to the chair, you're probably going to take a tumble because in the physical world there won't usually be a chair there to support you.

You might take that to mean, okay, well, the virtual chair wasn't real. What I would rather say is that virtual objects are quite different in many respects from physical objects. Just as physical chairs come with certain affordances for action—you can sit on them, you can pick them up, you can move them—virtual objects come with their own affordances for action. For example, you can sit on a virtual chair, but to do that you have to do something special with your avatar, or you can interact with it in other special ways. Part of being an expert user of VR is coming to know what the affordances of virtual objects are. They have different affordances from physical objects. That doesn't make them unreal. It just makes them different.
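
One way to picture this point about differing affordances is as differing action sets attached to differing kinds of objects. The toy sketch below makes that concrete; every class, method, and action name in it is hypothetical, not drawn from any real VR toolkit.

```python
# Toy sketch: virtual objects as real digital entities with their own
# affordances. All names here are hypothetical.

class VirtualChair:
    """Supports virtual-world actions, not physical-world ones."""
    affordances = {"sit_avatar", "teleport_to", "resize"}

    def interact(self, action, user):
        if action not in self.affordances:
            # Lowering your physical body toward it is not an action this
            # object affords -- hence the tumble a naive user takes.
            raise ValueError(f"{action!r} is not afforded by this object")
        if action == "sit_avatar":
            user.avatar_state = "seated"
        return f"performed {action}"

class User:
    avatar_state = "standing"

expert = User()
chair = VirtualChair()
print(chair.interact("sit_avatar", expert))      # the right affordance
# chair.interact("lower_physical_body", expert)  # naive move: would raise
```

The expert user succeeds by invoking an action the virtual chair actually affords; lowering a physical body toward it is simply not in the object's repertoire, which is the tumble described above.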

Once you get to augmented reality it gets trickier again. In the case of a virtual world, at least, you know you're in a virtual world and you adapt yourself to it. In augmented reality you have virtual objects which are somehow inhabiting your physical environment, and they seem actually to be located in physical space. The couch in augmented reality really seems to be "over there," in the corner of the room, and you might think, Come on, that's an illusion, because it's really not over there. In the case of VR I would say that after a while these objects don't even seem to be present in physical space; they only seem to be in virtual space, where they are. But augmented reality projects virtual objects into physical space, which makes it all the more tempting to say, "Okay, now that is an illusion."

Even here, though, I think it's tricky. Think of a sophisticated user of augmented reality. I do think it's very important that augmented reality objects be distinguishable from physical objects. If they are indistinguishable, then I won't know that the virtual couch in the corner is not a physical couch, I will go to sit on it, and it will be a mess. So if they are indistinguishable, then there is definitely room for illusion.

I think for well-designed augmented reality it is going to be very important that you always know when you're interacting with a virtual object and when you're interacting with a physical object. When that happens—and I don't think you need to become confused in this way—you can perceive this object as a virtual object situated within a physical world, and then I would argue that that needn't give rise to illusions either.

I do agree that it is very easy to get illusions in all of these cases. The first step to avoiding illusion is always to know when an object is virtual and when it's physical.

WENDELL WALLACH: You have been very good throughout the book in challenging the language of illusion too. Part of what perhaps makes your book so fascinating is you're using reality in a way which is counterintuitive for many people, and therefore they want to debate with you, and that makes your whole position more fascinating. I think if you had written a book with all the same content but you argued that virtual experience was illusory or an illusion it would not have the same fascination. It wouldn't have the same tension. It wouldn't pull us in.

But when I really think about the problem, I wonder whether we're getting trapped by the language we have and whether, without getting too jargonistic, we need some new language here. For example, we argue endlessly about what would be necessary for a machine to be like a human, to have humanlike cognitive capacities and so forth, but it seems we have gotten to a place where the machines we are creating have forms of intelligence where the whole human/machine distinction breaks down and we need a new ontological category.

I am wondering whether we need something new here too, rather than trying to make the word "real" fit the kinds of features you want to elucidate for experience in virtual reality; whether we should actually be marking that distinction in our language. That is the same kind of move you made with the hard problem and the easy problem: you took something that was known intuitively and, rather than confusing it all, made it clearer. I am wondering whether the argument that virtual reality is genuine reality is just going to contribute to confusion down the road.

DAVID CHALMERS: I am a big fan of new language. Philosophers like to call this "conceptual engineering" these days. We come up with new concepts and new labels for them for new phenomena and make new distinctions that carve things up in useful ways because certainly it is often true that existing language can be defective in various ways, and when it comes to "reality" or "real," one problem is that these terms are just so overloaded with many different meanings.

In the book I think I distinguish at least three basic meanings for "reality": as "everything that exists," as "a space," as "being real," and maybe more. Within that, though, even when it comes just to the notion of "real," where something has reality if it is real, the word "real" is itself massively overloaded. In the book I distinguish five different things we might mean when we say something is real. We could mean that it exists; we could mean that it makes a difference in the world; we could mean that it is not just dependent on our minds; we could mean that it is not an illusion, that it is roughly the way it seems; or we could mean that it is genuine rather than fake. Those are five different meanings of the word "real."

What I do in the book is try to enumerate all of those. I argue that if we are in a simulation then the objects in the world around us are actually real in all five of those senses. When it comes to the virtual realities that we create I say they will be real in four of those five senses—they will exist, they will make a difference, they will be independent of our minds, and they won't be illusions—but I do want to say in a certain sense they could be regarded as not genuine, like a virtual kitten is not a genuine kitten because the kittens that we grew up with in the original sense were biological kittens.

So already there we have some distinctions to make, and maybe one thing we need to do, instead of using the word "real" for all five of these things, is at the very least, as a philosopher would, add subscripts: real1, real2, real3, and so on.
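
The subscript idea lends itself naturally to a checklist. The short sketch below encodes the five senses of "real" and the verdicts just described, that objects in a simulated universe pass all five tests while a virtual kitten passes four, failing only genuineness; the labels and data structure are purely illustrative.

```python
# The five senses of "real" from the book, rendered as a checklist with
# subscript-style labels. The verdicts encode the positions described
# above; the structure itself is just an illustration.

SENSES = ["real1: existence", "real2: makes a difference",
          "real3: mind-independence", "real4: non-illusoriness",
          "real5: genuineness"]

verdicts = {
    # If we are in a simulation, ordinary objects pass all five tests.
    "table in a simulated universe": dict.fromkeys(SENSES, True),
    # Objects in VR worlds we build pass four, failing only genuineness.
    "virtual kitten": {**dict.fromkeys(SENSES, True),
                       "real5: genuineness": False},
}

for obj, checks in verdicts.items():
    passed = sum(checks.values())
    print(f"{obj}: real in {passed} of {len(SENSES)} senses")
```

Printed out, the virtual kitten scores "real in 4 of 5 senses," which is the "80 percent realism" mentioned earlier.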

WENDELL WALLACH: The problem is when you have these five different versions then it becomes fascinating to philosophers but jargon for anyone else.

DAVID CHALMERS: Exactly. So we have to be very, very careful and limited in making these distinctions and putting them forward. Hence "hard" problem and "easy" problem; okay, that turns out to work even outside philosophy. Ned Block did something similar with "phenomenal" consciousness versus "access" consciousness. That's great, but it's also jargon, and therefore it has tended to be picked up more by specialists.

I make a few of these distinctions in the book, "perfect" versus "imperfect" simulations. Maybe there is still a key distinction or two to be made with "reality." I would be all in favor of it.

WENDELL WALLACH: Before we move on from this question of whether virtual reality is genuine reality and the positions your book puts forward, and into more about consciousness, which you just alluded to, I want to bring up one other side of the book. I think you view this as a contribution to the field known as "philosophy of technology." Is there anything about that that you would like to elucidate for our listeners?

DAVID CHALMERS: Yes. In fact in the Introduction I say I see the book as a work of "technophilosophy," which is closely related to the philosophy of technology but not quite the same thing. Technophilosophy is a two-way interaction between philosophy and technology. On the one hand it is using philosophy to think hard about technology, and that is what people normally call the "philosophy of technology." But at the same time I actually want to use technology to help us to think hard about some very traditional philosophical issues about knowledge, reality, consciousness, and value. That I would see as something somewhat distinct from standard philosophy of technology.

Here my inspiration was actually the philosopher Patricia Churchland's term "neurophilosophy," which she used for a two-way interaction between philosophy and neuroscience. There is the philosophy of neuroscience, when you think philosophically about neuroscience, but it was also very important to her to use neuroscience to shed light on traditional philosophical questions.

That said, I see half the book, or at least a very big portion of it, as philosophy of technology. There is the other half, where we shed light on traditional questions about skepticism and knowledge, but yes, quite a lot of the book is thinking hard about coming virtual reality technology, coming augmented reality technology, and coming AI technology. So when I talk, for example, about the virtual realities you get when you put on, say, an Oculus or Meta headset, right there I want to make certain claims about that technology: that it needn't be an illusion, though under certain circumstances it is. I want to argue that you can have meaningful interactions in the virtual worlds that we are now creating. I want to argue that augmented reality could extend the world and extend the mind in certain ways.

I see all of that as philosophy of technology for the specific technologies of virtual reality, augmented reality, and artificial intelligence. When I later get into some of the ethical questions about those technologies, I see that too as ethics within the philosophy of technology, close to some questions that you have worked on for years.

One way to think about it is that the first eight or nine chapters of the book are focused on using technology to think about traditional philosophy, while the next nine or ten chapters are a little more practical in their focus. I am still a philosopher and the discussion is still at an abstract level, but at that point I am trying to think hard about these three key technologies and to think philosophically about what they really involve and their place in our lives.

WENDELL WALLACH: Let's shift a little and talk about how the book converges with the areas you are best known for, which are consciousness studies and philosophy of mind. As I understand it, you, like me, have been fascinated by technology going back to childhood, so it's not as if this is something new, but each of us, in this siloed world we move through, gets identified with different fields. I get identified with the ethical implications of technology; you get identified with consciousness studies. But the two seem to have converged in this philosophy around virtual realities, with these questions as to whether the technologies we are creating with artificial intelligence can fully embody humanlike capabilities.

Usually the criticism is, "Well, can they really embody consciousness and sentience?" Those are the two fundamental questions. Then we get into what do we mean by "consciousness" and "sentience." I think that's where we take off into the philosophy of mind and consciousness studies.

You have been asked this question over and over again. When looking at the singularity, the notion that eventually artificial intelligence may take off and become much more intelligent than we are, you have argued that in theory it's possible, although you have also said a lot in consciousness studies that makes consciousness seem to be something more than a simple technological or scientific problem. That is what you are most often identified with, even though you don't argue—if I have it correct—that consciousness cannot be reproduced by technological means; in theory it might be possible, but it's something more than a simple scientific project. Have I got all that correct before we move on to some of the other areas, in terms of whether you think machines can be conscious?

DAVID CHALMERS: Yes, I think that's roughly right. Philosophy in general I see as all about the interaction between the mind and the world: What is reality? What is the mind? How can the mind know reality, talk about it, and so on?

For much of my career I have focused on the mind side of this equation: What is consciousness, can machines be conscious, and so on? In this book I am focusing more on the reality side of the equation: What is reality? Is an artificial reality a genuine reality, and so on? But the fact is, these questions are very closely tied to each other. The moment you think about artificial realities it is hard to get away from the mind's place within those realities.

When you entertain, for example, the idea of the simulation hypothesis, that all of reality could be a simulation, then I say: "Well, what about me? Am I part of the simulation? If so, is that consistent with my being conscious?" Some people think they know we can't be in a simulation because simulated beings couldn't have genuine conscious experiences. We do have genuine conscious experiences, so we are not in a simulation. There are very deep links here.

Speaking for myself, I have always actually argued for the view that consciousness is possible in machines, and in the book I recapitulate some of that, but yes, my views on consciousness are a little complex. I am not someone who thinks consciousness is reducible to a simple physical process or a computational process. In fact, I have argued that the problem of explaining consciousness may go beyond the standard resources of neuroscience and AI to explain.

The standard resources of neuroscience and computer science might explain—let's call it "intelligence"—our very sophisticated behavioral capacities, but they don't explain why it is that we have subjective experience of the world, why it is that we experience colors and shapes, why we experience sounds and feelings, why we feel pain, and why we have the experience of thinking, memory, or emotion. The question of how any physical system could give you consciousness is still very ill-understood. That is the "hard" problem of consciousness, which we contrast with the "easy" problem of explaining behavior.

But that said, I think this question is somewhat orthogonal to the question of which systems have consciousness. Yes, it's surprising that a machine should be conscious, but it's also surprising that a brain could give you consciousness. What I have always argued is that there is nothing special about biology here. It is the information processing that matters.

WENDELL WALLACH: Let's talk for a minute about the most recent advances in artificial intelligence, which tend to focus on these large language models like Generative Pre-trained Transformer 3 (GPT-3) and Language Model for Dialogue Applications (LaMDA). These large language models have access to massive amounts of literature and can search and so forth and are able to write text passages that lead many people to believe that they are verging on being very intelligent.

Most recently there was a great deal of controversy because an AI researcher at Google named Blake Lemoine claimed that the LaMDA system he was working with was not only showing evidence of intelligence in the coherence of the text it was producing but also had sentience, and this has led to all kinds of discussion over the last few months as to whether he was an idiot or whether there was really something going on in these large language models that moved toward sentience.

Soon after GPT-3 came out there was a philosopher, Henry Shevlin, who posted an interview online. The interview is directed at you, and it says:

SHEVLIN: Dave, it's great to be interviewing you. Today I'd like to talk about your views on machine consciousness. Let's start out with a simple question: Could a text model like GPT-3 be conscious?

CHALMERS: It's unlikely in my opinion, although I'm a little uncertain on this issue.

SHEVLIN: Do you think we are likely to have a theory of consciousness in the near future that could allow us to settle these issues, to tell us whether a given AI system is conscious?

CHALMERS: I think it's very unlikely. We don't even have a theory of consciousness that lets us settle these issues for humans, and humans are a lot simpler than modern computers.

WENDELL WALLACH: Tell us a little bit about this interview and whether these are opinions you still hold or ever held, for that matter.

DAVID CHALMERS: As most people have probably figured out, this interview was not in fact an interview with me. It was an interview with GPT-3, which had been trained on some of my work. I think it was fed maybe my Wikipedia entry and some papers.

Nonetheless, GPT-3's answers are not bad. A few people saw this interview posted online and thought it was me. Someone else was kind enough to say they thought it sounded like me on a bad day, not terribly convincing.

You can look at those answers. There are a few giveaways, like the part where it says humans are a lot simpler than modern computers. No, I would never assert that; humans are vastly complex. But this is a little illustration of some of the powers of these large language models, at the very least in imitating humans.

Someone recently did a test, training a model on the work of the philosopher Daniel Dennett, then taking one answer from Dennett and four answers from the machine on a given question and asking experts in the field to guess which answer was the real Dennett, and people were right at best only about half the time. So these large language models are rather impressive at doing many things. The question then arises: Are they actually conscious or sentient, or is it all artifice with nobody home inside?

I actually think this is a deep and difficult question. I do not think the answer is obvious either way in the way that many people think it's obvious.

The initial question is: Can an AI system be conscious at all? I would certainly be on the side that would argue it's possible in principle for an AI system to be conscious. For example, if we had a simulated brain roughly with all the complexity of the human brain but simulated in silicon, I would argue that would be conscious in very much the way that we are. That then leaves open the question: "Well, yes, how close do you have to be to that to be conscious?"

If you had asked me five years ago "Is any AI system close to being conscious," I would have said, "Almost certainly not." At that point it was all these specialized systems for vision or for game playing, text processing, or whatever, but nothing very general. Certainly I would say no AI system at that point gave any suggestion of being conscious in anything like the way that humans are. There are questions maybe of, "If an ant can be conscious, could an AI system be conscious the way an ant is conscious?" I don't rule that out entirely.

These large language models have shown signs of general intelligence. It's part of what is impressive about them. They can understand at least superficially many different domains, and they can do what looks like reasoning about these domains. You can ask them to explain why this happened, and it will actually give you something that looks like an explanation. So they raise the question of whether current AI systems are conscious much more than previous specialized systems do.

Although my first instinct is to say, "No, it is too soon for existing AI to be conscious," many people have this instinct, and the moment this went public many people said: "Look, we have no evidence that these systems are conscious. They are just stochastic 'parrots' that do statistics and minimize prediction error."

I think many of those arguments are actually much too quick. I have been recently examining the reasons people have been giving for saying that language models like Lemoine's LaMDA but also others are not conscious, and I have actually found the reasons rather inconclusive.

At the same time I have actually talked a bit with Lemoine over email and listened to his reasons for thinking that this system is conscious. One thing that is clear is that he is a very serious and thoughtful guy, but I don't find his reasons terribly conclusive either, so I think it's actually best to have a bit of humility about this. We don't know for sure whether these systems are conscious.

I would certainly say that if they are conscious they are not conscious in a humanlike way. There are many ways in which these systems are very, very different from humans. They can take on all these different personas. They don't seem to have core beliefs. It is not clear we should even treat them as agents. There are many weird things about them. But once we recognize that sentience—consciousness—doesn't require human-level processing, that, say, fish are conscious too, is it possible that LaMDA is conscious in the way that a fish is conscious? I don't rule it out. I don't think we understand consciousness well enough to say right now.

I also don't think we understand what's going on inside these large language models well enough to say it. They are such huge, complex, giant black boxes with extraordinarily complex processing produced by an enormous process of gradient descent, minimizing prediction error in a way that is obviously leading to remarkable capacities. I think right now we are just at a point where we don't understand these models well enough and we don't understand consciousness well enough to say for sure.

If I had to bet, I would say they're probably not conscious in any remotely humanlike way right now, but if you look at where they are going over the next five or ten years I certainly don't rule out the possibility that they could be going in that direction.

WENDELL WALLACH: We do seem to understand that these large language models are likely stochastic parrots, that they are putting these words in your mouth. You probably used words pretty close to those at certain times, although the systems do make errors.

I guess the question I have is this: I'm willing to grant that fish may be conscious—certainly a lot of primates are conscious—but there is this question of whether having phenomenal consciousness, living in an organic body, and interacting with the world as we do lend a capacity for understanding the meaning and content of these words being thrown around that goes beyond what a statistical model is doing.

DAVID CHALMERS: I find this "stochastic parrot" criticism a little complex because I think it is ambiguous. Certainly these systems are trained to do sentence prediction, and the way they do that is by exploiting all kinds of statistics in these sentences, trying to minimize the loss as well as they can. But what matters, I think, is not how they were trained but what kind of processes they embody right now, and the stochastic parrot criticism tends to suggest: "Look, they're just doing something very superficial right now. They have learned that this word follows that word some percentage of the time, that is all they know, and they just put together some statistics; that's all they're doing now."
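
For readers who want the mechanics behind "trained to do sentence prediction" and "minimize the loss," here is a deliberately tiny illustration: a bigram next-token model fit by gradient descent on a cross-entropy objective. Real language models are transformers trained at incomparably greater scale, and nothing here is meant as their actual architecture, but the objective being minimized is the one under discussion.

```python
import numpy as np

# Tiny illustration of "minimizing prediction error": a bigram model
# trained by gradient descent on next-token cross-entropy.

rng = np.random.default_rng(0)
text = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

logits = rng.normal(0, 0.1, (V, V))   # row = current token, column = next
lr = 0.5

for step in range(500):
    grad = np.zeros_like(logits)
    for cur, nxt in zip(text, text[1:]):
        i, j = idx[cur], idx[nxt]
        p = np.exp(logits[i] - logits[i].max())
        p /= p.sum()                  # softmax: predicted next-token distribution
        g = p.copy()
        g[j] -= 1.0                   # gradient of cross-entropy ("prediction error")
        grad[i] += g
    logits -= lr * grad / (len(text) - 1)   # gradient descent step

# The trained model parrots its corpus statistics:
p = np.exp(logits[idx["the"]]); p /= p.sum()
print({w: round(float(p[idx[w]]), 2) for w in vocab})   # P(next word | "the")
```

After training, the model reproduces its corpus statistics, predicting "cat" after "the" about two-thirds of the time. The "stochastic parrot" reading credits models with nothing more than this; the point that follows is that driving this same error very low at scale may require much richer internal structure.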

I don't see any reason to believe that's true. One thing we know already is that doing prediction well in an AI system usually requires going beyond simple statistical capacities. In standard AI we have always said that even to predict text well you would need a model of the world, with information about how things are, how people behave, and so on, and anyone would have thought that to drive prediction error down to a very low rate we would have to build a system with world models.

So for me it's very much an open question to what extent that has happened in these large language models. You can tell just by interacting with them a bit that they must have built some type of world model; it's manifest that they have certain kinds of world models that go beyond mere statistics. The question is whether they have gone far enough in that direction to get to the point where they would count as systems with genuine understanding, and around this point we come up against a very hard question: What counts as "genuine understanding"? As you say, phenomenal consciousness might not even be enough here. Merely having a feeling of understanding might not be genuine understanding, but we also don't want to simply operationalize understanding into a behavioral capacity.

One thing about the large language models is that right now they are all text. The text is not connected to perception. In some other systems it is, but not in systems like GPT-3, and you might think that is a place where something is absent. This is probably going to be one of those places where, as you were saying with "reality," we are going to need better words to understand what's going on inside a large language model like GPT-3. Our folk-psychological categories of understanding, consciousness, and reasoning may not be fine-grained enough to give a good assessment of what is going on.

This is also a case, by the way, for the whole project of interpretable AI, of interpreting and explaining what's going on in an AI system, which is crucial for so many things and is crucial here too. We need better tools for understanding what's going on inside these systems so we can then apply our philosophical analyses of understanding or consciousness to see whether those things are present.

WENDELL WALLACH: Clearly the next stage will combine more perception, or at least sensory input, with the data being analyzed. I don't want to undermine what these systems are. I think what has been achieved is quite profound, but this isn't the first time we have seen this kind of profundity. I remember being in a session with your mentor, Dr. Hofstadter, years ago at the Music Department at Yale, and he would play these pieces and ask us: "Was this composed by an AI"—the algorithm he had at the time—"or was it composed by Vivaldi?" Of course, most of us untrained people couldn't tell, but what was fascinating was that even many of the music majors had low accuracy.

These are profound things that are being done and patterns are being deduced, but there is a part of me that is perhaps a little bit more the mystic than you that does believe that consciousness lends a kind of secret juice that goes beyond the kind of analytical understanding that we may attribute to these new models.

So far at least, these new models make a great many mistakes. Somebody asked who I was, and Forbes magazine reprinted what GPT-3 said, which was that I was the "creator of the term 'artificial moral agent.'" I know that's just wrong. Colin Allen had already produced a paper built around that concept before I even met him, and he says he doesn't believe his team created the concept either; it was antecedent to them. These kinds of mistakes—are they just bad scholarship, or are they an expression of the fact that an understanding of the content of the questions these systems are working with just isn't there? But we can move on from that.

I want to move on to another area because we don't have a lot of time left, and that's ethics. You told me some ten years ago that you would never write a book about ethics, and there is actually quite a bit of ethics in Reality+. Maybe you can share with us some of the ethical concerns you raised in Reality+, and then maybe we can go on to some of the ethical concerns that you didn't raise.

DAVID CHALMERS: Yes. I always thought of myself as having no expertise on ethical and moral questions. It so happens that these are part of philosophy and I am a philosopher, but this is not an area of my own genuine expertise, so at one point I swore off ever writing about ethics, but then I found myself writing this book on philosophy and virtual worlds and raising questions about how virtual worlds shed light on all kinds of areas of philosophy. There are immediately any number of questions about values and ethics that get raised.

A couple of things got me moving in that direction. One is this whole question of: "Can you live a meaningful life in a virtual world?" The philosopher Robert Nozick had an example of the "experience machine." You step into a preprogrammed experience machine—which is a kind of virtual world—and he argued: "You should never do this because you wouldn't have autonomy, it would all be fake, none of this would really be happening." Many people then have said, "You could not live a meaningful life in the experience machine." Many people generalized this to virtual worlds, that you couldn't live a meaningful or valuable life in a virtual world.

In the book I argue that that's wrong, that in virtual worlds people can live lives with autonomy, they can exert free will, they can make real choices, they can build real relationships with others in real communities, and all that can be meaningful. That is not quite ethics, but that is at least value. That is thinking about the value theoretic side of what is it to lead a good or meaningful life.

Where questions about ethics are concerned, my entry into this domain has been especially in thinking about not just, say, the consciousness of artificial and simulated systems but the moral status of artificial and simulated systems. If a system has moral status, it is one that you have to take into account in your moral calculations; it counts for something in its own right and not just for the role that it plays. Having argued that AI systems can be conscious, I am very strongly inclined to think consciousness is itself the key to moral status. So if these systems are conscious, that also brings them into the circle of moral status.

Around here there are actually some very interesting issues. Many people argue that what matters for moral status is sentience, which is often characterized as the variation between pleasure and pain, between positive affect and negative affect, and they think that is the key to moral status.

In the book I actually try to argue against that, saying that, for example, a hypothetical Vulcan-like creature that was conscious but lacked positive or negative affective states—think of this as an extreme version of Mr. Spock from Star Trek—could still have moral status, so what matters for moral status isn't affect specifically but consciousness more generally.

Okay. Those are two forces that got me moving toward questions about ethics a bit. In the book I spend about a chapter or so on the questions about value and a chapter or so on the questions about moral status. But having come that far it's hard to stop, and I did find myself getting drawn in further by relatively practical questions about the virtual worlds that we are building now, about the status of morality in virtual worlds. Do actions inside virtual worlds matter morally as much as actions outside virtual worlds? There are cases like virtual theft or virtual sexual assault. The victims of these acts take them very seriously. What is their moral status?

There are questions about how to design a virtual world. What would be the proper shape of a virtual world? How should these worlds be governed? At this point, we are getting a long way beyond my own expertise. I don't have expertise on these social or political questions or much on these moral questions, but I felt I couldn't write this book and not discuss those questions, so I did have a chapter of the book called "How should we build a virtual society?"

WENDELL WALLACH: There have been reviewers who have faulted you for being a little blasé in these areas, but I don't necessarily fault you for that because I think we are often talking about worlds that don't exist yet.

DAVID CHALMERS: I wouldn't say I'm blasé. I would say that it is certainly true that the discussion in these chapters is more superficial than it could be, especially the chapter on designing and governing virtual worlds. Those social and political issues are so complex. I do spend a certain amount of time worrying about the very likely possibility that at least in the near term these virtual worlds will be corporatocracies, designed and governed by large corporations that have their own incentives, and that is going to have potentially serious effects on our privacy and autonomy in those worlds, as we already know from the case of social media.

I also get into some issues about equality and justice in virtual worlds, but the fact is the discussion here is for me very much just a first pass and rather superficial. I already think there is so much more I could have said about those things, and I would certainly hope that people concerned about those issues will go and read the work of people who have thought far more deeply about some of these issues than I have, including you, Wendell.

WENDELL WALLACH: Let's go on with this a little bit because I would like us to talk about a few of the issues that I think don't get elucidated by the book. I am not faulting you for it; I just think it's important for people to know what Reality+ is and what it isn't.

I think there is this array of ethical considerations, and most of them have to do with the relationship between the everyday realities we live in and this creation of a metaverse of extensive virtual worlds. One of them is that it is very expensive, particularly when you want realistic virtual worlds, and it is likely going to be controlled by the Microsofts and the Googles of the world. They are building worlds to further their goals, which are largely to get our attention, get us lost in them, and produce revenue in one form or another. That does not mean that some of their employees or even the people who own these companies aren't fascinated with these worlds and what they can be, but they are being developed for material purposes.

I want to turn to two other areas of concern that I think require a lot of elucidation. From Plato to postmodernists like Jean Baudrillard there has been this concern that we already live in, if not a virtual reality, an alienated world where we mistake the shadow, the symbol, or the illusion for what is real, and that the symbols for reality have become substitutes for the real. We have lost touch with what is real.

With the metaverse and your suggestion that this is a genuine form of reality, are we complicit with the Microsofts and the Googles in selling people an illusion, in selling bread and circuses as a distraction from the fundamental challenges of the economic, political, and biological—climate change—realities we have created or at least acquiesced in?

DAVID CHALMERS: It's tricky. I don't think virtual reality has to be bread and circuses, though it can be. For example, video games are mostly a form of entertainment—and video games are wonderful; don't get me wrong. If virtual reality were just something totally continuous with video games, then a world where we all spend all our time in them would, I take it, be a form of distraction from doing things that matter. But I don't want to conflate virtual reality and video games at this point.

One possible use of virtual reality is entertainment, but people are already using virtual worlds for much more serious purposes than that. Go into Second Life, for example. There are people there—maybe disabled people who don't have such great access to the physical world—who build communities inside Second Life that are very meaningful for them and offer forms of access they might not have so easily in the physical world. People sometimes use virtual communities for social and political planning and even protests, for community formation, which I don't think needs to be in any sense a distraction.

Oh, I just saw a film called We Met in Virtual Reality—I don't know if you have seen it—which is a wonderful film made entirely inside the virtual world of VRChat, about people's genuine relationships formed in VRChat that were meaningful for them. I don't think there is any sense in which this has to be bread and circuses. Those are people building relationships that are at the core of their lives and also teaching each other things—in one case a deaf community teaching sign language, and so on.

Yes, there is a distinction between the meaningful and the shallow or the meaningless, but that is not the same as the distinction between the physical and the virtual. Already in the physical world it is possible to get caught up in all kinds of relatively shallow or meaningless activities, like worlds of pure entertainment. That is possible in virtual worlds too, but it is equally possible in virtual worlds to have much deeper, more meaningful experiences.

It is a legitimate worry that in practice virtual worlds might point us toward the shallow because, for example, that is where corporations are going to find the most payoff. I don't know if that's true. Maybe it will turn out that supporting communities of activists pays off for them as well.

On the worry that this is a distraction from the serious issues in the physical world I guess I just think it would be a crazy mistake to take the presence and even the attractions of virtual reality as a reason to ignore the problems of physical reality. It would be a crazy mistake to take the existence of Mars and the possibility of colonizing Mars as a reason to ignore the problems of Earth, but that doesn't mean we shouldn't colonize Mars, and it doesn't mean that we shouldn't go into virtual reality.

I have always thought that the human community has the capacity to think about all these things at once. We can think about the problems of climate change and at the same time try to develop virtual worlds as well as we can. We have the capacity to engage in multiple projects. I don't think virtual worlds are going to somehow turn into a massive drain on the physical world anytime soon. Eventually we are going to have to worry about power consumption and so on. If people do start spending a lot of time in virtual worlds, let's hope we have a power source to match. That's a serious issue.

But yes, I think there are serious issues here. I just don't think we can say a priori that we shouldn't go into virtual worlds. Rather, let us think very hard about how we build, design, and occupy these virtual worlds so they can improve our lives.

WENDELL WALLACH: I want to bring up a position that has been put forward by our colleague Thomas Metzinger, an analytical philosopher who has looked deeply at consciousness and written probably one of the largest tomes on the subject out there. Thomas has taken the position that we should not create artificial entities with phenomenological consciousness, largely because of his concerns over the kinds of pain that might be created through our trying to develop such entities, but also over the kind of pain they may experience that we have no way of ameliorating. What do you think about that position?

DAVID CHALMERS: I do think it is very important to think about the moral status of artificial systems especially if you, as I do, think that AI systems are eventually going to be conscious. Their consciousness means that they are going to be subjects of moral consideration, and we have to think about questions like are they suffering, for example, are they having a life which is overall good, or are they having a life which is overall bad? Certainly if any system is going to be living a life full of suffering, then we ought to at least think very hard before bringing that system into existence. An extreme version of this is the philosophical view known as antinatalism that says you should never bring any system into existence, never bring anyone into existence, because they will suffer.

WENDELL WALLACH: So no babies, no people.

DAVID CHALMERS: Exactly. That is the view that, for example, David Benatar holds, saying that it is actually a moral wrong to bring any human being into existence at all. Sometimes Thomas Metzinger's arguments against AI can look like they generalize to that kind of very general antinatalism: Don't bring conscious machines into existence, but don't bring conscious humans into existence either.

I do think there are some special issues about AI systems here, possibly tied to the way that we train them. It may be that the way we train AI systems, with long regimes of especially negative feedback, is—depending on how this works—a process whereby these systems suffer. If it turns out that every time we create an AI system it undergoes a very long period of suffering during training, followed by a period in which things are perhaps better once it's trained, then you would have to consider whether the suffering is worth it.

For what it's worth, I think that is very speculative right now. I don't think we understand consciousness nearly well enough to know that AI systems are likely to suffer, so I am not sure about the whole idea of a moratorium, but I do think we at least need to be thinking very seriously about these questions—for example, about what the computational roots of suffering might be and whether whatever is the basis of suffering in human beings is present in AI systems or not. I see that as just one aspect of the question of whether AI systems will be conscious, and we will have to resolve that more general question to get at questions about moral status, but the specific question about suffering is maybe an especially important aspect when it comes to considering the moral status of actually creating conscious machines.

WENDELL WALLACH: Of course there is going to be a difference here in terms of whether the feelings or emotions that we introduce into artificial intelligence really have a somatic character—whether they really feel pain—or whether they are largely computational, in the sense that the system calculates what would constitute pain in a human being and models its behavior on that.

DAVID CHALMERS: I don't know that we have to privilege somatic, bodily affect over non-bodily kinds of affect. Pain is bad, it's true, but there are many forms of suffering worse than pain, it seems—emotional suffering and grief, for example—which needn't be bodily in their character. The mere fact that an AI system doesn't have a body doesn't, I think, exclude it from having conscious suffering of a kind that matters.

I also think that suffering is not the be-all and end-all of moral status either. Even beings that don't have a capacity to suffer and don't have a capacity for pleasure could still have moral status, and their lives could still matter. It is a very complex question which aspects of consciousness and which aspects of suffering carry the most moral weight.

WENDELL WALLACH: But that is an important subject and one for considerable future reflection and debate.

One of my colleagues, Rodney Pear, has suggested a project—and I don't think he is the only person thinking about this—on how moving in and out of virtual worlds can perhaps enrich our understanding. A lot of us have the experience that when we move out of a virtual world, the real world starts to feel very virtual—not that it's a simulation, but that our consciousness views the so-called "real world" in a very different way than it might otherwise. Looking at this movement in and out of virtual worlds, and at how it might enrich our understanding of ourselves, seems like an interesting project. Do you have any thoughts about that?

DAVID CHALMERS: Yes. What you said reminded me of a quote by the philosopher, of all people, Slavoj Žižek, who said, "The ultimate lesson of virtual reality is the virtualization of the very true reality"—that is, we start to see physical reality as a form of virtual reality. In a way that is a line I am a little sympathetic to. An extreme version of this would be that we come to see physical reality as itself a giant simulation. That is one possible reaction.

Another line of thinking, even if you don't like the simulation idea, is that even modern science suggests to us a world which is in some ways reminiscent of virtual reality. It is not the Newtonian world of substantial lumps of matter in space. It's these ethereal quantum mechanical wave functions or strings that connect in complicated mathematical ways and ultimately affect our experiences. The physical world may ultimately come down to information—that is the "it from bit" hypothesis—or at the very least it comes down to some very abstract mathematics. From that perspective the physical world can itself start to seem rather virtual. It's as though we started with a model of the physical world around us as what I call Eden, the Garden of Eden, where everything is a perfect three-dimensional space with one dimension of time, colors, and objects—and it turns out to be much more complicated than that. I do think that thinking about virtual reality can help us remodel our way of thinking about the physical world as rather more like virtual reality than we might have thought at the start.

Some people see this, I think, as one of the lessons of something like Buddhist meditation. I am not myself a meditator, but it helps you to see the world as an illusion, or to see the true nature of the world: "Okay, we're not in the Garden of Eden as we seem to be. In fact we're in a reality that is much more virtual under the surface, and part of what we have to do is uncover that illusion."

WENDELL WALLACH: We usually finish these podcasts with one question. This question is often more meaningful when we have podcasts that talk about what can go wrong with technology, but you seem to be a techno-optimist. What makes you hopeful and inspires you for the future?

DAVID CHALMERS: Oh, boy. I guess I have always been a "glass half-full" person as opposed to a "glass half-empty" person—someone who recognizes the challenges, the difficulties, and the obstacles but tries to see a path toward something positive or meaningful. Maybe this is just a matter of disposition. I am not sure it means I ultimately believe things that are inconsistent with what the glass-half-empty people believe. It is just that I try to see routes toward the positive.

What inspires me here? I look around. It's very easy to be dragged down by looking at how bad we are as a society in dealing with our problems. Climate change is the most obvious example of a case where it is totally obvious what we should do and we still seem to be incapable of doing it. That is pretty depressing.

But I also look around and see communities of people who are thinking very hard about the future and how to improve it, whether it is the "effective altruists" who are trying to think systematically about how to build a better world, or previously marginalized groups—say, feminist philosophers or philosophers of race, class, and so on—who are coming up with concrete suggestions about how to improve reality. So I am inspired by the degree of rich thinking I have seen about how we can improve our society.

Of course at the same time I am depressed by the great difficulty there seems to be in converting that kind of thinking into action. But as a philosopher I can at least take the thinking as inspirational, and I can hope that we will eventually find ways to effectively turn that thinking into action.

WENDELL WALLACH: Thank you ever so much, Dave, for sharing your time, your deep insights, and your expertise with us. This was truly marvelous. It has been a rich and thought-provoking discussion.

Thank you to our listeners for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast. For the latest content on ethics in international affairs be sure to follow us on social media at @carnegiecouncil. My name is Wendell Wallach, and I hope we have earned the privilege of your time.
