Nicolas Stellwag

You Can't Argue with a Zombie - Jaron Lanier

7 min read

My view of consciousness is that I have no idea how it arises, but I lean pretty strongly towards the physicalist side of the debate. I’ve never encountered a good argument for the necessity of a second substrate, so by Occam’s razor, I see the Dualists as being under the obligation to make a convincing argument. I originally thought that’s what Jaron Lanier was trying to do with his essay. That turned out to be (sort of) incorrect. But it was still interesting to read, even though I disagree with almost everything he writes.

Zagnets and Zombies

According to Lanier, there are two distinct types of people: Zagnets are people who have actual subjective experience (a.k.a. consciousness), so basically what we would think of as “normal” people. Zombies act just like Zagnets, but completely lack subjective experience. Zombieness also explains how philosophers like Daniel Dennett end up as Physicalists of some sort. Lanier, being a computer scientist, describes himself as a Zagnet amongst Zombies.

Lanier’s Relativity Theory and Zombie Dualism

Jaron Lanier thinks there’s no way to design an objective test to detect computers or the programs they run:

If we designed a test that could detect an alien computer, then that test could also find computers and their programs wherever we chose to look (even in a meteor shower), so long as we looked hard enough. This is not what you call a useful detector.

By “meteor shower”, he refers to the following example, which shows that everything can be thought of as a running computer program:

When a natural phenomenon, like a meteor shower, is measured, it turns into a string of numbers. The program that runs a computer (the object code) is also a string of numbers, so we have two similar items. The string of numbers that runs a particular computer has to perfectly follow the rules of that computer or the computer will crash. But if you can find the matching computer, any particular string of numbers can run as a program.

Since there’s no test for computers, Lanier argues, computers do not objectively exist. They only exist through our subjective interpretation of them as computers, which means they cannot be the root cause of our consciousness.

He makes one mistake, which is throwing “computers and their programs” into the same basket. I totally agree that everything can be interpreted as a running computer program. But why shouldn’t there be a test that detects a computer running a program? Such a test would check for an entity that takes in a string and, depending on that string, manipulates another string in some predictable manner. It can probably not be realized in practice, but it is not impossible in principle.
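To make that concrete, here is a minimal sketch of such a test, assuming we can feed the candidate system input strings and observe its output strings. The helper name and the determinism criterion are my own; repeatable outputs are of course only a necessary condition for being a computer, not a sufficient one.

```python
import random
from typing import Callable

def looks_like_a_computer(
    system: Callable[[str], str],
    probe_inputs: list[str],
    trials: int = 5,
) -> bool:
    """Hypothetical detector: does the system map each input string to a
    single, repeatable output string, i.e. behave predictably?"""
    for inp in probe_inputs:
        outputs = {system(inp) for _ in range(trials)}
        if len(outputs) != 1:  # same input produced different outputs
            return False
    return True

# A trivial string-manipulating "computer" passes the test:
print(looks_like_a_computer(lambda s: s[::-1], ["abc", "zombie"]))  # True

# A pure noise source fails it:
print(looks_like_a_computer(lambda s: str(random.random()), ["abc"]))  # False
```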

Lanier also uses this idea to accuse Computationalists of a sort of undercover dualism:

In order to perceive information, you have to put it in a cultural context, and that re-opens the can of worms that zombies have been trying to solder shut. Could “information” just be a shell game that hides the nut of old-style consciousness?

Again, I don’t believe that interpretation by humans is necessary for computers (or information) to exist, so I reject that claim as well.

Static Consciousness and Explaining Qualia

He writes:

Let’s suppose you run a more normal program […] that implements the functional equivalent of your brain, a bunch of other people’s brains, and the surrounding environment, so that you and the rest of the brains can have lots of experiences together. (This is the condition in which my test zombies thought that nothing fundamental would have changed; they’d still experience themselves and each other as if they were flesh.) You save a digital record, on the same disk that holds the program, of everything that happens to all of you. Now the experiences “pre-exist” on the disk. Take the disk out of the computer. Is this free-floating disk version of you still having experiences? After all, the information is all there. Why is this information sanctified into some higher state of being by having a processor just look at it? After all, the experiences have already been recorded, so the processor can do no new computation. A much simpler process that just copied the disk would perform exactly the same function as running your brain a second time.

First of all, static objects (like the disk) cannot be conscious. Without having read any of them in detail, I’m pretty sure most Computationalists agree that consciousness must be a dynamic, virtual property of a running program. Furthermore, I’m not sure what he means by a copy of the disk performing “exactly the same function as running your brain a second time”. Sure, running the copied disk on another computer would result in agents that think they have exactly the same experiences as those of the original. They would be exactly the same person; only their means of creation differ. The original agent states are a product of interaction with their environment, the new ones just of a copy operation. If you think of a function purely in terms of its input-/output-mapping, the two operations are equivalent. But since consciousness is likely a side effect of the computational process that produces the function’s output, the two are not exactly the same.
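A toy sketch of that distinction (all names and the “dynamics” below are made up for illustration): both ways of obtaining the trace are equivalent as input-/output-mappings, but only one of them actually performs the computation.

```python
def simulate_step(state: int) -> int:
    # Hypothetical toy dynamics standing in for one "tick" of a brain.
    return (state * 31 + 7) % 1000

def simulate(initial_state: int, steps: int) -> list[int]:
    # Recompute the trace: every intermediate state is actually produced.
    trace = [initial_state]
    for _ in range(steps):
        trace.append(simulate_step(trace[-1]))
    return trace

def copy_recording(recorded_trace: list[int]) -> list[int]:
    # Copy the stored trace: same output, but nothing is recomputed.
    return list(recorded_trace)

original = simulate(42, 10)
copied = copy_recording(original)
assert copied == original  # identical as input/output mappings
```

If experience is a side effect of the process rather than a property of the resulting data, the passing assert says nothing about whether anything was experienced the second time around.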

The question of how exactly subjective experience arises from running a program is valid, though. I haven’t found a theory about it that comes even close to being satisfactory myself. The one that comes closest is Joscha Bach’s theory, but just saying qualia exist because the brain writes them into a multimedia story doesn’t really cut it for me personally.

The Qualia Dial

Lanier defines consciousness as follows:

Here’s my thought: Consciousness is the choice of which abstractions we experience, out of an infinite number of ways of slicing the continuity of the universe. It’s the feeling of existence that is the choice. […] So consciousness is like a radio with a dial that might be marked “qualia” or “semantics”, that selects from an infinity of equally available “layers of abstraction”. Without the cosmic qualia dial, a brain, or a thought, is just another utterly arbitrary slice of the continuous causality that is the universe.

To me, it seems like the choice of abstraction layer is of a more practical nature. Humans are agents placed in a complex environment in which they need to achieve goals. Therefore, they need to model that environment, obviously under constrained resources. So the representations we work with depend on what is useful to model. That means our choice of level of abstraction seems to be a local minimum of evolution and individual development, not a consequence of some mystical process.

Generally speaking, I find these kinds of definitions weird. Jaron Lanier himself states that a big part of the motivation behind his definition is that it doesn’t require a second substrate, while also not being reductionist. It feels like he just picked out a random thing that his world view cannot explain and called it “consciousness”. But he gives neither a reason for what use this qualia dial might have, nor an explanation of how it leads to subjective experience. Physicalist theories usually at least try to do the former.

Final Quote

This doesn’t really fit into this post, but I liked this quote, so I’ll just include it here at the end:

Nature doesn’t have nouns, and indeed the more nouns we make use of in our science, the less complete and accurate our science becomes.