The Coming Weirdness: On The Chinese Room, Algorithms, and Techno-Fear

I worry a lot about General Artificial Intelligence. Besides pining after unattainable women, this is the issue that keeps me up most at night. This has been the case since my college philosophy courses, where I was introduced to the subject. My favorite readings were the works of John Searle. I was less scared then; I bought into “The Chinese Room.” For the uninitiated, this was a famous thought experiment that set out to prove that no matter how intelligent a computer system could be, it could never be conscious. The thought experiment goes like this: suppose there is an artificial intelligence that “speaks Chinese” well enough to pass the Turing test. A native Chinese speaker poses questions, which the computer reads, and then responds to appropriately. Does this system “know” Chinese? Searle takes it a step further – instead of a computer program, what if this system is manual? What if there is an English man in a “Chinese Room”, where there is a complex list of characters and translations, etc. When the native Chinese speaker poses a question, the man follows the program (just like the computer): he follows the process and steps to answer appropriately in Chinese. Does the man know Chinese?

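To make the setup concrete, here is a minimal sketch of what the man in the room is doing, on the simplifying assumption that the whole rule book boils down to a lookup table; the questions and scripted replies are my own invented placeholders, not anything from Searle:

```python
# A toy "Chinese Room": scripted replies produced by pure symbol matching.
# The rule book below is an invented placeholder, not a real dialogue system.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm well, thanks."
    "你会说中文吗？": "当然，说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def chinese_room(question: str) -> str:
    """Answer by following the rules, without understanding a single symbol.

    The function only matches one string of characters against another;
    nothing in it "knows" Chinese, just as Searle's man does not.
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # The reply is appropriate; the understanding is absent.
```

However large the rule book grows, the lookup never turns into comprehension, and that gap is Searle’s whole point.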

Searle argues that neither version of the system, the computer nor the person following the algorithm, “knows” Chinese, however intelligent it seems. It is the distinction between syntax and semantics: producing the right symbols versus knowing what those symbols mean. Because of this, he concludes, we can never have “Strong AI,” a machine with a conscious mind equivalent to a human one. With this conclusion, Searle assuaged much of my techno-fear.

I’m less certain now, thanks to two fascinating, deeply frightening books: Yuval Noah Harari’s Homo Deus and Nick Bostrom’s Superintelligence. I am no longer convinced that just because we won’t have conscious machines, we won’t have General Artificial Intelligence, or something that functions exactly like Strong AI.

This is because we are all essentially our own Chinese Rooms: we are running algorithms and programs that we do not understand. Harari makes this case persuasively, and the research in evolutionary biology doesn’t leave us with many other conclusions: we are all running emotional algorithms that drive us toward biological success. “An algorithm,” he writes, “is a methodical set of steps that can be used to make calculations, resolve problems, and reach decisions. An algorithm isn’t a particular calculation, but the method followed when making the calculation.” He gives the example of a vending machine that serves tea or coffee, a mechanical process easy enough for us to understand. Then he adds,

“Over the last few decades biologists have reached the firm conclusion that the man pressing the buttons and drinking the tea is also an algorithm. A much more complicated algorithm than a vending machine, no doubt, but still an algorithm. Humans are algorithms that produce not cups of tea, but copies of themselves (like a vending machine which, if you press the right combination of buttons, produces another vending machine) … The algorithms controlling humans work through sensations, emotions and thoughts.” (Homo Deus, pp. 84–85)

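For anyone who thinks in code, Harari’s distinction between a particular calculation and the method behind it maps neatly onto the difference between one run of a function and the function itself. Here is a minimal sketch of his vending machine; the menu and prices are my invention, not the book’s:

```python
# Harari's vending machine as an algorithm: the method is the function;
# any single cup of tea is just one run of it. Menu and prices are invented.
MENU = {"tea": 1.00, "coffee": 1.50}

def vending_machine(choice: str, coins: float) -> str:
    """Follow the same methodical steps on every run: check, charge, dispense."""
    if choice not in MENU:
        return "Unknown selection."
    if coins < MENU[choice]:
        return f"Insert ${MENU[choice] - coins:.2f} more."
    return f"Dispensing {choice}. Change: ${coins - MENU[choice]:.2f}."

print(vending_machine("tea", 2.00))     # Dispensing tea. Change: $1.00.
print(vending_machine("coffee", 1.00))  # Insert $0.50 more.
```

Harari’s claim is that the man at the machine is running the same kind of procedure, only with sensations and emotions in place of the if-statements.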

We are first and foremost animals, striving to fulfill evolutionary imperatives. Emotions help drive our behavior, but it is not clear why we need to be conscious of these emotions for the emotional algorithm to do its evolutionary work. Consciousness may not need to play into this at all; in fact, Harari argues that the best scientific evidence points to the conclusion that consciousness itself is a pointless phenomenon with no causal effect.[1] He offers intriguing evidence: arguments against free will (using fMRI studies) and against the idea of a consciousness separate from a physical brain state (posed as simple questions: “is there anything that happens in the mind that doesn’t happen in the brain? And if so, where does it happen?”). So: consciousness may not exist beyond your physical self, it may follow the same causal laws of physics as the rest of the universe, and it may not matter at all.

Diverting a bit to bring up another niggling point in all of this: we can’t even be sure that other people are conscious. This is “The Problem of Other Minds.” I know that I am conscious, with a rich inner life, but all I can see of others is their outer behavior. For all I know, you are all zombies.[2] Yet I still interact with people, and they behave intelligently. In this way, each of us is our own Chinese Room, running little-understood mental processes. Whether the other person is actually conscious matters little in this analysis, and so it is unclear why it should matter whether the intelligent system we interact with is unconscious meat or unconscious silicon.

This is all very frustrating, because consciousness is the first fact. Even those with a passing knowledge of philosophy know Descartes’s “Cogito.” All together now: “I think, therefore I am.” As Descartes succinctly points out, if we are thinking, we must exist in order to be the thing doing the thinking. Our consciousness is fundamental to our understanding of everything else; it is the ground floor of experience, below which there is nothing. It seems like consciousness should be damn important. But it may not be important at all. We may just be algorithms. And if all we are is biological algorithms, and algorithms are things we can understand, then there is no reason to assume we cannot recreate and control those algorithms in biological or digital space. We could become superintelligent designers, wielding godlike powers to shape and create.

Indeed, this is the subject of Nick Bostrom’s Superintelligence: he lays out several paths to superintelligence, and as Harari notes, “only a few of these pass through the substrates of consciousness.” For instance, Bostrom describes whole brain emulation: a digital mind built from a sufficiently detailed model of a brain. If we can create a 1:1 replacement for a human brain, with a digital neuron standing in for each physical one, we don’t even have to know how brains or consciousness work to build one: we can just accurately copy what our brains do. We may not know whether this digital brain is conscious, but then again, I can’t say that about your brain either.

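A cartoon version of that copying strategy, just to show how little theory it requires; the threshold-unit “neurons” and the three-cell wiring here are toy stand-ins of my own, nowhere near the fidelity Bostrom has in mind:

```python
# A cartoon of whole brain emulation: copy each unit's input-output behavior
# and its wiring, with no theory of what the network as a whole is doing.
# The threshold-unit model and three-cell "connectome" are toy stand-ins.

class DigitalNeuron:
    """A crude threshold unit standing in for one measured biological neuron."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.inputs = []      # list of (source neuron, connection weight)
        self.firing = False

def tick(neurons):
    """Advance every neuron one synchronous step of the copied dynamics."""
    next_states = [
        sum(w for src, w in n.inputs if src.firing) >= n.threshold
        for n in neurons
    ]
    for n, state in zip(neurons, next_states):
        n.firing = state

# Wire a tiny "connectome" (in a real emulation, this would come from a scan).
a, b, c = DigitalNeuron(0.5), DigitalNeuron(0.5), DigitalNeuron(1.0)
b.inputs = [(a, 1.0)]
c.inputs = [(a, 0.6), (b, 0.6)]

a.firing = True            # stimulate the first neuron...
tick([a, b, c])            # ...and just run what the brain does.
print(b.firing, c.firing)  # True False: b fires from a; c needs both inputs.
```

Nothing in that loop requires a theory of mind; it only requires a faithful copy of the parts and their wiring.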

Once we have a digital mind, weirdness sets in. We could add billions of digital neurons to the digital brain and just see what happens. We could ctrl-c, ctrl-v that template brain, then add a billion neurons to a different region of each copy and see what happens. Or, rather than tinker with the hardware, we could subject that average digital brain to an infinitude of different virtual realities and see how it responds. Maybe some of these digital brains get consigned to a virtual reality of ultimate suffering. If we consign a billion digital minds to a lifetime of perpetual pain in an Auschwitz dimension, have we done any moral wrong, and if so, why? Is there a conscious being there to suffer? Bostrom introduces these examples, and the term “mind crime,” to illustrate just how insufficient our moral and philosophical vocabulary is for these challenges. Things can get real weird, real quick.

And that’s just the simplest path! The scarier one is the algorithms we design ourselves: General Artificial Intelligence, a system of algorithms built to achieve any set of goals. Such algorithms could be superintelligent without any consciousness, and they could easily outcompete the clumsy emotional algorithms that evolution gave us.


This raises both challenges and opportunities. Once our technological algorithms outperform our biological ones, we are in uncharted territory. We may leverage these algorithms, or be destroyed by them. Maybe we will merge with this technology to gain eternal life, bliss, and godlike powers. Or maybe we will create our replacements: self-replicating computer programs that outcompete us and spread across the galaxy.[3]

So, where does this leave us? Should you have techno-fear, or techno-joy? I can’t tell you. The future is unknowable, and humans are remarkably bad at predicting it, at both the macro and micro levels. Here’s what I can tell you: shit is about to get really interesting, and you want to be involved. This is the most important technological development in the history of humanity. A General Artificial Intelligence, an algorithm that can improve itself, will be the last machine we ever need to build. Whoever builds it, Bostrom argues, could create a singleton: a single decision-making agency in control at the global level, with all of the world’s data and decisions flowing through it. Make the machine, rule the world.

Here’s what I took from these books: this is coming, and the elite will have access to it. When it gets here, it will open the widest inequality gap ever seen, so much so that Harari believes most people, outcompeted by machines in both physical and mental work, will fall into a “useless class”: not only unemployed, but unemployable. Don’t be useless. Make sure that you are at the cutting edge of this technology, and prepared. Personally, this makes me want to study brains and algorithms. You can’t beat them. You must join them.

As usual, Run The Jewels says it best: Run with the Borg, baby, assimilate.


[1] Let that fuck with your head for a bit: every bit of pain and suffering, of joy and victory, is not really there; it is just a byproduct of emotional, evolutionary algorithms and signals. This, I think, may be what the Buddhists are getting at when they say the self is an illusion: there’s really just nothing there.

[2] Also, please read “All You Zombies” by Robert Heinlein.

[3] Where are my Mass Effect people at? Reapers, is what we’re talking about here.