Downloading consciousness

Frank Tipler, in his new book “The Physics of Christianity”, makes a number of interesting – some would even say amusing – claims with regard to the “end of days” as he sees it. I would like to focus on two of those claims, predictions in fact, which Tipler estimates will occur by the year 2050. The claims are:


1. Intelligent machines [will exist] more intelligent than humans
2. Human downloads [will exist], effectively invulnerable, and far more capable than human beings

I will go very quickly through the first claim, suggesting that – depending on how one measures “intelligence” – computers far outsmart humans even today. The few remaining computational problems, which deal mostly with handling uncertainty and comprehending speech, I expect to be effectively sorted out sooner than the date Tipler suggests. I see no trouble with that. Artificial Intelligence (AI) is an engineering discipline solving an engineering problem, i.e. how to furnish machines with adequate intelligence to perform executive tasks in situations and/or environments that humans had better avoid.

The second claim, however, is truly fascinating. To suggest a human download is equivalent to suggesting the codification of a person’s self into a digital (or other, but digital should suffice) format. Once a “digital file” of a person exists, downloading, copying, and transmitting that file become trivial problems. But should we really expect to download human consciousness by the year 2050 – or ever? There are four possible answers to this question: Yes (Tipler, or the techno-optimist); No (the absolute negativist); Don’t know (the agnostic crypto-techno-optimist); and Cannot know, even if it happened (the platonic negativist).

Let us now take these four responses in turn, in the context of the loosely defined term “consciousness”: the sum total of those facets that, acting together, produce the feeling of being “somebody”, the “I” in each and every one of us.

1. The techno-optimist (Yes). This view treats consciousness as an engineering problem and thus falls back on the AI premise of solving it. The big trouble with this view is that if consciousness is indeed an engineering problem (i.e. a tractable, solvable problem), then it is very likely a hard problem indeed. “Hard” in engineering can be defined in relation to the resources one needs in order to solve a problem. Say, for example, that I would like to build a bridge from here to the moon. Perhaps I could design such a bridge, but when I get down to developing the implementation plan I will probably find out that the resources I need are simply not available. Similarly with consciousness: one may discover that in order to codify consciousness in any meaningful way one might need computing resources unavailable in this universe. This may not be as far-fetched as it sounds. For example, if we discover that the brain evokes the feeling of “I” by means of complex interactions between individual neurons and groups of neurons (which seems a reasonable scenario to expect), then the computational problem becomes exponentially explosive with each interaction. To dynamically “store” such a web of interactions one would need a storage capacity far exceeding the available matter in our universe (see the back-of-the-envelope sketch after this list). But let us not reject the techno-optimist simply on these grounds. What we know today may be overturned tomorrow. So let us for the time being keep our options open and say that the “Yes” party appears to have a chance of being proven right.


2. The absolute negativist (No). Negativists tend to see the glass as half-empty. In the case of Tipler’s claim, the “No” party would suggest that the engineering problem is insurmountable. Further, they would probably take issue with the definition of consciousness, claiming that you cannot even start solving a problem that you cannot clearly define. I would say that both these arguments fall short. Engineering problems are very often ill-defined, and yet solutions are found. As for the “impossibility” of finding adequate memory to encode someone’s mind, we will simply have to wait and see whether it truly turns out that way. The negativists, in this case, may also include die-hard dualists.


3. The agnostic crypto-techno-optimist (Don’t know) responds “skeptically” and is a subdivision of the techno-optimist. She is a materialist at heart, but not so gung-ho as the true variety of the techno-optimist.


4. The platonic negativist (Cannot know, even if it happened). Now here is a very interesting position, the true opposite of the techno-optimist. The platonic negativist refuses to buy Tipler’s claim on fundamental grounds. She claims that it is not possible to tell whether such a thing has truly occurred. In other words, the engineer of 2050 may be able to demonstrate downloading someone’s consciousness, but she, the platonic negativist, will stand up and question the truth of the demonstration. How will she do such a thing? I will have to expand on this premise elsewhere – it is, in fact, the neo-dualist attack on scientific positivism. Suffice it to say that she will base her antithesis on the following: any test to confirm that someone’s consciousness has been downloaded will always be inadequate, an argument based on Gödel’s incompleteness theorem.
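To make the techno-optimist’s storage worry a little more concrete, here is a minimal back-of-the-envelope sketch in Python. The neuron and synapse counts are rough, commonly cited orders of magnitude (assumptions, not measurements), and treating each interaction as a single on/off bit is a deliberate oversimplification; the point is only to compare the exponent of the possible joint states of the interaction web against the exponent of the atom count of the observable universe.

```python
from math import log10

# Rough orders of magnitude (assumptions, not measurements):
NEURONS = 1e11               # ~10^11 neurons in a human brain
SYNAPSES_PER_NEURON = 1e4    # ~10^4 synaptic connections per neuron
ATOMS_IN_UNIVERSE = 1e80     # ~10^80 atoms in the observable universe

# Total pairwise interactions (synapses): ~10^15
synapses = NEURONS * SYNAPSES_PER_NEURON

# Even if each interaction were a single on/off bit, the number of possible
# joint states of the whole web is 2^synapses. Compare exponents in base 10.
log10_states = synapses * log10(2)   # log10(2^synapses) ≈ 3 × 10^14

print(f"Synaptic interactions:               ~10^{log10(synapses):.0f}")
print(f"log10(possible joint states):        ~{log10_states:.2e}")
print(f"log10(atoms in observable universe): ~{log10(ATOMS_IN_UNIVERSE):.0f}")
```

Even under these crude assumptions, the exponent of the state count (on the order of 3 × 10^14) dwarfs the ~80 of the atom count; that is the gap the techno-optimist is betting future engineering will somehow close.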


Of course, the very essence of the aforementioned debate, i.e. whether or not consciousness can be downloaded, lies at the core of the New Narrative with respect to the revisionist definition of humanness. But that is a matter for further discussion.