Wednesday 10 February 2021

Are We Conscious? Part 2

I used the word "supercomputer" as an analogy, to illustrate that an object or device that had not arisen naturally as part of an evolutionary process could theoretically be indistinguishable from a "conscious" being.

Human consciousness is almost certainly a product of gradual evolution. We can guess from the fossils of earlier hominids that they did not have the same level of consciousness as we have now, pretty much infer that our ancestors of 65 million years ago had even less, and that our bacterial ancestors of 3 billion years ago had no consciousness at all. Each consciousness was produced by a process that preceded it (be that the manufacture of my hypothetical computer, or the evolution of life on Earth), and somewhere along the line the non-conscious must have given rise to the conscious. Either we postulate that this happened suddenly, and one of our ancestors achieved sentience while his/her parents didn't (which is of course absurd), or we say that it happened gradually, and that our level of conscious self-awareness is simply one step on a sliding scale running from the higher mammals down to flatworms and beyond.

My point was also that this supercomputer would not necessarily be programmed specifically to mimic consciousness, but that if it were complex enough, it would be indistinguishable from a conscious being. An analogy would be the many accepted definitions of what constitutes "Life". If an object completely fulfils those definitions, then we have no choice but to say that it is alive. Even if the person who built it says "No, this is a machine, I created it", it must be alive if it fits the criteria for being so. If we don't like that, then we have only two logical choices - we either re-define what we mean by "life" or we reluctantly accept that this thing is alive. We can't just say that it isn't alive because we don't like the idea of it.

Similarly with consciousness: if we could create or encounter an object that fulfilled our definitions of being conscious, then we would have to either accept that it was conscious, or redefine what "conscious" means. We couldn't just say that it wasn't conscious for no other reason than that we didn't agree.

A supercomputer that passes the Turing Test, while admittedly far in the future, may well meet these criteria. If it did, then there would be only two possibilities: either we deliberately programmed consciousness into it, or we didn't, and consciousness emerged from its complexity.

If we had deliberately programmed consciousness into it, the question would be moot, as we would already know what consciousness was and how to create it.

But if we accept that its consciousness arose as an emergent property of its complexity, then we could safely conclude that our own consciousness arose in the same way.

Are We Conscious? Part 1

One of the problems with understanding consciousness is that we need to go back to first principles and actually define what we mean by “consciousness”. Once we lay out a complete description of the properties that "conscious thought" exhibits, we can attempt to find a possible theory of what might be causing it. Well, in order to describe consciousness, we first have to study it, and there appear to be two ways of doing this - although at first glance both are problematic in their own way.

The first is “from the outside”, and essentially means studying another creature's outward demonstration of consciousness. This is actually pretty easy to do - the principle of the Turing Test shows us that a sufficiently advanced non-conscious device could appear to us to be indistinguishable from a conscious one. The problem here, of course, is that passing the Turing Test does not mean that you have consciousness, only that what you have is indistinguishable from consciousness, at least by the terms of the Test.
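
As an aside, the logic of the Test can be made concrete with a little code. What follows is a minimal sketch in Python of the "imitation game" behind it - the canned answers, the helper names and the random judge are all hypothetical simplifications of my own, not Turing's actual protocol. The point it illustrates is simply this: if a machine's answers are indistinguishable from a human's, no judge can do better than chance at telling them apart.

    import random

    class ScriptedHuman:
        """Stand-in for the human participant (canned answers, for the sketch)."""
        def reply(self, question: str) -> str:
            return f"Hmm, '{question}'... I'd have to say it depends."

    class Machine:
        """Stand-in for the device under test - here it answers identically."""
        def reply(self, question: str) -> str:
            return f"Hmm, '{question}'... I'd have to say it depends."

    def interrogate(judge, questions, trials=1000):
        """Run repeated games; return how often the judge spots the machine."""
        correct = 0
        for _ in range(trials):
            pair = [ScriptedHuman(), Machine()]
            random.shuffle(pair)  # the judge doesn't know which terminal is which
            transcripts = [[(q, r.reply(q)) for q in questions] for r in pair]
            if isinstance(pair[judge(transcripts)], Machine):
                correct += 1
        return correct / trials

    # With identical transcripts the judge has nothing to go on and can only
    # guess: a success rate of around 50% is exactly what "indistinguishable"
    # means in practice.
    rate = interrogate(lambda transcripts: random.randrange(2),
                       ["Do you dream?", "What does red look like?"])
    print(f"Judge identified the machine {rate:.0%} of the time (chance = 50%)")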

However, that may not be so problematic after all, since from the outside the only way we can define consciousness is by observing the demonstration of it, and if it is demonstrated, then by our own definition we can only conclude that it exists.

The second appears a bit more difficult - how do I study my own consciousness and determine whether it is real or not? Surely the fact that I would be using my own consciousness would automatically prove that I was conscious. End of. Except...the same could be said of our hypothetical "Turing-Test-beating" supercomputer. It could be programmed to think that it was a conscious living creature rather than a construct, so in a conversation with it, not only would we be convinced that it was conscious, but it would also demonstrate to us that it thought it was conscious. Of course, this simply brings us back to Square One - that we are viewing another's consciousness from the outside. We cannot know how the computer truly "thinks" about itself without being the computer.

However, if we had not programmed our hypothetical computer to think it was self-aware, and yet could demonstrate that it displayed all the signs of conscious self-awareness that we ourselves display, then we would have to conclude that its self-awareness had arisen as an emergent property of its very design.

If the computer's complexity was of the same order as our own brain's, it would be difficult not to conclude that both the computer's consciousness and our own had arisen through the same process. If we are happy with our own sentience being something that will naturally emerge from any sufficiently complex system (and both biology and physics continually demonstrate that this sort of thing happens all the time in nature), then we have solved the mystery. If we're not, then we may have to conclude that the supercomputer's apparent sentience came from the same mysterious place as ours. Either way, we would have to conclude that if we could create a sentient creature through manufacture, then nature could create one through evolution.
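
To show what I mean by complex behaviour emerging from a simple system - an illustration of emergence in general, I should stress, not of consciousness itself - here is a short sketch in Python of Conway's Game of Life. The grid and starting pattern are arbitrary choices of mine. The rules do nothing but count neighbours; they say nothing whatsoever about "gliders", and yet a glider dutifully crawls across the grid.

    from collections import Counter

    def step(live):
        """One generation of Conway's Game of Life on an unbounded grid.
        'live' is a set of (x, y) coordinates of the live cells."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is live next generation if it has exactly three live
        # neighbours, or has two and is already live - that is the entire
        # rule set.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # The classic five-cell glider. Nothing in step() mentions movement,
    # yet after four generations the same shape reappears one cell away
    # diagonally - structure the rules know nothing about.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(4):
        cells = step(cells)
    print(cells == {(x + 1, y + 1) for (x, y) in glider})  # prints True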