Wednesday 10 February 2021

Are We Conscious? Part 2

I used the word "supercomputer" as an analogy, to illustrate that an object or device that had not arisen naturally as part of an evolutionary process could in theory be indistinguishable from a "conscious" being.

Human consciousness is almost certainly a product of gradual evolution. We can guess from fossils of earlier hominids that they did not have the same level of consciousness as we do now, pretty much infer that our ancestors of 65 million years ago had even less, and that our bacterial ancestors of 3 billion years ago had none at all. Each consciousness was produced by a process that preceded it (be that the manufacture of my hypothetical computer or the evolution of life on Earth), and somewhere along the line the non-conscious must have given rise to the conscious. Either we postulate that this happened suddenly, and one of our ancestors achieved sentience while his/her parents didn't (which is of course absurd), or we say that it happened gradually, and that our level of conscious self-awareness is simply one step on a sliding scale running from the higher mammals down to flatworms and beyond.

My point was also that this supercomputer would not necessarily be programmed specifically to mimic consciousness, but that if it were complex enough, its behaviour would be indistinguishable from that of a conscious being. An analogy would be the many accepted definitions of what constitutes "life". If an object completely fulfils those definitions, then we have no choice but to say that it is alive. Even if the person who built it says "No, this is a machine, I created it", it must be alive if it fits the criteria for being so. If we don't like that, then we have only two logical choices - we either redefine what we mean by "life" or we reluctantly accept that this thing is alive. We can't just say that it isn't alive because we don't like the idea of it.

Similarly with consciousness: if we could create or encounter an object that fulfilled our definitions of being conscious, then we would have to either accept that it was conscious or redefine what "conscious" means. We couldn't just say that it wasn't conscious for no better reason than that we didn't like the idea.

A supercomputer that passes the Turing Test, while admittedly far in the future, may well meet these criteria. If it did, then there would be only two possibilities: either we had deliberately programmed consciousness into it, or we hadn't, and consciousness emerged from its complexity.

If we had deliberately programmed consciousness into it, the question would be moot, as we would already know what consciousness was and that we could create it.

But if we accept that its consciousness arose as an emergent property of its complexity, then we could safely conclude that our own consciousness arose in the same way.
