Wednesday 10 February 2021

Are We Conscious? Part 1

One of the problems with understanding consciousness is that we need to go back to first principles and actually define what we mean by “consciousness”. Once we lay out a complete description of the properties that “conscious thought” exhibits, we can attempt to find a possible theory of what might be causing it. In order to describe consciousness, though, we first have to study it, and there appear to be two ways of doing this. At first glance, both appear problematic in their own way.

The first is “from the outside”, which essentially means studying the demonstration of consciousness by another creature. This is actually pretty easy to do – the principle of the Turing Test shows us that a sufficiently advanced non-conscious device could appear to us to be indistinguishable from a conscious one. The problem here, of course, is that passing the Turing Test does not mean that you have consciousness, only that what you have is indistinguishable from consciousness, at least by the terms of the Test.

However, that may not be so problematic after all: from the outside, the only way we can define consciousness is by observing the demonstration of it, and if it is demonstrated, then by our own definition we can only conclude that it exists.

The second appears a bit more difficult - how do I study my own consciousness and determine whether it is real or not? Surely the fact that I would be using my own consciousness would automatically prove that I was conscious. End of. Except...the same could be said of our hypothetical “Turing-Test-beating” supercomputer. It could be programmed to think that it was a conscious living creature rather than a construct, so in a conversation with it, not only would we be convinced that it was conscious, but it would also demonstrate to us that it thought it was conscious as well. Of course, this simply brings us back to Square One - that we are viewing another's consciousness from the outside. We cannot know how the computer truly “thinks” about itself without being the computer.

However, if we had not programmed our hypothetical computer to think it was self-aware, and we could demonstrate that our hypothetical computer displayed to us all the signs of being consciously self-aware that we ourselves display, then we would have to conclude that the computer's self-awareness had arisen as an emergent property of its very design.

If the computer's complexity was of the same order as our own brain's, it would be difficult not to conclude that both the computer's consciousness and our own had arisen through the same process. If we are happy with our own sentience being something that will naturally emerge from any sufficiently complex system (and both biology and physics continually demonstrate that this sort of thing happens all the time in nature), then we have solved the mystery. If we're not, then we may have to conclude that the supercomputer's apparent sentience came from the same mysterious place as ours. Either way, we would have to conclude that if we could create a sentient creature through manufacture, then nature could create one through evolution.
