Wednesday 15 March 2017

Man does not think...he only thinks he thinks

Okay, so we want to build robots, right? Proper autonomous-thinking robots. AI with arms and legs. The mechanics and electronics we’ll get by trial, error, and design; all we’ve got to wait for is the technology to advance, which it will do with time (technology always does). But what about the brain? No, let’s qualify that, what about the Mind? The brain after all is just more tech. But a thinking mind, with Artificial Intelligence?

Well, there are advances there too, but mainly in complexity, not in the underlying logic of the artificial intelligence. That’s something we’re still having difficulties with, because the very thing we want it to be like is the very thing we don’t understand - ourselves. But perhaps that’s part of the problem - we want our robots to think like we do. At least that’s what we say. In actual fact what we mean (although we probably don’t realise it) is for them to act like we do, not think like we do.

Hang on, isn’t that more or less the same thing? After all, we act on thoughts, so the way we think is the way we act? Well probably not. It’s more likely that how we think about something is based on how we acted first, and a lot of how we act is instinctive.

Let me clarify that. Think about an instinctive act you’ve performed, and how often you perform it. Somebody waves their hand in your face and you jerk back, ready for action. If we ask you afterwards why you did that, you will probably say something along the lines of “If I hadn’t moved, he would have hit my face”. You’re not explicitly saying that you decided to flinch, but it’s kind of implied in the language, and let’s face it, don’t we feel that this was what happened? Trip on the kerb and put your hands out to catch yourself; grab a ball out of the air; all of these feel like conscious decisions, but they’re not. They’re the result of millions of years of behavioural evolution. The fact that these instinctive reactions are still here shows they’re successful, and they’re successful because they’re quick, much quicker than human thought. There’s no way we could have controlled those actions at anywhere near the same speed. That’s why they have remained as instinctive processes - humans think too slowly.

The point of that last paragraph is that we are mostly instinctive, with conscious thought merely a kind of afterthought (see what I did there?). If we want our AI to think like us, we’ve got to make it act like us first. So let’s make a start.


The previously-mentioned avoidance instincts should be simple to design, as they are really nothing more than a series of logical on/off decisions: does this fit the criteria? If so, move on to the next check, and so on, until a final yes it does, so do this. You could model any of these in a flow-chart.
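Just to make that concrete, here’s a rough sketch in Python of an avoidance instinct as a chain of on/off checks. The sensor readings and thresholds are numbers I’ve invented for illustration, nothing more:

```python
# A toy sketch of an avoidance "instinct" as a chain of on/off checks.
# The sensor names and threshold values are invented for illustration.

def flinch_reflex(object_distance_cm, object_speed_cm_per_s):
    """Decide, without any 'thought', whether to jerk back."""
    if object_distance_cm > 50:          # nothing close enough to matter
        return False
    if object_speed_cm_per_s < 20:       # moving too slowly to be a threat
        return False
    return True                          # the final "yes it does, so do this"

# A hand waved quickly near the face passes every check and trips the reflex.
if flinch_reflex(object_distance_cm=15, object_speed_cm_per_s=120):
    print("jerk back, ready for action")
```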

But if most of our instinctive actions can be modelled and replicated, what about other functions of the human mind, such as deciding to eat because we’re hungry? This is where we have to look not at what we are thinking when we behave a certain way, but at what mechanism could have evolved to make us behave that way in the first place.


Well, the smell of food certainly stimulates the feeling of hunger, which causes us to seek food and eat it; the act of eating then gives us the feeling of enjoyment, followed by the feeling of contentment from being full. That’s simple enough, and in fact it’s pretty much acknowledged that this is a Reward System and that there’s actually no thought involved. The stimulus prompts the human nervous system to offer or promise rewards. Again, our conscious thoughts are not the decisions that cause this action, but are after-the-fact results of a decision already taken instinctively.

So could we model this too? Of course we could, because again it’s simple on/off decision-making – if feelings of hunger, then… But what about the actual reward? How do we model the feeling we get when we achieve a result, gain a prize, avoid pain? How do we reward a robot, or even get the robot to acknowledge and seek reward? Again, simply by the same method. It’s easier to explain this with an example.

We have built a robot. Let's call him "Robert". Robert has a power cell, and when it gets below a certain level Robert must seek a recharge. We could just put that in as a simple instruction, a yes/no test for a certain criterion (power below a specific level), and robots today may already be designed this way. However, that’s not going to make Robert think and act like us. After all, we don’t have a simple meter for our stomach contents, with an instruction to eat when it gets below a certain level. Our system is a lot more Heath-Robinson. Yes, there is a kind of level meter in action, but it’s the body’s chemical signals that supply the trigger. Sugar levels in the bloodstream, for example, cause cells to react when a certain level has been reached, and we don’t have conscious access to this decision-making. However, these systems also produce secondary effects (a dry mouth, a grumbling stomach) that we are consciously aware of, and which we interpret as telling us that we’re hungry...and then we’re into a standard reward system again, where we eat to stop the unpleasant feelings and gain the pleasant ones.
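For contrast, the “simple instruction” version - the one that won’t make Robert think and act like us - might look something like this in Python. The battery reading and the 20% threshold are made up for the sake of the example:

```python
# The bare yes/no version: recharge the moment the meter drops below a level.
# The battery reading and the 20% threshold are made up for this example.

LOW_POWER_THRESHOLD = 0.20

def seek_recharge():
    print("Heading for the charging station")

def check_power(battery_level):
    """A single criterion, a single instruction - no reward involved."""
    if battery_level < LOW_POWER_THRESHOLD:
        seek_recharge()

check_power(battery_level=0.15)   # below the threshold, so Robert recharges
```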

Now we get to the crux. No matter how much detail we replicate in our AI, how do we get to that last “I’ll do this because it makes me feel good”? Well, in Robert’s case, what if we keep the level meter, but instead of the trigger simply instructing Robert to seek a recharge, we pop an extra step in? We trigger an increment in some figure stored in a memory register. We then instruct Robert’s system to “do what the trigger wants us to do” when it detects either a rise in the figure in memory or (for a slightly more complex process) when the figure reaches a particular number. The more Robert discharges, the higher that figure gets, and this will cause Robert to follow the instruction in the trigger circuit - Recharge.
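A rough Python sketch of that extra step might look like the following. The class name, the numbers and the “act when the figure reaches a set value” rule are all assumptions for illustration; the only point is that the trigger bumps a register rather than acting directly:

```python
# Sketch of the Reward Register: triggers don't act directly, they increment
# a shared figure, and Robert acts when that figure has risen far enough.
# All names and numbers here are made up for illustration.

class RewardRegister:
    def __init__(self, act_at=3):
        self.value = 0                 # the figure stored in memory
        self.act_at = act_at           # act once the figure reaches this number
        self.pending_action = None

    def bump(self, action):
        """Called by a trigger circuit instead of performing the action itself."""
        self.value += 1
        self.pending_action = action

    def tick(self):
        """Robert's main loop: 'do what the trigger wants' once the figure is high enough."""
        if self.pending_action and self.value >= self.act_at:
            self.pending_action()
            self.value = 0             # acting clears the urge
            self.pending_action = None

def recharge():
    print("Recharging")

register = RewardRegister(act_at=3)

# The lower the power cell gets, the more often the trigger fires, so the
# figure climbs and Robert eventually follows the trigger's instruction.
for battery_level in (0.35, 0.25, 0.15):
    if battery_level < 0.40:
        register.bump(recharge)
    register.tick()
```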

The beauty of monitoring the rise of a figure (let’s call it the Reward Register) is that we can re-use it for any other scenario. Cold? That’s bad for the electrics, so increase your internal heating and warm yourself up. An obstacle in front of you? Step over it (if small), move it without damaging it (if bigger), walk round it (if very big).
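Continuing the sketch above, re-using the same register for those other scenarios could look something like this (again, the temperature and obstacle figures are invented):

```python
# Re-using the same RewardRegister for other stimuli (made-up sensor values).
# Each trigger just bumps the register with its own corrective action.

def warm_up():
    print("Increasing internal heating")

def handle_obstacle(size_cm):
    if size_cm < 10:
        print("Stepping over it")
    elif size_cm < 50:
        print("Moving it aside without damaging it")
    else:
        print("Walking round it")

register = RewardRegister(act_at=1)    # act on the first bump for these examples

temperature_c = 2
if temperature_c < 5:                  # cold is bad for the electrics
    register.bump(warm_up)
register.tick()

obstacle_size_cm = 30
if obstacle_size_cm > 0:               # something in the way
    register.bump(lambda: handle_obstacle(obstacle_size_cm))
register.tick()
```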

Using the Reward Register (along with some very complex logic paths) for every decision Robert has to make allows for redundancy, design simplicity, and adaptability. We can make Robert a decision-making machine on a par with a human being, and who’s to say that, with this structured system in place, Robert won’t evolve Consciousness? After all, current neuroscientific thinking is that Consciousness may well be an emergent property of the brain’s processes anyway.

So in fact, having modelled an Artificial Intelligence in this way, who’s to say that the human mind doesn’t already work like this?
