Musings about free will

Sometimes when I don't have anything in particular on my mind, like when I'm on the train and I've already checked my Facebook newsfeed, or when I'm washing the dishes, I like thinking about various philosophical issues. I haven't studied much philosophy, having mostly limited myself to Wikipedia, so I usually don't get too far in my ponderings. But I have thought about free will more than usual, and I've read several interesting blog posts by other people, which have all inspired me to finally put down my thoughts in writing. I am curious to see what my readers think about it too.

Can't fall asleep?

First, let's start with a classic scenario that is familiar to almost anyone, except my dad: you want to fall asleep, but you just can't, so you keep tossing and turning (my dad is one of those lucky people who fall asleep within 30 seconds of putting their head on the pillow). In this case, we clearly will something, but we cannot achieve it. And it's not something made impossible by the laws of physics, like wanting to travel faster than the speed of light. There might be perfectly valid explanations for why you can't fall asleep, such as having drunk stimulants, being excited about something, or worrying about something. These all produce certain chemicals in our bodies which prevent the brain from "shutting down", thus interfering with our desire to fall asleep.

In my opinion, this suggests that there is a disconnect between what we want and what we do, and it manifests itself on many different occasions. Other examples include being on a diet, yet still eating triple chocolate chip cookies. Repeatedly.

Radio robot

Now let's try a thought experiment. Imagine it's around 1894, before Marconi invented the radio. Imagine that somebody else, though, not only had radios, but had also built a robot that could be remotely controlled via radio. People find the robot, interact with it, study it. They can see the electricity flowing inside its wires, study the various chips, and so on. The robot performs complicated actions: it seeks out energy sources to charge itself, it investigates the world, it communicates, and so on. To the people around it, the robot would appear to have free will. They can understand all its mechanical and electrical components (if this is too hard to believe, imagine that the robot is controlled via tachyons), but at the radio chip they just occasionally see random electric signals appearing, without any cause, yet with a real effect. In reality, the person on the radio controls the robot, but to the people around it, that's not obvious.

So, if something (like the soul) were pulling our strings, it wouldn't be obvious to an external viewer (other humans) either, and it could manifest itself as just random neural activity showing up in the brain.


Let's do another thought experiment, by imagining another robot. You can picture Asimo, or Wall-E, or whichever robot you prefer. He has ways to move around (legs, wheels, wings, it doesn't matter what exactly) and ways to interact with the objects around him (hand-like thingies, again, it doesn't matter exactly how). His programming is sufficiently complex that he can model the world around him and do planning (so he's not just limited to "Now my battery is low, I must find an outlet", but he can do longer-term things like "I can still go and pick up this and that before I need to get back to an outlet"). But the programming doesn't specify any goals, other than "Survive". As our robot goes around the world, interacting with other things, he learns things like "if I fall off the roof, my arm bends, so my chances of survival are smaller, while if I go to the workshop I can get it fixed, so my chances of survival will be higher". If he were given a goal (one that is physically feasible, such as designing and building a car), he could do the necessary planning and then execute the actions to realize that goal. But he doesn't have a very definite goal. Physically, there is nothing stopping him from doing any action he is capable of.

But, at least given the current models we have for building AI, the robot will have two modes of operation: exploitation and exploration.

Given this, when the robot is in exploitation mode, he will choose the action that looks best to him. It would be illogical for him to do otherwise. It would be contrary to his programming, or his "personality" if you will, to act otherwise. He would have no reason at all to choose another option.

Of course, in exploration mode, he would try new things, and this might occasionally lead to funny situations, such as trying to charge himself with a potato. But that's how we all learn.
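These two modes can be sketched as what reinforcement learning calls an epsilon-greedy policy. This is a minimal illustration, not how any particular robot works; the function name and numbers are my own:

```python
import random

def choose_action(values, epsilon=0.1):
    """Pick an action index, given estimated values for each action.

    With probability epsilon the robot explores (tries a random action);
    otherwise it exploits (takes the action with the highest estimate).
    """
    if random.random() < epsilon:
        return random.randrange(len(values))  # exploration: try something new
    return max(range(len(values)), key=lambda a: values[a])  # exploitation
```

With epsilon set to 0 the robot is purely exploitative, and therefore completely predictable; a small epsilon is what lets it occasionally discover the potato.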

But aren't we similar? I know I am quite predictable. The easiest example is my breakfast, which, when I am working, is one of the following: a hot sandwich, cereal with milk or yoghurt, or oatmeal with whatever nuts and dried fruits there are. And I usually know what I'm going to have before I get to the office. And if you were to take a blood sample every morning and measure the levels of various hormones, vitamins and other nutrients, I'm pretty sure you could build a very good predictor of what my breakfast will be.1

And I believe that even though we are "deterministic", we are still responsible for our actions (and this is what free will is about: who's to blame for what we do), because they represent who we are. I wouldn't be Roland if I could just sit at a table with a cake in front of me and not eat it. That's not me. I love sweet stuff. As much as I hate it, my internal values say "Sweets = good stuff, must eat them", and in 99% of cases that's what my brain will tell me is in my best interest to do.


Both thought experiments explain some parts of our behaviours and experiences, but not completely, so let's combine them, in light of the initial scenario.

What I think is that there are actually two layers of decision making. One of them is highly deterministic. It's what happens at the level of the brain. You are in a given state at a given point, with certain amounts of chemicals in your body, and if all these could be measured with sufficient accuracy, one could predict your next action with 99.999% precision. And if you were to be cloned, atom by atom, and built up again, completely identically, you would react in the same way, over and over again. This is what's responsible for things like "I am going to eat the cookie that's in front of me, even though I'm on a diet". And this is also responsible for sin. How many times do we get angry, get impatient, or just succumb to temptation, even though we have resolved so many times not to?

But there is another layer, on top of it, which can subtly influence the first one. It doesn't have a direct influence on our actions, but it does influence our goals. This is the part that decides "I should go to the gym more often, so that I become stronger". And the way it interacts with the first layer is that after the first layer does something, it gives it a "reward": did this actually get me closer to my goal or not? If the first layer decided to instead play Rise of the Tomb Raider, the signal will be negative, possibly generating frustration or guilt. If instead the first layer actually moved its sorry butt to the gym, the signal will be positive and it will update the values of the "neural network", making it more likely to happen again.
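This reward mechanism is essentially the value update used in reinforcement learning. A minimal sketch, where the action names and learning rate are purely illustrative:

```python
# Hypothetical action-value table for the "first layer".
values = {"go_to_gym": 0.0, "play_tomb_raider": 0.0}

def reinforce(values, action, reward, lr=0.2):
    """Second-layer feedback: nudge the chosen action's value toward the
    reward signal, making the action more (or less) likely in the future."""
    values[action] += lr * (reward - values[action])

reinforce(values, "go_to_gym", 1.0)          # got closer to the goal
reinforce(values, "play_tomb_raider", -1.0)  # guilt: a negative signal
```

After these two updates, going to the gym is valued more highly than gaming, so next time the deterministic first layer is a little more likely to pick it.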

This second layer is not a materialistic one, because it can't be measured directly, so it must be something outside the physical realm, even though it has an indirect influence on it. It's what gives us our free will, even though our bodies are deterministic. This layer is what I believe to be the soul of man.

For some people, who have stronger wills, the coupling between the two layers is stronger. They find it easier to enforce what the second layer says right away, because the updates it does via the "reward" signal are stronger. Other people have a weaker link, so even though they want to learn German, they keep postponing signing up for classes, because... Oooh, something else is shiny.

But in no human is this link perfect. Even the best of people have weak spots, deficiencies, vices that they begrudgingly live with. This is caused by sin, by the fall of Adam. This part of him deteriorated when he ate from the fruit in the Garden of Eden. But it's also a part that is restored when we accept the Lord Jesus as our Saviour. God does something in us, so that "thanks be to God that, though you used to be slaves to sin, you have come to obey from your heart the pattern of teaching that has now claimed your allegiance. You have been set free from sin and have become slaves to righteousness." (Romans 6:17-18)

Actually, I believe there are three layers, because humans were created by God with a body, soul and spirit, but I think this is enough for one post. I am looking forward to some comments and to hearing what others think about this.

  1. But just because our actions are deterministic, it doesn't mean they are fully predictable. This is because of the Halting Problem, which says that there is no algorithm that can tell you, for every algorithm and every possible input to it, whether it will eventually halt or not. All these algorithms are perfectly deterministic, yet in the end they are not perfectly predictable. There are cases where we can figure it out, but not every case. So even if our behaviour is deterministic, it's not completely predictable. While my breakfast is predictable, I have surprised people by starting to learn piano, for example.
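The classic argument for why no such algorithm can exist goes by contradiction, sketched here in Python-flavoured pseudocode (deliberately not runnable, which is the whole point):

```
def halts(program, argument):
    # Suppose, for contradiction, this perfect halting predictor existed.
    ...

def paradox(program):
    if halts(program, program):   # predictor says "it would halt" ...
        loop forever              # ... so do the opposite: never halt
    else:
        halt                      # predictor says "it would loop", so halt

# paradox(paradox) halts if and only if halts() says it doesn't:
# a contradiction, so no such halts() can exist.
```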