Musings about free will
Sometimes when I don't have anything in particular on my mind, like when I'm on the train and I've already checked the Facebook newsfeed, or when I'm washing the dishes, I like thinking about various philosophical issues. I haven't studied much philosophy, mostly limiting myself to Wikipedia, so usually I don't get very far in my ponderings. But about free will I have thought more than usual, and I have also read several interesting blog posts by other people, which have all inspired me to finally put down my thoughts in writing. I am curious to see what my readers think about it too.
Can't fall asleep?
First, let's start with a classic scenario that is familiar to almost everyone, except my dad: you want to fall asleep, but you just can't, you keep tossing and turning (my dad is one of those lucky people who fall asleep within 30 seconds of putting their head on the pillow). In this case, we clearly will something, but we cannot achieve it. And it's not something that's impossible because of the laws of physics, like wanting to travel faster than the speed of light. There might be perfectly valid explanations for why you can't fall asleep, such as drinking stimulants, being excited about something, or worrying about something. These all produce certain chemicals in our bodies which prevent the brain from "shutting down", thus interfering with our desire to fall asleep.
In my opinion, this suggests that there is a disconnect between what we want and what we do, and it manifests itself on many different occasions. Other examples include being on a diet, yet still eating triple chocolate chip cookies. Repeatedly.
Radio robot
Now let's try a thought experiment. Imagine that it's around 1894, before Marconi invented the radio. Imagine that somebody else, though, not only had radios, but also created a robot that could be remotely controlled via radio. People find the robot, interact with it, study it. They can see the electricity flowing inside the wires, study the various chips, and so on. The robot performs complicated actions: it seeks out energy sources to charge itself, it investigates the world, it communicates, and so on. To the people around it, the robot would appear to have free will. They can understand all the mechanical and electrical components (if this is too hard to believe, imagine that the robot is controlled via tachyons), but at the radio chip they just occasionally see random electric signals appearing, without any cause, but with a real effect. In reality, the person on the radio controls the robot, but to the people around it, that's not obvious.
So, if something (like the soul) were pulling our strings, it wouldn't be very obvious to an external viewer (other humans) either, and it could manifest itself as just random neural activity showing up in the brain.
Wall-E
Let's do another thought experiment, by imagining another robot. You can picture Asimo, or Wall-E, or whichever robot you prefer. He has ways to move around (legs, wheels, wings, it doesn't matter what exactly) and ways to interact with the objects around him (hand-like thingies, again, it doesn't matter exactly how). His programming is sufficiently complex that he can model the world around him and do planning (so he's not just limited to "Now my battery is low, I must find an outlet", but he can do longer-term things like "I can still go and pick up this and that before I need to get back to an outlet"). But the programming doesn't specify any goals, other than "Survive". As our robot goes around the world, interacting with other things, he learns things like "if I fall off the roof, my arm bends, so my chances of survival are smaller, while if I go to the workshop I can get it fixed, so my chances of survival are higher". If he were given a goal (that is physically feasible, such as designing and building a car), he could do the necessary planning and then execute those actions to realize that goal. But he doesn't have a very definite goal. Physically, there is nothing stopping him from doing whatever action he can do.
But, at least given the current models we have for building AI, the robot will have two modes of operation: exploitation and exploration.
- In exploitation, he will choose to do the action that gives the best expected results. This means that if he has the choice of recharging himself from a 9V battery or from a 220V AC outlet, he will choose the latter (after he has learned the effects of using both).
- In exploration, the robot will choose either a random action, or an action about which he doesn't know well enough what will happen, to learn more things about the world and see if they help with his goals or not. For example, he could try to see what happens when he sets something on fire. A lot of energy is generated, which is good, but unless it's done in a careful way, things burn down, so that's bad. So, after setting some things on fire, the robot would adjust some values in its neural network so that it would know what to expect next time it sees (or thinks about) fire.
Given this, when the robot is in exploitation mode, he will choose the action that looks best to him. It would be illogical for him to do otherwise. It would be contrary to his programming, or his "personality" if you will, to act otherwise. He would have no reason at all to choose another option.
Of course, in exploration mode, he would try new things, and this might occasionally lead to funny things, such as trying to charge himself with a potato. But that's how we all learn.
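To make the exploration/exploitation idea a bit more concrete, here is a tiny sketch of how it is often implemented in practice, as so-called epsilon-greedy action selection. The action names and numbers below are completely made up for illustration; this is not a real robot controller.

```python
import random

# Hypothetical values the robot has learned so far: the expected
# "survival benefit" of each action (the numbers are invented).
action_values = {
    "charge_from_220V_outlet": 0.9,
    "charge_from_9V_battery": 0.2,
    "set_something_on_fire": 0.0,  # unknown territory at first
}

EPSILON = 0.1  # fraction of the time spent exploring

def choose_action():
    if random.random() < EPSILON:
        # Exploration: try something at random to learn more about the world.
        return random.choice(list(action_values))
    # Exploitation: pick the action with the best expected outcome.
    return max(action_values, key=action_values.get)

def update(action, reward, learning_rate=0.1):
    # Nudge the stored value towards what actually happened,
    # so the robot knows what to expect next time.
    action_values[action] += learning_rate * (reward - action_values[action])
```

Most of the time the robot picks whatever currently looks best; once in a while it tries something new, and the update rule is how "setting things on fire" gets its value adjusted after the fact.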
But aren't we similar? I know I am quite predictable. The easiest example is my breakfast, which when I am working is one of the following: a hot sandwich, cereal with milk or yoghurt, or oatmeal with whatever nuts and dried fruits there are. And I usually know what I'm going to have before I get to the office. And if you were to take a blood sample and measure the levels of various hormones, vitamins and other nutrients every morning, I'm pretty sure you could build a very good predictor of what my breakfast will be.[1]
And I believe that even though we are "deterministic", we are still responsible for our actions (and this is what free will is about: who's to blame for what we do), because they represent who we are. I wouldn't be Roland if I could just sit at a table with a cake in front of me and not eat it. That's not me. I love sweet stuff. As much as I hate it, my internal values are "Sweets = good stuff, must eat them", and in 99% of the cases that's what my brain will tell me is in my best interest to do.
Conclusion
Each of the two thought experiments explains some parts of our behaviour and experience, but not completely, so let's combine them, in light of the initial scenario.
What I think is that there are actually two layers of decision making. One of them is highly deterministic. It's what happens at the level of the brain. You are in a given state at a given point in time, with certain amounts of chemicals in your body, and if all of these could be measured with sufficient accuracy, one could predict your next action with 99.999% precision. And if you were to be cloned, atom by atom, and built up again, completely identically, you would react in the same way, over and over again. This is what's responsible for things like "I am going to eat the cookie that's in front of me, even though I'm on a diet". And this is also responsible for sin. How many times do we get angry, impatient, or just succumb to temptation, even though we have resolved so many times not to?
But there is another layer, on top of it, which can subtly influence the first one. It doesn't have a direct influence on our actions, but it does influence our goals. This is the part that decides "I should go to the gym more often, so that I become stronger". And the way it interacts with the first layer is that after the first layer does something, it gives it a "reward": did this actually get me closer to my goal or not? If the first layer decided to instead play Rise of the Tomb Raider, the signal will be negative, possibly generating frustration or guilt. If instead the first layer actually moved your sorry butt to the gym, the signal will be positive and it will update the values of the "neural network" so that it makes it more likely to happen again.
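If it helps, here is a toy caricature of those two layers, under the (big) assumption that the first layer can be reduced to a table of preferences and the second layer to a reward function. The names and numbers are mine, purely for illustration; I'm not claiming this is how the brain, let alone the soul, actually works.

```python
# First layer: deterministic habits, caricatured as a preference table.
preferences = {"go_to_gym": 0.3, "play_tomb_raider": 0.7}

# Second layer: it never picks actions itself, it only grades them
# afterwards against the goal it has set ("become stronger").
def second_layer_reward(action):
    return 1.0 if action == "go_to_gym" else -1.0

def first_layer_step(learning_rate=0.05):
    # The first layer deterministically does whatever currently looks best...
    action = max(preferences, key=preferences.get)
    # ...then the second layer hands back a reward, nudging the preference
    # so the better choice becomes a little more likely next time.
    preferences[action] += learning_rate * second_layer_reward(action)
    return action
```

Run this step enough times and the gym eventually wins, which is roughly the point: the second layer never forces anything directly, it only nudges.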
This second layer is not a material one, because it can't be measured directly, so it must be something outside the physical realm, even though it has an indirect influence on it. It's what gives us our free will, even though our bodies are deterministic. This layer is what I believe would be the soul of man.
For some people, who have stronger wills, the coupling between the two layers is stronger. They find it easier to enforce what the second layer says right away, because the updates it does via the "reward" signal are stronger. Other people have a weaker link, so even though they want to learn German, they keep postponing signing up for classes, because... Oooh, something else is shiny.
But in no human is this link perfect. Even the best of people have weak spots, deficiencies, vices, that they begrudgingly live with. This is caused by sin, by the fall of Adam. This part of him deteriorated when he ate from the fruit in the Garden of Eden. But it's also a part that is restored when we accept the Lord Jesus as our Saviour. God does something in us, so that "thanks be to God that, though you used to be slaves to sin, you have come to obey from your heart the pattern of teaching that has now claimed your allegiance. You have been set free from sin and have become slaves to righteousness." (Romans 6:17-18)
Actually, I believe there are three layers, because humans were created by God with a body, soul and spirit, but I think this is enough for one post. I am looking forward to having some comments and hearing what others think about this.
[1] But just because our actions are deterministic, it doesn't mean they are fully predictable. This is because of the Halting Problem. It basically says that there is no algorithm that can tell you, for every algorithm and every possible input to that algorithm, whether it will halt or not. All these algorithms are perfectly deterministic, yet in the end they are not perfectly predictable. There are cases where we can figure it out, but not for every case. So even if our behaviour is deterministic, it's not completely predictable. While my breakfast is predictable, I have surprised people by starting to learn the piano, for example. ↩︎
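For the curious, the classic argument goes roughly like the sketch below: assume a perfect halts() oracle exists (it's only stubbed out here so the snippet is self-contained) and build a program that does the opposite of whatever the oracle predicts about it.

```python
def halts(program, argument):
    """Hypothetical perfect halting oracle -- the thing that provably
    cannot exist; stubbed out only so this sketch is self-contained."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    # Ask the oracle about the program applied to itself,
    # then do the opposite of whatever it predicts.
    if halts(program, program):
        while True:  # the oracle said "it halts", so loop forever
            pass
    return           # the oracle said "it loops forever", so halt immediately

# Whatever answer halts(paradox, paradox) gives is wrong, so a universal,
# always-correct halts() cannot exist: deterministic does not mean predictable.
```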