Two of the "four horsemen" of New Atheism, Sam Harris and Daniel Dennett, have made clear their disagreements about the notion of Free Will. After Harris wrote his book entitled "Free Will", Dennett disagreed and reviewed it quite harshly, and Harris published his comments on his blog and also his own response.
The conversation can be found here, along with links to the other parts I mentioned.
The following comments are my own thoughts on the issue taken from a long comment I published on the Atheistically Speaking website for episodes 53 and 54.
This is a fascinating topic. It’s not one that I have previously thought about much at all. In fact I find it especially challenging – I must admit I really struggle to understand all the nuances in both Harris’ and Dennett’s arguments, although you and Ryan did a very good job of unpacking the problem and clarifying the exact areas of disagreement. I’m also not sure who I agree with more; to me they both make strong cases – for determinism and for compatibilism respectively.
It seems Dennett’s compatibilist view is pretty popular. It may also be telling that Harris’ book “Free Will” gets only just over 3 stars on Amazon (UK), though this may be due in part to the “atheist-basher” effect.
I do have a few thoughts, though, mostly dealing with the last few episodes on free will. And this is a long comment, by the way…sorry!
I’m not completely convinced about determinism. If we accept that quantum events occur and can’t be predicted, then surely the concept of causality through a long chain of physical processes (on the smallest scales) is brought into question? Prediction is a probabilistic thing, not a certain one: we tend to get Gaussian distributions of outcomes rather than definite ones. Ryan raised other indeterministic possibilities towards the end of AS54, which are what I’m more interested in.
Anyway, what I really wanted to talk about was that maybe a definition of free will should include the ability of people to make choices that they suspect may be “wrong”. We shouldn’t forget that there can sometimes be rational reasons for making the “wrong” choice – for example second- or third-order intentions, or simply making a sacrifice (if the choice is not too important) in trying to prove that we do have free will. For example, I can imagine a mode of operation in which I second-guess the response I would normally make, and deliberately make a different choice. If this mode can be switched on and off at random, I find it hard to accept that I do not have free will (or at least the appearance of it).
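This “second-guessing mode” can be sketched as a toy simulation (a sketch only – the function names and the 50% toggle probability are my own illustrative assumptions, not anything from the discussion):

```python
import random

def choose(default_choice, alternative, contrarian_mode):
    """Return my usual choice, unless the contrarian mode is switched on."""
    return alternative if contrarian_mode else default_choice

# Toggle the contrarian mode at random on each decision, so an observer
# who knows only my preferences cannot reliably predict the outcome.
decisions = [
    choose("usual", "deliberately different", random.random() < 0.5)
    for _ in range(10)
]
print(decisions)
```

A determinist would of course reply that the random toggle is itself just another cause; the sketch only shows that my behaviour becomes unpredictable from my preferences alone, not that it is uncaused.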
If we have some information about the consequences of one of our choices, we may decide we can in fact live with the consequences of the “bad” choice. Then it may be possible to further our interests by making the “wrong” decision if, for example, we are being observed by another agent. If the observing agent witnessed the “wrong” decision and thought we were making it because we thought it was the right decision, we may be able to feed the agent false information about either our sources of knowledge, or our decision making abilities. This may cause them to misjudge us in the future, at a time that may be beneficial to us.
You talked about how having more information about a problem, in one way, actually results in less choice in making a decision, because the logically correct answer becomes clearer as more variables are revealed. I agree with this – but only if we assume we always make the “right” decision. With no information on which to base a decision, many people resort to entirely random acts like rolling a die or flipping a coin. Here we almost have “too much” choice, and increasing the information we possess about the problem informs the decision and reduces our choice. There is an interesting kind of “anti-parallel” here with what you discussed in the bonus content – as creatures evolve greater degrees of consciousness, they appear to exhibit more free will. So a creature which evolved, and simultaneously increased its information about a problem, would appear to have both more AND less free will about what decision to make. Oh dear…my poor brain just broke.
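The “information reduces choice” idea can be made concrete with a small sketch (the `decide` function and the utility numbers are my own hypothetical illustration):

```python
import random

def decide(options, utilities=None):
    """Pick an option. With no information, every option is equally live,
    so we fall back on a coin flip / die roll; with known utilities, the
    'right' answer is forced and the choice effectively disappears."""
    if not utilities:
        return random.choice(options)                # maximal choice, zero guidance
    return max(options, key=lambda o: utilities[o])  # information pins the answer down

# With full information there is only one "correct" decision:
print(decide(["walk", "drive"], {"walk": 2, "drive": 7}))  # -> drive
```

With an empty `utilities`, any of the options can come out; as soon as the utilities are known, the outcome is fixed.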
I do have a problem with Dennett’s response when he talks about “not being able to do differently” and likens it to replaying a tape. This is not what I mean when I consider having a “do-over” of a recalled moment or memory.
Taking the example of the golfer, it’s not incoherent to suggest that he could be put back into the exact same moment with the added mental experience of knowing that what he had previously done had resulted in a missed putt. Objecting that, say, the wind might have been different strikes me as a little churlish; the abilities the golfer possessed before the do-over started would allow him to account for that.
When I play a level of a computer game that involves finding the correct tactical approach to an enemy encounter, for instance, I may fail on my first attempt, even though I have previous experience with this type of game and even with similar earlier levels of the same game. I can then select “load game” and replay the exact same encounter, and if I perform the same actions, I will suffer the same fate as before.
However, if I try a different approach, using knowledge gleaned from the failed attempt, I may be successful. Interestingly, some games feature “dynamic” environments which introduce random new elements into the play, meaning that performing the exact same actions which failed before may now succeed, due to enemies being in different locations, for example. In either case, I will still learn something about the game, even if it is only what the random variables may be. Notice that free will is not really impinged upon by any of this. I could, for example, choose to deliberately fail the challenge (the penalty being having to reload and play it over) in exchange for discovering some unknown facet of the game.
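The “load game” analogy is essentially seeded randomness, which can be sketched like this (the encounter mechanics, positions 0–9, and seed values are all my own toy assumptions):

```python
import random

def play_encounter(actions, seed):
    """One attempt at the encounter. The same seed plus the same actions
    always yield the same outcome, like reloading a saved game."""
    rng = random.Random(seed)
    enemy_position = rng.randint(0, 9)   # the level's 'randomised' layout
    return "win" if enemy_position in actions else "lose"

# Replaying with identical actions repeats the outcome exactly...
first = play_encounter({3, 4}, seed=42)
replay = play_encounter({3, 4}, seed=42)
assert first == replay

# ...but changing my actions (using knowledge from the failure) can change it,
# and a "dynamic" game would simply draw a fresh seed on each attempt.
print(play_encounter(set(range(10)), seed=42))  # covering every position always wins
```

The save file fixes the “tape”, but my actions are an input the replay does not fix – which is exactly why the tape-replay framing feels wrong to me.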
It’s not difficult to imagine a computer simulation in which one could do exactly this. The 2011 movie “Source Code” is a perfect demonstration. In it, the protagonist is forced to relive the experience of a bomb blowing up a train he is riding on, over and over again, until he discovers who was responsible. Each time he tries, he fails for some reason, but he learns more about what is happening as he maintains a chain of consciousness through multiple iterations of the same event. Upon learning the final truth, he is able to intervene and extricate himself from the program.
The example of the “all-knowing” supercomputer that can predict the future events of one’s life is, in my view, a science-fiction folly – far more ridiculous even than “Source Code”, with its limited recreated domain. I consider this supercomputer example even less connected to reality than the population experiments we debated in the moral philosophy discussion, which everyone here seemed to decry because they challenged consequentialism.
I would think that the very existence of such a machine would undermine its ability to make 100% accurate predictions – and not only because of the randomness of quantum events introducing a necessary error or uncertainty into the predictions. If the “Jeremy” person who it predicted would rob a particular bank aged 25 actually robbed a jewellery store across the street aged 26, would that count as a hit? Moreover, if he had prior knowledge of the prediction, could he do differently? As a bit of a non-conformist, I believe he could, as the determinism of his life is changed from the point of that revelation. We are really getting into “Minority Report” territory here! People being arrested for “future crimes”…
In summary, I’m not sure I know what to think about free will. We appear, on the surface at least, to have some level of free will, but the amount we actually have (if any) is surely less than many philosophers have traditionally believed. It may be enough to have the illusion of free will, and that illusion will certainly be difficult to break.
Your question regarding whether we can have consciousness but no free will, or vice versa…I guess it would depend on whether one is an emergent property of the other. For example, if free will were an emergent property of consciousness, then it would probably be possible to have consciousness but not free will (maybe like some animals?).
Finally, what are the ramifications of free will for law and punishment? We all know how broken justice systems around the world currently are, with huge populations locked up often for little more than following the only course of action available to them. I believe the most important discussion to have on the subject of free will is a major public debate on how to change our justice systems – to focus on preventing further harm and giving opportunities to those who have none, rather than punishment for punishment’s sake. After all, the whole idea of deterrence is brought into question if we really don’t have free will…