Thursday 10 July 2014

Moral philosophy debates part 2

Ardent consequentialists can sometimes annoy me

Continuing the debate on moral philosophy:

Please see Darren Bennett's excellent comment on AS44 here:

http://atheisticallyspeaking.com/as44-moral-landscape-ryan-born-part-2-2/

It's thought-provoking, at least for me. AS46 also went into the issue a bit deeper, and Thomas himself admitted to subscribing largely to rule utilitarianism, which I can get on board with much more readily. This is a form of consequentialism that effectively also incorporates deontology. The general feeling in the comments is a little too anti-philosophy and too act-consequentialist for my liking, though.

I'll address Darren's comment at some point on the website, because I have a couple of counterpoints for him.

The challenge has been put forward to justify some form of moral ethics other than act utilitarianism. I'll try to do this, as I'm still not convinced that wellbeing is the only thing worth looking at, even for consequentialists. I still can't fully subscribe to pure act-based consequentialism; I think I prefer rule utilitarianism.

In the comments for AS46, Rod says "I found myself in complete agreement with you that a population of 100,000 very happy and fulfilled individuals would definitely be “better” than 100 trillion miserable people. “Better” in this case must surely mean a higher overall well-being score."

I found myself wanting to ask the following questions.

"But would 100 trillion very happy and fulfilled individuals not be better still?

If "no", it's clear that the total amount of wellbeing is less relevant than the average for each individual. In this case, we have a clear counter-example. Why should people not be looking for ways to reduce the population in order to increase the per-individual wellbeing score? Would people who try to stop this be selfish? And can these questions be answered without invoking deontological considerations?

If "yes", why not try to get there now? Isn't it a bit selfish of us to deny more people the same wellbeing we enjoy? Why not start breeding like rabbits?! And again, can these questions be answered without invoking deontological considerations?"
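To make the arithmetic behind these questions concrete, here's a toy sketch (the per-person wellbeing scores are entirely invented for illustration): total utilitarianism and average utilitarianism rank Rod's two populations in opposite orders, which is exactly why the questions above pull in different directions.

```python
# Toy illustration: total vs. average wellbeing (scores are invented).
def total_wellbeing(population, score_per_person):
    """Sum of wellbeing across everyone (what total utilitarianism maximises)."""
    return population * score_per_person

def average_wellbeing(population, score_per_person):
    """Per-person wellbeing (what average utilitarianism maximises)."""
    return score_per_person

happy_few = (100_000, 90)            # 100,000 very happy people (score 90/100)
miserable_many = (100 * 10**12, 5)   # 100 trillion miserable people (score 5/100)

# Total utilitarianism prefers the huge miserable population...
assert total_wellbeing(*miserable_many) > total_wellbeing(*happy_few)
# ...while average utilitarianism prefers the small happy one.
assert average_wellbeing(*happy_few) > average_wellbeing(*miserable_many)
```

The two rankings disagree, so "a higher overall well-being score" is ambiguous until you say which aggregation you mean.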

But questions alone aren't good enough. I was struggling for a better critique. What is the main issue with consequentialism? I was actually about to give up on the whole exercise when it struck me. I didn't want to go trawling through Kant's works, after all (I do have a life, although some would dispute that assertion...)

Staying Gold

How did deontology come about? In its early conceptualisation, must it not have had consequentialist roots? "We follow rules because rules lead to better consequences"? Surely not. There must be more to it.

Again, it seemed that everything went back to consequentialism. Then came my epiphany, and it was a moment of catharsis: the Swedish duo First Aid Kit singing a song called "Stay Gold". It's a gorgeously beautiful track.

Here we have it. This hits the nail on the head. To me, staying gold represents living up to the best standards you can and following your best rules. When we do this, and things go our way, the feeling of pride is palpable.

Let's draw a counter-example: the ultimate consequentialist, Jack Bauer from the TV show 24. He's done some unspeakable things in trying to save the world from evil terrorists. He's sacrificed so much. Maybe the world needs some people like this. But we wouldn't want everybody to be like him. He may be a hero, but really, who'd want to actually be him?

I've found what for me is the simplest justification of deontology, and the most cutting criticism of consequentialism. It's simply this: uncertainty.

We all bow out sooner or later, and it simply feels better for people to die knowing that they have lived trying to do the best they can. With consequentialism, nothing is ever finalised for sure. What you've done in the past, and may have thought was good, can come back via the "butterfly effect" and bite you on the ass. You may die not even knowing whether the future good effects you'd banked on will come to fruition. Deontology can offer a feel-good alternative.

This is, if I may say so myself, a staggering analogy to the atheism/theism dilemma. Theism feels better in the short term, but atheism ultimately makes more sense. Likewise, deontology can perhaps feel better during the course of your life, but consequentialism ultimately makes more sense, as it also (at least in theory) takes into account the consequences of your actions after you die.

No doubt many ardent consequentialists out there will find fault with this reasoning. My only comeback would be that I think this does at least offer some rationale for the philosophical origins of deontology (or rather, my own reinvention of that particular wheel): a function of the human psyche demanding the solutions that offer the most comfort in a cold and cruel world. In an age when most people did not have the intellectual or computational resources to make accurate judgements of consequences, deontology could have offered an appealing alternative. And it is, at least, somewhat more rational than theism.

So, in conclusion:

My chief concern with consequentialism is this: its biggest problem is uncertainty. The uncertainty due to chaotic, unpredictable events, which even supercomputers may struggle with. The uncertainty caused by having faulty or incomplete information when making decisions. The uncertainty due to my own limited cognitive resources.

The trolley problem, redux

Will definite right/wrong divides ever be established for examples like the trolley problem? Will we be able to say that not pulling the lever is definitely the wrong decision, and that if I do that I should be punished for it? I, for one, still have doubts.

On Atheistically Speaking, it seems that everyone has unanimously decided that pulling the lever is the "right" decision. But in a real-life situation that mimicked the trolley problem, I'm not sure I could do it; and I'm also not sure that this is a bad thing.

To me, performing "no action" relating to the lever results in the status quo being maintained and the 5 people being killed - but to me this is an accident. Unless I sabotaged the trolley's brakes, I would not be to blame for this. The chain of events was set into motion outside my control. However, pulling the lever is a conscious decision I've made to perform a positive action resulting in another's death. It's not an accident. It's unlawful killing and I would be devastated.

Sure, I would have problems living with the consequences of not pulling the lever, but I would also have problems living with the consequences of pulling it. In fact, I think I'd end up doing myself in over it, which is the ultimate negative consequence for me.

In conclusion, the trolley problem is a complex moral dilemma to which I'm not sure there is a "right" answer. To me, either pulling or not pulling the lever would be permissible, but neither result would be necessarily "correct" or easy to live with. If consequentialism says that "you must pull the lever", I'm not sure I like it...but in almost every way, trying to escape from consequences is pointless. This is my lament.

Buy it here: 
https://itunes.apple.com/gb/album/stay-gold/id845312934?i=845313040

"What if our hard work ends in despair?
What if the road won't take me there?
Oh, I wish, for once, we could stay gold

What if to love and be loved's not enough?
What if I fall and can't bear to get up?
Oh, I wish, for once, we could stay gold
We could stay gold"

***

Tuesday 1 July 2014

Atheistically Speaking: The Moral Landscape Debates Part 1

I thought I'd start a new series, consisting of at least a couple of posts, discussing the Sam Harris book The Moral Landscape and branching off into moral philosophy. It's just a bit too much for one single long post!

The Moral Landscape (which I abbreviate as TML) is an intriguing book. I enjoyed it, and can agree with a lot of what Harris says.

http://www.amazon.co.uk/Moral-Landscape-Sam-Harris-ebook/dp/B0055CS2E4/ref=sr_1_1?s=books&ie=UTF8&qid=1404240387&sr=1-1&keywords=the+moral+landscape

However, I do have a few issues with how the notion of wellbeing can be applied in real life, how it can be assessed, and how we can use the correct form of moral philosophy to ensure it's being maximised. Harris seems to assume a bit too much that consequentialism is a catch-all approach (it's a broad term for the ethical stance that considers the outcomes of one's actions as the best measure of whether they were a good idea or not). Utilitarianism is a form of consequentialism, and "wellbeing" is a term used to measure the compound sum of a person's physical and mental health, wealth, fulfilment, happiness and so on.

Consequentialism is perhaps best placed of all the moral systems to deliver the long-term ethical results we are looking for, as we always strive for the best outcomes for everyone. Its disadvantages are that it may involve short-term suffering or sacrifices, and that it can become very complicated to operate and accurately assess when many factors are in play.

In contrast to consequentialism, broadly speaking, there are two other forms of moral philosophy: deontology and virtue ethics. Briefly, deontology is adhering to what you consider to be moral duties: following a general set of rules to guide your actions. An example would be the Hippocratic Oath: do no harm. It's simple to use, and it's obvious to me that this could be a fruitful approach to morality, at least in certain circumstances, as long as the rules you choose to follow are reasonable.

Virtue Ethics involves living up to standards, such as bravery, honesty, honour or valour. It can seem a bit outdated, but again is simple and has the advantage of being proven to work in the past. A modern example of virtue ethics could be following the example of your role model, or following in the footsteps of an ancestor. For Christians, asking "What would Jesus do?"

___

Speaking Atheistically

So Thomas Smith over at Atheistically Speaking has interviewed Massimo Pigliucci about his criticisms of TML. You can listen to the podcast interview at the link below, and also see my comments on Thomas' refutation of Massimo lower down on the page.

http://atheisticallyspeaking.com/as36-scientism-massimo-pigliucci-part-2-2/

Here's what I said:

"James Piechowski (Pieman789)
JUNE 3, 2014 AT 2:36 PM
Harris’ philosophical considerations around wellbeing, and particularly noting the work that had already been going on in the field, both in the ancient and recent past, are confined to the footnotes of the book – and not part of the main text.

These points you cover explain clearly that Harris was aware of Aristotle’s earlier thinking on wellbeing, and as you described more closely related work by two modern philosophers. This does defeat Pigliucci’s claim that Harris was just rehashing some old philosophy and calling it his own – in fact he was aware of the other work and was trying to build on it.

However, often when I read a book I will just read the main text and not big lists of notes which follow it, which some authors are inclined to use, as this does in some ways take a little away from the experience of the book and can be a little anti-climactic. It may be that Pigliucci did not read all the footnotes, and so did not uncover these explanations, or as you say it may be that he just misrepresents him. It’s also possible that some digital editions of the book may not include all the references and footnotes which are in the print version (I have certainly seen this happen before).

The misrepresentation does seem unlikely to me, as the answers were so closely at hand. I would expect someone as careful as Pigliucci to not go to the trouble of making those points unless he genuinely thought they were valid, and couldn’t be so readily challenged just by reading elsewhere in the same book.

Either way, the fact that Harris chose to not include these seemingly important philosophical considerations and explanations as part of the main text of The Moral Landscape is in itself quite revealing, and actually a bit problematic for Harris in my view, as they appear to be quite important aspects to omit.

The original reason stated in the book for not leaning much on philosophy for support was that it was either too boring, too complicated to explain in a book of this type, or not appropriate for the target audience. However, based on some of the negative feedback to TML, I would suggest it actually may have turned out better for Harris if he would have embraced the discussion of philosophy a bit more inclusively in the main text rather than confining it to an “afterthought”. This could have potentially avoided what may just be, after all, a bit of a misunderstanding. So it may just boil down to a mistake in editing by Harris rather than an outright attack at the core of philosophy!"


My comment was later featured on episode 38, where Thomas discussed it on the show!

http://atheisticallyspeaking.com/as38-debate-blake-giunta-part-2-2/

Wow, fame! (I thought he was a bit harsh to be honest!) It's also worth looking at for all the comments (not mine though!), which are entirely un-like what you'd find on YouTube! (For the record, I'm still not sure that the notion of an immaterial mind is even coherent).

I still wasn't convinced that Thomas had grasped the scale of the shortcuts Harris was taking with philosophy. It was left to the winner of the essay challenge that Harris had set up for critiques of his work (very magnanimous of him, I must say) to voice this properly.

Ryan Born puts his finger exactly on the problem when Thomas interviews him in episodes 43 and 44:

http://atheisticallyspeaking.com/as43-moral-landscape-ryan-born/ 

And

http://atheisticallyspeaking.com/as44-moral-landscape-ryan-born-part-2-2/

My comments started with the following:

"James Piechowski (Pieman789)
JUNE 27, 2014 AT 2:56 PM
A very interesting and highly detailed discussion. Ryan is a very entertaining, knowledgeable and absorbing guest!

To me your discussion reinforces my earlier suggestion that Harris does not actively engage with the philosophical considerations of ethics sufficiently in TML. This detracts from the overall impact of the book in my view.

Rather than potentially alienating the target readership, the main problem is that he’s still making a massive assumption that Utilitarianism (or the notion of the wellbeing of conscious creatures) can adequately cover all (or even most) moral situations and provide answers to all (or even most) moral problems. As Ryan points out, this is not yet a settled issue in philosophy.

In discussing Ryan’s examples, you try and go to great lengths to adjust utilitarianism to cover any situation by modifying it for the problem at hand. The issue is, when this is done it tends to borrow heavily from the other morality systems. You may see this as just “obvious”, but in fact, you’re taking something directly from deontology to make that adjustment. This may be subconscious, but hopefully, unless I’m making a big error here, you should be able to see it if you haven’t done so already when I identify it below.

For example, in AS33, between about 48 to 51 minutes on the timestamp, you and Ryan are talking about the “optimum population dilemma” with regards to consequentialist thinking – and you make the point that realistically we would not be able to reach a higher average wellbeing per person by reducing the population, because “getting there would be bad”. Whilst this is fine, and I agree with you about it, we must acknowledge that “getting there would be bad” is actually not part of consequentialist considerations per se. As demonstrated by the responses to the trolley problem, killing may sometimes be justified in consequentialism. We must consider the circumstances to determine if killing is justified (already done here – average wellbeing is being raised, so tick that box).

However, “getting there would be bad” is very much a part of deontology (for example, the rule that says “killing is never justified”).
Therefore you are using deontology to argue for consequentialism. As Ryan says “Actually that’s just the problem, consequentially, it (reducing the population down) would be…(better)”.

Let’s make no mistake: Consequentialism is the most commonly used and best suited ethical approach to most circumstances . But it doesn’t always work. What if someone doesn’t have either the capacity, time or correct information to make a moral judgement based on consequences? When many people can lead good lives following the simple rule of “doing no harm”, or maybe looking up to their role models as living examples of how they should behave – it’s clear that there are other valid approaches, which have advantages consequentialism doesn’t provide. Sure, these people also make consequentialist decisions a lot, and I’m not saying we shouldn’t chiefly be consequentialist (I think we should), but one ultimately can’t rule out the possible effectiveness of either deontology or virtue ethics in certain situations."


It seemed I'd piqued the interest of a few die-hard consequentialists...the following was posted in response:

"Nathan in Winnipeg (@NathanInWin)
JUNE 28, 2014 AT 2:24 AM
Why can’t “getting there would be bad” be consequentialist? As in, you are concerned about the consequences for those who might suffer in the implementation of a population reduction plan."


To which I responded:


"James Piechowski (Pieman789)
JUNE 28, 2014 AT 7:29 AM
Thanks for your reply! Didn’t imagine my comment was worthy of one…To be honest I’m just playing devil’s advocate, as I didn’t realise Thomas was so pro Harris. Not that I’m anti-Harris, I agree with him most of the time.

As you say, yes we could add to the intended consequences of the population problem, but wasn’t it already supposed to have been determined for the purposes of the example that overall, average wellbeing was in fact being increased by lowering the population?

So we can argue that this is not the case. Having to also consider what has to happen in the act of moving from one level of wellbeing to another, within consequentialist framework, makes it even more complicated; and any “suffering in the implementation of a population reduction plan” needs to be taken account of in the initial determination of change in wellbeing. So the initial conditions have changed and the initial assumption was wrong. We can change the equation, this is fine – but there is no consequentialist “barrier”. Otherwise you are in fact using deontology.

The reason to not reduce the population, is not because it involves killing people, it’s because we determined that the wellbeing of those remaining would be adversely affected, meaning that it would not be raised overall. It’s complicated. In fact, a big criticism is that it can become too unwieldy a system. An initial look can suggest X, but if we study it more deeply, Y emerges.

Just as in the trolley problem, can you really blame people for making a different decision to you? Many people are appalled that someone would actually throw the switch to save the five people, condemning the one (I know Tracie Harris has expressed an opinion on this).

If we conclude, using consequentialism, as you seem to do (and I agree), that increasing average wellbeing in this way is not a desirable result; and we also say that increasing total wellbeing by increasing population is not a desirable result either (as in the 100 Billion people who have lives barely worth living, as Ryan discussed), then I question what real-world use wellbeing actually has, at least from a strategic standpoint, and also, how wellbeing could even be increased at all (besides technological advances etc. that are already happening). The definition of wellbeing can, it seems to me, become easily muddied.

It will always be possible to argue against certain actions that may or may not increase wellbeing for different people (is increasing the health of X people in Africa worth increasing the debt of your country by Y for example?) Whose wellbeing is more important, and moreover who gets to say that?

It could very well be, that we find it exceptionally difficult to increase overall wellbeing at all, with all these extra considerations. Seeing as the theory is supposed to rely on maximising wellbeing, doesn’t the system of calculating that need to be refined? To me, it reinforces the idea that the philosophy is not quite there yet, and this still seems problematic for Harris’ approach in TML.

For many people, following good role models and clear rules of conduct are a lot simpler, and can go a long way towards an ethical life. It’s just that religious rules and religious role models are seemingly always very bad examples to use!"


We'll follow up with more philosophical challenges in part 2!