I have not found any Effective Altruist literature on free will debates and their implications, which surprised me, as it seems to be a topic of potentially great moral importance.  Can anyone point me to existing work?

If free will doesn't exist, does that ruin, or render void, the EA endeavour?  If so, are most EAs libertarians about free will?

In light of work by thinkers such as Sam Harris dismantling free will, which I find compelling (https://samharris.org/the-illusion-of-free-will/), and given the ought-implies-can principle, can morality be salvaged?  E.g., how could I 'ought' to choose an impactful career if my actions are all predetermined?


If skepticism about free will renders the EA endeavor void, then wouldn't it also render any action-guiding principles void (including principles about what's best to do out of self-interest)? In which case, it seems odd to single out its consequences for EA.

You sometimes see some (implicit) movement between "we did this good thing, but there's a sense in which we can't take credit, because it was determined before we chose to do it" and "we did this good thing, but there's a sense in which we can't take credit, because it would have happened whether or not we chose to do it", where the latter can be untrue even if the former is always true. The former doesn't imply anything about what you should have done instead, while the latter does, but has nothing to do with skepticism about free will. So even if determinism undermines certain kinds of "you ought to x" claims, it doesn't imply "you ought not bother doing x"; it does not justify resignation. There is a parallel (though maybe more problematic) discussion about what to do about the possibility of nihilism.

Anyway, even skeptics about free will can agree that ex post it was good that the good thing happened (compared to it not happening), and they can agree that certain choices were instrumental in its happening (if the choices hadn't been made, it wouldn't have happened). Looking forward, the skeptic could also understand "you ought to x" claims as saying "the world where you do x will be better than the world where you don't, and I don't have enough information to know which world we're in". They also don't need to deny that people are and will continue to be sensitive to "ought" claims, in the sense that explaining to people why they ought to do something can make them more likely to do it compared to the world where you don't explain why. Basically, counterfactual talk can still make sense for determinists. And all this seems like more than enough for anything worth caring about — I don't think any part of EA requires our choices to be undetermined or freely made in some especially deep way.

Some things you might be interested in reading —

I think maybe this free will stuff does matter in a more practical way when it comes to prison reform and punishment, since (plausibly) support for 'retributive' punishment over rehabilitation comes from attitudes about free will and responsibility that are either incoherent or wrong in an influenceable way.

Thanks finm, I agree that EA is far from uniquely vulnerable to determinism; as you say, all action-guiding principles would be affected. I was just contextualising it for the forum.

Yes, I think that's a useful distinction. Harris labels these 'determinism' and 'fatalism' respectively, and so still believes our decisions matter in the sense that they will affect the value of future world-states.

That could work as a way to reformulate the meaning of ought statements, though I still feel something important is lost from ethics if determinism is true.

Will have a look at the resources :)

According to the PhilPapers survey, over half of philosophers favour a compatibilist approach to free will - i.e. that free will is compatible with determinism.

I also recommend the LessWrong writing on the subject.

Thanks, I am quite sceptical of compatibilism as a work-around, as it still seems unreasonable to say I ought to have done something I metaphysically could not have done.  But given epistemic modesty, I can't dismiss it entirely when so many professional philosophers support it.  I'll have a look through LessWrong.

“If free will doesn’t exist, does that ruin/render void the EA endeavor?”


Well, what does it matter if free will exists? Even if free will doesn’t exist, my life circumstances have led to me becoming invested in improving the world by engaging in altruism. My brain’s reward circuitry is still aligned with doing the most good that I can do for as long as I am able. I think for most of us who identify as altruists, the tendency to help those who need help is not tied to the idea of free will. I suppose that there are people who would take the absence of free will to be a pass to stray from altruism, but I doubt you’ll find them in the EA community.


Personally, losing my belief in free will has made a big, big difference in how I see the world. Because I believe free will doesn't exist, I have ceased to judge those who are on the bottom rungs of our society. I have a deeper compassion for people who have addictions, who have committed crimes, who are not the easiest to care about. I have more patience with those who hold differing opinions, even with flat-earthers and religious fundamentalists.


Shedding my belief in free will also helped me be kinder to myself. I am more patient whenever I face challenges arising from my shortcomings. I forgive myself for my failures and try to be humble even in my triumphs. My prime motivation to make the world a better place is no longer guilt but rather a genuine pleasure in spreading kindness. 


In so many different ways, not believing in free will has made me a better altruist and a kinder friend to myself. I hope questioning free will does the same for you!

Thanks for that personal perspective, good to hear.  For me too, I think doubting free will is beneficial for my perceptions of others; as you say, it undercuts judgementalism.  I have yet to reconcile myself emotionally to lacking freedom, though, and perhaps never will.

Yes, perhaps some people will be demotivated by disbelieving in free will and choose to be less altruistic, which is itself determined, as is how much I will try to talk them out of it.  My moral system would take a lot of adjusting to get by without 'ought' statements (given the ought-implies-can conception).

I'm no expert in this topic and haven't read Sam Harris's argument, but there are a couple of things I usually bear in mind:

1. If you're uncertain about whether determinism is true (that is, the probability you assign to hard determinism is less than 1), then it seems you should still act as though you are not determined.  We can then apply reasoning like Pascal's Wager: if determinism is false, then sadistic torture is terrible; if it's true, then we are indifferent either way.  Hence it seems we should still act as though morality has bearing.

2. A more compelling response (although, still contentious) is compatibilism.  I leave you to explore it here.

Exactly, 1 has been the approach I have taken: as long as I am unsure, I err on the side of caution and act as if we are in the morally large universes, including those with free will.  That said, it would be interesting if many EAs were similar and thought something like "there's only a ~10% chance free will, and hence morality, is real, so very likely my life is useless, but I am trying anyway".  I think that is a good approach, but it would be an odd outcome.


If free will doesn't exist, does that ruin/render void the EA endeavour?

Can you say more about why free will not existing is relevant to morality? 

My personal take is that free will seems like a pretty meaningless and confused concept, and probably doesn't exist (whatever that means). But I want to do what I can to make the world a better place anyway, in the same way that I clearly want and value things in my normal life, regardless of whether I'm doing any of this with free will.

Sure, I think that makes sense if we see EA as just another preference like any other.  If we were 100% certain there was no free will, though, I think it would greatly reduce the moral force of the argument supporting EA (and any decision-guiding framework), as I couldn't reasonably tell someone, or myself, 'you ought to do X over and above Y'.