Over at 80,000 Hours I've tried to write the most comprehensive analysis so far of whether it's worth voting from an effective altruist perspective.
The bottom line is usually yes, if you're in a competitive election.
But it may nevertheless be better to opt out of following politics entirely, if you're not into it and have other good opportunities to have a social impact.
The full piece looks at a lot of different issues, and addresses criticisms people made of our previous article on the topic:
- How can you roughly estimate the chances of your vote being decisive, in elections all around the world?
- How much does it cost to get someone else to vote the way you'd like?
- How much does it matter who wins elections anyway?
- What can we say about the risk of accidentally voting for the wrong candidate?
- How hard is it to vote more intelligently than other voters?
- But won't the courts decide close elections regardless of what you do?
- How about proportional election systems?
- Is it too much work to figure out which candidate is better?
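The first question in the list above — estimating the chance of a decisive vote — can be sketched with a simple model. This is not the article's own method, just a common rough approximation: treat each other voter as voting for candidate A with some probability, and ask how likely an exact tie is (normal approximation to the binomial). The function name and example numbers are illustrative.

```python
import math

def decisive_vote_probability(n_voters, p_lean):
    """Rough chance that one extra vote makes or breaks an exact tie,
    modelling each of n_voters as voting for candidate A independently
    with probability p_lean (normal approximation to the binomial)."""
    mu = n_voters * p_lean
    sigma = math.sqrt(n_voters * p_lean * (1 - p_lean))
    # Probability mass near an exact 50/50 split, via the normal density
    z = (n_voters / 2 - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# A genuine toss-up with 5 million voters: roughly 1 in a few thousand
print(decisive_vote_probability(5_000_000, 0.5))

# The same electorate leaning just 50.5% one way: astronomically smaller
print(decisive_vote_probability(5_000_000, 0.505))
```

The sharp drop-off in the second case is why the answer depends so heavily on whether the election is competitive.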
It builds upon previous work on this topic such as Politics as charity and Vote for charity's sake.
Let me know your thoughts below!
Tom Chivers (UK columnist) wrote a piece based on Rob's article.
Great post! I also think voting in state and local politics can make a greater difference if you care about local issues like zoning and occupational licensing (like I do). For example, in the upcoming NYC legislative election in 2021, there are several candidates running on a pro-housing platform and several candidates running against more housing development; the more pro-housing candidates are elected to office, the more likely it is that more housing units will be built. Even though a New Yorker's vote in the presidential election may not matter because New York is a solid blue state, their votes in local elections matter more.
I thought this was really good; thanks for writing it. Jason Brennan is a notably smart sceptic of the duty to vote. Do we know what he thinks of this?
He says he's going to write a response. If I recall correctly, Jason isn't a consequentialist, so he may have a different take on what kinds of things we can have a duty to do.
Thank you for writing this. The previous version was one of my most shared 80k links.
The post was long and detailed. That is what a certain audience (i.e. this forum) wants.
I wonder if there is an audience for a trimmed-down, lighter piece (e.g. in the style of Vox's Future Perfect). I think the content is good enough to share among people who prefer shorter, lighter articles.
Want to write a TL;DR summary? I could find somewhere to stick it.
Thanks Nathan, this was helpful!
Though that's not what I meant. I meant more an op-ed-style version of the same content that is lighter and chattier. But maybe I'm misunderstanding the process? I guess if a journalist wants to summarise it, they'll do that themselves?
E.g. in this style: https://unherd.com/2020/10/why-do-people-believe-such-complete-rubbish/
I don't think that the chance of the election hinging on a single vote is the right thing to look at. One should decide based on the fact that other people similar to them are likely to act similarly. E.g. a person reading this post might decide whether to vote by asking themselves whether they want 300 people on the EA forum to each spend an hour (+ face COVID-19 risk?) on voting. (Of course, this reasoning neglects a much larger group of people that are also correlated with them.)
The costs are (in expectation) proportional to the benefits, so I think even under EDT or FDT it mostly just adds up to normality, for altruists at least.
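The proportionality claim above can be checked with illustrative numbers: if your decision correlates with k similar people, both the expected benefit and the expected cost scale by the same factor k, so the sign of the vote/don't-vote comparison is unchanged. All quantities here are made up for illustration, not taken from the article.

```python
# Illustrative numbers only
k = 300            # people whose decisions correlate with yours
p_decisive = 1e-6  # chance any one vote is decisive
value = 1e9        # social value ($) of the better candidate winning
cost = 50          # opportunity cost ($) of one person voting

# Deciding just for yourself (CDT-style):
solo_net = p_decisive * value - cost

# Deciding for everyone correlated with you (EDT/FDT-style):
# both terms are multiplied by k, so the comparison is unchanged.
group_net = k * (p_decisive * value) - k * cost

print(solo_net > 0, group_net > 0)  # same sign either way
```

The one place this can come apart is the next comment's point: if you value the *time* of the correlated people but not (proportionally) the extra votes, the cost side scales while the benefit side doesn't.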
When one assumes that the number of people that are similar to them (roughly speaking) is sufficiently small, I agree.
The costs are higher for people who value the time of those who are correlated with them, while the benefits are not.
To decide whether they want this, shouldn't they look at the chances? How would this change the answer? The risk to the EA community increases nonlinearly (since the EA community's marginal returns on additional members aren't constant) while the benefits of additional votes increase roughly linearly?
Also, there are mail-in ballots, although it might be too late in some places (I'm not informed either way, so don't take my word for it).
This sounds like the evidential decision theory answer, and I'm not that familiar with these different decision theories. However, your decision to vote doesn't cause these others to vote, it's only evidence that they are likely to act similarly, right? Finding that out one way or another doesn't actually make the world better or worse (compared to alternatives), it just clears up some uncertainty you had about what the world would look like. Otherwise, couldn't you justify confirmation bias, e.g. telling your friends to selectively only share good news with you?
What I wrote is indeed aligned with evidential decision theory (EDT). The objections to EDT that you mentioned don't seem to apply here. When you decide whether to vote you don't decide just for yourself, but rather you decide (roughly speaking) for everyone who is similar to you. The world will become better or worse depending on whether it's good or bad that everyone-who-is-similar-to-you decides to vote/not-vote.
What does this mean? If I'm in the voting booth, and I suddenly decide to leave the ballot blank, how does that affect anyone else?
It doesn't affect anyone else in a causal sense, but it does affect people similar to you in a decision-relevant-to-you sense.
Imagine that while you're in the voting booth, in another identical voting booth there is another person who is an atom-by-atom copy of you (and assume our world is deterministic). In this extreme case, it is clear that you're not deciding just for yourself. When we're talking about people who are similar to you rather than copies of you, a probabilistic version of this idea applies.
I don't get it.
Wikipedia's entry on superrationality probably explains the main idea here better than me.
It seems like, to figure out whether it's a good use of time for 300 people like you to vote, you still need to figure out whether it's worth it for any single one of them.
What I mean to say is that, roughly speaking, one should compare the world where people like them vote to the world where people like them don't vote, and choose the better world. That can yield a different decision than when one decides without considering the fact that they're not deciding just for themselves.
I completely agree with you. This whole reasoning seems to heavily depend on using causal decision theory instead of its (in my opinion) more sensible competitors.