Interesting post! But I’m not convinced.
I’ll stick to addressing the decision theory section; I haven’t thought as much about the population ethics but probably have broadly similar objections there.
(1) What makes STOCHASTIC better than the strategy “take exactly N tickets and then stop”?
I get that you’re trying to avoid totalizing theoretical frameworks, but you also seem to be saying it’s better in some way that makes it worth choosing, at least for you. But why?
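To make question (1) concrete, here's a minimal sketch under a payoff structure I'm assuming purely for illustration (double-or-nothing tickets with win probability P_WIN, and STOCHASTIC modeled as continuing with a fixed probability q after each ticket); the post's actual setup may differ:

```python
# Toy model of the ticket puzzle under assumed numbers (the post's actual
# setup may differ): each ticket you take doubles your payoff with
# probability P_WIN and zeroes it otherwise.

P_WIN = 0.6

def ev_take_n(n: int) -> float:
    """Expected payoff of 'take exactly n tickets, then stop':
    each ticket multiplies the expected payoff by 2 * P_WIN."""
    return (2 * P_WIN) ** n

def ev_stochastic(q: float) -> float:
    """Expected payoff of the stochastic policy, modeled as 'after each
    ticket, take another with probability q'. This is the geometric series
    (1 - q) * sum over k of (2 * P_WIN * q)^k, which diverges
    when 2 * P_WIN * q >= 1."""
    r = 2 * P_WIN * q
    return float("inf") if r >= 1 else (1 - q) / (1 - r)

# With q = 0.8 the stochastic policy takes 4 tickets on average and has
# expected payoff 5; 'take exactly 9' already beats that, and ev_take_n
# grows without bound as n increases.
print(ev_stochastic(0.8))          # ≈ 5.0
print(ev_take_n(4), ev_take_n(9))  # ≈ 2.07, ≈ 5.16
```

At least under these assumed numbers, whenever STOCHASTIC's expectation converges, some fixed N beats it, which is exactly why I'm asking what the stochastic version buys you.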
(2) In response to
But, well, you don’t have to interpret my actions as expressing attitudes towards expected payoffs. I mean this literally. You can just … not do that.
I’m having trouble interpreting this more charitably than “when given a choice, you can just … choose the option with the worse payoff.” Sure, you can do that. But surely you’d prefer not to? Especially if by “actions” here we’re not actually referring to what you literally do in your day-to-day life, but to a strategy you endorse in a thought-experiment decision problem. You’re writing as if this is a heavy theoretical assumption, but I’m not sure it’s saying anything more than “you prefer to do things that you prefer.”
(3) In addition to not finding your solution to the puzzle satisfactory,[2] I’m not convinced by your claim that this isn’t a puzzle for many other people:
Either you’re genuinely happy with recklessness (or timidity), or else you have antecedent commitments to the methodology of decision theory — such as, for example, a commitment to viewing every action you take as expressing your attitude to expected consequences.
To me, the point of the thought experiment is that roughly nobody is genuinely happy with extreme recklessness or timidity.[3] And as I laid out above, I’d gloss “commitment to viewing every action you take as expressing your attitude to expected consequences” here as “commitment to viewing proposed solutions to decision-theory thought experiments as expressing ideas about what decisions are good” — which I take to be nearly a tautology.
So I’m still having trouble imagining anyone the puzzles aren’t supposed to apply to.
The only case I can make for STOCHASTIC is if you can’t pre-commit to stopping at the N-th ticket, but can pre-commit to STOCHASTIC for some reason. But now we’re adding extra gerrymandered premises to the problem; it feels like we’ve gone astray.
Although if you just intend this to be solely your own solution, and make no claim that it's better for anyone else or better in any objective sense, then ... ok?
This is precisely why it's a puzzle -- there's no strategy (always refuse, always take, take N, stochastic) for which I can see a consistent justification.
Another podcast episode on a similar topic came out yesterday, from Rabbithole Investigations (hosted by former Current Affairs podcast hosts Pete Davis, Sparky Abraham, and Dan Thorn). They had Joshua Kissel on to talk about the premises of EA and his paper "Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation."
This is the first interview (and second episode) in a new series dedicated to the question "Is EA Right?". The premise of the show is that the hosts are interested laypeople who interview many guests with different perspectives, in the hopes of answering their question by the end of the series.
I'm optimistic about this podcast as another productive bridge between the EA and lefty worlds; their intro episode gave me a lot of hope that they're approaching this with real curiosity.
(I'm posting this more to recommend the series than the particular episode, though; the episode itself spent most of its runtime covering intro-EA topics in what I felt was a pretty standard way, which most people here probably don't need. That said, it did a good job of what it was aiming for, and I'm looking forward to the real heavy-hitting critiques and responses as the series continues.)
I read this piece a few months ago and then forgot what it was called (and where it had been posted). Very glad to have found it again after a few previous unsuccessful search attempts.
I think all the time about that weary, determined, unlucky early human trying to survive, and the flickering cities in the background. When I spend too long with tricky philosophy questions, impossibility theorems, and trains to crazytown, it's helpful to have an image like this to come back to. I'm glad that guy made it. Hopefully we will too!
An important principle of EA is that when you're trying to do good, you should try to maximize how much good you do. So EAs probably won't advise you to base most of your charitable giving on emotional connection (which is unlikely to be highly correlated with cost-effectiveness) -- instead, according to EA, you should base it on some kind of cost-effectiveness calculation.
However, many EAs do give some amount to causes they personally identify with, even if they set aside most of their donations for more cost-effective causes. (People often talk about "warm fuzzies" in this context, i.e. donations that give you a warm fuzzy feeling.) In that sense, some amount of emotion-based giving is completely compatible with EA.
There have been a few posts discussing the value of small donations over the past year, notably:
There's a lot of discussion here (especially if you go through the comments of each piece), and so plenty of room to come to different conclusions.
Here's roughly where I come out of this:
This means:
Setting Beeminder goals for the number of hours worked on different projects has substantially increased my productivity over the past few months.
I'm very deadline-motivated: if a deadline is coming up, I can easily put in 10 hours of work in a day. But without any hard deadlines, it can take active willpower to work for more than 3 or 4 hours. Beeminder gives me deadlines almost every day, so it takes much less willpower now to have productive days.
(I'm working on a blog post about this currently, which I expect to have out in about two weeks. If I remember, I'll add a link back to this comment once it's out.)