Alex Semendinger

74 karma · Joined Apr 2022


Setting Beeminder goals for the number of hours worked on different projects has substantially increased my productivity over the past few months.

I'm very deadline-motivated: if a deadline is coming up, I can easily put in 10 hours of work in a day. But without any hard deadlines, it can take active willpower to work for more than 3 or 4 hours. Beeminder gives me deadlines almost every day, so it takes much less willpower now to have productive days.

(I'm working on a blog post about this currently, which I expect to have out in about two weeks. If I remember, I'll add a link back to this comment once it's out.)

Interesting post! But I’m not convinced. 

I’ll stick to addressing the decision theory section; I haven’t thought as much about the population ethics but probably have broadly similar objections there.

(1) What makes STOCHASTIC better than the strategy “take exactly N tickets and then stop”?

  • Both avoid near-certain death (good!)
  • Both involve, at some point, turning down what looks like a strictly better option
    • To me, STOCHASTIC seems to do this at the very first round, and at every subsequent round. (If I played STOCHASTIC and drew a non-black ball first, I think I’d be pretty disappointed. This indicates to me that I didn’t actually want to randomize on that round.) Or you could view STOCHASTIC as doing this only on the round when you stop accepting tickets.
    • This very fact makes these strategies about as puzzling as the "reckless" or "timid" strategies to me -- at some point, you're deliberately choosing the worse option, by your own lights! That's at least weird, right?
  • “Take exactly N” has the advantage of letting you decide the exact level of risk you want to take on, while STOCHASTIC involves an additional layer of uncertainty, which gets you … what, exactly?[1]

I get that you’re trying to avoid totalizing theoretical frameworks, but you also seem to be saying it’s better in some way that makes it worth choosing, at least for you. But why?
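To make the comparison in (1) concrete: I don't have the post's exact setup in front of me, so the payoff structure below is my own toy assumption (each accepted ticket adds 1 unit of value, with an independent 1% chance of "drawing the black ball" and losing everything). Under that toy model, a quick Monte Carlo sketch suggests STOCHASTIC with a matched stopping probability delivers roughly the same expected payoff and ruin risk as "take exactly N" — it just adds variance in how many tickets you end up taking.

```python
import random

# Toy model (my assumption, not the original post's exact setup):
# each accepted ticket pays 1 unit, with an independent p_death
# chance of total ruin (payoff goes to 0 and the game ends).

def take_exactly_n(n, p_death, rng):
    """Accept exactly n tickets, then stop."""
    payoff = 0
    for _ in range(n):
        if rng.random() < p_death:
            return 0  # drew the "black ball": lose everything
        payoff += 1
    return payoff

def stochastic(p_stop, p_death, rng, max_rounds=10_000):
    """Each round, stop with probability p_stop; otherwise accept
    another ticket. (One reading of the STOCHASTIC strategy.)"""
    payoff = 0
    for _ in range(max_rounds):
        if rng.random() < p_stop:
            return payoff
        if rng.random() < p_death:
            return 0
        payoff += 1
    return payoff

rng = random.Random(0)
trials = 20_000
p_death = 0.01
n = 50

# Matching p_stop = 1/n makes STOCHASTIC take ~n tickets on average.
avg_n = sum(take_exactly_n(n, p_death, rng) for _ in range(trials)) / trials
avg_s = sum(stochastic(1 / n, p_death, rng) for _ in range(trials)) / trials
print(f"take exactly {n}: avg payoff {avg_n:.2f}")
print(f"stochastic (p_stop = 1/{n}): avg payoff {avg_s:.2f}")
```

If the two averages come out close (as they do under these assumptions), that's the point: any risk level STOCHASTIC can reach, "take exactly N" can reach directly, without the extra layer of randomness.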

(2) In response to

But, well, you don’t have to interpret my actions as expressing attitudes towards expected payoffs. I mean this literally. You can just … not do that.

I’m having trouble interpreting this more charitably than “when given a choice, you can just … choose the option with the worse payoff.” Sure, you can do that. But surely you’d prefer not to? Especially if by “actions” here, we’re not actually referring to what you literally do in your day-to-day life, but a strategy you endorse in a thought-experiment decision problem. You’re writing as if this is a heavy theoretical assumption, but I’m not sure it’s saying anything more than “you prefer to do things that you prefer.”

(3) In addition to not finding your solution to the puzzle satisfactory,[2] I’m not convinced by your claim that this isn’t a puzzle for many other people:

Either you’re genuinely happy with recklessness (or timidity), or else you have antecedent commitments to the methodology of decision theory — such as, for example, a commitment to viewing every action you take as expressing your attitude to expected consequences.

To me, the point of the thought experiment is that roughly nobody is genuinely happy with extreme recklessness or timidity.[3] And as I laid out above, I’d gloss “commitment to viewing every action you take as expressing your attitude to expected consequences” here as “commitment to viewing proposed solutions to decision-theory thought experiments as expressing ideas about what decisions are good” — which I take to be nearly a tautology.

So I’m still having trouble imagining anyone the puzzles aren’t supposed to apply to.

  1. ^

    The only case I can make for STOCHASTIC is if you can’t pre-commit to stopping at the N-th ticket, but can pre-commit to STOCHASTIC for some reason. But now we’re adding extra gerrymandered premises to the problem; it feels like we’ve gone astray.

  2. ^

    Although if you just intend this to be solely your solution, and make no claim that it’s better for anyone else, or better in any objective sense, then … ok?

  3. ^

    This is precisely why it's a puzzle -- there's no outcome (always refuse, always take, take N, stochastic) that I can see any consistent justification for.

Another podcast episode on a similar topic came out yesterday, from Rabbithole Investigations (hosted by former Current Affairs podcast hosts Pete Davis, Sparky Abraham, and Dan Thorn). They had Joshua Kissel on to talk about the premises of EA and his paper "Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation."

This is the first interview (and second episode) in a new series dedicated to the question "Is EA Right?". The premise of the show is that the hosts are interested laypeople who interview many guests with different perspectives, in the hopes of answering their question by the end of the series. 

I'm optimistic about this podcast as another productive bridge between the EA and lefty worlds; their intro episode gave me a lot of hope that they're approaching this with real curiosity.

(I'm posting this more to recommend / mention the series rather than the particular episode though; the episode itself spent most of its runtime covering intro-EA topics in what I felt was a pretty standard way, which most people here probably don't need. That said, it did a good job of what it was aiming for, and I'm looking forward to the real heavy-hitting critiques and responses as the series continues.)

I read this piece a few months ago and then forgot what it was called (and where it had been posted). Very glad to have found it again after a few previous unsuccessful search attempts. 

I think all the time about that weary, determined, unlucky early human trying to survive, and the flickering cities in the background. When I spend too long with tricky philosophy questions, impossibility theorems, and trains to crazytown, it's helpful to have an image like this to come back to. I'm glad that guy made it. Hopefully we will too!

A core principle of EA is maximizing how much good you do when you're trying to do good. So EAs probably won't advise you to base most of your charitable giving on emotional connection (which is unlikely to be highly correlated with cost-effectiveness); instead, according to EA, you should base it on some kind of cost-effectiveness estimate.

However, many EAs do give some amount to causes they personally identify with, even if they set aside most of their donations for more cost-effective causes. (People often talk about "warm fuzzies" in this context, i.e. donations that give you a warm fuzzy feeling.) In that sense, some amount of emotion-based giving is completely compatible with EA.

There have been a few posts discussing the value of small donations over the past year, notably:

  1. Benjamin Todd on "Despite billions of extra funding, small donors can still have a significant impact"
  2. a counterpoint, AppliedDivinityStudies on "A Red-Team Against the Impact of Small Donations"
  3. a counter-counterpoint, Michael Townsend on "The value of small donations from a longtermist perspective"

There's a lot of discussion here (especially if you go through the comments of each piece), and so plenty of room to come to different conclusions.

Here's roughly where I come out of this:

  • What's the relevant counterfactual? Many of these comment threads turn into discussions about earning-to-give vs direct work, but if you have $1000 in your hand, ready to donate, that's not the relevant question. Rather, you should ask, "if I don't donate this, what would I do with it instead, and how much impact would that have?"
  • You say "I know that professional grant makers think that last-dollar funding is not cost effective because they aren't funding more projects, but aren't out of dollars." I think this frames the issue incorrectly. It's not that big funders know the remaining projects aren't cost-effective; it's that they don't currently have enough projects that clear a certain cost-effectiveness bar. But crucially, that bar is still far above zero!

This means

  • there are probably many similarly cost-effective opportunities that the big funders haven't found yet (potentially you have information they don't that you could exploit; see this section of the above ADS post)
  • marginal donations should have a cost-effectiveness at worst just below that bar, which means you're only doing a little worse than the big funders. (This point taken from Benjamin Todd here.)