
I'm going to be working at 80,000 Hours as a careers advisor starting in September 2021; the opinions I've shared here (and will share in the future) are my own.

Magnitude of uncertainty with longtermism

This talk and paper discuss what I think are some of your concerns about growing uncertainty over longer and longer horizons.

More EAs should consider “non-EA” jobs

In my case it was the opposite - I spent several years considering only non-EA jobs as I had formed the (as it turns out mistaken) impression that I would not be a serious candidate for any roles at EA orgs.

What things did you do to gain experience related to EA?

NB - None of the things below were done with the goal of building prestige/signalling. I did them because they were some combination of interesting, fun, and useful to the world. I doubt I'd have been able to stick with any of them if I'd viewed them as purely instrumental. I've listed them roughly in the order in which I think they were helpful in developing my understanding. The signalling-value ordering is probably different (maybe even exactly reversed), but my experience of getting hired by an EA org is that you should prioritise developing skill/knowledge/understanding very heavily over signalling.

  • As a teacher, I ran a high-school group talking about EA ideas, mostly focusing on the interesting maths. This involved a lot of thinking and reading on my part in order to make the sessions interesting.
  • Over the course of a few years, I listened to almost every episode of the 80k podcast, some multiple times.
  • I wrote about things I thought were important on the EA forum.
  • I volunteered for SoGive as an analyst, and had a bunch of exciting calls with people like GiveWell and CATF as a result.
  • I spent a bunch of time on Metaculus, including volunteering as a moderator and trying to write useful questions, and I ended up doing fairly well at forecasting by some metrics.
A Sequence Against Strong Longtermism

I don't think the claim from Linch here is that not bothering to edit out snark has led to high value, but rather that if a piece of work is flawed both in its level of snark and in the quality of its argument, the latter is the more important flaw to fix.

Career advice for Australian science undergrad interested in welfare biology seems like a good option to check out if you're set on animal welfare work. Given that you're thinking about keeping AI on the table, you should probably at least consider keeping pandemic prevention similarly on the table, as it seems like a smaller step sideways from your current interests. Have you considered applying to speak to someone at 80,000 Hours*?

*I'll be working on the 1-1 team from September, but this is, as far as I can tell, the advice I'd have given anyway, and shouldn't be treated as advice from the team.

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

How do you approach identity? If ~no future people are "necessary", does this just reduce to critical-level utilitarianism (but still counting people with negative welfare; I can't remember whether critical-level does that)? Are you ok with that?

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

Trying to summarise for my own understanding.

Is the below a reasonable tl;dr?

Total utilitarianism, except you ignore people who satisfy all of:

  • won't definitely exist
  • have welfare between 0 and T

Where T is a threshold chosen democratically by them, and lives with positive utility are taken to be "worth living".

If so, does this reduce to total utilitarianism in the case that people would choose not to be ignored if their lives were worth living?
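In case it helps pin down my reading, here's a minimal sketch of the rule as I've summarised it above. All names are my own, not from the paper, and I'm guessing that "between 0 and T" is the open interval:

```python
from dataclasses import dataclass

@dataclass
class Person:
    welfare: float
    necessary: bool  # True if this person exists in every option under consideration

def population_value(people: list[Person], T: float) -> float:
    """Total welfare, except contingent people with welfare in (0, T) are ignored.

    T is the threshold I'm assuming the contingent people choose democratically.
    """
    total = 0.0
    for p in people:
        # A person is ignored only if they are contingent (might not exist)
        # AND their welfare falls strictly between 0 and the threshold T.
        ignored = (not p.necessary) and (0 < p.welfare < T)
        if not ignored:
            total += p.welfare
    return total
```

On this reading, setting T = 0 (or having everyone choose not to be ignored) makes `population_value` count everyone, i.e. plain total utilitarianism, which is what my question above is getting at.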
