I'm going to be working at 80,000 Hours as a careers advisor starting in September 2021; the opinions I've shared here (and will share in the future) are my own.
This talk and paper discuss what I think are some of your concerns about growing uncertainty over longer and longer horizons.
In my case it was the opposite: I spent several years considering only non-EA jobs, as I had formed the (as it turns out, mistaken) impression that I would not be a serious candidate for any roles at EA orgs.
NB - None of the things below were done with the goal of building prestige/signalling. I did them because they were some combination of interesting, fun, and useful to the world. I doubt I'd have been able to stick with any of them if I'd viewed them as purely instrumental. I've listed them roughly in the order in which I think they were helpful in developing my understanding. The signalling value ordering is probably different (maybe even exactly reversed), but my experience of getting hired by an EA org is that you should prioritise developing skill/knowledge/understanding over signalling very heavily.
Sentinel seems promising
I don't think Linch's claim here is that not bothering to edit out snark has led to high value, but rather that if a piece of work is flawed both in its level of snark and in the quality of its argument, the latter is more important to fix.
https://www.animaladvocacycareers.org/ seems like a good option to check out if you're set on animal welfare work. Given that you're thinking about keeping AI on the table, you should probably at least consider keeping pandemic prevention on the table too; it seems like a smaller step sideways from your current interests. Have you considered applying to speak to someone at 80,000 Hours? I'll be working on the 1-1 team from September, but this is, as far as I can tell, the advice I'd have given anyway, and shouldn't be treated as advice from the team.
How do you approach identity? If ~no future people are "necessary", does this just reduce to critical-level utilitarianism (but still counting people with negative welfare; I can't remember if critical-level does that)? Are you OK with that?
Trying to summarise for my own understanding.
Is the below a reasonable tl;dr?

Total utilitarianism, except you ignore people who satisfy all of:

Where T is a threshold chosen democratically by them, and lives with positive utility are taken to be "worth living".

If so, does this reduce to total utilitarianism in the case that people would choose not to be ignored if their lives were worth living?
Forecasting: Metaculus intro resources, a partially complete introductory video series, a book.