Currently looking for my next step in animal welfare. Reasonably clueless about what interventions are impartially good.
"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik
Happy to give feedback on projects, or to get on a call about anything, give advice, and share contacts.
"I don’t think there is very much in the way of “this forecasting happened, and now we have made demonstrably better decisions regarding this terminal goal that we care about”."
I assume some people disagree with this strong claim. One example I've heard is AGI timelines and their influence on AI safety field priorities - though I guess one could answer that certain reports or expert opinions were disproportionately more useful than prediction markets.
On a different point, I appreciated Eli Lifland's past comment on many intellectual activities (such as grantmaking) being forms of forecasting.
Agree with the post and the bottom line, though I don't think it justifies focusing on AI safety, because of a disanalogy.
In your analogy, we assume that when we give the money to the mugger, they either make the coin more likely to land heads, or do nothing.
Meanwhile, in AI safety, small chances of averting doom come with small chances of causing doom - and most of those who work in the field seem to consider that some well-regarded interventions are actually increasing P(Doom). They just disagree on what those doom-increasing interventions are.
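To put the disanalogy in symbols (a toy sketch; $V$, $p$, $p_+$ and $p_-$ are hypothetical quantities I'm introducing, not anything from the post): let $V$ be the value at stake, $p$ the chance that paying the mugger improves the coin, and $p_+$ / $p_-$ the chances that an AI safety intervention averts / causes doom. Then

$$\mathbb{E}[\text{mugger}] = p \cdot V \geq 0, \qquad \mathbb{E}[\text{AI safety}] = (p_+ - p_-) \cdot V.$$

The first expression has a floor at zero, while the second is negative whenever $p_- > p_+$ - and the disagreement in the field is precisely about the sign of $p_+ - p_-$ for specific interventions.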
"EA isn't drawing the same talent as it used it"
I'm surprised by this claim: do you mean it's getting fewer talented newcomers per year than before, or that the incoming talent is different (different profiles / skillsets)?
I understood it as the former claim, but that would be surprising to me. I've heard a few orgs say that they've been able to raise the bar for who to hire in the past few years, because the EA-aligned talent pool has been getting bigger, with more senior professionals and exceptionally competent people. Also, more generally, that EAs are older on average than ten years ago, and that this has benefits for hiring.
I can actually think of one example in animal welfare: the EA Animal Welfare Fund forecasts grant outcomes.
"A marginal $500,000 should go to:"
The backfire effects of general alignment work early on in AI safety may have outweighed the benefits; I worry that the same could be true for animal-specific alignment.
If I really believe that, I should probably want to avoid money going into animal-specific alignment at this stage, whereas an extra $500,000 to general alignment, while not necessarily positive, is less likely to cause major backfire events?
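As a toy version of that comparison (again with hypothetical symbols, only for illustration): suppose both options have roughly the same upside and a backfire cost $B$, with backfire probabilities $q_{\text{animal}} > q_{\text{general}}$. Then

$$\mathbb{E}[\text{general}] - \mathbb{E}[\text{animal}] \approx (q_{\text{animal}} - q_{\text{general}}) \cdot B > 0,$$

so even if neither option is clearly net-positive, the lower-backfire option comes out ahead under these assumptions.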
Solving WAS (wild animal suffering) intuitively seems too niche for people to deliberately change their minds about, but I could be wrong. After all, the Bible says that the lion will lie down with the lamb and eat straw like the ox, so it could be that human preferences tend to converge on the idea that animal suffering can be bad even when it doesn't depend on human actions.
Really appreciate this sort of cluster thinking exercise, thanks for sharing! The ~20% dying from parasites figure intuitively shocked me on first read, even though it checks out given the abundance of parasites in the wild.