Strong advocate of just having a normal job and giving to effective charities.
Doctor in Australia giving 10% forever
(NOTE: Coming at this from a place of: a. ignorance of what the AI Safety community actually does and b. not wanting to take the ego hit of admitting that I have been wrong about my long-held skepticism of AI Safety)
I think it was and is fair to be skeptical of the shift to AI Safety in EA on the basis that it's not that tractable, and that there's no clear evidence that the AI Safety movement has had a positive effect on the trajectory of AI.
I think the AI Safety community will be tempted to think they've normalised in the zeitgeist ideas about superintelligent AIs and the philosophical questions and risks that arise from them, but 2001: A Space Odyssey came out in 1968, Terminator in 1984, The Matrix in 1999, etc. The ideas of superintelligent AIs, and the existential risks they pose, are diffused through modern culture, and it's possible that The Pope and The UN would have made the same statements about them given the recent progress of LLMs regardless of the AI Safety movement.
Are there many ideas in If Anyone Builds It, Everyone Dies that weren't broadly covered in Terminator/The Matrix/2001: A Space Odyssey/Dune etc.?
I haven't seen strong evidence for the direct work of the AI Safety movement reducing existential risks from AI:
Interpretability research seems far from being able to understand more than a few components at a time. And also the companies making AI would likely have been incentivised to do this work regardless of the AI Safety movement because customers don't want a black box.
From the outside it seems there's a good argument that the AI situation would have evolved pretty similarly regardless of EA/AI Safety input.
From that position, it's easy to believe that if EA had just stuck to Earning To Give and malaria nets and decaging chickens then the impact would have been greater, both directly and because the movement might not have lost as much momentum when AI Safety alienated people.
I agree that the depth of the evidence conversations doesn't lend itself to amateur discussion on the forum and I also feel like there's not much I have to add to the GHD discussions here because of that.
Don't think it's fair to say it's not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.
“direct altruistic focus strategically so as to be of positive utility”
Vague and evasive. Say what you mean. If you want to keep poor people poor until some new technology comes out, you should say that. If you don’t think further development will ever be justified, you should say that (so that your contention can be discarded as absurd and impractical)
“From the sumatriptan RCT: 3% were pain-free at 10 minutes after placebo.”
This is an irrational comparison. You’re comparing your best case scenario anecdote to the results of an RCT.
It’s possible that one of those 3% of people would have an anecdote for sumatriptan as convincing as yours: causing rapid resolution of their headache. That anecdote would not be representative.
I’m not saying you’re wrong about psychedelics and cluster headache. I desperately hope you’re right and there is an easy fix. But anecdote leads people astray constantly, and we have to treat it with high suspicion.
Fair, I really mean pessimism rather than nihilism. On what basis can you reject philosophical pessimism - a self-consistent and valid belief that is seemingly impossible to prove or disprove - other than that it is just not pragmatic or constructive at all?
None of that suggested work seems very clarifying
The welfare ranges are extremely broad for the animals they do cover, and that's with questionable assumptions. I don't see how extending these to microbes would clarify anything.
Doing "more research" on the day-to-day experience of nematodes and how they respond to noxious stimuli, or calculating their neural energy consumption as a proxy for their ability to suffer, also doesn't seem clarifying. Imagine you knew all this information about nematodes. The fundamental question would still remain: how does their "suffering" or "joy" compare to ours, and how morally important is it? A lot of animal ethics is driven by our ability to relate to animals ("I can relate somewhat to a chicken and I wouldn't want to be a chicken in a cage"), but this falls apart by the time we get to nematodes, so you have to rely solely on your numbers, which will be extremely uncertain.
I remain very puzzled as to how you ever see us getting low enough error bars on the joy/suffering of microscopic worms that we could make decisions based on it.
How would you get the "Further human economic development" "necessary to build the knowledge and resources" to build a better world without supporting the development of developing countries?
Are you talking about a top-heavy approach where we keep poor countries poor until fake/cultured meat is cheap enough to supplant farmed animals?