MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.
Not trying to answer on the author's behalf, but it seems relatively clear to me that differential development is possible here: so far, most AI-driven advances in science seem to have come from biological applications like AlphaFold, which are distinct from the LLMs that have created most of the problems, both in the eyes of "doomers" and in the eyes of people warning about current, non-extinction dangers. The development of beneficial tools can therefore, in principle, be accelerated while the development of LLMs is slowed down.
Small note: I don't know whether my own English is at fault, but I read "7x below the WHO threshold" as meaning "7 times worse than the threshold" and only understood the intended meaning when I looked at the actual numbers later. It might be worth wording it differently.
Sorry for being this blunt, but EA is about using evidence and reason to identify the most effective ways to help others. I can't possibly see how operating on a vague guess is on par with that.
This criticism stands independently of my view that a "negative life" is not a concept we should incorporate into moral theories, and that we certainly shouldn't aim to simply cull all animals whose lives we somehow judge to be negative.
Strongly upvoted.
Compare this with the following quote from chapter 7 of MacAskill's "What We Owe the Future", which shows exactly the problem you describe:
If scientists with Einstein-level research abilities were cloned and trained from an early age, or if human beings were genetically engineered to have greater research abilities, this could compensate for having fewer people overall and thereby sustain technological progress.
What do you think their counterfactual is? I don't think any of what they've been doing is really transferable.