Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance.
 Gavi do vaccines, something that governments and other big bureaucratic orgs sure seem to handle well in other cases. Government funding for vaccines is how we eliminated smallpox, for example. I think "other vaccination programs" are a much better reference class for Gavi than the nebulous category of "social programs" in general. Indeed the Rossi piece you've linked to actually says "In the social program field, nothing has yet been invented which is as effective in its way as the smallpox vaccine was for the field of public health." I'm not sure it is even counting public health stuff as "social programs" that fall under the iron law.
That's not to say that Gavi can actually save a life for $1600, or save millions at $1600 each, or that GiveWell should fund them. But an impact of literally zero here seems very implausible.
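For scale, a quick back-of-envelope with illustrative figures of my own (not GiveWell's or Gavi's):

$$2{,}000{,}000 \text{ lives} \times \$1{,}600/\text{life} = \$3.2 \text{ billion},$$

so "millions of lives at $1600 each" implies a multi-billion-dollar programme, which is the kind of quantity a cost-effectiveness analysis can actually check against Gavi's spending, rather than something the iron law rules out a priori.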
It can also be indeterminate, for a short time, who the winner of an election is: while the deciding vote is being cast, there is plausibly at least some very short duration during which it is indeterminate whether the process of casting that vote has finished yet. It can be indeterminate how many animals were killed for food if one animal was killed for multiple reasons, of which "to eat" was one reason but not the major one. Etc. etc.
Thanks, I get what you meant now.
The relatively more orthodox view amongst philosophers about the heap case is roughly that there is a kind of ambiguous region of successive ns where it is neither true nor false that n grains make a heap. This is a very, very technical literature, though, so possibly that characterization isn't quite right. None of the solutions are exactly great, and some experts do think there is an exact n at which some grains become a heap.
To be fair to Richard, there is a difference between a) stating your own personal probability in time of perils and b) making clear that for long-termist arguments to fail solely because they rely on time of perils, you need it to have an extremely low probability, not just a low one, at least if you accept that expected value theory and subjective probability estimates can legitimately be applied at all here, as you seemed to be doing for the sake of making an internal critique. I took it to be the latter that Richard was complaining your paper doesn't do.
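To illustrate the structural point with purely made-up numbers (mine, not Richard's or the paper's): if the long future is worth on the order of $10^{15}$ lives and time of perils gets probability $10^{-3}$, then

$$10^{-3} \times 10^{15} \text{ lives} = 10^{12} \text{ lives in expectation},$$

which still swamps a near-termist alternative worth, say, $10^{4}$ lives; the expected-value comparison only flips once the probability falls below roughly $10^{4}/10^{15} = 10^{-11}$. That is why the argument needs "extremely low", not merely "low".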
How strong do you think your evidence is that most readers of philosophy papers find the claim "X-risk is currently high, but will go permanently very low" extremely implausible? If you asked me to guess, I'd say most people's reaction would be more like "I've no idea how plausible this is, other than definitely quite unlikely", which is very different, but I have no experience with reviewers here.
I am a bit (though not necessarily entirely) skeptical of the "everyone really knows EA work outside development and animal welfare is trash" vibe of your post. I don't doubt a lot of people do think that in professional philosophy. But at the same time, Nick Bostrom is more highly cited than virtually any reviewer you will have encountered. Long-termist moral philosophy turns up in leading journals constantly. One of the people you critiqued in your very good paper attacking arguments for the singularity is Dave Chalmers, and you literally don't get more professionally distinguished in analytic philosophy than Dave. Your own stuff criticizing long-termism seems to have made it into top journals too when I checked, which indicates there certainly are people who think it is not too silly to be worth refuting: https://www.dthorstad.com/papers
I am far from sure that Thorstad is wrong that time of perils should be assigned ultra-low probability. (I do suspect he is wrong, but this stuff is extremely hard to assess.) But in my view there are multiple pretty obvious reasons why "time of carols" is a poor analogy to "time of perils":
(Most important disanalogy in my view.) The second half of time of perils, that x-risk will go very low for a long time, is plausibly something that many people will consider desirable, and might therefore aim for. People are even more likely to aim for related goals like "not have massive disasters while I am alive." This is plausibly a pretty stable feature of human motivation that has a fair chance of lasting millions of years; humans generally don't want humans to die. In comparison, there's little reason to think decent numbers of people will always desire time of carols.
4. Maybe this isn't an independent point from 1., but I actually do think it is relevant that "time of carols" just seems very silly to everyone as soon as they hear it, and time of perils does not. I think we should give some weight to people's gut reactions here.
N=1, but I looked at an ARC puzzle https://arcprize.org/play?task=e3721c99, and I couldn't just do it in a few minutes, and I have a PhD from the University of Oxford. I don't doubt that most of the puzzles are trivial for some humans, that some of the puzzles are trivial for most humans, or that I could probably outscore any AI across the whole ARC-2 data set. But at the same time, I am a general intelligence, so being able to solve all ARC puzzles doesn't seem like a necessary criterion. Maybe this is the reverse of how doing well on benchmarks doesn't always generalize to real-world tasks: I am just bad at these but smart overall, and the same could be true of an LLM.