Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
Honestly, I still think my comment was a good one! I responded to what struck me as the most cruxy claim in your post, explaining why I found it puzzling and confused-seeming. I then offered what I regard as an important corrective to a bad style of thinking that your post might encourage, whatever your intentions. (I made no claims about your intentions.) You're free to view things differently, but I disagree that there is anything "discourteous" about any of this.
There's "understanding" in the weak sense of having the info tokened in a belief-box somewhere, and then there's understanding in the sense of never falling for tempting-but-fallacious inferences like those I discuss in my post.
Have you read the paper I was responding to? I really don't think it's at all "obvious" that all "highly trained moral philosophers" have internalized the point I make in my blog post (that was the whole point of my writing it!), and I offered textual support. For example, Thorstad wrote: "the time of perils hypothesis is probably false. I conclude that existential risk pessimism may tell against the overwhelming importance of existential risk mitigation." This is a strange thing to write if he recognized that merely being "probably false" doesn't suffice to threaten the longtermist argument!
(Edited to add: the obvious reading is that he's making precisely the sort of "best model fallacy" that I critique in my post: assessing which empirical model we should regard as true, and then determining expected value on the basis of that one model. Even very senior philosophers, like Eric Schwitzgebel, have made the same mistake.)
Going back to the OP's claims about what is or isn't "a good way to argue," I think it's important to pay attention to the actual text of what someone wrote. That's what my blog post did, and it's annoying to be subject to criticism (and now downvoting) from people who aren't willing to extend the same basic courtesy to me.
This sort of "many gods"-style response is precisely what I was referring to with my parenthetical: "unless one inverts the high stakes in a way that cancels out the other high-stakes possibility."
I don't think that dystopian "time of carols" scenarios are remotely as credible as the time of perils hypothesis. If someone disagrees, then certainly resolving that substantive disagreement would be important for making dialectical progress on the question of whether x-risk mitigation is worthwhile or not.
What makes both arguments instances of the nontrivial probability gambit is that they do not provide significant new evidence for the challenged claims. Their primary argumentative move is to assign nontrivial probabilities without substantial new evidence.
I don't think this is a good way to argue. I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities.
I'd encourage Thorstad to read my post more carefully and pay attention to what I am arguing there. I was making an in-principle point about how expected value works, highlighting a logical fallacy in Thorstad's published work on this topic. (Nothing in the paper I responded to seemed to acknowledge that a 1% chance of the time of perils would suffice to support longtermism. He wrote about the hypothesis being "inconclusive" as if that sufficed to rule it out, and I think it's important to recognize that this is bad reasoning on his part.)
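To make the in-principle point concrete, here's a minimal sketch with purely illustrative numbers (the specific values are my assumptions for exposition, not Thorstad's estimates or my own); what matters is the structure: a low-probability, high-stakes model can dominate the expectation even if it is "probably false".

```python
# Illustrative only: all numbers below are made up for exposition.
value_if_time_of_perils = 1e15   # stakes of x-risk mitigation if the hypothesis holds
value_otherwise = 1e3            # stakes if it doesn't
p_time_of_perils = 0.01          # a "probably false" 1% credence

# Expected value taken across both models:
expected_value = (p_time_of_perils * value_if_time_of_perils
                  + (1 - p_time_of_perils) * value_otherwise)

# The "best model fallacy": pick the more probable model and evaluate
# using it alone, discarding the low-probability, high-stakes model.
best_model_value = value_otherwise

print(f"EV across models: {expected_value:.3g}")    # ~1e13, dominated by the 1% model
print(f"'Best model' EV:  {best_model_value:.3g}")  # 1e3
```

Swap in whatever numbers you like: so long as the stakes under the time of perils model are sufficiently vast, ruling it merely "probably false" doesn't remove it from the expected value calculation.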
Saying that my "primary argumentative move is to assign nontrivial probabilities without substantial new evidence" is poor reading comprehension on Thorstad's part. Actually, my primary argumentative move was explaining how expected value works. The numbers are illustrative, and suffice for anyone who happens to share my priors (or something close enough). Obviously, I'm not in that post trying to persuade someone who instead thinks the correct probability to assign is negligible. Thorstad is just radically misreading what my post is arguing.
(What makes this especially strange is that, iirc, the published paper of Thorstad's to which I was replying did not itself argue that the correct probability to assign to the ToP hypothesis is negligible, but just that the case for the hypothesis is "inconclusive". So it sounds like he's now accusing me of poor epistemics because I failed to respond to a different paper than the one he actually wrote? Geez.)
The AI bubble popping would be a strong signal that this [capabilities] optimism has been misplaced.
Are you presupposing that good practical reasoning involves (i) trying to picture the most-likely future, and then (ii) doing what would be best in that event (while ignoring other credible possibilities, no matter their higher stakes)?
It would be interesting to read a post where someone tries to explicitly argue for a general principle of ignoring credible risks in order to slightly improve most-probable outcomes. Such a principle seems like it would be pretty disastrous if applied universally (e.g. to aviation safety, nuclear safety, and all kinds of insurance), but maybe there's more to be said? In the meantime, it's a bit frustrating to read takes where people just seem to presuppose some such anti-precautionary principle in the background.
To be clear: I take the decision-relevant background question here to not be the binary question Is AGI imminent? but rather something more degreed, like Is there a sufficient chance of imminent AGI to warrant precautionary measures? And I don't see how the AI bubble popping would imply that answering 'Yes' to the latter was in any way unreasonable. (A bit like how you can't say an election forecaster did a bad job just because their 40% candidate won rather than the one they gave a 60% chance to. Sometimes seeing the actual outcome seems to make people worse at evaluating others' forecasts.)
Some supporters of AI Safety may overestimate the imminence of AGI. It's not clear to me how much of a problem that is? (Many people overestimate risks from climate change. That seems important to correct if it leads them to, e.g., anti-natalism, or to misallocate their resources. But if it just leads them to pollute less, then it doesn't seem so bad, and I'd be inclined to worry more about climate change denialism. Similarly, I think, for AI risk.) There are a lot more people who persist in dismissing AI risk in a way that strikes me as outrageously reckless and unreasonable, and so that seems by far the more important epistemic error to guard against?
That said, I'd like to see more people with conflicting views about AGI imminence arrange public bets on the topic. (Better calibration efforts are welcome. I'm just very dubious of the OP's apparent assumption that losing such a bet ought to trigger deep "soul-searching". It's just not that easy to resolve deep disagreements about what priors / epistemic practices are reasonable.)
Quick clarification: My target here is not so much people with radically different empirical beliefs (such that they regard vaccines as net-negative), but rather the particular form of status quo bias that I discuss in the original post.
My guess is that relatively elite audiences (like those who read philosophy blogs) are unlikely to feel attached to this status quo bias as part of their identity, but that their default patterns of thought may lead them to (accidentally, as it were) give it more weight than it deserves. So a bit of heated rhetoric and stigmatization of the thought-pattern in question may help to better inoculate them against it.
(Just a guess though; I could be wrong!)
Interesting post! Re: "how spotlight sizes should be chosen", I think a natural approach is to think about the relative priorities of representatives in a moral parliament. Take the meat eater problem, for example. Suppose you have some mental representatives of human interests, and some representatives of factory-farmed animal interests. Then we can ask each representative: "How high a priority is it for you to get your way on whether or not to prevent this child from dying of malaria?" The human representatives will naturally see this as a very high priority: we don't have many better options for saving human lives. But the animal representatives, even if they aren't thrilled by retaining another omnivore, have more pressing priorities than trying to help animals by eliminating meat-eaters one by one. Given how incredibly cost-effective animal-focused charities can be, it will make sense for them to make the moral trade: "OK, save this life, but then let's donate more to the Animal Welfare Fund."
Of course, for spotlighting to work out well for all representatives, it's going to be important to actually follow through on supporting the (otherwise unopposed) top priorities of neglected representatives (like those for wild animal welfare). But I think the basic approach here does a decent job of capturing why it isn't intuitively appropriate to take animal interests into account when deciding whether to save a person's life. In short: insofar as we want to take animal interests into account, there are better ways to do it that don't require creating conflict with another representative's top priorities. Avoiding such suboptimal conflict, and instead being open to moral trade, seems an important part of being a "good moral colleague".
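If it helps, here's a toy sketch of the comparison I have in mind (the priority scores and the suggested trade are all stipulated purely for illustration, not real estimates; the numbers do no work beyond encoding the asymmetry described above):

```python
# Toy moral-parliament sketch: all numbers stipulated purely for illustration.
# Each representative rates how high a priority it is to get their way on the
# decision "prevent this child from dying of malaria?" (0 = indifferent, 10 = top priority).
human_priority_on_saving = 9.5       # few better options exist for saving human lives
animal_priority_on_blocking = 0.5    # low, relative to the animal representatives' own agenda
animal_top_priority = 9.0            # e.g. more funding for cost-effective animal charities

# The human side cares far more about this particular decision than the animal
# side does, so the natural outcome is a moral trade rather than a veto:
if human_priority_on_saving > animal_priority_on_blocking:
    print("Save the life; compensate by acting on the animal representatives' top priority")
    print("(e.g. donate more to the Animal Welfare Fund)")
```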
Funnily enough, the main example that springs to mind is the excessive self-flagellation post-FTX. Many distanced themselves from the community and its optimizing norms/mindset, for understandable reasons, but ones more closely tied to "expressing" (and personal reputation management) than to actually "helping", IMO.
I'd be curious to hear if others think of further candidate examples.
EA Infrastructure Fund or Giving What We Can? For the latter, "our best-guess giving multiplier for [2023-24] was approximately 6x".
As I see it, I responded entirely reasonably to the actual text of what you wrote. (Maybe what you wrote gave a misleading impression of what you meant or intended; again, I made no claims about the latter.)
Is there a way to mute comment threads? Pursuing this disagreement further seems unlikely to do anyone any good. For what it's worth, I wish you well, and I'm sorry that I wasn't able to provide you with the agreement that you're after.