Thanks for doing this! I think the most striking part of what you found is the donations to representatives who sit on the subcommittee that oversees the CFTC (i.e. the House Agriculture Subcommittee on Commodity Exchanges, Energy, and Credit), so I wanted to look into this more. From a bit of Googling:
I didn't spend much time on this, so I very possibly missed or misinterpreted things.
why they would want to suggest to this bunch of concerned EAs how to go about better pushing for the ideas that Buck disagrees with
My guess was that Buck was hopeful that, if the post authors focused their criticisms on the cruxes of disagreement, that would help reveal flaws in his and others' thinking ("inasmuch as I'm wrong it would be great if you proved me wrong"). In other words, I'd guess he was like, "I think you're probably mistaken, but in case you're right, it'd be in both of our interests for you to convince me of that, and you'll only be able to do that if you take a different approach."
[Edit: This is less clear to me now - see Gideon's reply pointing out a more recent comment.]
I interpreted Buck's comment differently. His comment reads to me less like "playing the man" and more like "telling the man that he might be better off playing a different game." If someone doesn't have the time to write out an in-depth response to a post that takes 84 minutes to read, but they take the time to (I'd guess largely correctly) suggest to the authors how they might better succeed at accomplishing their own goals, that seems to me like a helpful form of engagement.
Thanks for sharing! The speakers on the podcast might not have had the time to make detailed arguments, but I find their arguments here pretty uncompelling. For example:
So I think, although their conclusions are plausible, these arguments don't hold up well enough to an initial sanity check to be worth much of our attention.
Thanks for writing this! I want to push back a bit. There's a big middle ground between (i) naive, unconstrained welfare maximization and (ii) putting little to no emphasis on how much good one does. I think "do good, using reasoning" is somewhat too quick to jump to (ii) while passing over intermediate options, like:
There are lots of people out there (e.g. many researchers, policy professionals, entrepreneurs) who do good using reasoning; this community's concern for scope seems rare, important, and totally compatible with integrity. Given the large amount of good you've done, I'd guess you're sympathetic to considering scope. Still, concern for scope seems important enough to include in the tagline.
Also, a nitpick:
now it's obvious that the idea of maximizing goodness doesn't work in practice--we have a really clear example of where trying to do that fails (SBF if you attribute pure motives to him); as well as a lot of recent quotes from EA luminaries saying that you shouldn't do that
This feels a bit fast: the fact that this example requires a (dubious) "if" clause means it isn't such a clear example, and maximizing goodness is compatible with constraints if we incorporate constraints into our notion of goodness (after all, any behavior can be thought of as maximizing some notion of goodness).
(Made minor edits.)
Readers might be interested in the comments over here, especially Daniel K.'s comment:
The only viable counterargument I've heard to this is that the government can be competent at X while being incompetent at Y, even if X is objectively harder than Y. The government is weird like that. It's big and diverse and crazy. Thus, the conclusion goes, we should still have some hope (10%?) that we can get the government to behave sanely on the topic of AGI risk, especially with warning shots, despite the evidence of it behaving incompetently on the topic of bio risk despite warning shots.
Or, to put it more succinctly: The COVID situation is just one example; it's not overwhelmingly strong evidence.
Fair! Sorry for the slow reply, I missed the comment notification earlier.
I could have been clearer in what I was trying to point at with my comment. I didn't mean to fault you for not meeting an (unmade) challenge to list all your assumptions--I agree that would be unreasonable.
Instead, I meant to suggest an object-level point: that the argument you mentioned seems pretty reliant on a controversial discontinuity assumption--enough that the argument alone (along with other, largely uncontroversial assumptions) doesn't make it "quite easy to reach extremely dire forecasts about AGI." (Though I was thinking more about 90%+ forecasts.)
(That assumption--i.e. the main claims in the 3rd paragraph of your response--seems much more controversial/non-obvious among people in AI safety than the other assumptions you mention, as evidenced by researchers criticizing it and researchers doing prosaic AI safety work.)