This is a huge problem with EA thinking on this matter: taking for granted a bunch of things that haven't happened, convincing yourself they are inevitable, instead of dealing with the situation we are actually in, where none of that stuff has happened and may never happen, either because it wasn't going to happen anyway or because we prevented it.
(This draft, which I’m publishing for Amnesty Week, started with the quote above. I did write it to someone, but I forget who.)
“Marginal” doesn’t mean “absolute best”, and it means something close to the opposite of “best for everyone to do”. “Marginal” in the original EA sense takes the current actors affecting a cause area as given in order to figure out the most effective next move for a new actor. Those existing actors are the basis of the recommendation for what someone “at the margin” should do.
However, it’s become common for me to hear EAs assert that particular futures are “inevitable” when they haven’t happened yet, and then treat those futures as the base upon which “marginal” decisions are made. This is just getting ahead of yourself. Calling it margin thinking is false; it’s speculation. Speculation is not wrong, it just isn’t the same as margin thinking. Don’t let “margin thinking” excuse treating possibilities as facts.
Another mistake I hear with “marginal” is using it to refer not to the best intervention for someone entering at the margin, but to the absolute best action for anyone to do. I frequently see this with AI Safety, where people get the idea that advice that was plausible on the margin six years ago, like keeping quiet about AI Safety while working your way up through government, is evergreen and always correct. That style of reputation management was, frankly, always questionable, but it’s especially unnecessary now that baseline public familiarity with AI Safety is so much higher.
Probably the worst misconception is thinking that margin thinking means going for super high leverage. Sometimes the best move is still to do something hard and grind-y. I often get this feeling when EAs react to PauseAI: the hard democratic work of respecting other people’s autonomy while genuinely rallying their sentiments or changing their opinions (“high leverage” moves in the opinion-change arena are often manipulative, illegitimate, or irresponsible) strikes them as peanuts compared to the slim, slim chance of being part of the world’s most successful company or of a technical safety breakthrough that solves everything. But the EV math can easily work out to favor putting more effort into tugging the rope your way.
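To make that concrete with a toy calculation (every number here is made up purely for illustration, not an estimate I’d defend): suppose the long-shot bet has a 0.01% chance of a breakthrough worth 100,000 impact units, while the grind-y advocacy has a 5% chance of a win worth 500 units. Then

$$\underbrace{0.0001 \times 100{,}000}_{\text{long-shot breakthrough}} = 10 \;\;<\;\; \underbrace{0.05 \times 500}_{\text{grind-y advocacy}} = 25,$$

and the unglamorous option wins in expectation despite being “lower leverage” per unit of success.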
It also worries me, in the context of marginal contributions, when some people (not all) start to treat “marginal” as a sentiment rather than as actual measurement (getting to know a cause area, its existing resources, how much is being spent, and what the real needs and problems are) when reasoning about cause prioritization and donations. Sentiment towards a cause area does not always mean the cause area got the attention or resources it was actually asking for.