> This is a huge problem with EA thinking on this matter: taking for granted a bunch of things that haven't happened, convincing yourself they are inevitable, instead of dealing with the situation we are in, where none of that stuff has happened and may never happen, either because it wasn't going to happen or because we prevented it.

(This draft I’m publishing for Amnesty Week started with the quote above, which I did write to someone, though I forget who.)

“Marginal” doesn’t mean “absolute best”, and it means something close to the opposite of “best for everyone to do”. “Marginal” in the original EA sense takes as given all the current actors affecting a cause area in order to figure out what the most effective next move is for a new actor. The current actors affecting that cause area are the basis of the recommendation for what someone “at the margin” should do.
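As a toy sketch of what I mean (the diminishing-returns curve and every number here are made up purely for illustration), the marginal value of a new actor’s contribution is whatever it adds on top of what the existing actors are already doing, which can look very different from the cause area’s average or total value:

```python
import math

# Toy sketch: "marginal" is relative to what existing actors already do.
# The curve and all numbers here are invented purely for illustration.

def total_good(spending_millions: float) -> float:
    """Hypothetical diminishing-returns curve for a cause area."""
    return 100 * math.log1p(spending_millions)

existing_spending = 50.0   # what current actors already put in ($M)
new_contribution = 1.0     # what a new actor is considering adding ($M)

# Marginal impact: value with the new contribution minus value without it,
# holding the existing actors fixed.
marginal = total_good(existing_spending + new_contribution) - total_good(existing_spending)

# Average impact per $1M, for contrast: a very different (and here much
# rosier) number than the marginal one.
average = total_good(existing_spending + new_contribution) / (existing_spending + new_contribution)

print(f"marginal impact of the next $1M: {marginal:.2f}")
print(f"average impact per $1M: {average:.2f}")
```

The point is just that the recommendation depends on the actors already there, not on the cause area considered in a vacuum.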

However, it’s become common for me to hear EAs assert that particular futures are “inevitable” when they haven’t happened yet, and then set those futures as the base upon which “marginal” decisions are made. This is just getting ahead of yourself. Calling this margin thinking is just false: it’s speculation. Speculation is not wrong; it just isn’t the same as margin thinking. Don’t let “margin thinking” excuse falsely taking possibilities as facts.

Another mistake I hear with “marginal” is using it to refer to, not the best intervention for someone at the margin to do, but the absolute best action for anyone to do. I frequently see this with AI Safety, where people get the idea that advice that was plausible on the margin 6 years ago, like keeping quiet about AI Safety while working your way up through government, is evergreen and always correct. That style of reputation management was, frankly, always questionable, but it’s especially unnecessary now that baseline public familiarity with AI Safety is so much higher.

Probably the worst misconception about margin thinking is the idea that it means going for super high leverage. Sometimes the best move is still to do something hard or grind-y. I often get this feeling when EAs react to PauseAI: the hard democratic work of respecting other people’s autonomy while genuinely rallying their sentiments or changing their opinions (“high leverage” moves in the opinion-change arena are often manipulative, illegitimate, or irresponsible) strikes them as peanuts compared to the slim, slim chance of being part of the world’s most successful company or of a technical safety breakthrough that solves everything. But the EV math can work out to favor putting more effort into tugging the rope your way.
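To make that last claim concrete, here is a toy back-of-the-envelope comparison (the probabilities and payoffs are invented for illustration only, not estimates I’m defending): a tiny chance of an enormous payoff doesn’t automatically beat a decent chance of a modest one.

```python
# Toy EV comparison with made-up numbers: a "high leverage" long shot vs.
# grind-y, reliable advocacy work, measured in arbitrary impact units.
long_shot_probability = 1e-6   # slim, slim chance the moonshot pays off
long_shot_payoff = 1e5         # enormous payoff if it does

grind_probability = 0.2        # realistic chance the hard advocacy work helps
grind_payoff = 5.0             # modest payoff in the same units

ev_long_shot = long_shot_probability * long_shot_payoff   # 0.1
ev_grind = grind_probability * grind_payoff               # 1.0

print(f"EV of the long shot: {ev_long_shot}")
print(f"EV of the grind:     {ev_grind}")
```

With these (made-up) numbers the grind wins by 10x; nothing about “the margin” settles which row you are actually in.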

Comments
yz

It also worries me, in the context of marginal contributions, when some people (not all) start to treat “marginal” as a sentiment rather than as actual measurement (getting to know those areas, the actual resources, the amount of spending, and what the actual needs/problems may be) when reasoning about cause prioritization and donations. Sentiment towards a cause area does not always mean the cause area got the actual attention/resources it was asking for.

"Another mistake I hear with “marginal” is using it to refer to, not the best intervention for someone at the margin to do, but the absolute best action for anyone to do."

This I really agree with. To add a little, I think our individual competitive advantages stemming from our background, education, and history, as well as our passions and abilities, are often heavily underrated when we talk generically about the best we can do with our lives. I feel like our life situation can often make orders of magnitude of difference as to our best life course on the margin, which obviously becomes more and more extreme as we get older.

I'm sure 80,000 Hours and Probably Good help people understand this, but I do think it's underemphasized at times.

This feels less like a disagreement over what the word marginal means and more like you just disagree with people's theory of impact and what they think the expected value of different actions is?

No, it’s literally about what the word marginal means.

All the disagreements on worldview can be phrased correctly. Currently, people use the word “marginal” to sneak in specific values and assumptions about what is effective.

If people said all these things without the word marginal, would you be happy?

Yeah, because then it would be a clear conversation. The tradeoffs that are currently obscured wouldn’t be hidden and the speculation would be unmasked.
