Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7812 karma · Joined · Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
981

Topic contributions
1

I don't think it's that much of a sacrifice.

I don't understand how this argument applies to anyone other than yourself; other people clearly feel differently.

I also think that for many, the only difference in practice would be slightly lower savings for retirement.

If that is something they care about or worry about, it's a difference - adding the word "only" doesn't change that!

I've run very successful group brainstorming sessions with experts just in order to require them to actually think about a topic enough to realize what seems obvious to me. Getting people to talk through what the next decade of AI progress will look like didn't make them experts, or even get them to the basic level I could have presented in a 15 minute talk - but it gave me a chance to push them beyond their cached thoughts, without them rejecting views they see as extreme, since they were the ones thinking them!

But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.

I'll just note that when the original conversation started, I addressed this in a few parts.

To summarize, I think that yes, EA should be enormous, but it should not be a global community; it needs to grapple with how the current community works and figure out how to avoid ideological conformity.

There's also an important question about which EA causes are differentially more or less likely to be funded. If you think Pause AI is good, Anthropic's IPO probably won't help. If you think mechanistic interpretability is valuable, it might help to fund more training in relevant areas, but you should expect an influx of funding soon. And if you think animal welfare is important, funding new high-risk startups that can take advantage of the wave of funding in a year may be an especially promising bet.

I still don't think that works out, given the energy cost of transmission and the distances involved.

This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require less maintenance.

My suggestion would be to improve the AI Governance section of aisafety.info.


cc: @melissasamworth / @Søren Elverlin / @plex 

...but interstellar communication is incredibly unlikely to succeed - they are far away, we don't know in which direction, and the required energy is enormous.
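A rough back-of-the-envelope version of the energy point (the 100-light-year distance and the 10^-26 W/m² detection threshold below are illustrative assumptions of mine, not figures from the thread): flux from an isotropic transmitter falls off with the square of the distance,

\[
S = \frac{P_t}{4\pi d^2}, \qquad d = 100\ \text{ly} \approx 9.5\times 10^{17}\ \text{m} \;\Rightarrow\; 4\pi d^2 \approx 1.1\times 10^{37}\ \text{m}^2,
\]

so delivering even \(S \approx 10^{-26}\ \text{W/m}^2\) at the receiver would take on the order of \(P_t \approx 10^{11}\ \text{W}\) of continuous broadcast power. A narrow beam cuts that by its gain, but only if you already know exactly where to point it - which is the unknown-direction problem.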

To possibly strengthen the argument made, I'll point out that moving already-effective money to a more effective cause or donation has a smaller counterfactual impact, because they are already looking at the question and could easily come to the same conclusion on their own. Moving money within a "Normie" foundation, on the other hand, can have knock-on effects of getting them to think more about impact at all and changing their trajectory.

I meant that I don't think it's obvious that most people in EA working on this would agree. 

I do think it's obvious that most people overall would agree, though most would either not agree that a simulation matters at all, or be unsure. It's even unclear how to count person-experiences overall, as Johnston's Personite paper argues (https://www.jstor.org/stable/26631215); I'll also point to the general double-counting problem (https://link.springer.com/article/10.1007/s11098-020-01428-9) and suggest that it could apply.

I need to write a far longer response to that paper, but I'll briefly respond (and flag to @Christian Tarsney) that my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondarily, for axiological weights, the response about egalitarian views leading to rejection of different axiological weights seems to beg the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.
