Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7477 karma · Joined · Working (6-15 years)

Participation (4)

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences (2)

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments (918)

Topic contributions (1)

Strong +1 to the extra layer of scrutiny, but at the same time, there are reasons that privileged people end up at the top in most places, having to do with the real advantages they have and bring to the table. This is unfair and bad for society, but it is also a fact to deal with.

If we wanted to try to address the unfairness and disparity, that seems wonderful, but simply recruiting people from less privileged groups doesn't accomplish what is needed. Some obvious additional pieces of the puzzle include providing actual financial security to less privileged people, helping them build networks with influential people outside of EA, and offering coaching and feedback.

Those all seem great, but I'm uncertain they are a reasonable use of the community's limited financial resources - and we should nonetheless acknowledge this as a serious problem.

This seems great, but it does something I keep seeing and find kind of indefensible: it assumes that longtermism requires consequentialism.

Given the resolution criteria, the question is in some ways more about Wikipedia policies than the US government...

What about the threat of strongly superhuman artificial superintelligence?

79% agree

If we had any way of tractably doing anything with future AI systems, I might think there was something meaningful to talk about for "futures where we survive."

See my post here arguing against that tractability.

we can make powerful AI agents that determine what happens in the lightcone


I think you should articulate a view that explains why you think AI alignment of superintelligent systems is tractable, so that I can understand how you think it's tractable to allow such systems to be built. That seems like a pretty fundamental disconnect that keeps me from understanding your (in my view, facile and unconsidered) argument about the tractability of doing something that seems deeply unlikely to happen.

There is a huge range of "far future" that different views prioritize differently, and not all of them need to care about the cosmic endowment at all - people can, for example, care about the coming 2-3 centuries based on low but nonzero discount rates, but not care much about the longer-term future.

First, you're adding the assumption that the framing must be longtermist, and second, even conditional on longtermism, you don't need to be a utilitarian, so the supposition that you need a model of what we do with the cosmic endowment would still be unjustified.

You introduce a dichotomy not present in my post, then conflate the two types of interventions while focusing only on AI risk - so you're saying that two different kinds of what most people would call extinction reduction efforts are differently tractable - and conclude that there's a definitional confusion.

To respond: first, that has little to do with my argument, but if it's correct, your problem is with the entire debate week framing, which you think doesn't present two distinct options, not with my post! And second, look at the other comments that bring up other types of change as quality-increasing, and try to do the same analysis without creating new categories; you'll then understand better what I was saying.

  • If you think extinction risk reduction is highly valuable, then you need some kind of a model of what Earth-originating life will do with its cosmic endowment


No, you don't, and you don't even need to be utilitarian, much less longtermist!
