Computer Science student @ University of Texas at Dallas
213 karma · Joined · Pursuing an undergraduate degree · Dallas, TX, USA



Trying to make sure the development of powerful artificial intelligence goes well.


  • EA group house?
  • Tech startup incubator?
  • Research bootcamp, e.g. MATS?

I agree that GHW is an excellent introduction to effectiveness, and we should watch out for the practical limitations of going too meta. But I want to flag that treating GHW as a pipeline to animal welfare and longtermism is problematic, both from a common-sense / moral uncertainty view (it feels deceitful, and that's something to avoid for its own sake) and from a long-run strategic consequentialist view: I think the EA community would last longer and look better if it focused on being transparent, honest, and upfront about what most members care about, and it's really important for the long-term future of society that the core EA principles don't die.

How about "early-start EA (EEA)"? As a term, it could sit neatly beside "highly-engaged EA (HEA)".

Where can I find thoughtful research on the relationship between AI safety regulation and the centralization of power?

This is excellent research! The quality of Rethink Priorities’ output consistently impresses me.

A couple of questions:

  • What software did you use to create figure 1?
  • What made you decide to use discrete periods in your model as opposed to a continuous risk probability distribution?

I agree that my answer isn't very useful by itself, without any object-level explanations, but I do think it is useful to bring up the anthropic principle if it hasn't been mentioned already. In hindsight, I think my answer comes off as unnecessarily dismissive.

Isn't the opposite end of the p(doom)–longtermism quadrant also relevant? E.g. my p(doom) is 2%, but I take the arguments for longtermism seriously and think that's a high enough chance to justify working on the alignment problem.
