662 karma · Joined June


I follow Crocker's rules.


Answer by niplav · Sep 14, 2023

Someone on atomically precise manufacturing: a "big if true" topic that floats around EA but is never really tackled head-on. I don't know how good Eric Drexler is on podcasts, but he'd be an obvious candidate.

Or whoever wrote this report.

Answer by niplav · Sep 14, 2023
  • Carl Shulman (again): his interviews on the Dwarkesh Patel podcast were incredible, and there seems to be potential for more
  • Vaclav Smil, who appears to be very knowledgeable, with a comprehensive model of the entire world. His books are filled with facts.
  • Lukas Finnveden, on his blog posts about the altruistic implications of Evidential Cooperation in Large Worlds
  • Some employee of MIRI who is not Yudkowsky. I suggest
    • Tsvi Benson-Tilsen (blog), who has appeared on at least one podcast which I liked. He has looked into human intelligence enhancement and a variety of other problems, such as communication, and generally has longer AI timelines.
    • Or Scott Garrabrant, but I don't know how interesting his interview would be for a nontechnical audience.
  • Another interview on wild animal welfare, perhaps with someone from Wild Animal Initiative.
    • Perhaps invite Brian Tomasik on the podcast?
  • Romeo Stevens (blog), mainly for his approach to his career: he founded a startup to support himself early on, and is now independent. He doesn't tend to write his ideas down; here's an interview which details some of them.

Although I'd consider the counter-arguments against multiplicative decomposition to be decent evidence against it.

Disagreed: animal moral patienthood competes with all the other possible interventions effective altruists could be pursuing, and does so symmetrically (the opportunity cost cuts in both directions!).

I find this comment much more convincing than the top-level post.


I would very much prefer it if one didn't appeal to the consequences of believing in animal moral patienthood, and instead argued about whether animals are in fact moral patients, or whether the question is even well-posed.

For this reason, I have strong-downvoted your comment.

I consider myself more culturally rationalist than EA; see my (short) answer above. The real answer is 10k words long and probably not worth the effort per unit of insight/importance.

Answer by niplav · Aug 11, 2023

The rationality community has a far lower focus on morality, and has members who are amoral or completely selfish. I'll go out on a limb and claim that they also have a broader set of interests, since there is less of a restriction on where attention can be focused (EA wants to do good; the rationality community is interested in truth, and truth can be found about basically anything).

I don't remember getting this from you, but maybe you mentioned it on 𝕏. I actually had to look up the difference, and hope I got it right.
