Similarly: what domains do you wish someone would take a deep dive into and write about their learnings?
A few months ago, I was chatting with Richard Ngo when we concluded that perhaps more EAs should learn things that no one else in the EA community is learning. In other words, it would be good if EAs' knowledge were less correlated.
I then asked him one of these questions, and his answer made me consider leaning into curiosity about corporate governance (one of the domains he named), with the aim of writing a post about my learnings and/or sharing my findings with him.
And so I figured I'd ask everyone: perhaps someone will look into one of these domains for you and give you some answers.
I believe I could do this. My background is just writing, argumentation, and community-building, I guess.
An idea that was floated recently was an interactive site that asks the user a few questions about themselves and their worldview, then tailors an introduction to EA for them.
I'm not sure how strong the need actually is, though. I get the impression that EA is such a simple concept (reasoned, evidence-based moral dialog; earnest consequentialist optimization of our shared values) that most misunderstandings of what EA is are deliberate, and having better explanations won't actually help much. It's as if people don't want to believe that EA is what it claims to be.
It's been a long time since I was outside of the rationality community, but I definitely remember having some sort of negative feeling about the suggestion that I could get better at foundational capacities like reasoning, or, in EA's case, knowing right from wrong.
I guess a solution there is to convince the reader that rationality/practical ethics isn't just a tool for showing off (which is zero-sum, so we wouldn't collectively benefit from improvements in the state of the art), and that being trained in it would make their life better in some way. I don't think LW ever actually developed the ability to sell itself as self-help (I think it just became a very good analytic philosophy school). That's where the work needs to be done.
What bad things will happen to you if you reject a VNM axiom or tell yourself pleasant lies? What choking cloud of regret will descend around you if you aren't doing good effectively?