I haven't downvoted or read the post, but one explanation is that the title "You're probably a eugenicist" seems clickbaity and aimed at persuasion. It reads as ripe for plucking out of context by our critics. I can immediately see it cited in the next major critique published by a major news org: "In upvoted posts on the EA Forum, EAs argue they can have 'reasonable' conversations about eugenics."
One approach to dealing with controversial ideas is to (a) use a different word and/or (b) make it more boring. If the title read something like "Most people favor selecting for valuable hereditary traits," my pulse would quicken less upon reading it.
Here's Bostrom's letter about it (along with the email) for context: https://nickbostrom.com/oldemail.pdf
Maybe a typo: the second AI (EA) should be AI (Work)?
AI (EA) did not have to care about mundane problems such as “availability of relevant training data” or even “algorithms”: the only limit ever discussed was amount of computation, and that’s why AI (EA) was not there yet, but soon would be, when systems would have enough computational power to simulate human brains.
Btw, really like your writing style! :)
Doing something to democratize randomized controlled trials (RCTs), thereby reducing the risk involved in testing new ideas and interventions.
RCTs are a popular methodology in medicine and the social sciences. They create a safety net for the scientists (and consumers) to test that the drug works as intended and doesn't turn people into mutants.
I think using this methodology in other fields, such as startups, policy-making, and education, would be a high-leverage intervention. Being able to try out new ideas without facing a huge downside should be a ...
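To make the core logic of an RCT concrete, here's a toy simulation sketch: participants are randomly assigned to treatment or control, and the difference in mean outcomes estimates the treatment's effect. All numbers (effect size, noise, sample size) are made up for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy example is reproducible

def run_toy_rct(n=1000, baseline=50.0, true_effect=5.0, noise_sd=10.0):
    """Simulate a randomized controlled trial with a known true effect.

    Each participant is randomly assigned to treatment or control.
    Randomization means the two groups differ, on average, only in
    whether they received the treatment, so the difference in mean
    outcomes is an unbiased estimate of the treatment effect.
    """
    treatment, control = [], []
    for _ in range(n):
        # Coin-flip assignment to one of the two arms.
        arm = treatment if random.random() < 0.5 else control
        # Outcome = baseline + random individual variation...
        outcome = baseline + random.gauss(0, noise_sd)
        # ...plus the treatment effect, but only for the treated arm.
        if arm is treatment:
            outcome += true_effect
        arm.append(outcome)
    return statistics.mean(treatment) - statistics.mean(control)

estimated_effect = run_toy_rct()
print(f"Estimated treatment effect: {estimated_effect:.2f}")
```

With a sample of 1000, the estimate should land close to the true effect of 5; the gap between estimate and truth shrinks as the sample grows, which is exactly the "safety net" property described above.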
I'm sorry, but I consider myself EA-adjacent-adjacent.
Isn't that a bit self-aggrandising? I prefer "aspiring EA-adjacent"