This was helpful; I agree with most of the problems you raise, but I think they're objecting to something a bit different from what I have in mind.
Agreement: 1a, 1b, 2a
Differences: 2b, 3a, alternatives
Background on my views on the EA community and epistemics
Epistemic status: Passionate rant
I think protecting and improving the EA community's epistemics is extremely important, and we should be very careful about taking actions that could hurt it in order to improve on other dimensions.
First, I think that the EA community's epistemic advantage over the rest of the world in terms of both getting to true beliefs via a scout mindset, and taking the implications seriously is extremely important for the EA community's impact. I think it might be even more important ...
Enjoyed the post but I'd like to mention a potential issue with points like these:
I’m skeptical that we should give much weight to message testing with the “educated general public” or the reaction of people on Twitter, at least when writing for an audience including lots of potential direct work contributors.
I think impact is heavy-tailed and we should target talented people with a scout mindset who are willing to take weird ideas seriously.
I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e....
Reading this post reminded me of someone whose work may be interesting to look into: Rufus Pollock, a former academic economist who founded the Open Knowledge Foundation. His short book (freely available here) makes the case for replacing traditional IP, like patents and copyright, with a novel kind of remuneration. The major benefits he mentions include increasing innovation and creativity in art, science, technology, etc.
Thanks for writing this!
This is very reasonable; 'no predictive power' is a simplification.
Purely academically, I am sure a well-reasoned Bayesian approach would get us closer to the truth. But I think the conclusions drawn still make sense for three reasons.
Thanks for the comment!
I think it's entirely plausible that these two measures were systematically capturing something other than what we took them to be measuring. The confusing part is what they were in fact measuring, and why those traits had negative effects.
(The way we judged open-mindedness, for example, was by asking applicants to write down an instance where they changed their minds in response to evidence.)
But I do think the most likely case is the small sample.
Nodding profusely while reading; thanks for the rant.
I'm unsure if there's much disagreement left to unpack here, so I'll just note this:
- If Will was in fact not being fully honest about the implications of his own views, I strongly doubt this could be worth any potential benefit. (I also doubt there'd be much upside anyway, given what's already in the book.)
- If the claim is purely about framing, I can see very plausible stories for costs regarding people entering the EA community, but I can also see stories for the benefits.