All of SamiPetersen's Comments + Replies

Nodding profusely while reading; thanks for the rant.

I'm unsure if there's much disagreement left to unpack here, so I'll just note this:

  • If Will was in fact not being fully honest about the implications of his own views, then I strongly doubt that this could be worth any potential benefit. (I also doubt there'd be much upside anyway, given what's already in the book.)
  • If the claim is purely about framing, I can see very plausible stories for costs regarding people entering the EA community, but I can also see stories for the benefits
... (read more)
elifland (2y):
Roughly agree with both of these bullet points! I want to be very clear that I have no reason to believe that Will wasn't being honest; on the contrary, I believe he very likely was. My concerns are about framing. And I agree the balance of costs and benefits regarding framing isn't super obvious, but I am pretty concerned about the possible costs.

This was helpful; I agree with most of the problems you raise, but I think they're objecting to something a bit different from what I have in mind.

Agreement: 1a, 1b, 2a

  • I am also very sceptical that >25% of the general public satisfies (1a) or (1b). I don't think these are the main mechanisms through which the general public could matter regarding TAI. The same applies to (2a).

Differences: 2b, 3a, alternatives

  • On (2b): I'm a bit sceptical that politicians or policymakers are sufficiently nitpicky for this to be a big issue, but I'm not confident here. WWOTF m
... (read more)

Background on my views on the EA community and epistemics

Epistemic status: Passionate rant

I think protecting and improving the EA community's epistemics is extremely important, and we should be very, very careful about taking actions that could hurt it in order to improve on other dimensions.

First, I think the EA community's epistemic advantage over the rest of the world, in terms of both getting to true beliefs via a scout mindset and taking their implications seriously, is extremely important for the EA community's impact. I think it might be even more important ... (read more)

Enjoyed the post, but I'd like to mention a potential issue with points like these:

I’m skeptical that we should give much weight to message testing with the “educated general public” or the reaction of people on Twitter, at least when writing for an audience including lots of potential direct work contributors. 

I think impact is heavy-tailed and we should target talented people with a scout mindset who are willing to take weird ideas seriously.

I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e.... (read more)

elifland (2y):
I agree with this. The part you quoted is from the appendix, and in an ideal world it would be more rigorously argued, with the claims you identified separated more cleanly. But in practice it should probably be thought of more as "stream-of-consciousness reactions from Eli as he read Will's posts/comments" (which is part of why I put it in the appendix).

Epistemic status: speculation about something I haven't thought about that much (TAI governance and public opinion)

I appreciate you making the benefits more concrete. However, I'm still not sure I fully understand the scenario where WWOTF moves the needle here, or how much it would help compared to alternatives. I'll list my best guess at more explicit steps on the path to impact (let me know if I'm assuming wrong; a lot of this is guessing!), along with my skepticisms about each step:

  1. Many in the general public read WWOTF, and over time, through ideas spreading in various ways, many people become much more on board with the general idea of longtermism.
      1. I'm skeptical that >~25% of the general public both (a) has the bandwidth/slack to care about the long-term future as opposed to their current issues and (b) is philosophically inclined enough to think about morality in this way. Maybe this could happen as a cultural shift over the course of several generations, but it feels like <5% to me in <40-year-timelines worlds.
  2. We either (a) convince the general public to care a specifically large amount about misaligned AI risk and elect politicians who care about it, or (b) get politicians on board with general longtermist platforms when actually the thing we care about most is misaligned AI risk.
      1. My skepticism about (a) is that if you really believe the general public is savvy enough to get on board with a large amount of misaligned AI risk, I feel like you should also believe they're savvy enough to feel bait-and-switched by this two-step conversion process rather than us being more upfront about our

Reading this post reminded me of someone whose work may be interesting to look into: Rufus Pollock, a former academic economist who founded the Open Knowledge Foundation. His short book (freely available here) makes the case for replacing traditional IP, like patents and copyright, with a novel kind of remuneration. The major benefits he mentions include increasing innovation and creativity in art, science, technology, etc.

Thanks for writing this!

This is very reasonable; 'no predictive power' is a simplification.

Purely academically, I am sure a well-reasoned Bayesian approach would get us closer to the truth. But I think the conclusions drawn still make sense for three reasons.

  1. I did not specify this in the table, but the p-values for the insignificant coefficients were very high, often around p = 0.85. I think this constitutes so little evidence that the resulting Bayesian update would be too minor to be worth formally conducting (a quick sketch after this comment makes the point concrete).
  2. Given that we do have evidence of some other variables being pred
... (read more)
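To make the p = 0.85 point concrete: under a simple normal approximation (my framing here, not part of the original analysis), a two-sided p-value that high corresponds to a Bayes factor of roughly 1.4 in favour of the null, i.e. an almost negligible update. A minimal sketch, assuming a standard normal test statistic and a hypothetical N(0, 1) prior on the standardised effect:

```python
from scipy.stats import norm

p = 0.85                    # two-sided p-value from the regression table
z = norm.isf(p / 2)         # implied |z|-statistic, about 0.19

tau = 1.0                   # assumed prior SD of the standardised effect under H1 (hypothetical)
like_h0 = norm.pdf(z, loc=0, scale=1)                    # likelihood of the data under "no effect"
like_h1 = norm.pdf(z, loc=0, scale=(1 + tau**2) ** 0.5)  # marginal likelihood under the N(0, tau^2) prior
bf01 = like_h0 / like_h1
print(f"z = {z:.2f}, Bayes factor for the null = {bf01:.2f}")  # ~1.4: barely any evidence either way
```

Varying tau changes the exact number, but with a z-statistic this small the data can only mildly favour the null, which is the sense in which the update is too minor to be worth formally conducting.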

Thanks for the comment! 

I think it's completely plausible that these two measures were systematically capturing something other than what we took them to be measuring. The confusing part is what they were in fact measuring, and why those traits had negative effects.

(The way we judged open-mindedness, for example, was by asking applicants to write down an instance where they changed their minds in response to evidence.)

But I do think the most likely explanation is the small sample size.
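On the small-sample point, a quick simulation shows how easily pure noise can produce apparent negative effects of this kind. This is only an illustrative sketch: the sample size of 30 and the -0.2 threshold are assumptions for the example, not figures from our hiring round.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 30, 10_000           # n = 30 applicants is assumed for illustration
negative = 0
for _ in range(trials):
    x = rng.standard_normal(n)   # trait score with zero true effect on the outcome
    y = rng.standard_normal(n)   # outcome, generated independently of x
    slope = np.polyfit(x, y, 1)[0]
    if slope < -0.2:             # looks like a nontrivial negative effect
        negative += 1
print(f"Runs showing a sizeable negative coefficient: {negative / trials:.0%}")  # roughly 14%
```

At this sample size, roughly one noise-only run in seven produces a coefficient that looks like a real negative effect, so a spurious result would be unsurprising.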