SamiPetersen

73 · Joined Feb 2021

Comments (6)

Nodding profusely while reading; thanks for the rant.

I'm unsure if there's much disagreement left to unpack here, so I'll just note this:

  • If Will was in fact not being fully honest about the implications of his own views, then I doubt pretty strongly that this could be worth any potential benefit. (I also doubt there'd be much upside anyway, given what's already in the book.)
  • If the claim is purely about framing, I can see very plausible stories for costs regarding people entering the EA community, but I can also see stories for the benefits I mentioned before. I find it non-obvious that a lack of prioritisation/quantification in WWOTF leads to a notably lower-quality EA community, as misconceptions may be largely corrected when people try to engage with the existing community. Though I could very easily change my mind on this; e.g., it would worry me to see lots of new members with similar misconceptions enter at the same time. The magnitude of the pros and cons of the framing seems like an interestingly tough empirical question.

This was helpful; I agree with most of the problems you raise, but I think they're objecting to something a bit different than what I have in mind.

Agreement: 1a, 1b, 2a

  • I am also very sceptical that >25% of the general public satisfies (1a) or (1b). I don't think these are the main mechanisms through which the general public could matter regarding TAI. The same applies to (2a).

Differences: 2b, 3a, alternatives

  • On (2b): I'm a bit sceptical that politicians or policymakers are sufficiently nitpicky for this to be a big issue, but I'm not confident here. WWOTF might just have the effect of bringing certain issues closer to the edges of the Overton window. I find it plausible that the most effective way to make AI risk one of these issues is in the way WWOTF does it: get mainstream public figures and magazines talking about it in a very positive way. I could see how this might've been far harder with a book that allows people to brush it off as tech-bro BS more easily.

    On there being intellectual dishonesty: I worry a bit about this, but maybe Will is just providing his perspective and that's fine. We can still have others in the longtermist community disagree on various estimates. Will, for one, has explicitly tried not to be seen as the leader of a movement of people who just follow his ideas. I'd be surprised if differences within the community become widely seen as intellectual dishonesty from the outside (though of course isolated claims like these have been made already).

    So, maybe what we want from politicians and policymakers during important moments is for them to be receptive to good ideas. The perceived prioritisation of AI within longtermist writing might just not turn out to be that crucial. I'm open to changing my mind on this, but I don't expect there to be much conflict between different longtermist priorities such that policymakers will in fact need to choose between them. That's a reason I'd expect that the best we can do is to make certain problems more palatable, so that when an organisation tells policymakers "we need policy X, else we raise the risk of AI catastrophe", they are more likely to listen.
     
  • On (3a): I'm also very uncertain here but conditional on some kind of intent alignment, it becomes a lot more plausible to me that coordination with the world outside top labs becomes valuable, e.g., on values, managing transitions, etc. (especially if takeoff is slow).
     
  • On alternative uses of time: Those three projects seem great and might have better EV per unit of effort, but that's consistent with great writers and speakers like Will having a comparative advantage in writing WWOTF.
     

The mechanism I have in mind is a bit nebulous. It's in the vein of my response to (2a), i.e., creating intellectual precedent, making odd ideas seem more normal, etc., to create an environment (e.g., in politics) more receptive to proposals and collaboration. This doesn't have to happen through widespread understanding of the topics. One (unresearched) analogue might be antibiotic resistance. People in general, including myself, know next to nothing about it, but this weird concept has become respectable enough that when a policymaker Googles it, they know it's not just some kooky fear that nobody outside strangely named research centres worries about or respectfully engages with.

Enjoyed the post but I'd like to mention a potential issue with points like these:

I’m skeptical that we should give much weight to message testing with the “educated general public” or the reaction of people on Twitter, at least when writing for an audience including lots of potential direct work contributors. 

I think impact is heavy-tailed and we should target talented people with a scout mindset who are willing to take weird ideas seriously.

I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e.g., during 'crunch time' or when trying to handle value lock-in. If this is true and WWOTF helps achieve this, it can justify writing a book that focuses less on people who are already prone to react in ways we typically associate with a scout mindset. Increasing direct work in the usual sense is one thing to optimise for; another is creating an environment receptive to proposals and cooperation with those who do direct work.

So although I understand that you're not making strong claims about other groups like the general public or policymakers, I think it's worth mentioning that "I'd rather recommend The Precipice to people who might do impactful work" and "WWOTF should have been written differently" are very importantly distinct claims.

Reading this post reminded me of someone whose work may be interesting to look into: Rufus Pollock, a former academic economist who founded the Open Knowledge Foundation. His short book (freely available here) makes the case for replacing traditional IP, like patents and copyright, with a novel kind of remuneration. The major benefits he mentions include increasing innovation and creativity in art, science, technology, etc.

Thanks for writing this!

This is very reasonable; 'no predictive power' is a simplification.

Purely academically, I am sure a well-reasoned Bayesian approach would get us closer to the truth. But I think the conclusions drawn still make sense for three reasons.

  1. I did not specify this in the table, but the p-values for the insignificant coefficients were very high, often around p = 0.85. I think this constitutes so little evidence that the corresponding Bayesian update would be too minor to be worth conducting formally (see the short sketch after this list).
  2. Given that we do have evidence of some other variables being predictive, updating in favour of weighting those higher still makes sense (although maybe to a lesser degree than I implied in the post).
  3. The time applicants and facilitators spend on the many different criteria we used is a cost (and a meaningful one for smaller groups). I would guess that cutting down the number of variables would increase productivity by more than the value of the small updates we could make from variables with little (but non-zero) predictive power.
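A minimal sketch of the point in (1), assuming a normal approximation for the test statistic (the numbers are illustrative, not taken from our actual regression): a two-sided p-value of about 0.85 implies the coefficient estimate sits only around 0.19 standard errors from zero, which is why I treat it as close to no evidence either way.

```python
# Illustrative only: how far from zero is a coefficient whose two-sided
# p-value is 0.85, measured in standard errors? (Normal approximation.)
from scipy.stats import norm

p_value = 0.85                 # roughly the p-values mentioned above
z = norm.ppf(1 - p_value / 2)  # implied |estimate / standard error|
print(f"p = {p_value}  ->  |z| ~ {z:.2f}")  # about 0.19
```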

Thanks for the comment! 

I think it's completely plausible that these two measures were systematically capturing something other than what we took them to be measuring. The confusing part is what they were in fact measuring, and why those traits had negative effects.

(The way we judged open-mindedness, for example, was by asking applicants to write down an instance where they changed their minds in response to evidence.)

But I do think the most likely explanation is the small sample size.