All of Will Payne's Comments + Replies

Fwiw “EA seems drawn to drama” is a take I’ve heard before and I feel like it’s kind of misleading. The truth is probably closer to “small communities are drawn to drama; EA is also drawn to drama and should (maybe) try to mitigate this”. It’s not super clear to me whether EA is worse or better than its reference class. Modelling the community as unusually bad is an easy story to tell from the inside and could lead us to correct against drama in the wrong ways.

I notice that you quoted:

The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years

This feels like a totally separate proposal, right? Evaluated separately, a world like 'Someone who I trust about as much as I trust Holden or Alex (for independent reasons) is running an independent org which allocates funding' seems pretty good. Specifically, it seems more robust to 'conflict-of-interest'-style concerns whilst keeping grant decisions in the hands of skilled grantmakers. (Maybe a smaller ver... (read more)

I agree with other commenters saying 9 mins seems too long, but the general idea is good. I think a human-read, shorter summary would be really good. A lot of the videos like this I’ve seen also have some synthy soundtrack over the top, which I’d add, just because I was put off by it being missing.

I think this post is my favourite for laying out why a really convincing utilitarian argument for something which common sense says is very bad shouldn’t move you. From memory, Eliezer says something like: ~Thinking there’s a really good utilitarian argument doesn’t mean the ends justify the means, it just means your flawed brain with weird motivations feels like there’s a really good utilitarian argument. Your uncertainty in that always dominates and leaves room for common sense arguments, even when you feel really extra super sure. Common sense morality rules l... (read more)

I was responding mainly to the format. I don’t expect you to get complete answers to your earlier two questions because there’s a lot more rationality methodology in EA than can be expressed in the amount of time I expect someone to spend on an answer.

If I had to put my finger on why I don’t feel like the failure to answer those questions is as concerning to me as it seems to be for you, I’d say it’s because:

A) Just because it’s hard to answer doesn’t mean EAs aren’t holding themselves and each other to a high epistemic standard

B) Something about perfect not bein... (read more)

Elliot Temple (2y): I think I disagree with you on both A and B, as well as some other things. Would you like to have a serious, high-effort discussion about it and try to reach a conclusion?

Sharang Phadke (2y): Thanks, I've recorded this, and I think it's a good idea!

Cross-posting to Nathan's post since it's pretty recent.

(Posting here so people who just read this post can easily see)

From the comments, I think the consideration I'd missed was that names on posts hold people accountable for the content of their post.

TLDR: We don't have some easy-to-summarise methodology, and being rational is pretty hard. Generally we try our best, hold ourselves and each other accountable, and try to set up the community in a way that encourages rationality. If what you're looking for is a list of techniques to be more rational yourself, you could read this book of rationality advice or talk to people about why they prioritise what they do in a discussion group.

Some meta stuff on why I think you got unsatisfactory answers to the other questions:

I wouldn't try to answer either of the pr... (read more)

Elliot Temple (2y): Trying to address only one thing at a time: I don’t think I asked for an “easy to summarise methodology” and I’m unclear on where that idea is coming from.

I'd be interested to hear if my experience is similar to others. Use agree-disagree voting on my replies to this comment to vote in this poll.

Will Payne (2y): For times when the authorship of a post probably affected how I interacted with it, I think those effects were negative. (E.g. they were closer to biasing against novel ideas from newcomers to the movement than correctly promoting important updates about influential people/organisations in the movement to the frontpage.)

Will Payne (2y): I can think of a time where the authorship of a post probably affected how I interacted with it.

I recently applied for funding and found it helpful to look at my month-to-month spending over the last few months. I guessed at a rough mean monthly spend over 6 months, but I might have been better off picking a median. I also forgot to account for tax!
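(For anyone doing a similar estimate, here's a minimal sketch of the mean-vs-median comparison and a rough tax adjustment. The monthly figures and flat tax rate are completely made up, purely for illustration.)

```python
# Rough budget estimate for a funding application (all numbers hypothetical).
from statistics import mean, median

monthly_spend = [1450, 1320, 2100, 1400, 1380, 1500]  # last 6 months, made up

mean_spend = mean(monthly_spend)      # pulled up by the one unusually expensive month
median_spend = median(monthly_spend)  # more robust to that outlier

assumed_tax_rate = 0.20  # placeholder; substitute your actual marginal rate
annual_pre_tax_ask = median_spend * 12 / (1 - assumed_tax_rate)

print(f"mean: {mean_spend:.0f}, median: {median_spend:.0f}, "
      f"annual pre-tax ask: {annual_pre_tax_ask:.0f}")
```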

I’m a bit put off by the claim that the student focus is a historical coincidence. It seems to me that people have really doubled down on that commitment because of the career flexibility students have (as Thomas mentioned). Maybe in the early years of the movement this was a historical coincidence, but I don’t think ‘historical coincidence’ is an honest answer to the question “Why has EA community building been focused on students?” nowadays.

Fwiw I don’t think we should focus only on students and agree with a lot of the rest of the post

DavidNash (2y): I'm maybe 70% on that claim. I do think founder effects play a big part in most movements and that early success has led people to overweight current strategies.

In the scenario where AGI would 100% be malevolent, it seems like slowing progress is very good and all AIS people should pivot to slowing or stopping AI progress. Unless we’re getting into “is x-risk bad given the current state of the world” arguments, which become a lot stronger if there’s no safe AI utopia at the end of the tunnel. Either way, it seems like it’s not irrelevant.

Hi Thomas, great question. I’ve included a list below for our records as of today (mid Nov).

It’s worth noting that we think any of these groups could absorb at least 2 FTE, so I’d like people looking at these numbers not to be put off applying for the Campus Specialist Internship or Campus Specialist Programme based on the amount of FTE they currently have (although if you’d want to work on some of the under-supported groups, that would be amazing).

  • Berkeley 1.9 FTE (of which 1 FTE is EAIF funded)
  • Brown 0.25 FTE
  • Caltech 0 FTE
  • Cambrid
... (read more)

Also worth noting that there are a bunch of other, more accessible descriptions of longtermism out there, and this is specifically a formal definition aimed at an academic audience (by virtue of being a GPI paper).

Once again, I really like this model, Bob. I'm pretty excited to see how this model changes with even more time to iterate. I'd never come across the formalised idea of slack before, and I think it describes a lot of what was in my head when responding to your last post!

I'm wondering how you've been thinking about marginal spending in this model? I.e. if we're patient philanthropists, which choices should we spend money on, and for which should we save, once we factor in that some choices are easier to affect than others? For example, one c... (read more)

I really like this model and will probably use it to think about hingeyness quite a lot now!

I'll make an attempt to give my idea of hingeyness; my guess is that hingeyness is a new enough idea that there isn't really a correct answer out there.

You can think of every choice in this model as changing the distribution of future utilities (not just at the next time step, but the sum across all time). Hingier choices are those which change this distribution more than others do. For example, a choice where one future branch includes -1000 and a bu... (read more)
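To make that concrete, here's a minimal sketch of one way you could operationalise it, with a toy branching model and made-up probabilities and utilities (this is just my own illustration, not anything from Bob's post):

```python
# Toy sketch: each choice at a decision point induces a distribution over
# total future utility (summed across all time, not just the next step).
# One crude way to score how "hingey" the decision point is: how far apart
# the expected utilities of the available choices are.

def expected_utility(branches):
    """branches: list of (probability, total_future_utility) pairs."""
    return sum(p * u for p, u in branches)

def hinginess(decision):
    """Largest gap in expected future utility between the available choices."""
    evs = [expected_utility(branches) for branches in decision.values()]
    return max(evs) - min(evs)

# Hypothetical decision points (all numbers made up).
low_stakes = {
    "option_a": [(0.5, 100), (0.5, 110)],
    "option_b": [(0.5, 96), (0.5, 116)],
}
high_stakes = {
    "option_a": [(0.5, -1000), (0.5, 50)],
    "option_b": [(0.5, 100), (0.5, 1000)],
}

print(hinginess(low_stakes))   # small: which option you pick barely matters
print(hinginess(high_stakes))  # large: this choice swings future utility a lot
```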