Will Payne

324 karma · Joined Jan 2020

Bio

I've been engaging with EA in some way since 2018: first by helping run EA Oxford, then by running the group and setting up remote 'fellowships'. Most recently I was on the CEA groups team looking into ways to support university groups.

I'm now looking for new things to work on, hoping to shift towards work which both:

  • Gives me more sustained motivation (possibly building systems and developing code).
  • Shifts my focus more sharply towards urgent sources of existential risk.

Comments (19)

Fwiw, “EA seems drawn to drama” is a take I've heard before, and I feel like it's kind of misleading. The truth is probably closer to “small communities are drawn to drama; EA is also drawn to drama and should (maybe) try to mitigate this”. It's not super clear to me whether EA is worse or better than its reference class. Modelling the community as unusually bad is easy to do from the inside and could lead us to correct against drama in the wrong ways.

I notice that you quoted:

The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years

This feels like a totally separate proposal, right? Evaluated separately, a world like 'Someone who I trust about as much as I trust Holden or Alex (for independent reasons) is running an independent org which allocates funding' seems pretty good. Specifically, it seems more robust to 'conflict-of-interest'-style concerns whilst keeping grant decisions in the hands of skilled grantmakers. (Maybe a smaller version of this proposal is splitting up Holden and Alex's arms of OP, although they seem independent enough that I'm not too bothered about this.)

I think we could start talking about how feasible it is to construct an organisation like this, and whether major funders like Holden would want to funnel money towards it, but at the very least it hasn't been ruled out by any of the reasoning above.

I agree with other commenters saying 9 minutes seems too long, but the general idea is good. I think a shorter, human-read summary would be really good. A lot of the videos like this I've seen also have some synthy soundtrack over the top, which I'd add, just because I was put off by it being missing.

I think this post is my favourite for laying out why a really convincing utilitarian argument for something which common sense says is very bad shouldn't move you. From memory, Eliezer says something like: ~Thinking there's a really good utilitarian argument doesn't mean the ends justify the means; it just means your flawed brain with weird motivations feels like there's a really good utilitarian argument. Your uncertainty in that always dominates and leaves room for common sense arguments, even when you feel really extra super sure. Common sense morality rules like “the ends shouldn't justify the means” arose because people in practice are very miscalibrated about when the ends actually justify the means, so we should take the outside view and assume we are too.~

(By miscalibrated I think I could defend a claim like “90% of the time, when people think the ends definitely justify the means and this clashes with common sense morality, they are wrong”.)

I might be butchering the post though so you should definitely read it.

https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans

I was responding mainly to the format. I don't expect you to get complete answers to your earlier two questions, because there's a lot more rationality methodology in EA than can be expressed in the amount of time I expect someone to spend on an answer.

If I had to put my finger on why the failure to answer those questions isn't as concerning to me as it seems to be for you, I'd say it's because:

A) Just because it's hard to answer doesn't mean EAs aren't holding themselves and each other to a high epistemic standard.

B) Something about the perfect not being the enemy of the good, and about the urgency of other work. I want humanity to have some good universal epistemic tools, but currently I don't have them, and I don't really have the option to wait to do good until I do. So I'll just focus on the best thing my flawed brain sees to work on at the moment (using what fuzzy technical tools it has, but still being subject to bias), because I don't have any other machinery to use.

I could be wrong, but my read from your comments on other answers is that we disagree most on B). E.g. you think current EA work would be better directed if we were able to have a lot more formally rational discussions, to the point that EA work or priorities should be put on hold (or slowed down) until we can do this.

Cross-posting to Nathan's post since it's pretty recent.

(Posting here so people who just read this post can easily see.)

From the comments, I think the consideration I'd missed was that names on posts hold people accountable for the content of their post.

TLDR: We don't have some easy-to-summarise methodology, and being rational is pretty hard. Generally we try our best, hold ourselves and each other accountable, and try to set up the community in a way that encourages rationality. If what you're looking for is a list of techniques to be more rational yourself, you could read this book of rationality advice or talk to people about why they prioritise what they do in a discussion group.

Some meta stuff on why I think you got unsatisfactory answers to the other questions

I wouldn't try to answer either of the previous questions because the answers seem big and definitely incomplete. I don't have a quick summary for how I would resolve a disagreement with another EA because there are a bunch of overlapping techniques that can't be described in a quick answer. 

To put it into perspective, I'd say the foundation of how I personally try to rationally approach EA is in the Rationality A-Z book, but that probably doesn't cover everything in my head and I definitely wouldn't put it forward as a complete methodology for finding the truth. For a specific EA spin, just talking to people about why they prioritise what they prioritise is what I've found most helpful, and an easy way to do that is in EA discussion groups (in person is better than online).

It is pretty unfortunate that there isn't some easy-to-summarise methodology or curriculum for applying rationality to charity. Current EA curricula are pretty focussed on just laying out our current best guess and using those examples, along with discussion, to demonstrate our methodology.

How is EA rational, then?

I think the main thing happening in EA is that there are strong personal, social, and financial incentives for people to approach their work "rationally". E.g. people in the community will expect you to have some reasoning which led you to do what you're doing, and they'll give feedback on that reasoning if they think it's missing an important consideration. From that spawns a bunch of people thinking about how to reason about this stuff more rationally, and we end up with a big set of techniques and concepts which seem to guide us better.

For the times when the authorship of a post probably affected how I interacted with it, I think those effects were negative. (E.g. they were closer to biasing against novel ideas from newcomers to the movement than to correctly promoting important updates about influential people/organisations in the movement to the frontpage.)
