Ben Millwood

New cause area: bivalve aquaculture

This should enable the nations become affluent more easily, because not many people would have to farm (efficiency gains would be relatively low) but industrial processing machinery will be invested into.

I don't understand this. More easily than what? What's your story for why people aren't doing this already, if it would make them more affluent?

New cause area: bivalve aquaculture

I’m informed that EAs do not care about climate change

This is an exaggeration IMO. EAs care about climate change, but often don't prioritise it, because they care about other things even more. If everything more important than climate change were solved, I think EAs would be working pretty hard on climate change.

Why the EA aversion to local altruistic action?

A brief response to one point: if you are including second-order and third-order effects in your analysis, you should include them on both sides. Yes, donating to a local cause fosters connections in the community and ultimately state capacity and so on. But saving people from malaria does that stuff too, and intuitively when the first order effects are more dramatic, one expects the second order effects to be correspondingly more dramatic: you meet a new friend at your local animal shelter, and meanwhile the child that didn't die of malaria meets a whole life's worth of people, their family has less grief and trauma, their community has greater certainty and security. Of course, it's really hard to be sure of the whole story, but I don't see any reason to suppose that going one step deeper in the analysis will totally invert the conclusion of the first-level analysis.

Bad Omens in Current Community Building

I feel a desire to lower some expectations:

  • I don't think any social movement of real size or influence has ever avoided drawing some skepticism, mockery, or even suspicion.
  • I think community builders should have a solid and detailed enough understanding of EA received wisdom to be able to lay out the case for our recommendations in a reasonably credible way, but I don't think it's reasonable to expect them to be domain experts in every domain, and that means that sometimes they aren't going to be able to seem impressive to every domain expert that comes to us.
  • To be frank, it isn't realistic to be able to capture the imagination of everyone who seems promising even if we make the best possible versions of our arguments. Some people will inevitably come away thinking we "just don't get it", that we haven't addressed their objections, that we're not serious about [specific concern X] and therefore our point of view is uninteresting. Communication channels just aren't high-fidelity enough, and people's engagement heuristics aren't precise enough, to avoid this happening from time to time.
  • When some people are weirded out by the way we behave or try to attract new members, it seems to me like sometimes this is just reasonable self-protective heuristics that they have, working exactly as intended. People are creeped out by us giving them free books or telling them to change their careers or telling them that the future of humanity is at stake, because they reason "these people are putting a lot into me because they want a lot out of me". They're basically correct about that! While we value contributions from people at a wide range of levels of engagement and dedication, the "top end" is pretty extreme, as it should be, and some people are going to notice that and be worried about it. We can work to reduce that tension, but I don't think it's going away.

Obviously we should try our best on all of these dimensions, progress can be made, we can be more impressive and more appealing and less threatening and more welcoming. But I can't imagine a realistic version of the EA community that honestly communicates about everything we believe and want to do and doesn't alienate anyone by doing that.

What is meant by 'infrastructure' in EA?

I think EA uses the word in a basically standard way. I imagine there being helpful things to say about "what do we mean by funding infrastructure" or "what kind of infrastructure is the EA Infrastructure Fund meaning to support", but I don't know that there's anything to say in a more general context than that.

Launching SoGive Grants

Why do you think it's valuable? I don't think we have this norm already, and it's not immediately obvious to me how it would change my behaviour.

Bad Omens in Current Community Building

I don't think we have a single "landing page" for all the needs of the community, but I'd recommend applying for relevant jobs, getting career advice, going to an EA Global conference, or figuring out which local community groups are near you and asking them for advice.

Bad Omens in Current Community Building

I agree with paragraphs 1 and 2 and disagree with paragraph 3 :)

That is: I agree longtermism and x-risk are much more difficult to introduce to the general population. They're substantially farther from the status quo and have weirder and more counterintuitive implications.

However, we don't choose what to talk about by how palatable it is. We must be guided by what's true, and what's most important. Unfortunately, we live in a world where what's palatable and what's true need not align.

To be clear, if you think global development is more important than x-risk, it makes sense to suggest that we should focus that way instead. But if you think x-risk is more important, the fact that global development is less "weird" is not enough reason to lean back that way.

Against immortality?

I don't buy the asymmetry of your scope argument. It feels very possible that totalitarian lock-in could have billions of lives at stake too, and cause a similar quantity of premature deaths.

Free-spending EA might be a big problem for optics and epistemics

Apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. been thinking of 1 as the maximally uncomfortable score?