Ben Millwood

993 karma · Joined Dec 2015 · 1 post · 137 comments

Though the flipside of this is that I don't think we have a bunch of people sitting around thinking "ah, I would do a cost-benefit analysis, but none of the things to analyse are worth my time", so reading this post probably doesn't generate LICAs unless we also figure out what people are missing that would let them do more of this stuff.

I expect partly it's just that doing Real, Important Research is more intimidating than it deserves to be, and it would be useful to try to "demystify" some of this work a bit.

Another possible benefit is that doing cost-benefit analyses might make you better at doing other cost-benefit analyses, or give you other transferable skills or knowledge that are helpful for top-priority causes. I think that for all our enthusiasm about these kinds of assessments, we don't actually as a community produce that many of them. Scaling up the analysis industry might lead to all sorts of improvements in how quickly and accurately we can do them.
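To make the "demystifying" point above concrete, here is a minimal sketch of what a first-pass cost-effectiveness estimate can look like. Everything in it — the intervention, the input ranges, the numbers — is invented for illustration; the point is just that a useful first pass can be a couple of uncertain inputs and a Monte Carlo loop, not a research programme.

```python
import random

def cost_per_outcome(n_samples: int = 100_000) -> float:
    """Crude Monte Carlo estimate of cost per outcome for a made-up
    intervention. Both input ranges below are invented placeholders."""
    total = 0.0
    for _ in range(n_samples):
        cost_per_unit = random.uniform(2.0, 6.0)     # $ per unit delivered (assumed)
        units_per_outcome = random.uniform(50, 200)  # units needed per outcome (assumed)
        total += cost_per_unit * units_per_outcome
    return total / n_samples

print(f"mean cost per outcome: ${cost_per_outcome():,.0f}")
```

A real analysis would sanity-check the inputs and report a range rather than a point estimate, but even something this simple is often enough to tell whether an idea is worth a closer look.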

I think it's more like: CEA projects are limited by other essential resources (like staff, management capacity, onboarding capacity) before they run out of money.

(I agree it's not 0/1 exactly, but it's not as easy as you'd think to just spend more money and get more good stuff.)

Upvoted because I don't think this tension is discussed enough, even if only to refute it.

It strikes me that the median non-EA is more risk-averse than EAs should be, so in moving from non-EA to EA you should probably drop some of your risk aversion. But it also seems true that the top-performing people in your field might disproportionately be people who took negative-EV bets and got lucky, so we don't necessarily want to be less risk-averse than them.
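A toy simulation of that survivorship point, with made-up payoffs: even when a risky strategy has lower expected value, the very top of the outcome distribution can still be dominated by lucky risk-takers.

```python
import random

random.seed(0)

N = 100_000  # people following each strategy; all numbers invented

# Safe strategy: steady payoff, modest noise. Mean 100.
safe = [random.gauss(100, 10) for _ in range(N)]

# Risky strategy: big win with small probability, otherwise a poor outcome.
# Expected value: 0.05 * 1000 + 0.95 * 40 = 88 — worse than safe on average.
risky = [1000 if random.random() < 0.05 else 40 for _ in range(N)]

everyone = [("safe", x) for x in safe] + [("risky", x) for x in risky]
top_100 = sorted(everyone, key=lambda pair: pair[1], reverse=True)[:100]

print("mean safe: ", sum(safe) / N)                                # ~100
print("mean risky:", sum(risky) / N)                               # ~88
print("risky in top 100:", sum(s == "risky" for s, _ in top_100))  # ~100
```

So "copy what the top performers did" can select for exactly the bets it was correct to avoid.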

I think we should eliminate any discussion of attractiveness from professional spaces (as is the norm in professional spaces generally, I'd hope), but not all EA spaces are professional spaces, and especially given that EAs often date within EA, I think it's reasonable to have a normal, respectful amount of discussion of physical appearance in social spaces. (At the same time, I agree with you that ranking every nearby woman by physical attractiveness is not respectful, and I'm on board with calling that kind of thing out as inappropriate in any context.)

I agree we should avoid fixating on it, overvaluing it, or letting our preferences about physical appearance leak into any other part of our opinion of a person, but I think suppressing it altogether is too much. It's part of how people interact with the world, and trying to deny that it exists isn't ultimately healthy.

> And we've created robots that can outperform humans in virtually all physical tasks.

Not that this is at all central to your point, but I don't think this is true. We're capable of building robots that move with more force and precision than humans, but mostly only in environments that are pretty simple or heavily customised for them. The cutting edge in robots moving over long distances or over rough terrain (for example) seems pretty far behind where humans are. Similarly, I believe fruit-picking is very hard to automate, in ways that seem likely to generalise to lots of similar tasks.

I also don't think we're very close to artificial smell, although possibly people aren't working on it very much?

Another thing you can do to respect the time of your readers is to think a little about who doesn't need to read your post, and how you can get them to stop. I don't have a lot of advice about what works here, but it seems like a good goal to have.

> This should enable the nations become affluent more easily, because not many people would have to farm (efficiency gains would be relatively low) but industrial processing machinery will be invested into.

I don't understand this. More easily than what? What's your story for why people aren't doing this already, if it would make them more affluent?

> I’m informed that EAs do not care about climate change

This is an exaggeration IMO. EAs care about climate change, but often don't prioritise it, because they care about other things even more. If everything more important than climate change were solved, I think EAs would be working pretty hard on climate change.

A brief response to one point: if you are including second-order and third-order effects in your analysis, you should include them on both sides. Yes, donating to a local cause fosters connections in the community, and ultimately state capacity, and so on. But saving people from malaria does that stuff too, and intuitively, when the first-order effects are more dramatic, one expects the second-order effects to be correspondingly more dramatic: you meet a new friend at your local animal shelter, while the child who didn't die of malaria meets a whole life's worth of people, their family has less grief and trauma, and their community has greater certainty and security. Of course, it's really hard to be sure of the whole story, but I don't see any reason to suppose that going one step deeper in the analysis will totally invert the conclusion of the first-level analysis.
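(A sketch of why, under an assumption I'm adding here rather than one made above — that higher-order effects scale roughly in proportion to first-order effects:

$$E_{\text{total}} \approx E_1 (1 + k), \qquad k \ge 0,$$

where $E_1$ is the first-order effect and $k$ bundles the second- and third-order effects into a multiplier. If two interventions have broadly similar $k$, the one with the larger $E_1$ also has the larger $E_{\text{total}}$: going a level deeper rescales the estimates but doesn't reorder them, unless $k$ differs wildly between the two.)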
