
Nathaniel

68 karma · Joined Jan 2022

Comments (9)

I'm worried your subdivision misses a significant proportion of harms that fall into neither category: for instance, interactions that involve no malice or power dynamics and are innocuous in isolation, but become harmful when repeated. Imbalanced gender ratios make that repetition more likely.

I think being flirted with during the day at an EAG, as Nathan discussed above, is a good example of this. If you're flirted with once over the weekend, perhaps it's fine or even nice, especially if it comes from the person you found most interesting. But if you're flirted with several times, you may start to feel uncomfortable.

If a conference has three times as many men as women and 1-on-1s are matched uniformly at random, then the average woman has three times as many cross-gender 1-on-1s as the average man. Assuming everyone is equally likely to flirt in a cross-gender meeting, it's entirely possible that the average man receives a comfortable amount of flirting while the average woman receives an uncomfortable amount.
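To put rough numbers on that (a back-of-the-envelope sketch; the symbols $N$, $k$, and $p$ are illustrative assumptions of mine, not from the original discussion): suppose there are $N$ women and $3N$ men, each attendee has $k$ uniformly random 1-on-1s, and each person flirts with probability $p$ in any given cross-gender meeting. Then

$$\mathbb{E}[\text{flirts received per woman}] = k \cdot \frac{3N}{4N-1} \cdot p \approx \tfrac{3}{4}kp, \qquad \mathbb{E}[\text{flirts received per man}] = k \cdot \frac{N}{4N-1} \cdot p \approx \tfrac{1}{4}kp.$$

Any comfort threshold that falls between $\tfrac{1}{4}kp$ and $\tfrac{3}{4}kp$ leaves the average man comfortable and the average woman not.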

And it probably gets worse once you remember that these are random variables: what matters isn't the average but how many people exceed the uncomfortable threshold, and by how much. And perhaps worse again if certain "attractive" people are disproportionately likely to receive flirting.
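Here's a minimal Monte Carlo sketch of that tail effect. It treats each attendee's flirt count as an independent binomial draw (ignoring the constraints of actual pairwise matching), and every number in it (attendee counts, meetings per person, flirt probability, discomfort threshold) is an illustrative assumption, not data:

```python
import numpy as np

rng = np.random.default_rng(0)

n_women, n_men = 50, 150  # 3:1 gender ratio (assumed)
k = 10                    # 1-on-1s per attendee (assumed)
p_flirt = 0.15            # chance a cross-gender 1-on-1 involves flirting (assumed)
threshold = 2             # flirts per weekend before discomfort sets in (assumed)
trials = 10_000

# Probability that a uniformly random partner is of the other gender.
p_cross_w = n_men / (n_women + n_men - 1)    # ~0.75 for a woman
p_cross_m = n_women / (n_women + n_men - 1)  # ~0.25 for a man

# Flirts received per attendee ~ Binomial(k, P(cross-gender partner) * p_flirt).
flirts_w = rng.binomial(k, p_cross_w * p_flirt, size=(trials, n_women))
flirts_m = rng.binomial(k, p_cross_m * p_flirt, size=(trials, n_men))

print(f"mean flirts: women {flirts_w.mean():.2f}, men {flirts_m.mean():.2f}")
print(f"P(more than {threshold} flirts): women {(flirts_w > threshold).mean():.1%}, "
      f"men {(flirts_m > threshold).mean():.1%}")
```

With these made-up numbers, roughly 9% of women but well under 1% of men exceed the threshold, even though both groups' averages sit below it: the harm lives in the tail, not the mean.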

Overall, my point is that behaviors and norms that would be fine with balanced gender ratios can be harmful with imbalanced ones. Unfortunately, we have imbalanced ones, and we need to adapt accordingly.
 

What makes you skeptical of the intervention?

This news seems to increase the value of marginal donations this year relative to what we expected. Are 2022 donations also likely to be (much) more valuable than 2023 donations? Is donating in December 2022 too late to take advantage of this effect?

Also, the quoted passage seems to assume that EA orgs optimize for their org’s impact rather than for the impact of the movement/good of the world. I’m not convinced that’s true. I would be surprised if EA orgs were attempting to poach workers they explicitly believed were having more impact at other organizations.

It does seem possible that orgs overestimate their own impact and the impact of the roles they hire for. However, that would still produce a much smaller effect than completely ignoring the impact candidates have in their current roles, which is what the post seems to assume.

Thanks for link-posting, I enjoyed this!

I didn't understand the section about EA being too centralized and focused on absolute advantage. Can anyone explain? 

> EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions.

And footnote 11: 

> It's interesting to conceive of EA principally as a means of providing public goods which are undersupplied by the market. A slightly deeper critique here is that the market provides a very powerful set of signals which aggregate decentralized knowledge, and help people act on their comparative advantage. EA, by comparison, is relatively centralized, and focused on absolute advantage. That tends to centralize people's actions, and compounds mistakes. It's also likely a far weaker resource allocation model, though it does have the advantage of focusing on public goods. I've sometimes wondered about a kind of "libertarian EA", more market-focused, but systematically correcting for well-known failures of the market.

Don't global health charities provide private goods (bed nets, medicine) that markets cannot? Markets only supply things people will pay for, and poor people can't pay much.

X-risk reduction seems like a public good, and animal welfare improvements are either a public good or a private good whose consumers definitely cannot pay.

I take it that centralization is in contrast to markets. But in a very real way, EA is harnessing markets to provide these things: EA-aligned charities compete in a market to provide QALYs as cheaply as possible, since EAs will pay for them. EAs also seem very fond of markets generally (e.g., impact certificates, prediction markets).

How is EA focused on absolute advantage? Isn't earning to give using one's comparative advantage?

Although it argues against longtermism rather than EA as a whole, this recent blog post by Boaz Barak (a computer science professor at Harvard) might qualify.