
CounterBlunder

183 karma · Joined Oct 2021

Comments (9)

This is so cool! I live in a largely veg*n co-op and am super excited to cook these recipes for them :)

What's wrong with the "People in poverty know how to fish but cannot afford the boat" framing that you used above? I think that's great.

I agree with all that! My worry is that this one issue reflects a deep, general problem: it's extremely hard to figure out what's true, and relatively simple, commonly suggested approaches like 'read more people who have studied this issue', 'defer more to domain experts', and 'be more intellectually humble and incorporate a broader range of perspectives' don't actually solve that problem (all of those approaches will lead you to cite people like McGilchrist).

I appreciate the large effort put into this post! But I want to flag one small part that made me distrust it as a whole. I'm a US PhD in cognitive science, and I think it'd be hard to find a top cognitive scientist in the country (e.g., someone who regularly gets large grants from governmental science funders, gives keynote talks at top conferences, publishes in top journals, etc.) who takes Iain McGilchrist seriously as a scientist, at least with respect to "The Master and His Emissary". So citing him as an example of an expert whose findings are not being taken seriously makes me worry that you handpicked a person you like, without evaluating the science behind his claims (or without checking "expert consensus"). I think this reflects the problems that arise when you start saying "we need to weigh together different perspectives". There are no easy heuristics for differentiating good science/reasoning from pseudoscience without incisive, personal inquiry -- which is, as far as I've seen, what EA culture earnestly tries to do. (Like, do we give weight to the perspective of ESP people? If not, how do we differentiate them from the kinds of "domain experts" we should take seriously?)

I know this was only one small part of the post, and doesn't necessarily reflect the other parts -- but to avoid a kind of Gell-Mann Amnesia, I wanted to comment on the one part I could contribute to.

Thanks for this! I found it well-written and compelling. I wanted to point out one typo: I think you accidentally put $111 instead of $11 for “mass media campaigns” in the table figure (for the cost of a woman to live a year without violence).

This is awesome -- I've been wanting someone to do this research forever. Thank you :)

This feels like it misses an important point. On the margin, maybe less intelligent people will, on average, have less individual impact. But given that there are far more people of average intelligence than people on the right tail of the IQ curve, if EA could tune its pitches more to people of average intelligence, it could reach a far greater audience and thereby have a larger summed impact (the toy calculation below illustrates this). Right?

I think there are also a couple of other assumptions in here that aren't obviously true. For one, it assumes a very individualistic model of impact; but it seems possible that the most impactful social movements come out of large-scale collective action, which necessarily requires involvement from broader swaths of the population. Also, I think the driving ideas in EA are not that complicated, and they could be written up in equally rigorous ways that don't require being very smart to parse.
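To make the population-size point concrete, here's a toy calculation. Everything in it (the audience size, the band cutoffs, and especially the 20x per-person impact multiplier for the right tail) is a made-up assumption for illustration; it only leans on the standard Normal(100, 15) model of IQ:

```python
# Toy comparison (illustrative numbers only): even if per-person impact
# rises with intelligence, the middle of the IQ curve contains so many
# more people that a pitch tuned to it can win on total (summed) impact.
from math import erf, sqrt

def normal_cdf(x, mu=100.0, sigma=15.0):
    """CDF of a normal distribution -- standard IQ parameterization."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

population = 330_000_000  # hypothetical total audience size

# Fraction of people in each band under a Normal(100, 15) IQ model.
average_band = normal_cdf(115) - normal_cdf(85)  # roughly IQ 85-115 (~68%)
right_tail = 1.0 - normal_cdf(130)               # roughly IQ 130+ (~2.3%)

# Assumed per-person expected impact if reached (made-up units):
impact_average, impact_tail = 1.0, 20.0

total_average = population * average_band * impact_average
total_tail = population * right_tail * impact_tail

print(f"people in average band: {population * average_band:,.0f}")
print(f"people in right tail:   {population * right_tail:,.0f}")
print(f"summed impact, average-tuned pitch: {total_average:,.0f}")
print(f"summed impact, tail-tuned pitch:    {total_tail:,.0f}")
```

With these (invented) numbers, the average-tuned pitch wins on summed impact even though each right-tail person is assumed to have 20x the individual impact. The real question is just how steeply per-person impact scales with intelligence relative to how quickly the population thins out.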

This comment upset me because I felt that Olivia's post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But rereading your comment, I now think you had better intentions than what I initially felt? Idk, I'm keeping this here because the initial gut reaction feels valuable to name.

There's a bunch of work in cognitive science on "virtual bargaining" -- that is, how people bargain in their heads with hypothetical social partners when figuring out how to make social/ethical decisions (see https://www.sciencedirect.com/science/article/pii/S1364661314001314 for a review; Nick Chater is the person who's done the most work on this). This is obviously somewhat different from what you're describing, but it seems related -- you could imagine people assigning various moral intuitions to different hypothetical social partners (in fact, that's kind of explicitly what Karnofsky does) in order to implement the kind of bargaining you describe. Could be worth checking out that literature.
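For concreteness, here's a minimal sketch of one way that could look computationally. The partners, credences, utilities, and the credence-weighted Nash-product scoring rule are all assumptions I've invented for illustration, not anything taken from the virtual-bargaining literature itself:

```python
# A minimal sketch (all names, weights, and utilities are hypothetical)
# of the idea above: assign moral intuitions to hypothetical social
# partners, then pick the option a weighted Nash-style bargain would favor.

# Credence assigned to each hypothetical partner's moral perspective.
partners = {"utilitarian": 0.5, "deontologist": 0.3, "virtue_ethicist": 0.2}

# Each partner's utility for each candidate action (0-1, made up).
utilities = {
    "utilitarian":     {"donate": 0.9, "volunteer": 0.6, "abstain": 0.1},
    "deontologist":    {"donate": 0.5, "volunteer": 0.8, "abstain": 0.4},
    "virtue_ethicist": {"donate": 0.6, "volunteer": 0.9, "abstain": 0.2},
}

def nash_score(option):
    """Credence-weighted Nash product: each partner's utility raised to
    the power of the credence placed in that partner's perspective."""
    score = 1.0
    for partner, weight in partners.items():
        score *= utilities[partner][option] ** weight
    return score

options = ["donate", "volunteer", "abstain"]
best = max(options, key=nash_score)
print({o: round(nash_score(o), 3) for o in options}, "->", best)
```

One appealing property of the Nash-product form is that any single partner assigning an option near-zero utility vetoes it, which matches the intuition that a bargain should be acceptable to every hypothetical partner, not just a credence-weighted majority.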