This kinda feels like vote brigading:
"where N is the total karma of this post three months from now, up to a max of $20,000"
And I'm torn, because I obviously want to entice you to donate more, but I don't support people buying karma in a direct "$ donated in -> karma out" fashion like this.
Recent forum post on vote brigading here:
I'll be interested to see how/if this gets picked up by mainstream media.
For example, this sentence seems to be an exaggeration, particularly
"Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
I worry this letter might be passed off as "crying wolf", although I agree that a 6-month pause would be amazing.
Will be watching the news...
Yeah, interesting. I definitely disagree with you on whose preferences should be met in this case. I suspect there are some situations where I would agree with you, but it would require a lot of context to understand exactly where the line between agreement and disagreement is.
I think I recently saw something about EA billionaires trying to increase taxes on billionaires, which is along the lines of what you suggest.
Your suggestions for diversity in local groups would reduce the blind spots of my own that I uncovered while writing this post - I think it's easy for a group to fall into patterns based on the interests of its members, and therefore forget how wide the conversation and action under the umbrella of "EA" is.
The focal point of the post is more around EA's potential to do more power-sharing, rather than solely increasing the diversity of people within EA (though diversity is part of it). I think of it like a consultancy: a consultancy usually isn't criticised for being too homogenous. Instead, people just decide not to use that consultancy in favour of one that has the skills/perspectives/track record/etc that they're looking for. Although EA isn't one centralised company, I see similarities because, in many (but not all) cases, we are a group of people who are trying to apply tools to other people's problems.
A consultancy doesn't need political allies or alignment, though it may choose to take on projects of a certain flavour. I'd be interested to hear your thoughts on whether EA needs to (or should be) seeking the political alignment you mentioned.
Yeah. I was quite nervous about posting this as a written critique because I agree - it can be really easy to talk past each other in discussions about colonialism and institutional racism, and this is exacerbated on text convos because (in my experience) people often have different meanings, experiences or inferences for the same word.
When I usually discuss these topics, it's an in-person conversation where people are trying to really understand and connect with the others in the convo, and have time to check understanding and rephrase as the conversation continues.
What kind of caveats would you add, out of interest?
One framework I've come across for discussing racism is to keep it personal, local and immediate - i.e. talk about your own experiences, and avoid speaking for other people. However, this seems counter to the EA conversational norms (e.g. see my reply to Rubi's comment on this post) where we like to use concrete examples and hypotheticals.
I guess, if I was forced not to use hypotheticals or advocate for others' experiences in EA, I would be really incentivised to seek others' voices, and maybe that wouldn't be the worst thing, in terms of genuinely bringing others to the table.
Thanks for your comment. Glad you liked the post.
In response to 1.
I'd be much more interested in knowing, for example, what percentage of aid programs were done in cooperation with locals.
Yeah, there's a scale from:
I don't think there's much infrastructure set up for enabling communities directly, which I'd be interested to see someone try to design. I think there's potential. One thing Karen mentions in the 80,000 Hours interview is that you don't want to burden communities to provide services that they should just have by default, which is why she works at a governmental level to support the government to design and provide these services.
There was also a long tangent of my research into whether EA should be considering community-level infrastructure rather than programmes like GiveDirectly. The Page and Pande paper in the footnotes is pretty interesting and has some good discussion that was cut from the list of most persuasive arguments.
In response to 2.
...Maybe all the orgs that come from CE are like that?
I did a quick audit of Charity Entrepreneurship's orgs and
(1) there didn't seem to be too many that were overtly designed because the founders had insider-positioning in their target community. However, I knew I didn't have time to research the background story of each founder individually, and drawing conclusions from anything less thorough would clearly be bad on multiple levels. Therefore, I cut that thread from the original post.
(2) LEEP and Family Empowerment Media both seemed like examples relevant to the post. There are a couple that seem to be policy-based; it's unclear whether there's a lot of insider-positioning in those circumstances, and also unclear whether policy initiatives benefit from insider-positioning at all. I'd be excited to discuss this in way more depth.
Yeah, and the IDinsight study only looked at #2 from your list above, which is one of its limitations and a reason more research would be good. This hits at a "collectivist culture vs individualist culture" nuance too, I suspect, because that could influence the weightings of #1 vs #2.
In a 2012 blog post, Holden wrote that the GiveWell approach is purposefully health- and life-based, as this is possibly the best way to give agency to distant communities: https://blog.givewell.org/2012/04/12/how-not-to-be-a-white-in-shining-armor/
They also have a note somewhere on their website about flow-on effects: GiveWell assumes the flow-on effects from health/life-saving interventions are probably more cost-effective than the flow-on effects from infrastructural interventions that end up improving health and lifespan.
In response to your comment about deferring to a hypothetical community that gives no life-saving interventions to people under 9 years old: if people had good access to information and resources, and their group decision was to focus a large amount of resources on saving the lives of extremely old people in the community... maybe we should do this? I say this because I can think of reasons a community might want grandparents around for another few years (e.g. to pass on language, culture, knowledge) instead of more children at the moment. I think, if a community were at massive risk of losing its culture, the donors' insistence on saving young lives over the elders' lives could be incredibly frustrating.
I'm not saying this to draw any conclusions, just as a counter-example that introduces a little more nuance than "it's morally wrong to let under-9-year-olds die unnecessarily."
Hey, I've written up a post along these lines. If you're still interested: https://forum.effectivealtruism.org/posts/oD3zus6LhbhBj6z2F/red-teaming-contest-demographics-and-power-structures-in-ea
Yeah, I support this. A tool like https://www.getguesstimate.com/scratchpad - a free scratchpad - could help late-highschoolers (a) understand that you can make guesses under uncertainty and see how uncertain the result is, and (b) use tools to make decisions about their careers. That tool could be demonstrated in a TikTok, I reckon.
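The kind of back-of-the-envelope estimate Guesstimate supports can be sketched in a few lines of Python - a hypothetical Monte Carlo example, where the salary range and job-landing odds are made-up figures purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_career(n=10_000):
    """Monte Carlo sketch: estimate expected annual earnings for a
    hypothetical career path where both the salary and the odds of
    landing the job are uncertain guesses, not known values."""
    samples = []
    for _ in range(n):
        salary = random.uniform(40_000, 90_000)  # guessed salary range
        p_land_job = random.uniform(0.3, 0.7)    # guessed odds of landing the job
        samples.append(salary * p_land_job)      # expected-value-style draw
    samples.sort()
    # Report a 90% interval rather than a single number, so the width
    # of the interval shows how uncertain the estimate is.
    return {
        "p5": samples[int(0.05 * n)],
        "median": samples[n // 2],
        "p95": samples[int(0.95 * n)],
    }

result = simulate_career()
print(result)  # wide interval: the point is seeing the uncertainty, not the number
```

The design point is the same one Guesstimate makes: by outputting an interval instead of a single figure, the tool forces the user to see how uncertain their guess really is.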
Hi Lorenzo, can you please expand on "> EAs are also much less confident that they know what people need better than they do"?
In my experience, EA has an aura of being confident that their conclusions are more accurate or effective than others' (including beneficiaries) because people within EA arrive at their conclusions using robust tools.