I write about ideas and resources I like, along with ways of making concepts clearer to me and to others who come from a humanities background but are digging into the tools EA gives us to build stuff we care about. I have academic and informal training in conflict studies and language studies, so I'll write about that too.
I got interested in EA via GiveWell when it started and got a bit more involved in EA when 80K started. I subscribe to the "keep your identity small" idea and see EA as a really useful set of tools and important questions, though not the only set of tools and important questions someone might consider when doing good. I'm a member of EA DC.
I'm also a Community Liaison at CEA (www.centreforeffectivealtruism.org/team).
Outside of EA, I'm involved in the Deaf community and interpreting field/higher ed. I'm generally interested in how people learn what they learn, how we effectively relate to ourselves and each other, and how to apply those ideas to mentoring and resolving conflict.
Fun things = acro-yoga, CrossFit, 1:1 conversations about ideas, reading while lying in hammocks, scuba
Thanks for asking. I’m not able to say more at this point about that specific creator.
I think you’re asking a good implied question, one I share: which comms channels would be most promising for creating or sharing additional EA content?
I’m interested in analysis of those sorts of questions, and see them as part of the strategic comms role we’re hoping to hire for this year.
(I work at CEA).
I have some data that may be relevant to folks with interest in this topic*:
I work for CEA, and this quarter I did a small brand test with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”
Students who had never heard of “effective altruism” before the survey still had positive associations with it. Comments suggested that they thought it sounded good - effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5). There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary - we don't have a full writeup ready yet.)
Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. It could still be the case that "EA" puts off a subset of the audience we really want to reach. (E.g. if we found that highly critical/truth-seeking people in certain fields were often turned away by "EA," I'd consider that a concern. We don't have that data).
I do think this is encouraging, but it doesn't settle the question. Testing other brands and sub-brands may still be a good idea. Testing brands within very specific sub-audiences is also harder to do. CEA is currently considering hiring someone to test and develop the EA brand and to help field media inquiries.
*I think this post may have been written after I gave Max the info that he posted on my behalf here, so I'm cross-posting.
What are your thoughts on solutions journalism? Does it have much traction among science writers you know? Do you personally use it or promote it as a framework for writing?
Do you think this is a good or bad idea?
I have a hunch that EA and solutions journalism could be a good match. E.g. EAs in journalism could join the Solutions Journalism Network and pitch solutions journalism angles to their editors. EA projects that think they would be well-served by public media coverage could build relationships with strong solutions journalists and make themselves available for stories when they have something going on that those journalists are interested in. I'm not a journalist myself, and I think the SJN approach is still small, so I'm curious whether you see this area growing.
I haven't read this whole thread, so forgive me if I'm re-stating someone else's point.
I think there's another explanation: they have a hypothesis about you/EAs/us that we are not disproving.
My experience has been that people in any numerical or social minority group (e.g. Black Americans, people with disabilities, someone who is the "only" person from a given group at their workplace, etc.) are used to being met with disappointing responses when they try to share their experiences with people who don't share them (e.g. members of the numerical or social majority group they differ from). Most of us have had this experience at least some of the time, maybe as EAs! People get blank stares, unwanted pity or admiration, or outright dismissal and invalidation (e.g. "it can't be all that bad" or "you're just playing the [race/poverty/privilege/whatever] card"). This is definitely the kind of conversation people see over and over again on the internet. So, until proven otherwise, that's what people expect. Majority group members are expected to be ignorant of what life is really like for people who experience it differently. I think this is a rational expectation at least some of the time.

The hypothesis then goes: EAs look like majority group members and often are, ergo anything EAs say about which problems are "most important" is assumed to be somewhat ignorant. Maybe people see it as well-meaning ignorance, maybe as callous ignorance. Regardless, ignorance is assumed as most probable, because it's true of most people. (I think EAs and progressives also have different models of when ignorance matters the most and when differences matter the most, but that's a different thread).
I've usually taken the view that I don't get to assume people will see me as an informed, compassionate person on the progressive left until I disprove the hypothesis above. If the first thing I say is an explanation of why local US poverty issues are "less important" than other issues, I've just reinforced the hypothesis rather than disproven it. It sounds like denying a reality they know is true: they've seen the real-life people impacted, read their stories, or studied the human impact of these issues. At least in my case, it's not true that they struggle to think of people in other countries as real people too. (My progressive friends have often lived abroad, have family in other countries, or work in immigrant communities.) It's a trust issue. If they see me denying that local issues are "real/important," I must be ignorant, and worse, I must be unwilling to be bothered with the real-life experiences of people different from me. Why should they trust anything I say after that about helping people? "But Africa though!" sounds like a deflection, not a genuine consideration or a sincere, compassionate challenge to their own thinking about poverty.
When I speak first about things we both care about, and share sincere examples of the ways I see and care about the personal toll that US poverty and racial disparities take on people I actually know, I haven't had a progressive friend respond by saying that poverty in other countries doesn't matter. I brought it up second, though, and that seems to make a difference. If someone trusts that I am a caring, informed person, not a callous, ignorant one, we can expand the scope of the conversation from there.
Fwiw, I can't think of a time this has led to changed actions on their part.
To be clear, this also means I don't think everyone should look at PISE and think "we should definitely change our name too!" I think we don't have enough information from this one example to make a claim that strong.
I thought this was a thoughtfully-shared example and am glad Koen wrote it up so people could share their thinking.
Though I like thinking about words through a skeptical lens, I am not convinced this is a large concern. The name of a new thing will produce both predictable and random reactions from humans.
My expectation is that rational, intelligent, self-critical, scientifically literate humans are still humans, which comes with a certain degree of randomness in their behavior. There will be variation in what they feel like doing on a given day, and a low-stakes decision like "Do I want to go to this presentation by a group I haven't heard of?" is not much evidence either way about someone's thinking skills. If the ideas the group is presenting attract those individuals in their particular context, and the group hits upon a name that helps rather than detracts from that goal, that seems solid.
Congrats on the launch! This may be a stretch, but if you'd find it helpful to connect with any of these folks (https://youtu.be/DbplLXRQquI), or with the Data Science for Social Good team at U of Chicago to see if they have additional contacts, let me know and I can connect you.
Sky here, with an update from CEA’s Community Health team:
I was previously listed in this post as an additional contact person. I’m taking extended leave and will be unavailable as a contact person after July 30. We’ve edited this post to remove my info, but we want you to know who to contact going forward:
Other resources:
Personal note:
I’ve really enjoyed past conversations with many of you about topics we care about: thinking seriously and humbly about impact, media and EA communications, intercultural connections and diversity, mentorship and morale, and more. My C/EA colleagues and many of your peers are happy to hear from you on these topics too.
I’ve been very appreciative of the support from CEA colleagues and EA community members while I’ve been managing health issues over the past couple of years. I see we’re in a community that wants to help each other, so I hope you do reach out if and when you need it. I’m taking some time to prioritize healthcare now and may return to C/EA as a consultant in the future. Much love in the meantime. I’ll look forward to crossing paths again!