In short: Making EA causes relevant to different groups of people, especially minority groups, and talking about how some of the problems we discuss in the EA community affect these groups in distinct ways, could help us bring more people to work on these issues and keep valuable potential from being lost.

One thing we can improve as EAs is how we communicate the issues we believe are most important to different communities and people, both inside and outside of the movement. I am running an EA Fellowship in Georgia, a war-torn country that suffers from the Russian invasion and from several human rights problems. While doing that, I have to think hard about how to make EA relevant to Georgians. Building an EA community here is difficult enough, and starting with issues like artificial intelligence safety research could further complicate outreach, as it is not very relatable to many people in countries facing urgent problems here and now. Yes, it is an important issue to cover during the fellowship, but how should it be covered?

Something similar could be said about communities that are experiencing problems here and now due to their lack of privilege. For example, many queer community members face oppression, death threats, and other not-so-good things on a daily basis. How do we make longtermist issues relevant to them? How do we talk about EA cause areas that might not feel relevant to them here and now?

In general, we often talk about more inclusive discourse, but from personal experience, I find that EA is still often centered on a Western, straight narrative. This is a feeling, not a fact, but other people might share it and not feel included enough. They might ask why there is not more work done within EA on the specific issues that underprivileged groups face. I have searched the forum and found, for example, very few discussions on issues facing the LGBTQ community. I have found comparatively more discussion about women's rights, but I also often find frankly douchebag-y comments from men, and from women with internalized patriarchy, disregarding the experiences of women who were brave enough to share their discomfort. Again, this is an opinion, not a fact. These feelings and opinions pushed me to ask the question: how do we make EA causes relevant to groups who might not feel those causes affect them?

On one hand, we want to maximize our impact and work on the most pressing issues. On the other hand, we can't disregard the fact that for people personally affected, queer activism, for example, is a pressing issue. For people suffering from war, ending the war is a pressing issue. For people who live in areas where malaria is prevalent, malaria is a pressing issue. Not everyone has the capacity to think about issues they are not facing right now. And maybe, instead of asking them to do so, we could take the first step.

If we don’t proactively try to include people, by making major existential risks and cause areas relevant to their communities, we might lose potential contributors we could otherwise have reached. One thing that could help us spread EA principles and talk about issues like AI safety or other longtermist concerns is making these problems relevant and relatable to the target audience.

Let me demonstrate this with an example. Artificial intelligence could be dangerous for everyone. If it is dangerous for everyone, it is also dangerous for specific groups of people. And if you already belong to an underprivileged group, AI could make things worse for you in a distinct way. Let’s take a look at the sexual orientation algorithm and why we, the queer community, should care about AI.

Wang & Kosinski (2017) demonstrated that a deep neural network, using facial features alone, could distinguish gay from straight faces in 81% of cases for men and 71% of cases for women. Human judges, by comparison, were accurate in only 61% of cases for men and 54% for women. This (already existing) algorithm could pose safety and privacy risks to the queer community, especially if certain authoritarian governments (most notably Russia, China, and the Gulf states) decided to deploy such networks for their own purposes. Private entities employing these algorithms to influence people could have severe consequences even in countries without such a negative human rights record.

In fact, I am currently doing research on the effect of masculinity vs. femininity on the perceived competence of people's faces. Preliminary results suggest that people are biased to judge more feminine male faces as less competent (even when those faces belong to actual politicians who won elections and were previously judged as competent). So people with feminine faces may already be judged as less competent by other humans, and if an AI algorithm learns such a bias from human-generated data, people with specific appearances could be put in an even worse position.

Going back to the Wang & Kosinski study: an algorithm that already exists can impact the lives of millions of queer people, lives already shaped by the systematic oppression they face. Millions of non-queer people with specific facial appearances could also be negatively impacted by AI. AI could harm everyone, but there might be a specific way in which it harms your group, and humans sometimes care more about their own groups than about outsiders. In this case, working on AI safety could be an area worthy of your time even if you are less concerned with groups outside your circle, and even if AI had seemed far from your interests and you were mostly interested in queer advocacy. Advocating for safer AI that doesn’t expose people's sexual orientation is also queer advocacy.

My point here is that we could reach more people if we made EA causes more relatable to them. To do that, we have to think about how a specific EA cause affects different groups of people (or individuals) differently. For example, even setting aside any remote existential threat from AI, the specific algorithm discussed here is already a potential threat to the queer community today. The development of AI affects straight and queer people differently today, and in the future it may affect all of us negatively but in distinct ways. Seeing these distinct paths of negative impact on different communities could help us bring those communities in to work on these causes.

This way, we could also acknowledge that minority groups experience things differently from the majority group. Taking an individualized outreach approach to these groups could help them feel more included and heard. But this is just my thought.


 
