A few months ago I felt like some people I knew within community building were
doing a thing where they believed (or believed they believed) that AI
existential risk was a really big problem, but instead of just saying that to
people (e.g. new group members), they said it was too weird to say that
outright, and so you had to make people go through less "weird" things like
content about global health and development and animal welfare before telling
them you were really concerned about this AI thing.
And even when you got to the AI topic, you had to make people trust you enough
by talking about misuse risks first in order to be more convincing. This would have
been an okay thing to do if those were their actual beliefs. But in a couple of
cases, this was an intentional thing to warm people up to the "crazy" idea that
AI existential risk is a big problem.
This bothered me.
To the extent that those people now feel more comfortable directly stating their
actual beliefs, this feels like a good thing to me. But I'm also worried that
people still won't just directly state their beliefs and will instead continue
to play persuasion games with new people, just about different things.
E.g. one way this could go wrong is if group organisers try to make it seem to new
people like they're more confident about what interventions within AI safety are
helpful than they actually are. Things like: "Oh hey you're concerned about this
problem, here are impactful things you can do right away such as applying to
this org or going through this curriculum" when they are much more uncertain (or
should be?) about how useful the work done by the org is or how correct/relevant
the content in the AI safety curriculum is.
The EA community aims to make a positive difference using two very different
approaches. One of them is much harder than the other.
As I see it, there are two main ways people in the EA community today aim to
make a positive difference in the world: (1) identifying existing,
high-performing altruistic programs and providing additional resources to
support them; and (2) designing and executing new altruistic programs. I think
people use both approaches—though in varying proportions—in each of the major
cause areas that people inspired by EA ideas tend to focus on.
In this post, I’ll call approach (1) the evaluation-and-support approach, and
I’ll call approach (2) the design-and-execution approach.
I consider GiveWell’s work to be the best example of the evaluation-and-support
approach.[1] Most policy advocacy efforts, technical engineering efforts, and
community-building projects are examples of the design-and-execution
approach.[2]
Both of these approaches are difficult to do well, but I think
design-and-execution is much more difficult than evaluation-and-support. (In
fact, recognizing and taking seriously how difficult and rare it is that a
well-intended altruistic program is actually designed and executed effectively
is one of the central
[https://80000hours.org/articles/effective-social-program/] insights
[https://www.givewell.org/giving101/The-Wrong-Donation-Can-Accomplish-Nothing]
that I find distinctive and valuable about EA’s evaluation-and-support
approach.)
I also think design-and-execution—with its long feedback loops and scarcity of
up-front empirical evidence—carries a much higher risk of accidentally causing
harm than evaluation-and-support, and so depends much more heavily on effective
risk-management and error-correction processes to have a positive impact on the
world.[3] I think the riskiness of design-and-execution approaches makes it
unclear whether it’s virtuous to be especially ambitious when pursuing these
approaches, since ambitious…
Not all "EA" things are good - just saying what everyone knows out loud (copied
over with some edits from a Twitter thread
[https://twitter.com/ChanaMessinger/status/1633102630871343104])
Maybe it's worth just saying aloud the thing people probably know but that isn't
always salient, which is that orgs (and people) who describe themselves as "EA"
vary a lot in effectiveness, competence, and values, and using the branding
alone will probably lead you astray.
Especially for newer or less connected people, I think it's important to make
salient that there are a lot of takes (pos and neg) on the quality of thought
and output of different people and orgs, which from afar might blur into "they
have the EA stamp of approval".
Probably a lot of thoughtful people think whatever seems shiny in an "everyone
supports this" kind of way is bad in a bunch of ways (though possibly net
good!), and that granularity is valuable.
I think you should feel very free to ask around to get these takes and see what you find -
it's been a learning experience for me, for sure. Lots of this is "common
knowledge" to people who spend a lot of their time around professional EAs and
so it doesn't even occur to people to say + it's sensitive to talk about
publicly. But I think "some smart people in EA think this is totally
wrongheaded" is a good prior for basically anything going on in EA.
Maybe at some point we should move to more explicit and legible conversations
about each others' strengths and weaknesses, but I haven't thought through all
the costs there, and there are many. Curious for thoughts on whether this would
be good! (e.g. Oli Habryka talking about people with integrity here
[https://forum.effectivealtruism.org/posts/2y9eSkMAkdPQeXWMf/podcast-with-oli-habryka-on-lesswrong-lightcone?commentId=AkbWFp69iJ6tRTTyp])
BAD THINGS ARE BAD: A SHORT LIST OF COMMON VIEWS AMONG EAS
1. No, we should not sterilize people against their will.
2. No, we should not murder AI researchers. Murder is generally bad. Martyrs
are generally effective (killing someone tends to strengthen their cause).
Executing complicated plans is generally more
difficult than you think, particularly if failure means getting arrested and
massive amounts of bad publicity.
3. Sex and power are very complicated. If you have a power relationship,
consider if you should also have a sexual one. Consider very carefully if
you have a power relationship: many forms of power relationship are
invisible, or at least transparent, to the person with power. Common forms
of power include age, money, social connections, professional connections,
and almost anything that correlates with money (race, gender, etc). Some of
these will be more important than others. If you're concerned about
something, talk to a friend who's on the other side of that from you. If you
don't have any, maybe just don't.
4. And yes, also, don't assault people.
5. Sometimes deregulation is harmful. "More capitalism" is not the solution to
every problem.
6. Very few people working on wild animal suffering think that we should go and
deliberately destroy the biosphere today.
7. Racism continues to be an incredibly negative force in the world. Anti-black
racism seems pretty clearly the most harmful form of racism for the minority
of the world that lives outside Asia.[1]
8. Much of the world is inadequate and in need of fixing. That EAs have not
prioritized something does not mean that it is fine: it means we're busy.
9. The enumeration in the list, of certain bad things, being construed to deny
or disparage other things also being bad, would be bad.
Hope that clears everything up. I expect with 90% confidence that over 90% of
EAs would agree with every item on this list.
1. ^
Inside Asia, I don't know enough to say with confidence.
On Socioeconomic Diversity:
I want to describe how the discourse on sexual misconduct may be reducing the
specific type of socioeconomic diversity I am personally familiar with.
I’m a white female American who worked as an HVAC technician with co-workers
mostly from racial minorities before going to college. Most of the sexual
misconduct incidents discussed in the Time article
[https://time.com/6252617/effective-altruism-sexual-harassment/] have likely
differed from standard workplace discussions in my former career only in that
the higher status person expressed romantic/sexual attraction, making their
statement much more vulnerable than the trash-talk I’m familiar with. In the
places most of my workplace experience comes from, people of all genders and
statuses make sexual jokes about coworkers of all genders and statuses not only
in their field, but while on the clock. I had tremendous fun participating in
these conversations. It didn’t feel sexist to me because I gave as good as I
got. My experience generalizes well; even when Donald Trump made a joke about
sexual assault that many upper-class Americans believed disqualified him,
immediately before the election he won, Republican women
[https://www.vox.com/2016/10/9/13217158/polls-donald-trump-assault-tape] were no
more likely to think he should drop out of the race than Republican voters in
general. Donald Trump has been able to maintain much of his popularity despite
denying the legitimacy of a legitimate election in part because he identified
the gatekeeping elements of upper-class American norms as classist
[https://astralcodexten.substack.com/p/a-modest-proposal-for-republicans]. I am
strongly against Trump, but believe we should note that many female Americans
from poorer backgrounds enjoy these conversations, and many more oppose the kind
of punishments popular in upper class American communities. This means strongly
disliking these conversations is not an intrinsic virtue, but a decision EA
culture has…
Proposing a change to how Karma is accrued:
I recently reached over 1,000 Karma, meaning my upvotes now give 2 Karma and my
strong upvotes give 6 Karma. I'm most proud of my contributions to the forum
about economics, but almost all of my increased ability to influence discourse
now is from participating a lot in the discussions on sexual misconduct. An
upvote from me on Global Health & Development (my primary cause area) now counts
twice as much as an upvote from 12 of the 19 authors of posts with 200-300
Karma tagged Global Health & Development. They are generally
experts in their field working at major EA organizations, whereas I am an
electrical engineering undergraduate.
I think these kinds of people should have far more ability to influence the
discussion via the power of their upvotes than me. They will notice things about
the merits of the cases people are making that I won't until I'm a lot smarter
and wiser and farther along in my career. I don't think the ability to say
something popular about culture wars translates well into having insights about
the object level content. It is very easy to get Karma by participating in
community discussions, so a lot of people are now probably in my position after
the increased activity in that area around the scandals. I really want the
people with more expertise in their field to be the ones influencing how visible
posts and comments about their field are.
I propose that Karma earned from comments on posts with the community tag
accrues at a slower rate.
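To make the proposal concrete, here is a minimal sketch of what a discounted accrual rule could look like. This is purely illustrative: the function names, the sub-1,000-karma weights, and the 0.5 discount factor are assumptions of mine, not the Forum's actual voting code.

```python
# Illustrative sketch only. The thresholds, weights, and discount factor are
# assumptions for the sake of the example, not the Forum's real implementation.

def vote_power(voter_karma: int, strong: bool = False) -> int:
    """Hypothetical vote weight as a function of the voter's karma."""
    if voter_karma >= 1000:
        return 6 if strong else 2   # matches the weights described above
    return 3 if strong else 1       # assumed weights for lower-karma voters

def karma_awarded(vote_value: int, post_is_community: bool,
                  community_discount: float = 0.5) -> float:
    """Karma the recipient accrues from one vote, discounted on Community posts."""
    return vote_value * (community_discount if post_is_community else 1.0)
```

Under this sketch, the same strong upvote that adds 6 Karma on a Global Health & Development post would add only 3 on a Community-tagged post; the right discount (or a cap instead) would of course be up for debate.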
Edit: I just noticed a post by moderators that does a better job of explaining
why karma is so easy to accumulate in community posts:
https://forum.effectivealtruism.org/posts/dDudLPHv7AgPLrzef/karma-overrates-some-topics-resulting-issues-and-potential
SOME POST-EAG THOUGHTS ON JOURNALISTS
For context, CEA accepted a journalist to EAG Bay Area 2023 who has at times
written critically of EA and individual EAs, and who is very much not a
community member. I am deliberately not naming the journalist, because they
haven't done anything wrong and I'm still trying to work out my own thoughts.
On one hand, "journalists who write nice things get to go to the events,
journalists who write mean things get excluded" is at best ethically
problematic. It's very very very normal: political campaigns do it, industry
events do it, individuals do it. "Access journalism" is the norm more than it is
the exception. But that doesn't mean that we should do it. One solution is to be
very, very careful that the relevant distinction stays "community member or not"
rather than "critical or not". Dylan Matthews is straightforwardly an EA and has
reported critically on a past EAG
[https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai]: if he were
excluded for this I would be deeply concerned.
On the other hand, I think that, when hosting an EA event, an EA organization
has certain obligations to the people at that event. One of them is protecting
their safety and privacy. EAs who are journalists can, I think, generally be
relied upon to be fair and to respect the privacy of individuals. That is not a
trust I extend to journalists who are not community members
[https://observer.com/2012/07/faith-hope-and-singularity-entering-the-matrix-with-new-yorks-futurist-set/]:
the linked example is particularly egregious, but tabloid reporting happens.
EAG is a gathering of community members. People go to advance their goals: see
friends, network, be networked at, give advice, get advice, learn interesting
things, and more. In a healthy movement, I think that attending an EAG should be
at least one of: a professional obligation, good for the individual, or fun for
the individual. It doesn't have to be all of them, but it shouldn't harm the
attendee on any axis.
Someone might be out ab…
On the EA forum redesign: new EAs versus seasoned EAs
In the recent Design changes announcement
[https://forum.effectivealtruism.org/posts/sLB6tEovv7jDkEghG/design-changes-and-the-community-section-forum-update-march#What_are_the_changes_],
many commenters reacted negatively to the design changes.
One comment from somebody on the forum team
[https://forum.effectivealtruism.org/posts/sLB6tEovv7jDkEghG/design-changes-and-the-community-section-forum-update-march?commentId=JvDXiDAqmLmXiQbLq]
said in response: (bolded emphasis mine)
This feels like a crux. Personally I think the EA forum should be a place
seasoned EAs can go to to get the latest news and ideas in EA. Therefore, making
the EA forum more similar to "the internet [new EAs are] used to" should not
really be a priority.
There are so [https://www.effectivealtruism.org/] many
[https://www.effectivealtruism.org/virtual-programs/introductory-program] other
[https://www.givingwhatwecan.org/what-is-effective-altruism] spaces
[https://www.reddit.com/r/EffectiveAltruism/] for
[https://www.youtube.com/watch?v=Diuv3XZQXyc] new
[https://80000hours.org/book-giveaway/] EAs to get up to speed. It's not obvious
to me that the forum's comparative advantage is in being a space which is
especially welcoming to new users.
To my knowledge, this tradeoff between designing UX for new versus seasoned EAs
has not been publicly discussed much. Which is a shame, because if the EA Forum
is a worse space to exist in for seasoned EAs, then seasoned EAs will
increasingly retreat to their local communities and there will be less
interchange of ideas. (e.g. think about how different Bay Area EAs are from DC
EAs)