All of sapphire's Comments + Replies

sapphire

[Poll: "Giving meaningful advance notice of a post that is critical of an EA person or organization should be ..." -- voted 100% disagree]

Beware Trivial Inconveniences.

Most EAs want to be rich and close to power. Or at least they are way more into the "effective" optimization part than the altruism. They talk a big game but getting in early on a rising power (AI companies) is not altruistic. Especially not when you end up getting millions in compensation due to very rapid valuation increases. 

I made a large amount of money in the 2021 crypto boom. I made a much smaller, though large for me, amount in the 2017 crash. I have never had a high-paying job. Often I have had no job at all. My long-term partner has really bad... (read more)

It's not so much a pivot as a codification of what has long been true.

"EA is (experientially) about AI" has been sorta true for a long time. Money and resources do go to other causes. But the most influential and engaged people have always been focused on AI. EA institutions have long systematically emphasized AI. For example many editions of the EA handbook spend a huge fraction of their introductions to other cause areas effectively arguing why you should work on AI instead. CEA staffers very heavily favor AI. This all pushes things very hard in on... (read more)

5
Sarah Cheng 🔸
Just wanted to quickly add that I don't think that this is quite accurate. My experience facilitating the Intro Fellowship using the previous version of the EA Handbook was that AI basically didn't come up until the week about longtermism, and glancing through the current version that doesn't seem to have changed. Though I welcome people to read the current version of the EA Handbook and come to their own conclusions. The most recent relevant data on CEA staff cause prio is this post about where people are donating, and I think animal welfare is more common in that list than AI safety (though this only includes a subset of staff who were interested in participating in the post).
3
[anonymous]
I think you actually shifted me slightly to the 'announcement was handled well' side (even if not fully) with the idea that it was blatant honesty (since their work was mainly AI anyway for the last year or so) plus the very clear change descriptors. I am a bit wary of such a prominent resource as 80k endorsing a sudden cause shift without first reconstructing the gap. I know they don't owe it to anyone, especially during such a tumultuous time of AI risk, and there are other orgs (Probably Good, etc.), but to me, 80k seemed like a very good intro into 'EA Cause Areas' that I can't think of another current substitute for. The problem profiles, for example, not being featured/promoted is fine for individuals already aware of their existence, but when I first navigated to 80k, I saw the big list of problem profiles and that's how I actually started getting into them, and what led to my shift from clinical medicine to a career in biosec/pandemics.

I think it's better to start something new. Reform is hard, but no one is going to stop you from making a new charity. The EA brand isn't in the best shape. Imo the "new thing" can take money from individual EAs but shouldn't accept anything connected to OpenPhil/CEA/Dustin/etc.

If you start new you can start with a better culture. 

6
huw
AIM seems to be doing this quite well in the GHW/AW spaces, but lacks the literal openness of the EA community-as-idea (for better or worse)

I spent all day in tears when I read the congressional report. This is a nightmare. I was literally hoping to wake up from a bad dream.

I really hope people don't suffer for our sins.

How could we have done something so terrible? Starting an arms race and making literal war more likely.

I spent all day crying about this. An arms race is about the least safe way to approach this. And we contributed to it. Many important people read Leopold's report. He promoted it quite hard. But the background work predates Leopold's involvement.

We were totally careless and self-aggrandizing. I hope other people don't pay for our sins.

6
akash 🔸
What makes you say this? I agree that it is likely that Aschenbrenner's report was influential here, but did we make Aschenbrenner write chapter IIId of Situational Awareness the way he did?  Is there some background EA/aligned work that argues for an arms race? Because the consensus seems to be against starting a great power war.

This sounds very much like the missile gap/bomber gap narrative, and yeah this is quite bad news if they actually adopt the commitments pushed here.

The evidence that China is racing to AGI is frankly very thin, and I see a very dangerous arms race coming:

https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and

8
MichaelDickens
I feel that. This report saddens me and I think its recommendations are very bad. I don't feel that I contributed to this. Perhaps I could have done more to prevent it, although it's not obvious to me what I could have done.
sapphire

Criticism of who? If anything EAs have been far too trusting of their actual leaders. Conversely they have been far too critical of people like Holly. It's not a simple matter of some parameter being too high.

Holden is married to Dario Amodei's sister. Dario is a founder of Anthropic. Holden was a major driver of EA AI policy.

Dustin is a literal billionaire who, along with his wife, has control over almost all EA institutions. Being critical of Dustin, while at all relying on EA funding or support, is certainly brave. Open Phil is known to be quite capricio... (read more)

In what way do you think OpenPhil is capricious? 

5
Benevolent_Rain
I think this is super important - criticism of those with the most power is likely to be worthwhile. Like in all politics, power can "buy" and create "opinion". Then, if epistemics is something we value, we have to be super careful of the contribution money makes to truth.

Just look at the people in climate change - nowadays nearly anyone can frame their pet project as a climate change intervention and they get funded, as long as they go along with the party line. And in climate change there are huge economic incentives - it is not a false claim by conservatives that many "green" investors stand to gain enormously from a change to the cleantech they invested in.

If there is one thing history should have taught us it is that power corrupts, and I see no robust immune system in EA against this. At the same time we have to be charitable - there are of course significant chances those with power in the movement both have pure intentions of doing good and are able to resist any influence from personal gains they stand to make from nudging the movement in certain directions.

Why do you need to justify something to yourself? You can do whatever you want. 

1
Anton
Myself?

I'm quite left-wing by Manifest standards. I'm probably extremely pro-woke even by EA standards. I had a great time at less-online/summer-camp/manifest. I honestly tried to avoid politics. Unlike many people I don't actually like arguing. I'd prefer to collaborate and learn from other people. (Though I feel somewhat 'responsible for' and 'invested in' EA and so I find it hard not to argue about that particular topic.) I mostly tried to talk to people about finance, health and prediction markets. Was honestly super fun and easy. People didn't force me to dis... (read more)

5
Austin
Thanks for the report; I'm glad that you had a good time! And I appreciate that you brought your girlfriend, and sorry that our event didn't sit well with her -- I think that's a bad sign, and want to figure out how to structure Manifest so that people like her also enjoy it.
sapphire

Emile seems to donate quite a bit: 


"I’m passionate about alleviating global poverty, and have pledged to give away everything I earn over $40,000 a year. In December 2022, I started a fundraiser with Nathan Young, an Effective Altruist, that raised more than $321,000 for the charity Give Directly." -- https://www.xriskology.com/

I'm also quite critical of EA and have donated more than most EAs (both in absolute and percentage terms). 

Even annoying critics may be quite sincere.

3
Tristan W
This is a solid data point so thanks for mentioning it. It is maybe worth mentioning that, as much as Emile and you may be "critical of EA", Emile was formerly quite friendly and you and I are having this conversation on the forum. I think you're likely both "more EA" than the average person, and definitely more EA than the average detractor that I have in mind. What it means to "be EA" is amorphous and uncertain here, but many people who would consider themselves EAs are also critical of it sometimes. I'd be interested to see how much Timnit donates, or any of those who wrote the typical SBF articles, but I highly doubt their numbers would look like those above.

I donated a lot. Both in absolute and percentage terms. I gave a percentage many times higher than even most well-off EAs. I think it would have been selfish to just keep the money. But I don't have any particularly great feelings about how I donated. 'Things are complicated' can be an applause light. Sometimes things aren't all that complicated. But this topic sure is. Saying 'those who criticize the movement as a whole are deeply intellectually unserious' just seems unserious to me. The movement has a lot of structural problems. Both 'extremely positive'... (read more)

4
NickLaing
I quite like this comment, and am interested in who the "two people" are who have almost all the power. Is one of them me? ;)

Imo full enlightenment really means, or should mean, no suffering. There is no necessary suffering anyway. The Buddha, or the classic teaching, is pretty clear if you ask me. One can debate how to translate the noble truths, but it's pretty clear to me the fourth one says suffering can be completely overcome.

FWIW you can get much faster progress combining meditation with psychedelics. Though as the Buddha said, you must investigate for yourself; don't take anyone's word for spiritual truth. Also enlightenment absolutely does make you better at most stuf... (read more)

There are a lot of possible answers to where thoughts come from and which thoughts are useful. One charitable thought is that some elite EAs tried to do things which were all of: hard, extremely costly if you fuck them up, and beyond what they were able to achieve given the difficulty. I have definitely updated a lot toward trying things that are very crazy but at least obviously only hurt me (or people who follow my example, but those people made their own choice). Fail gracefully. If you don't know how competent you are, make sure not to mess things up for other people. There is a lot of 'theater' around this but most people don't internalize what it really means.

Answer by sapphire

The people who initially set up GiveWell, did the research, and convinced Dustin to donate his money did a truly amazing job. AFAICT the people who currently run GiveWell are doing a good job. A large fraction of the good EA has done, in total, is largely due to their work.

But I don't think it's a good idea to frame things as there being a bunch of elite EAs whose work is superb. The EA leadership has fucked up a bunch of stuff. Many 'elite EAs' were not part of the parts of EA that went well. Many were involved in the parts of EA that went... (read more)

1
AnonymousTurtle
I agree with some of this comment and disagree with other parts:

"The people who initially set up GiveWell, did the research, and convinced Dustin to donate his money did a truly amazing job"

AFAIK Dustin would have donated a roughly similar amount anyway, at least at Gates levels of cost-effectiveness, so I don't think EA gets any credit for that (unless you include Dustin in EA, which you don't seem to do).

"The EA leadership has fucked up a bunch of stuff. Many 'elite EAs' were not part of the parts of EA that went well."

I agree, but I think we're probably thinking of different parts of EA.

"'Think for yourself about how to make the world better and then do it (assuming it's not insane)' is probably both going to be better for you and better for the world"

I agree with this, but I would be careful about where your thoughts are coming from.

I don't think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgement, but ideology is a different domain from most. I am honestly quite invested in something like 'moral progress'. It's a bit of a naive position to have to defend philosophically, but I think most altruists are too. At least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincere... (read more)

Not to state the obvious, but the 'criticism of EA' posts didn't pose a real risk to the power structure. It is uhhhhh quite common for 'criticism' to be a lot more encouraged/tolerated when it isn't threatening.

I mostly agree with this, and upvoted strongly, but I don't think the scare quotes around "criticism" are warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful; it's just a different good and useful thing.

9
wolframhead
I think this is entirely legitimate criticism. It's not at all clear to me that the net impact of Effective Altruism, from end to end, has even been positive. And if it has been negative, it has been negative BECAUSE of the impact the movement has had on AI timelines.

This should prompt FAR more reflection than I have seen within the community. People should be racking their brains for what went wrong and crying mea culpa. And working for OpenAI/Anthropic/etc. should not be seen as "effective". (Well, maybe now it's okay. Cat's out of the bag. But certainly being an AI capabilities researcher in 2020 did a lot of harm.)

As far as I can tell, the "Don't Build the Torment Nexus" community went ahead and built the Torment Nexus because it was both intellectually interesting and a path for individuals to acquire more power. Oops.

And to be clear, this pales in comparison - in my mind at least - to any harms done from the FTX debacle or the sexual abuse scandals. And that is not in any way a trivialization of either of those harms, both of which were also pretty severe. "Accelerate AI timelines" is just that bad.
7
Lorenzo Buonanno🔸
We have a higher bar for taking moderation action against criticism, but considering that sapphire was warned two days ago we have decided to ban sapphire for one month for breaking forum norms multiple times.

I'm not trying to get dignity points. I'm just trying to have a positive impact. At this point if AI is hard to align we all die (or worse!). I spent years trying to avoid contributing to the problem and helping when I could. But at this point it's better to just hope alignment isn't that hard (lost cause timelines) and try to steer the trajectory positively.

2
Tamsin Leake
"dignity points" means "having a positive impact". if alignment is hard we need my plan. and it's still very likely alignment is hard. and "alignment is hard" is a logical fact not indexical location, we don't get to save "those timelines".

IME you can induce much more pain than a tattoo relatively safely. Though all the best 'safe' forms of torture do cause short-term damage to the skin.

3
Jacob_Peacock
Not sure why this is being so heavily down-voted. I believe it's accurate and contributes, especially re: my comments where a safe and non-permanent way of causing severe pain would be needed.

I mean 'at what income do GWWC pledgers actually start donating 10%+'. Or more precisely: consider the set of GWWC pledge takers who make at least X per year; for what value of X is the mean donation at least X/10? The value of X you get is around one million per year. Donations are of course even lower for people who didn't take the pledge! Giving 10% when you make one million PER YEAR is not a very big ask. You will notice EAs making large, but not absurd, salaries, like 100-200K, give around 5%. Some EAs are extremely altruistic, but the average EA isn't that altruistic imo.
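
To pin down the statistic being described, here is a minimal sketch in Python on entirely made-up numbers (not the actual GWWC or survey data; the threshold it prints is purely an artifact of the toy inputs):

```python
# A minimal sketch of the statistic described above, using made-up
# numbers (NOT real GWWC or survey data): find the lowest income X such
# that the mean donation among pledgers earning at least X is >= X / 10.

incomes = [100_000] * 60 + [300_000] * 5 + [1_000_000] * 2
donations = [5_000] * 60 + [18_000] * 5 + [110_000] * 2

def donation_threshold(incomes, donations):
    pairs = list(zip(incomes, donations))
    for x in sorted(set(incomes)):
        subset = [d for income, d in pairs if income >= x]
        if sum(subset) / len(subset) >= x / 10:
            return x
    return None  # the condition is never met in this sample

print(donation_threshold(incomes, donations))  # -> 300000 on this toy data
```

On real survey data you would feed in the actual income/donation pairs; the sketch only makes the definition concrete.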

2
Jeff Kaufman 🔸
Looking at the chart henrith posted, it looks to me like the GWWC=yes line crosses 10% just below $300k/y, which is still high but well below $1M/y. Additionally, eyeballing the points on the chart, it looks to me like there's an issue with the way the fit works, where people earning less donating less makes it look like people who earn more also donate less? It looks like the chart came from Rethink Priorities EA Survey 2020 Series: Donation Data. Maybe the data is public and I can check this...

I agree with the thrust of the argument but I think it's a little too pessimistic. A lot of EAs aren't especially altruistic people. Tons of EAs got involved because of x-risk. And it requires very little altruism to care about whether you and everyone you know will die. You can look at the data on EA donations and notice they aren't that high. EAs don't donate 10% until they have a pre-tax income of around one million dollars per year!

4
Henri Thunberg 🔸
Emm sorry, what? Out of 8,000 GWWC pledgers, who have at least pledged to give 10%, very few earn $1M?
Lizka
Moderator Comment

Hi folks, I’m coming in as a mod. We're doing three things with this thread: we're issuing two warnings and encrypting one person's name in rot13. 

Discussions of abuse and sexual misconduct tend to be difficult and emotionally intense, and can easily create more confusion and hurt than clarity and improvement. They are also vitally important for communities — we really need clarity and improvement! 

So we really want to keep these conversations productive and will be trying our best.

1. We're issuing a warning to @sapphire for this... (read more)

4
Ivy Mazzola
[This comment has been edited cuz it was long and rambly trying to be gentle and nuanced before, but now idk, maybe it just sounded more aggressive before because it was longer idk]

I'm sighing at your response of digging your heels in. I am glad [Qhapna] chimed in for himself. But I still think someone should drill in just how much games of telephone can go wrong. It honestly sounds like you might do all this again in future.

I get the desire to post names. I am all about 100% transparency, particularly when done in great detail as a primary document (it looks like your document missed a lot of detail but you can always edit it to be more honest if you think it still matters. That's your right.). But discretion and being careful about where and how we say things are also important because of the very real risks that people get not-true stories out of the vague-things-we-think-sound-true that all of us (myself included) can be prone to say if we don't catch ourselves. This is an outcome you should care about if you care about getting the truth out there.

Sooo, I had gone to your Twitter yesterday to check that there weren't real substantiated claims of abuse made against [Qhapna]. I thought to myself "maybe she knows something I don't" (I take abuse claims seriously, so checked at least one of your social medias before defending him out of the blue). Anyway, nothing about [Qhapna]. "All cool", I thought. But yeah, scrolling down a bit, I believe you participated in a game of telephone just a couple days before. I was very much hoping you would be reasonable and either apologize or stay silent, because people make mistakes and I am not trying to shame you. But since you dug your heels in that naming DS on such a delicate thread was actually the right thing to do (potential misinformation be damned, I guess?), I will share this screenshot because you really ought to take seriously the suggestion that you should be more careful: [edit: I deleted the scre
[anonymous]

sapphire leaves out that the bits they quote in their document look like this now, and have since just a few days after posting:

[Edit]

What used to stand in this place was an imagined apology, generated by [my model of Brent] plus [my sense of what could be the *least* bad state of affairs that's consistent with reality].

I took that least-bad-of-all-possible-explanations, and wrote a statement out of it, specifically so that the discussion would not anchor on the most-bad-of-all-possible-explanations, the way it sometimes does, to the detriment of our moral

... (read more)

No one has a right to be a leader. If leaders mismanaged abuse situations they should be removed from positions of leadership. The point of leadership is supposed to be service. 

Okay I expect that is the default consensus, and is my default general desire too from a point of ignorance about any given case. I was just surprised that actors such as that weren't listed in this writeup.

I would also like to say, though, that depending on how many cases you take, a case will eventually be handled in a way that you could call mismanagement. Extreme mismanagement is one thing, as is generally having poor policies, but slight mismanagement now and again is a bug of the world. I don't expect 1000/1000 cases to be handled perfectly. Handling sexual... (read more)

[anonymous]

It's worth knowing that sapphire has never interacted with me in person to my knowledge, and also that I blocked sapphire on social media a while back (while they were using a different name) out of self-protectiveness. The accusation is pretty DARVO.

[Edit: I apologize for the rude and aggressive tone of some of this comment. In case it is of use to others, I have written more here on what I am doing to make sure I don't cause a disruption or potentially hurt someone's feelings again. Contributing to a healthy forum environment is important to me: https://bit.ly/40dfT90 ]

Responding in case journalists stop by. I do not think [Qhapna] is abusive, and I don't think those claims would bear out if you investigated his treatment of people today. I can easily state that and verify that as someone who follows him... (read more)

-5
Dmitriy
-2
Lauren Maria
I agree with sharing names publicly. I think this practice will hopefully make it less likely that others will engage in abusive behaviour, out of fear of having their reputation damaged. If this major figure is within EA, can you share the name here? I don't know your Twitter or Facebook so I'm not sure who it is you are referring to.

In my experience anonymous accounts work fine? What's important is having the information in public. Whether the account is anonymous or not isn't very predictive of whether effective change occurs. For example, Brent was defended by CFAR, but got kicked out once anonymous accounts were posted publicly.

1
Sarah Levin
I actually do know the real names of the people who wrote about Brent. It’s one of those “community insiders know who they were but it’s hard to tell from the outside” situations, like the one I described with pre-doxxing Scott Alexander. If the authors had been anonymous for real then I don’t think it would’ve worked anywhere near as well. This approach avoids most of the downsides of actually-unknown-and-unaccountable burner accounts and I do not object to it.

Why did Google invest three hundred million dollars? Google is a for-profit company.

Anthropic is also a for-profit company, so why wouldn't Google invest?

Or maybe what you're getting at is: What's Anthropic's plan for becoming profitable?

If you cannot tell Duncan Sabien is an abusive person from reading his Facebook posts, you should probably avoid weighing in on community safety. He makes his toxicity and aggression extremely obvious. Lots of people have gotten hurt.

(Of course there is other evidence, like the fact that he constantly defends bad behavior by others. He was basically the last person publicly defending Brent. But he continues to be considered a community leader with good judgment.)

I think it's a negative update, since lots of the people with bad judgment remained in positions of power. This remains true even if some people were forced out. AFAIK Mike Valentine was forced out of CFAR for his connections to Brent, in particular greenlighting Brent meeting with a very young person alone. Though I don't have proof of this specific incident. Unsurprisingly, the people Anna Salamon defended post-Brent included Mike Vassar.

With the exception of Brent, who is fully ostracized afaik, I think you seriously understate how much support these abusers still have. My model is sadly that a decent number of important rationalists and EAs just don't care that much about the sort of behavior in the article. CFAR investigated Brent and stood by him until there was public outcry! I will repost what Anna Salamon wrote a year ago, long after his misdeeds were well known. Lots of people have been updating TOWARD Vassar:

I hereby apologize for the role I played in X's ostracism from the communi

... (read more)

CFAR investigated Brent and stood by him until there was public outcry! 

This says very bad things about the leadership of CFAR, and probably other CFAR staff (to the extent that they either agreed with leadership or failed to push back hard enough, though the latter can be hard to do).

It seems to say good things about the public that did the outcry, which at the time felt to me like "almost everyone outside of CFAR". Everyone* yelled at a venerable and respected org until they stopped doing bad stuff. Is this a negative update against EA/rationality, ... (read more)

I think beating the uhhh 'market' is a lot easier than the EMH friends think. But it's not exactly easy being a +EV 'gambler'/speculative-investor. Your counterparties usually aren't total idiots*. You are better off passing unless you think a bet is both really good and you can get in at least decent money. It's good policy to restrict your attention to only cases which plausibly fulfill both conditions**.

Ad hoc bets also have a very serious adverse selection problem. And in some cases betting people in private when they are being morons makes me feel preda... (read more)
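
For readers unfamiliar with the shorthand: a bet is "+EV" when its expected value is positive. A minimal sketch, with entirely made-up numbers (nothing here comes from the comment above):

```python
# Minimal illustration of "+EV" (positive expected value), using
# made-up numbers: suppose you estimate a 60% chance of winning $100,
# risking a $100 stake if you lose.
p_win = 0.60
payout = 100  # dollars won if the bet resolves in your favor
stake = 100   # dollars lost otherwise

expected_value = p_win * payout - (1 - p_win) * stake
print(expected_value)  # 20.0: positive, so the bet is +EV under your estimate
```

The adverse-selection point above is that the counterparties who accept your ad hoc bets tend to be exactly the ones whose probability estimates make the same bet +EV for them, which should lower your own estimate of p_win.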

3
NunoSempere
Why follow that policy, rather than "only make trades if their expected value is greater than the value of what you would otherwise have used your time with?"
1
DPiepgrass
We are strongly against racism. It's just that Nick Bostrom is not racist (even though I find that his comment 26 years ago was extremely cringe and his apology wasn't done particularly well.)

Perhaps you have some insight about what was meant by "the views this particular academic expressed in his communications"? The criticisms of Bostrom I've seen have consistently declined to say what "views" they are referring to. One exception to this is that I heard one person say that almost everyone thinks it is racist to say that a racial IQ gap exists. To anyone who thinks this, I suggest searching for the word "gap" in this Wikipedia article.

And by the way, the main thread for discussing the apology is here.

I take iron, omega-3, vitamin B12, and vitamin D. My blood tests always look good. Creatine seems like a good idea but I don't know a good vegan source.

8
poppinfresh
I think most supplemental creatine is vegan? From what I can tell it's lab-synthesized from chemicals. Folks should obviously double-check that for themselves and their specific supplements, though.

Effective altruism's meta-strategy is about friendliness to (tech) power. All our funding comes from tech billionaires. We recruit at elite colleges. We strongly prioritize good relations with AI labs and the associated big tech companies. EA just isn't going to be genuinely critical or antagonistic toward the powerful groups we depend on for support and status. Not how EA works.

-4
sergeivolodin
I totally feel this isn't the only choice to do things. There are massive crowdfunding campaigns that work. I think that an entity that is not opposing itself to the power in any way has its own limitations, serious limitations.

Here's an example from Russia, where some charities collect money but HAVE to say they're pro-government. In many cases those were criticised, and I think justly, for creating more trouble than the effects of their charity. For example, some used TV ads to gather money for cancer treatment for children. However, the real problem is: Putin used all the taxes and gas profits on his wars and "internet research" operations, as well as personal luxury items. So these charities, some argue, were used as a "front" by the government to convince people that "medicine is OK, no need to worry".

Those charities only helped, like, the few, and some argue that if they didn't exist at all, people wouldn't have a false belief that "healthcare works fine in Russia", and would protest, and maybe we could get it. All because of the charities' inability to protest against existing power structures.

I think it applies to alignment too: it's hard to do alignment when one gets funding from a corp that has a financial interest in "profit first, safety second".
5
Sabs
This doesn't seem like a bad meta-strategy, fwiw. Surely otherwise EA just gets largely ignored.

Less theoretical example: FWIW I'm not sold on 'more than anyone', but the top 2-3 current AI labs are all downstream of AI safety!

3
[anonymous]
Though you need to consider the counterfactual where the talent currently at OAI, DM, and Anthropic all work at Google or Meta and have way less of a safety culture.

Great point! It does look like left tails are everywhere in the AI safety space.

Answer by sapphire

I'm mostly just depressed about AI progress being so rapid and the 'safety gameboard' being in such a bad state. I'm angry at the people who contributed to this terrible situation (which includes a lot of longtermist orgs).

My honest reaction was: This is finally being taken sort of seriously. If an EVF board member acted badly then the community can't just pretend the Time article is about people totally peripheral to the community. At least we got some kind of accountability beyond "the same team that has failed to take sufficient action in the past is looking into things." 

It honestly does feel like the dialogue is finally moving in a good direction. I already knew powerful people in EA acted very badly. So it's honestly a relief it seems like we might get real change.

A comment I made a few days ago said "But usually very little changes until someone goes public (at least anonymously). Nothing else remotely reliably creates the momentum to get bad actors out of power." Really aged quite well. 

As always I would advise survivors who want change to be as public as possible. Anonymous public statements work fine. Of course prioritize your own safety. But private internal processes are not a vehicle for change. Owen would, as predicted, still be on the board if not for the Time article.

I think that's the public image but isn't how things actually work internally. I'd really recommend reading this comment by Buck about how "You've also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies". Notably the post is pretty explicit that any proposed changes should be geared toward getting this small group onboard.

It is less public (at this point) but some of the core EAs have definitely been capricious in terms of who they want to re... (read more)

Okay so, if you'll bear with me a moment, your comment has actually convinced me that EA is in fact not hierarchical, but I do agree with your intended point.

Buck's comment, and the parent post by ConcernedEAs, point out that there's a small, tightly-knit group that's involved in many of the core EA organizations, who all know each other and collectively influence a lot of funding outcomes.

This is not the same thing as a hierarchy. There's no middle management, no corporate ladder you have to climb, and (as far as I've seen) no office politics you have to ... (read more)

The fact this is true, despite issues being reported to the community health team, is a serious indictment.

Honesty, never mind radical openness, is usually impossible if one party is dependent on the other. This is honestly one reason I hate how intensely hierarchical the EA community is. Hierarchy destroys openness.

Can you explain how the EA community is intensely hierarchical? From what I've seen, EA tends to have a relatively flat organizational structure and very high tolerance for contradicting or questioning authority figures, but maybe others have had different experiences with this than I have.

I agree that private processes are often better for survivors (though they can be worse). But usually very little changes until someone goes public (at least anonymously). Nothing else remotely reliably creates the momentum to get bad actors out of power. If the people in power weren't at least complicit we wouldn't have these endemic problems. Notably this has already played out multiple times in rationalist and EA spaces. Brent was extremely egregious, but until public callouts nothing was seriously done about him. In fact community leaders like Eliezer... (read more)

2
Jason
I'm not sure if that is inherent to private responses, though. One could imagine something set up vaguely like the "Facebook Supreme Court" (FSC) with longterm funding and independent/external control by neutrals. I'm not suggesting anything about the FSC model other than its externality and independence, but those features would allow us to have more confidence in the process because we would almost eliminate the concern that the processors have "main allegiance  . . . to the existing power structure." Data could get published, including on actions that were taken in response, and the data quality would probably be better for a wide variety of reasons.

Yeah, they can be. I went through a brutal "restorative justice" process myself (I'm trained in traditional law, and at the time, was personally insulted that a bunch of hacks thought they could replace centuries of legal work/thought), with someone EA-adjacent (though I just confirmed that my rapist has some ties to EA via Google; he's one of the 14 and not 30) - I said no for weeks, had multiple people push me into a process, went along because I wanted to tell my side, was silenced, and the "mediator" texted me to encourage me to kill myself before I left the country. Obviously, I'm not advocating for that.

And also, I had no idea I could report this to CH. Nor, given how CH is handling this, would I report this today.

Working with official orgs to handle sexual abuse cases almost never goes well. For obvious reasons victims want to avoid backlash. And many victims understandably don't want to ruin the lives of people they still care about. I truly wish private processes and call-ins worked better. But the only thing that creates change is public pressure. I would always endorse being as public as you can without compromising victim privacy or pressuring them to be more open about what happened. It is just a very unfortunate situation.

2
Evan_Gaensbauer
I'm not super familiar with the practices of call-in culture though I'm aware of it. While I'm sure there are some communities that have practiced methods similar to call-in culture well for a long time, they've been uncommon and I understand that call-in culture has in general only been spreading across different movements for a few years now. I also expect this community would benefit from learning more about call-in culture but it'd be helpful if you can make some recommendations for effective altruists to check out.

I agree with your sentiment (and upvoted so your comment doesn't get hidden), but (1) victims don't always have a strong connection to their attacker and may not care strongly, and (2) in my six years of doing this, sometimes (not always) private processes work. Most importantly, private processes are easier on the survivors, who should take precedence in any process.

Under my old screen name, I had  3 commenters say they changed their minds about rape, for example. I know my work certainly has changed people's opinions on rape, both at large a... (read more)

I basically agree, but following this advice would require lowering one's own status (relative to the counterfactual). So it's not surprising people don't follow the advice.

I'm extremely opposed to the culture of silence in EA/rat spaces. It is very extreme.

I will just push back on the idea, in a top-level post, that EAG admissions are not a judgment on people as EAs. CEA is very concerned about the most promising/influential EAs having useful conversations. If you are one of the people they consider especially promising or influential you will get invited. Otherwise, they might let you in if EAG seems especially useful for shaping your career. But they will also be worried that you are lowering the quality of the conversations. Here are some quotes from Eli, the lead on EA Global at CEA.


EAG is primarily a ne

... (read more)