The people who initially set up GiveWell, did the research, and convinced Dustin to donate his money did a truly amazing job. AFAICT the people who currently run GiveWell are doing a good job. A large fraction of the good EA has done, in total, is due to their work.
But I don't think it's a good idea to frame things as 'there's a bunch of elite EAs and the quality of their work is superb'. The EA leadership has fucked up a bunch of stuff. Many 'elite EAs' were not part of the parts of EA that went well. Many were involved in the parts of EA that went...
I don't think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgment, but ideology is a different domain from most. I am honestly quite invested in something like 'moral progress'. It's a bit of a naive position to have to defend philosophically, but I think most altruists are too. At least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincere...
Not to state the obvious, but the 'criticism of EA' posts didn't pose a real risk to the power structure. It is uhhhhh quite common for 'criticism' to be a lot more encouraged/tolerated when it isn't threatening.
I mostly agree with this, and upvoted strongly, but I don't think the scare quotes around "criticism" are warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful, it's just a different good and useful thing.
I'm not trying to get dignity points. I'm just trying to have a positive impact. At this point, if AI is hard to align we all die (or worse!). I spent years trying to avoid contributing to the problem and helping when I could. But at this point it's better to just hope alignment isn't that hard (lost-cause timelines) and try to steer the trajectory positively.
IME you can induce much more torture than a tattoo relatively safely. Though all the best 'safe' forms of torture do cause short-term damage to the skin.
I mean 'at what income do GWWC pledgers actually start donating 10%+'. Or more precisely: 'consider the set of GWWC pledge takers who make at least X per year; for what value of X is the mean donation at least X/10?' The value of X you get is around one million per year. Donations are of course even lower for people who didn't take the pledge! Giving 10% when you make one million PER YEAR is not a very big ask. You will notice EAs making large, but not absurd, salaries, like 100-200K, give around 5%. Some EAs are extremely altruistic, but the average EA isn't that altruistic imo.
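To make the threshold definition concrete, here is a minimal sketch in Python. The `pledge_threshold` function and the income/donation records below are entirely made up for illustration; getting the actual figure would require real GWWC survey data.

```python
# Find the smallest income level X such that, among pledge takers earning
# at least X, the mean donation is at least X/10 (i.e. a 10% give rate).

def pledge_threshold(records, candidates):
    """records: list of (income, donation) pairs.
    Returns the smallest candidate X where the mean donation of
    earners >= X reaches X / 10, or None if no candidate qualifies."""
    for x in sorted(candidates):
        donations = [d for income, d in records if income >= x]
        if donations and sum(donations) / len(donations) >= x / 10:
            return x
    return None

# Hypothetical data: give rates rise with income, as in the comment above.
records = (
    [(100_000, 5_000)] * 60      # many mid earners giving ~5%
    + [(500_000, 30_000)] * 8    # higher earners giving ~6%
    + [(1_000_000, 100_000)]     # one top earner giving 10%
)
candidates = sorted({income for income, _ in records})

print(pledge_threshold(records, candidates))  # -> 1000000
```

With this toy data the mean donation only reaches 10% of the income cutoff once the cutoff is one million, mirroring the pattern described above.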
I agree with the thrust of the argument but I think it's a little too pessimistic. A lot of EAs aren't especially altruistic people. Tons of EAs got involved because of x-risk. And it requires very little altruism to care about whether you and everyone you know will die. You can look at the data on EA donations and notice they aren't that high. EAs don't donate 10% until they have a pre-tax income of around one million dollars per year!
Hi folks, I’m coming in as a mod. We're doing three things with this thread: we're issuing two warnings and encrypting one person's name in rot13.
Discussions of abuse and sexual misconduct tend to be difficult and emotionally intense, and can easily create more confusion and hurt than clarity and improvement. They are also vitally important for communities — we really need clarity and improvement!
So we really want to keep these conversations productive and will be trying our best.
1.
We’re issuing a warning to @sapphire for this...
sapphire leaves out that the bits they quote in their document look like this now, and have since just a few days after posting:
...[Edit]
What used to stand in this place was an imagined apology, generated by [my model of Brent] plus [my sense of what could be the *least* bad state of affairs that's consistent with reality].
I took that least-bad-of-all-possible-explanations, and wrote a statement out of it, specifically so that the discussion would not anchor on the most-bad-of-all-possible-explanations, the way it sometimes does, to the detriment of our moral
No one has a right to be a leader. If leaders mismanaged abuse situations they should be removed from positions of leadership. The point of leadership is supposed to be service.
Okay I expect that is the default consensus, and is my default general desire too from a point of ignorance about any given case. I was just surprised that actors such as that weren't listed in this writeup.
I would also like to say, though, that depending on how many cases you handle, eventually a case will be handled in a way that you could call mismanagement. Extreme mismanagement, or generally having poor policies, is one thing, but slight mismanagement now and again is a bug of the world. I don't expect 1000/1000 cases to be handled perfectly. Handling sexual...
It's worth knowing that sapphire has never interacted with me in person to my knowledge, and also that I blocked sapphire on social media a while back (while they were using a different name) out of self-protectiveness. The accusation is pretty DARVO.
[Edit: I apologize for the rude and aggressive tone of some of this comment. In case it is of use to others, I have written more here on what I am doing to make sure I don't cause a disruption or potentially hurt someone's feelings again. Contributing to a healthy forum environment is important to me: https://bit.ly/40dfT90 ]
Responding in case journalists stop by. I do not think [Qhapna] is abusive, and I don't think those claims would bear out if you investigated how he treats people today. I can easily state and verify that as someone who follows him...
In my experience anonymous accounts work fine? What's important is having the information in public. Whether the account is anonymous or not isn't very predictive of whether effective change occurs. For example, Brent was defended by CFAR, but got kicked out once anonymous accounts were posted publicly.
Anthropic is also a for-profit company, so why wouldn't Google invest?
Or maybe what you're getting at is: What's Anthropic's plan for becoming profitable?
If you cannot tell Duncan Sabien is an abusive person from reading his Facebook posts, you should probably avoid weighing in on community safety. He makes his toxicity and aggression extremely obvious. Lots of people have gotten hurt.
(Of course there is other evidence, like the fact he constantly defends bad behavior by others. He was basically the last person publicly defending Brent. But he continues to be considered a community leader with good judgment)
I think negative update, since lots of the people with bad judgment remained in positions of power. This remains true even if some people were forced out. AFAIK Mike Valentine was forced out of CFAR for his connections to Brent, in particular greenlighting Brent meeting with a very young person alone. Though I don't have proof of this specific incident. Unsurprisingly, post-Brent, the people Anna Salomon defended included Mike Vassar.
With the exception of Brent, who is fully ostracized afaik, I think you seriously understate how much support these abusers still have. My model is sadly that a decent number of important rationalists and EAs just don't care that much about the sort of behavior in the article. CFAR investigated Brent and stood by him until there was public outcry! I will repost what Anna Salomon wrote a year ago, long after his misdeeds were well known. Lots of people have been updating TOWARD Vassar:
...I hereby apologize for the role I played in X's ostracism from the communi
CFAR investigated Brent and stood by him until there was public outcry!
This says very bad things about the leadership of CFAR, and probably other CFAR staff (to the extent that they either agreed with leadership or failed to push back hard enough, though the latter can be hard to do).
It seems to say good things about the public that did the outcry, which at the time felt to me like "almost everyone outside of CFAR". Everyone* yelled at a venerable and respected org until they stopped doing bad stuff. Is this a negative update against EA/rationality, ...
I think beating the uhhh 'market' is a lot easier than the EMH friends think. But it's not exactly easy being a +EV 'gambler'/speculative investor. Your counterparties usually aren't total idiots*. You are better off passing unless you think a bet is both really good and you can get in at least decent money. It's good policy to restrict your attention to only cases which plausibly fulfill both conditions**.
Ad hoc bets also have a very serious adverse selection problem. And in some cases betting people in private when they are being morons makes me feel preda...
I take iron, omega-3, vitamin B12, and vitamin D. My blood tests always look good. Creatine seems like a good idea but I don't know a good vegan source.
Effective altruism's meta-strategy is about friendliness to (tech) power. All our funding comes from tech billionaires. We recruit at elite colleges. We strongly prioritize good relations with AI labs and the associated big tech companies. EA just isn't going to be genuinely critical or antagonistic toward the powerful groups we depend on for support and status. Not how EA works.
Less theoretical example: FWIW I'm not sold on 'more than anyone', but the top 2-3 current AI labs are all downstream of AI safety!
I'm mostly just depressed about AI progress being so rapid and the 'safety gameboard' being in such a bad state. I'm angry at the people who contributed to this terrible situation (which includes a lot of longtermist orgs).
My honest reaction was: This is finally being taken sort of seriously. If an EVF board member acted badly then the community can't just pretend the Time article is about people totally peripheral to the community. At least we got some kind of accountability beyond "the same team that has failed to take sufficient action in the past is looking into things."
It honestly does feel like the dialogue is finally moving in a good direction. I already knew powerful people in EA acted very badly. So it's honestly a relief it seems like we might get real change.
A comment I made a few days ago said "But usually very little changes until someone goes public (at least anonymously). Nothing else remotely reliably creates the momentum to get bad actors out of power." Really aged quite well.
As always I would advise survivors who want change to be as public as possible. Anonymous public statements work fine. Of course prioritize your own safety. But private internal processes are not a vehicle for change. Owen would, as predicted, still be on the board if not for the Time article.
I think that's the public image but isn't how things actually work internally. I'd really recommend reading this comment by Buck about how "You've also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies". Notably the post is pretty explicit that any proposed changes should be geared toward getting this small group onboard.
It is less public (at this point) but some of the core EAs have definitely been capricious in terms of who they want to re...
Okay so, if you'll bear with me a moment, your comment has actually convinced me that EA is in fact not hierarchical, but I do agree with your intended point.
Buck's comment, and the parent post by ConcernedEAs, point out that there's a small, tightly-knit group that's involved in many of the core EA organizations, who all know each other and collectively influence a lot of funding outcomes.
This is not the same thing as a hierarchy. There's no middle management, no corporate ladder you have to climb, and (as far as I've seen) no office politics you have to ...
The fact this is true, despite issues being reported to the community health team, is a serious indictment.
Honesty, never mind radical openness, is usually impossible if one party is dependent on the other. This is honestly one reason I hate how intensely hierarchical the EA community is. Hierarchy destroys openness.
Can you explain how the EA community is intensely hierarchical? From what I've seen, EA tends to have a relatively flat organizational structure and very high tolerance for contradicting or questioning authority figures, but maybe others have had different experiences with this than I have.
I agree that private processes are often better for survivors (though they can be worse). But usually very little changes until someone goes public (at least anonymously). Nothing else remotely reliably creates the momentum to get bad actors out of power. If the people in power weren't at least complicit, we wouldn't have these endemic problems. Notably this has already played out multiple times in rationalist and EA spaces. Brent was extremely egregious, but until public callouts nothing was seriously done about him. In fact community leaders like Eliezer...
Yeah, they can be. I went through a brutal "restorative justice" process myself (I'm trained in traditional law, and at the time, was personally insulted that a bunch of hacks thought they could replace centuries of legal work/thought), with someone EA-adjacent (though I just confirmed that my rapist has some ties to EA via Google; he's one of the 14 and not 30) - I said no for weeks, had multiple people push into a process, went along because I wanted to tell my side, was silenced, and the "mediator" texted me to encourage me to kill myself before I left the country. Obviously, I'm not advocating for that.
And also, I had no idea to report this to CH. Nor, given how CH is handling this, would I report this today.
Working with official orgs to handle sexual abuse cases almost never goes well. For obvious reasons victims want to avoid backlash. And many victims understandably don't want to ruin the lives of people they still care about. I truly wish private processes and call-ins worked better. But the only thing that creates change is public pressure. I would always endorse being as public as you can without compromising victim privacy or pressuring them to be more open about what happened. It is just a very unfortunate situation.
I agree with your sentiment (and upvoted so your comment doesn't get hidden), but (1) victims don't always have a strong connection to their attacker and may not care strongly, and (2) in my six years of doing this, sometimes (not always) private processes work. Most importantly, private processes are easier on the survivors, who should take precedence in any process.
Under my old screen name, I had 3 commenters say they changed their minds about rape, for example. I know my work certainly has changed people's opinions on rape, both at large a...
I basically agree, but following this advice would require lowering one's own status (relative to the counterfactual). So it's not surprising people don't follow the advice.
I will just push back on the idea, in a top-level post, that EAG admissions are not a judgment on people as EAs. CEA is very concerned about the most promising/influential EAs having useful conversations. If you are one of the people they consider especially promising or influential you will get invited. Otherwise, they might let you in if EAG seems especially useful for shaping your career. But they will also be worried that you are lowering the quality of the conversations. Here are some quotes from Eli, the lead on EA global at CEA.
...
EAG is primarily a ne
FWIW core EAs have openly said a major reason to keep EAG small is the 'quality of conversation' at the event. This is a big reason they made EAG smaller again. So there is definitely a level of judgment going on.
It's very understandable you don't want to handle this yourself. But I would strongly encourage you not to tell survivors to trust CEA.
I think it is pretty important that, by its own internal logic, longtermism has had negative impact. The AI safety community probably accelerated AI progress. OpenAI is still pretty connected to the EA community and has been starting arms races (at least until recently the 80K jobs board listed jobs at OpenAI). This is well known but true. Longtermism has also been connected to all sorts of scandals.
As far as I can tell neartermist EA has been reasonably successful. So it's kind of concerning that institutional EA is dominated by longtermists. Would be nice to have institutions run by people who genuinely prioritize neartermism.
Fwiw my guess is that longtermism hasn’t had net negative impact by its own standards. I don’t think negative effects from AI speed up outweigh various positive impacts (e.g. promotion of alignment concerns, setting up alignment research, and non-AI stuff).
I think the problem here is that it makes a category mistake about how the move to longtermism happened. It wasn't any success or failure metric that moved things, but the underlying arguments becoming convincing to people. For example, Holden Karnofsky moving from founding Givewell to heading the longtermist side of OpenPhil and focusing on AI.
The people who made neartermist causes successful chose of their own accord to move to the longtermist side. They aren't being coerced away. GHW donations are growing in absolute terms. The weird f...
EVF is an umbrella org that manages community events, community health, public comms like EA handbook and curriculum, press interface, etc largely through CEA. It handles these tasks for both longtermism and the rest of EA. This is suboptimal IMO. The solution here is not just a financial split.
It would be extremely surprising if she didn't know about the sexual abuse allegations. They are very well known among her social circle. Despite this she has chosen to defend the fellow.
My interpretation of Anna was that if she thought there were credible allegations she would have included them in her long list of potentially undesirable actions?
The alleged perpetrator seems to be at least tolerated by some influential people. About two years ago Anna Salomon wrote:
...(1) X seems to me to precipitate psychotic episodes in his interlocutors surprisingly often, to come closer to advocating physical violence than I would like, and to have conversational patterns that often disorient his interlocutors and leave them believing different things while talking to X than they do a bit later.
(2) I don't have overall advice that people ought to avoid X, in spite of (1), because it now seems to me that he is try
While I don't really disagree, I think it's worth pointing out that Anna here is talking about pretty different behaviors (precipitating psychotic episodes, approaching advocating physical violence, misleading reasoning, yelling) than we're talking about here (sexual abuse).
Probably important nitpick: The last bit of your first quoted paragraph misses a redaction.
Given what I've heard of this person, I'm really surprised and dismayed by the tolerance of this person by some, and wish they wouldn't do that.
This is what happens when you centralize power so much. I'm so sorry for what happened. So many people remaining silent and covering for abusers.
(it shouldn't matter but for the record I have multiple partners)
While this is a simple comment, I am a little bit surprised by the downvotes and strong disagreement signaled. Could people who strongly disagree with this comment point out their reasoning?
Without having thought too much about this, I do think that it seems plausible to consider the effects that centralized decision making has on enabling or at least not discouraging these types of behaviors.
Nothing serious can change until the whole 'all important decisions are made by about ten people who don't see any need to get community buy-in' issue is solved.
I've honestly developed some pretty serious mental health issues. It's just miserable to worry about everyone dying or worse.
There are a lot of possible answers to where thoughts come from and which thoughts are useful. One charitable thought is that some elite EAs tried to do things which were all of: hard, extremely costly if you fuck them up, and beyond what they were able to achieve given the difficulty. I have definitely updated a lot toward trying things that are very crazy but at least obviously only hurt me (or people who follow my example, but those people made their own choice). Fail gracefully. If you don't know how competent you are, make sure not to mess things up for other people. There is a lot of 'theater' around this but most people don't internalize what it really means.