
I think EA Global should be open access. No admissions process. Whoever wants to go can.

I'm very grateful for the work that everyone does to put together EA Global. I know this would add much more work for them. I know it is easy for me, a person who doesn't do the work now and won't have to do the extra work, to say extra work should be done to make it bigger.

But 1,500 people attended last EAG. Compare this to the 10,000 people at the last American Psychiatric Association conference, or the 13,000 at NeurIPS. EAG isn't small because we haven't discovered large-conference-holding technology. It's small as a design choice. When I talk to people involved, they say they want to project an exclusive atmosphere, or make sure that promising people can find and network with each other.

I think this is a bad tradeoff.

...because it makes people upset

This comment (seen on Kerry Vaughan's Twitter) hit me hard:

(source)

A friend describes volunteering at EA Global for several years. Then one year they were told that not only was their help not needed, but they weren't impressive enough to be allowed admission at all. Then later something went wrong and the organizers begged them to come and help after all. I am not sure that they became less committed to EA because of the experience, but based on the look of delight in their eyes when they described rejecting the organizers' plea, it wouldn't surprise me if they did.

People getting angry at not being invited to things has been a problem for a long time, and could even be considered a potential global catastrophic risk.

Not everyone rejected from EAG feels vengeful. Some people feel miserable. This year I came across the Very Serious Guide To Surviving EAG FOMO:

Part of me worries that, despite its name, it may not really be Very Serious... 

...but you can learn a lot about what people are thinking by what they joke about, and I think a lot of EAs are sad because they can't go to EAG.

...because you can't identify promising people.

In early 2020 Kelsey Piper and I gave a talk to an EA student group. Most of the people there were young overachievers who had their entire lives planned out, people working on optimizing which research labs they would intern at in which order throughout their early 20s. They expected us to have useful tips on how to do this.

Meanwhile, in my early 20s, I was making $20,000/year as an intro-level English teacher at a Japanese conglomerate that went bankrupt six months after I joined. In her early 20s, Kelsey was taking leave from college for mental health reasons and babysitting her friends' kid for room and board. If either of us had been in the student group, we would have been the least promising of the lot. And here we were, being asked to advise! I mumbled something about optionality or something, but the real lesson I took away from this is that I don't trust anyone to identify promising people reliably. 

...because people will refuse to apply out of scrupulosity.

I do this.

I'm not a very good conference attendee. Faced with the challenge of getting up early on a Saturday to go to San Francisco, I drag my feet and show up an hour late. After a few talks and meetings, I'm exhausted and go home early. I'm unlikely to change my career based on anything anyone says at EA Global, and I don't have any special wisdom that would convince other people to change theirs.

So when I consider applying to EAG, I ask myself whether it's worth taking up a slot that would otherwise go to some bright-eyed college student who has been dreaming of going to EAG for years and is going to consider it the highlight of their life. Then I realize I can't justify bumping that college student, and don't apply.

I used to think I was the only person who felt this way. But a few weeks ago, I brought it up in a group of five people, and two of them said they had also stopped applying to EAG, for similar reasons. I would judge both of them to be very bright and dedicated people, exactly the sort who I think the conference leadership are trying to catch.

In retrospect, "EAs are very scrupulous and sensitive to replaceability arguments" is a predictable failure mode. I think there could be hundreds of people in this category, including some of the people who would benefit most from attending.

...because of Goodhart's Law

If you only accept the most promising people, then you'll only get the people who most legibly conform to your current model of what's promising. But EA is forever searching for "Cause X" and for paradigm-shifting ideas. If you only let people whose work fits the current paradigm sit at the table, you're guaranteed not to get these.

At the 2017 EAG, I attended some kind of reception dinner with wine or cocktails or something. Seated across the table from me was a man who wasn't drinking and who seemed angry about the whole thing.  He turned out to be a recovering alcoholic turned anti-alcohol activist. He knew nobody was going to pass Prohibition II or anything; he just wanted to lessen social pressure to drink and prevent alcohol from being the default option - ie no cocktail hours. He was able to rattle off some pretty impressive studies about the number of QALYs alcohol was costing and why he thought that reducing social expectations of drinking would be an effective intervention. I didn't end up convinced that this beat out bednets or long-termism, but his argument has stuck with me years later and influenced the way I approach social events.

This guy was a working-class recovering alcoholic who didn't fit the "promising mathematically gifted youngster" model - but that is the single conversation I think about most from that weekend, and ever since then I've taken ideas about "class diversity" and "diversity of ideas" much more seriously. 

(even though the last thing we need is for one more category of food/drink to get banned from EA conferences)

...because man does not live by networking alone

In the Facebook threads discussing this topic, supporters of the current process have pushed back: EA Global is a networking event. It should be optimized for making the networking go smoothly, which means keeping out the people who don't want to network or don't have anything to network about. People who feel bad about not being invited are making some sort of category error. Just because you don't have much to network about doesn't make you a bad person!

On the other hand, the conference is called "EA Global" and is universally billed as the place where EAs meet one another, learn more about the movement, and have a good time together. Everyone getting urged not to worry because it's just about networking has to spend the weekend watching all their friends say stuff like this:

Some people want to go to EA Global to network. Some people want to learn more about EA and see whether it's right for them. Some people want to update themselves on the state of the movement and learn about the latest ideas and causes. Some people want to throw themselves into the whirlwind and see if serendipity makes anything interesting happen. Some people want to see their friends and party. 

All of these people are valid. Even the last group, the people who just want to see friends and party, are valid. EA spends I-don't-even-know-how-many millions of dollars on community-building each year. And here are people who really want to participate in a great EA event, one that could change their lives and re-energize them, basically the community trying to build itself. And we're telling them no?

...because you can have your cake and eat it too.

There ought to be places for elites to hang out with other elites. There ought to be ways for the most promising people to network with each other. I just don't think these have to be all of EA Global.

For example, what if the conference itself was easy to attend, but the networking app was exclusive? People who wanted to network could apply, the 1500 most promising could get access to the app, and they could network with each other, same as they do now. Everyone else could just go to the talks or network among themselves.

Or what if EA Global was easy to attend, but there were other conferences - the Special AI Conference, the Special Global Health Conference - that were more selective? Maybe this would even be more useful, since the Global Health people probably don't gain much from interacting with the AI people, and vice versa.

Some people on Facebook worried that the organizers want to offer travel reimbursement to attendees, but couldn't afford to scale that up to 10,000 travel reimbursement packages. So why not have 10,000 attendees, who can apply for 1,500 travel reimbursement packages that organizers award based on a combination of need and talent? Why not make ordinary attendees pay a little extra, and subsidize even more travel reimbursements?

I don't know; there are probably other factors I don't know about. Still, it would surprise me if, all things considered, the EA movement would be worse off for giving thousands of extra really dedicated people the chance to attend its main conference each year.

At the closing ceremony of EA Global 2017, Will MacAskill urged attendees to "keep EA weird".

His PowerPoint slide for this topic was this picture of Eliezer Yudkowsky. Really. I’m not joking about this part.

I don't know if we are living up to that. Some of the people who get accepted are plenty weird. Still, I can't help thinking we are failing to fully execute that vision.

Comments
Pinned by Sarah Cheng

Hey everyone, on an admin note I want to announce that I'm stepping in as "Transition Coordinator." Basically, Max wanted to step down immediately, and choosing an ED even on an interim basis might take a bit, so I will be doing the minimal set of ED-like tasks to keep CEA running and start an ED search. 

If things go well you shouldn’t even notice that I’m here, but you can reach me at ben.west@centreforeffectivealtruism.org if you would like to contact me personally.

Pinned by Will Aldred

Hey folks, a reminder to please be thoughtful as you comment.

The previous Nonlinear thread received almost 500 comments; many of these were productive, but there were also some more heated exchanges. Following Forum norms—in a nutshell: be kind, stay on topic, be honest—is probably even more important than usual in charged situations like these.

Discussion here could end up warped towards aggression and confusion for a few reasons, even if commenters are generally well intentioned:

  • Some of the allegations this post responds to, and the new allegations in the post, are serious and upsetting. People who have gone through similar experiences may find engaging with this topic especially stressful.
  • Power dynamics and alleged mistreatment are rightly emotionally loaded topics, and can be difficult to discuss objectively.
  • Differences in personal culture and experiences can lead to hard-to-articulate disagreements over acceptable versus unacceptable behaviour.

Regarding this paragraph from the post:

Given what they have done, a number of people expressed to us that they think Alice/Chloe are a danger to the health of the community and should not be anonymized. We will leave that

... (read more)
Lizka (Moderator Comment)
Pinned by JP Addison

A short note as a moderator:[1] People (understandably) have strong feelings about discussions that focus on race, and many of us found the content that the post is referencing difficult to read. This means that it's both harder to keep to Forum norms when responding to this, and (I think) especially important.

Please keep this in mind if you decide to engage in a discussion about this, and try to remember that most people on the Forum are here for collaborative discussions about doing good.

If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.

  1. ^

    Mostly copying this comment from one I made on another post.

I thought we could do a thread for Giving What We Can pledgers and lessons learnt or insights since pledging! 

I'll go first: I was actually really worried about how donating 10% would feel, as well as its impact on my finances - but actually it's made me much less stressed about money - to know I can still have a great standard of living with 10% less. It's actually changed the way I see money and finances and has helped me think about how I can increase my giving in future years.

If folks don't mind,  a brief word from our sponsors...

I saw Cremer's post and seriously considered this proposal. Unfortunately I came to the conclusion that the parenthetical point about who comprises the "EA community" is, as far as I can tell, a complete non-starter.

My co-founder from Asana, Justin Rosenstein, left a few years ago to start oneproject.org, and that group came to believe sortition (lottery-based democracy) was the best form of governance. So I came to him with the question of how you might define the electorate in the case of a group like EA. He suggests it's effectively not possible to do well other than in the case of geographic fencing (i.e. where people have invested in living) or by alternatively using the entire world population. 

I have not myself come up with a non-geographic strategy that doesn't seem highly vulnerable to corrupt intent or vote brigading. Given that the stakes are the ability to control large sums of money, having people stake some of their own (i.e. become "dues-paying" members of some kind) does not seem like a strong enough mitigation. For example, a hostile takeover almost happened to the Sierra Club in SF in 2015 (albeit fo... (read more)

It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn't compare OP against the rest but against the ideal.

One way you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn't vulnerable to brigading, because it requires putting in proportionally more money the more influence you want to have, but at the same time this makes it less democratic.

More realistically, some proposals in that broad direction which I think could actually be implementable could be:

  • allowing people to bet against particular OpenPhilanthropy grants producing successful outcomes. 
  • allowing people to bet against OP's strategic decisions (e.g., against worldview diversification)
  • I'd love to see bets between OP and other organizations about whose funding is more effective, e.g., I'd love to see a bet between you and Jaan Tallinn on whose approach is better, where the winner gets some large amount (e.g., $20
... (read more)

Hi Dustin :)

FWIW I also don't particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn't necessarily look like "democracy" per se and might look more like more regranting, forecasting tournaments, etc.

A couple replies imply that my research on the topic was far too shallow and, sure, I agree.

But I do think that shallow research hits different from my POV, where the one person I have worked most closely with across nearly two decades happens to be personally well researched on the topic. What a fortuitous coincidence! So the fact that he said "yea, that's a real problem" rather than "it's probably something you can figure out with some work" was a meaningful update for me, given how many other times we've faced problems together.

I can absolutely believe that a different person, or further investigation generally, would yield a better answer, but I consider this a fairly strong prior rather than an arbitrary one. I also can't point at any clear reference examples of non-geographic democracies that appear to function well and have strong positive impact. A priori, it seems like a great idea, so why is that?

The variations I've seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.

chloe

On behalf of Chloe and in her own words, here’s a response that might illuminate some pieces that are not obvious from Ben’s post - as his post is relying on more factual and object-level evidence, rather than the whole narrative.

“Before Ben published, I found thinking about or discussing my experiences very painful, as well as scary - I was never sure whom it was safe to share any of this with. Now that it’s public, it feels like it’s in the past and I’m able to talk about it. Here are some of my experiences I think are relevant to understanding what went on. They’re harder to back up with chatlog or other written evidence - take them as you want, knowing these are stories more than clearly backed up by evidence. I think people should be able to make up their own opinion on this, and I believe they should have the appropriate information to do so.

I want to emphasize *just how much* the entire experience of working for Nonlinear was them creating all kinds of obstacles, and me being told that if I’m clever enough I can figure out how to do these tasks anyway. It’s not actually about whether I had a contract and a salary (even then, the issue wasn’t the amount or even the legali... (read more)

I confirm that this is Chloe, who contacted me through our standard communication channels to say she was posting a comment today.

Thank you very much for sharing, Chloe.

Ben, Kat, Emerson, and readers of the original post have all noticed that the nature of Ben's process leads to selection against positive observations about Nonlinear. I encourage readers to notice that the reverse might also be true. Examples of selection against negative information include:

  1. Ben has reason to exclude stories that are less objective or have a less strong evidence base. The above comment is a concrete example of this.
    1. There's also something related here about the supposed unreliability of Alice as a source: Ben needs to include this to give a complete picture/because other people (in particular the Nonlinear co-founders) have said this. I strongly concur with Ben when he writes that he "found Alice very willing and ready to share primary sources [...] so I don’t believe her to be acting in bad faith." Personally, my impression is that people are making an incorrect inference about Alice from her characteristics (that are perhaps correlated with source-reliability in a large population, but aren't logically related, and aren't relevant in this case).
  2. To the extent that you expect other people to have been silenced (e.g. via antici
... (read more)

Emerson approaches me to ask if I can set up the trip. I tell him I really need the vacation day for myself. He says something like “but organizing stuff is fun for you!”.

[...]

She kept insisting that I’m saying that because I’m being silly and worry too much and that buying weed is really easy, everybody does it.

😬 There's a ton of awful stuff here, but these two parts really jumped out at me. Trying to push past someone's boundaries by imposing a narrative about the type of person they are ('but you're the type of person who loves doing X!' 'you're only saying no because you're the type of person who worries too much') is really unsettling behavior.

I'll flag that this is an old remembered anecdote, and those can be unreliable, and I haven't heard Emerson or Kat's version of events. But it updates me, because Chloe seems like a pretty good source and this puzzle piece seems congruent with the other puzzle pieces.

E.g., the vibe here matches something that creeped me out a lot about Kat's text message to Alice in the OP, which is the apparent attempt to corner/railroad Alice into agreement via a bunch of threats and strongly imposed frames, followed immediately by Kat repeatedly stat... (read more)

bruce

This sounds like a terribly traumatic experience. I'm so sorry you went through this, and I hope you are in a better place and feel safer now.

Your self-worth is so, so much more than how well you can navigate what sounds like a manipulative, controlling, and abusive work environment.

 

spent months trying to figure out how to empathize with Kat and Emerson, how they’re able to do what they’ve done, to Alice, to others they claimed to care a lot about. How they can give so much love and support with one hand and say things that even if I’d try to model “what’s the worst possible thing someone could say”, I’d be surprised how far off my predictions would be.

It sounds like despite all of this, you've tried to be charitable to people who have treated you unfairly and poorly - while this speaks to your compassion, I know this line of thought can often lead to things that feel like you are gaslighting yourself, and I hope this isn't something that has caused you too much distress.

I also hope that Effective Altruism as a community becomes a safer space for people who join it aspiring to do good, and I'm grateful for your courage in sharing your experiences, despite it (very reasonably!... (read more)

Julia_Wise

I’m responding on behalf of the community health team at the Centre for Effective Altruism. We work to prevent and address problems in the community, including sexual misconduct.

I find the piece doesn’t accurately convey how my team, or the EA community more broadly, reacts to this sort of behavior.

We work to address harmful behavior, including sexual misconduct, because we think it’s so important that this community has a good culture where people can do their best work without harassment or other mistreatment. Ignoring problems or sweeping them under the rug would be terrible for people in the community, EA’s culture, and our ability to do good in the world.

My team didn’t have a chance to explain the actions we’ve already taken on the incidents described in this piece. The incidents described here include:

  • Ones where we already took action years ago, like banning the accused from our spaces
  • Ones where we offered to help address the situation and the person affected didn’t answer
  • Ones we weren’t aware of

We’ll be going through the piece to see if there are any situations we might be able to address further, but in most of them there’s not enough information to do so. If you ... (read more)

There's a lot of discussion here about why things don't get reported to the community health team, and what they're responsible for, so I wanted to add my own bit of anecdata.

I'm a woman who has been closely involved with a particularly gender-imbalanced portion of EA for 7 years, who has personally experienced and secondhand heard about many issues around gender dynamics, and who has never reported anything to the community health team (despite several suggestions from friends to). Now I'm considering why.

Upon reflection, here are a few reasons:

  1. Early on, some of it was naiveté. I experienced occasional inappropriate comments or situations from senior male researchers when I was a teenager, but assumed that they could never be interested in me because of the age and experience gap. At the time I thought that I must be misinterpreting the situation, and only see it the way I do now with the benefit of experience and hindsight. (I never felt unsafe, and if I had, would have reported it or left.)

  2. Often, the behavior felt plausibly deniable. "Is this person asking me to meet at a coffeeshop to discuss research or to hit on me? How about meeting at a bar? Going for a walk on the be

... (read more)

To give a little more detail about what I think gave wrong impressions - 

Last year as part of a longer piece about how the community health team approaches problems, I wrote a list of factors that need to be balanced against each other. One that’s caused confusion is “Give people a second or third chance; adjust when people have changed and improved.” I meant situations like “someone has made some inappropriate comments and gotten feedback about it,” not something like assault. I’m adding a note to the original piece clarifying.

What proportion of the incidents described was the team unaware of?

I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism which are already a live option for people with this type of outlook, and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are  weak. 

  1. You begin by citing the Cowen quote that "EAs couldn't see the existential risk to FTX even though they focus on existential risk". I think this is one of the more daft points made by  a serious person on the FTX crash. Although the words 'existential risk' are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn't enough attention to existential risks to FTX and the implications this would have for EA.  In contrast, EAs have put umpteen pers
... (read more)
Habryka

I don't think I am a great representative of EA leadership, given my somewhat bumpy relationship with and feelings toward a lot of EA stuff, but nevertheless I think I have a bunch of the answers that you are looking for: 

Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?

The Coordination Forum is a very loosely structured retreat that's been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule. 

At least as far as I can tell basically no decisions get made at Coordination Forum, and its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some bal... (read more)

I think it could be a cost-effective use of $3-10 billion (I don't know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it's not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam's net-worth at face-value at the time, this didn't seem like a crazy idea to me. 

The 15 billion figure comes from Will's text messages themselves (page 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then  Elon Musk asks, "Does he have huge amounts of money?" and Will replies, "Depends on how you define "huge." He's worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: "~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing"

It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the ti... (read more)

Hey, I wanted to clarify that Open Phil gave most of the funding for the purchase of Wytham Abbey (a small part of the costs were also committed by Owen and his wife, as a signal of “skin in the game”). I run the Longtermist EA Community Growth program at Open Phil (we recently launched a parallel program for EA community growth for global health and wellbeing, which I don’t run) and I was the grant investigator for this grant, so I probably have the most context on it from the side of the donor. I’m also on the board of the Effective Ventures Foundation (EVF).

Why did we make the grant? There are two things I’d like to discuss about this, the process we used/context we were in, and our take on the case for the grant. I’ll start with the former. 

 

Process and context: At the time we committed the funding (November 2021, though the purchase wasn’t completed until April 2022), there was a lot more apparent funding available than there is today, both from Open Phil and from the Future Fund. Existential risk reduction and related efforts seemed to us to have a funding overhang, and we were actively looking for more ways to spend money to support more good work, e... (read more)

Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that. 
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January.

I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some) of her claims and her suicide. What is clear is the pages and pages of tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations.

 

I'm one of the people (maybe the first person?) who made a post saying that (some of) Kathy's accusations were false. I did this because those accusations were genuinely false, could have seriously damaged the lives of innocent people, and I had strong evidence of this from multiple very credible sources.

I'm extremely prepared to defend my actions here, but prefer not to do it in public in order to not further harm anyone else's reputation (including Kathy's). If you want more details, feel free to email me at scott@slatestarcodex.com and I will figure out how much information I can give you without violating anyone's trust.

[anonymous]

I'm glad you made your post about how Kathy's accusations were false.  I believe that was the right thing to do -- certainly given the information you had available.

But I wish you had left this sentence out, or written it more carefully:

But they wouldn't do that, I'm guessing because they were all terrified of getting called out in posts like this one.

It was obvious to me reading this post that the author made a really serious effort to stay constructive. (Thanks for that, Maya!)  It seems to me that we should recognize that, and you're erasing an important distinction when you categorize the OP with imprudent tumblr call-out posts.

If nothing else, no one is being called out by name here, and the author doesn't link any of the tumblr posts and Reddit threads she refers to.

I don't think causing reputational harm to any individual was the author's intent in writing this.  Fear of unfair individual reputational harm from what's written here seems a bit unjustified.

EDIT: After some time to cool down, I've removed that sentence from the comment, and somewhat edited this comment which was originally defending it. 

I do think the sentence was true. By that I mean that (this is just a guess, not something I know from specifically asking them) the main reason other people were unwilling to post the information they had, was because they were worried that someone would write a public essay saying "X doesn't believe sexual assault victims" or "EA has a culture of doubting sexual assault victims". And they all hoped someone else would go first to mention all the evidence that these particular rumors were untrue, so that that person could be the one to get flak over this for the rest of their life (which I have, so good prediction!), instead of them. I think there's a culture of fear around these kinds of issues that it's useful to bring to the foreground if we want to model them correctly.

But I think you're gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bri... (read more)

Arepo

I want to strong agree with this post, but a forum glitch is preventing me from doing so, so mentally add +x agreement karma to the tally. [Edit: fixed and upvoted now]

I have also heard from at least one very credible source that at least one of Kathy's accusations had been professionally investigated and found without any merit.

Maybe also worth adding that the way she wrote the post would, in a healthy person, have been intentionally misleading, and was at the very least incredibly careless given the strength of the accusation. Eg there was some line to the effect of 'CFAR are involved in child abuse', where the claim was link-highlighted in a way that strongly suggested corroborating evidence but, as in that paraphrase, the link in fact just went directly to whatever the equivalent website was then for CFAR's summer camp.

It's uncomfortable berating the dead, but much more important to preserve the living from incredibly irresponsible aspersions like this.

Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.

+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.

Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

MathiasKB

This was incredibly upsetting for me to read. This is the first time I've ever felt ashamed to be associated with EA. I apologize for the tone of the rest of the comment, can delete it if it is unproductive, but I feel a need to vent.

One thing I would like to understand better is to what extent this is a Bay Area issue versus EA in general. My impression is that a disproportionate fraction of abuse happens in the Bay. If this suspicion is true, I don't know how to put this politely, but I'd really appreciate it if the Bay Area could get its shit together.

In my spare time I do community building in Denmark. I will be doing a workshop for the Danish academy of talented highschool students in April. How do you imagine the academy organizers will feel seeing this in TIME magazine?

What should I tell them? "I promise this is not an issue in our local community"?

I've been extremely excited to prepare this event. I would get to teach Denmark's brightest high schoolers about hierarchies of evidence, help them conduct their own cost-effectiveness analyses, and hopefully inspire a new generation to take action to make the world a better place.

Now I have to worry about whether it would be more appropriate to send the organizers a heads up informing them about the article and give them a chance to reconsider working with us.

I frankly feel unequipped to deal with something like this.

A response to why a lot of the abuse happens  in the Bay Area:

"I am one of the people in the Time Mag article about sexual violence in EA. In the video below I clarify some points about why the Bay Area is the epicenter of so many coercive dynamics, including the hacker house culture, which are like frat houses backed by billions in capital, but without oversight of HR departments or parent institutions. This frat house/psychedelic/male culture, where a lot of professional networking happens, creates invisible glass ceilings for women."

tweet: https://twitter.com/soniajoseph_/status/1622002995020849152

Hi! I listened to your entire video. It was very brave and commendable. I really hope you've started something that will help get EA and the Bay Area rationalist scene into a much healthier and more impactful place. I think your analysis of the problem is very sharp. Thank you for coming forward and doing what you did.

Zooming out from this particular case, I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong. If we don’t want to have strict professional norms I think it’s unfair to put all the blame on failed experiments without updating the algorithm that allows people embark on these experiments with community approval.

To be perfectly clear, I think this community has poor professional boundaries and a poor understanding of why normie boundaries exist. I would like better boundaries all around. I don’t think we get better boundaries by acting like a failure like this is due to character or lack of integrity instead of bad engineering. If you wouldn’t have looked at it before it imploded and thought the engineering was bad, I think that’s the biggest thing that needs to change. I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.

I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong.

Yep, I think this is a big problem.

More generally, I think a lot of EAs give lip service to the value of people trying weird new ambitious things, "adopt a hits-based approach", "if you're never failing then you're playing it too safe", etc.; but then we harshly punish visible failures, especially ones that are the least bit weird. In cases like those, I think the main solution is to be more forgiving of failures, rather than to give up on ambitious projects.

From my perspective, none of this is particularly relevant to what bothers me about Ben's post and Nonlinear's response. My biggest concern about Nonlinear is their attempt to pressure people into silence (via lawsuits, bizarre veiled threats, etc.), and "I really wish EAs would experiment more with coercing and threatening each other" is not an example of the kind of experimentalism I'm talking about when I say that EAs should be willing to try and fail at more things (!).

"Keep EA weird" does not entail "have low ethical standards... (read more)

It's fair enough to feel betrayed in this situation, and to speak that out. 

But given your position in the EA community, I think it's much more important to put effort towards giving context on your role in this saga. 

Some jumping-off points: 

  • Did you consider yourself to be in a mentor / mentee relationship with SBF prior to the founding of FTX? What was the depth and cadence of that relationship? 
    • e.g. from this Sequoia profile (archived as they recently pulled it from their site): 

      "The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.

      ... And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth. SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.”"
       
  • What diligence did you / your team do on FTX before agreeing to join the Future Fund as an advisor?&nb
... (read more)

[Edit after months: While I still believe these are valid questions, I now think I was too hostile, overconfident, and not genuinely curious enough.] One additional thing I’d be curious about:

You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to 15 billion of (presumably mostly his) wealth in an acquisition of Twitter. The stated reason for that bid was to make Twitter better for the world. This has worried me a lot over the last weeks. It could have easily been the most consequential thing EAs have ever done and there has - to my knowledge- never been a thorough EA debate that signalled that this would be a good idea.

What was the reasoning behind the decision to support SBF by connecting him to Musk? How many people from FTXFF or EA at large were consulted to figure out if that was a good idea? Do you think that it still made sense at the point you helped with the potential acquisition to regard most of the wealth of SBF as EA resources? If not, why did you not inform the EA community?

Source for claim about playing a messenger: https://twitter.com/tier10k/status/1575603591431102464?s=20&t=lYY65-TpZuifcbQ2j2EQ5w

It could have easily been the most consequential thing EAs have ever done and there has - to my knowledge- never been a thorough EA debate that signalled that this would be a good idea.

I don't think EAs should necessarily require a community-wide debate before making major decisions, including investment decisions; sometimes decisions should be made fast, and often decisions don't benefit a ton from "the whole community weighs in" over "twenty smart advisors weighed in".

But regardless, seems interesting and useful for EAs to debate this topic so we can form more models of this part of the strategy space -- maybe we should be doing more to positively affect the world's public fora. And I'd personally love to know more about Will's reasoning re Twitter.

Did you understand the mechanism by which FTX claimed to be generating revenue? Were the revenues they reported sanity-checked against a back-of-the-envelope estimate of how much their claimed mechanism would be able to generate?


I think it's important to note that many experts, traders, and investors did not see this coming, or they could have saved/made billions.

It seems very unfair to ask fund recipients to significantly outperform the market and most experts, while having access to way less information.

See this Twitter thread from Yudkowsky

Edit: I meant to refer to fund advisors, not (just) fund recipients

Also from the Sequoia profile: "After SBF quit Jane Street, he moved back home to the Bay Area, where Will MacAskill had offered him a job as director of business development at the Centre for Effective Altruism." It was precisely at this time that SBF launched Alameda Research, with Tara Mac Aulay (then the president of CEA) as a co-founder ( https://www.bloomberg.com/news/articles/2022-07-14/celsius-bankruptcy-filing-shows-long-reach-of-sam-bankman-fried).

To what extent was Will or any other CEA figure involved with launching Alameda and/or advising it? 

Tamay

One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.

For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Alameda Research and FTX (see examples such as these), if this relationship were known to be hidden from investors and other stakeholders, should this not have raised red flags? 

Hi Scott — I work for CEA as the lead on EA Global and wanted to jump in here. 

Really appreciate the post — having a larger, more open EA event is something we’ve thought about for a while and are still considering. 

I think there are real trade-offs here. An event that’s more appealing to some people is more off-putting to others, and we’re trying to get the best balance we can. We’ve tried different things over the years, which can lead to some confusion (since people remember messaging from years ago) but also gives us some data about what worked well and badly when we’ve tried more open or more exclusive events.

  1. We’ve asked people’s opinion on this. When we’ve polled our advisors including leaders from various EA organizations, they’ve favored more selective events. In our most recent feedback surveys, we’ve asked attendees whether they think we should have more attendees. For SF 2022, 34% said we should increase the number, 53% said it should stay the same, and 14% said it should be lower. Obviously there’s selection bias here since these are the people who got in, though.
  2. To your “...because people will refuse to apply out of scrupulosity” point — I want to clarify tha
... (read more)

FWIW I generally agree with Eli's reply here. I think maybe EAG should 2x or 3x in size, but I'd lobby for it to not be fully open.

Thanks for commenting, Eli. 

I'm a bit confused by one of your points here. You say: "I want to clarify that this isn’t how our admissions process works, and neither you nor anyone else we accept would be bumping anyone out of a spot". OK, cool.

However, when I received my acceptance email to EAG it included the words "If you find that you can’t make it to the event after all, please let us know so that we can give your spot to another applicant."

That sure sounds like a request that you make when you have a limited number of spots and accepting one person means bumping another.

To be clear, I think it's completely reasonable to have a set number of places - logistics are a thing, and planning an event for an unknown number of people is extremely challenging. I'm just surprised by your statement that it doesn't work that way.

I also want to make a side note that I strongly believe that making EA fun is important. The movement asks people to give away huge amounts of money, reorient their whole careers, and dedicate themselves to changing the world. Those are big asks! It's very easy for people to just not do them!

It's hard to get people to voluntarily do even small, easy things when they feel unappreciated or excluded. I agree that making EAs happy is not and should not be a terminal value but it absolutely should be an instrumental value.

The timeline (in PT time zone) seems to be:

Jan 13, 12:46am: Expo article published.

Jan 13, 4:20am: First mention of this on the EA Forum.

Jan 13, 6:46am: Shakeel Hashim (speaking for himself and not for CEA; +110 karma, +109 net agreement as of the 15th) writes, "If this is true it's absolutely horrifying.  FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable.  I don't think people who would do something like that ought to have any place in this community."

Jan 13, 9:18pm: Shakeel follows up, repeating that he sees no reason why FLI wouldn't have already made a public statement and that it's really weird that FLI hasn't, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up.

Jan 14, 3:43am: You (titotal) comment,  "If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP. "

Jan 14, 8:16am: Jason comments (+15 karma, +13 net agreement a... (read more)

Thanks for calling me out on this — I agree that I was too hasty to call for a response.

I’m glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn’t completely alleviated my concerns about what happened here — I think it’s worrying that something like this can get to the stage it did without it being flagged (though again, I'm glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.

I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation I was worried that some people in the EA community were racist or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear — in particular to other non-white people who felt similarly to me — that EA isn’t racist. But I could and should have done that in a much better way. I’m sorry.

Larks

FTX collapsed on November 8th; all the key facts were known by the 10th; CEA put out their statement on November 12th. This is a totally reasonable timeframe to respond. I would have hoped that this experience would make CEA sympathetic to a fellow EA org (with much less resources than CEA) experiencing a media crisis rather than being so quick to condemn.

I'm also not convinced that a Head of Communications, working for an organization with a very restrictive media policy for employees, commenting on a matter of importance for that organization, can really be said to be operating in a personal capacity. Despite claims to the contrary, I think it's pretty reasonable to interpret these as official CEA communications. Skill at a PR role is as much about what you do not say as what you do.

lc

The eagerness with which people rushed to condemn is frankly a warning sign for involution. We have to stop it with the pointless infighting or it's all we will end up doing.

Maya, I’m so sorry that things have made you feel this way. I know you’re not alone in this. As Catherine said earlier, either of us (and the rest of the community health team) are here to talk and try to support.

I agree it’s very important that no one should get away with mistreating others because of their status, money, etc. One of the concerns you raise related to this is an accusation that Kathy Forth made. When Kathy raised concerns related to EA, I investigated all the cases where she gave me enough information to do so. In one case, her information allowed me to confirm that a person had acted badly, and to keep them out of EA Global. 

At one point we arranged for an independent third party attorney who specialized in workplace sexual harassment claims to investigate a different accusation that Kathy made. After interviewing Kathy, the accused person, and some other people who had been nearby at the time, the investigator concluded that the evidence did not support Kathy’s claim about what had happened. I don’t think Kathy intended to misrepresent anything, but I think her interpretation of what happened was different than what most people’s would have been.

I do want pe... (read more)

I’m part of Anima International’s leadership as Director of Global Development (so please note that Animal Charity Evaluators’ negative view of the leadership quality is, among others, about me).

As the author noted, this topic is politically charged and additionally, as Anima International, we consider ourselves ‘a side’, so our judgment here may be heavily biased. This is why, even though we read this thread, we are quite hesitant to comment.

Nevertheless, I can offer a few factual points here that will clear some of the author’s confusion or that people got wrong in the comments.

We asked ACE for their thoughts on these points to make sure we are not misconstruing what happened due to a biased perspective. After a short conversation with Anima International, ACE preferred not to comment. They declined to correct what they feel is factually incorrect and instead let us know that they will post a reply to my post to avoid confusion, which we welcome.

1.

The author wrote: “it's possible that some Anima staff made private comments that are much worse than what is public”

While I don’t want to comment or judge whether comments are better or worse, we specifically asked ACE to publish all... (read more)

As AI heats up, I'm excited and frankly somewhat relieved to have Holden making this change. While I agree with 𝕮𝖎𝖓𝖊𝖗𝖆's comment below that Holden had a lot of leverage on AI safety in his recent role, I also believe he has a vast amount of domain knowledge that can be applied more directly to problem solving. We're in shockingly short supply of that kind of person, and the need is urgent.

Alexander has my full confidence in his new role as the sole CEO. I consider us incredibly fortunate to have someone like him already involved and prepared to succeed as the leader of Open Philanthropy.

I know that lukeprog's comment is mostly replying to the insecurity about lack of credentials in the OP.  Still,  the most upvoted answer seems a bit ironic in the broader context of the question:

If you read the comment without knowing Luke, you might be like "Oh yeah, that sounds encouraging." Then you find out that he wrote this excellent 100+ page report on the neuroscience of consciousness, which is possibly the best resource on this on the internet, and you're like "Uff, I'm f***ed."

Luke is (tied with Brian Tomasik) the most genuinely modest person I know, so it makes sense that it seems to him like there's a big gap between him and even smarter people in the community. And there might be, maybe. But that only makes the whole situation even more intimidating.

It's a tough spot to be in and I only have advice that maybe helps make the situation tolerable, at least.

Related to the advice about Stoicism, I recommend viewing EA as a game with varying levels of difficulty. 

Because life isn’t fair, the level of difficulty of the video game will sometimes be “hard” or even “insane”, depending on the situation you’re in. The robot on the other hand would be playing on “e

... (read more)

While I understand that people generally like Owen, I believe we need to ensure that we are not overlooking the substance of his message and giving him an overly favorable response.

Owen's impropriety may be extensive. Just because one event was over 5 years ago does not mean that the other >=3 events were (and if they were, one expects he would tell us). Relatedly, if it indeed was the most severe mistake of this nature, there may have been more severe mistakes of somewhat different kinds. There may yet be further events that haven't been reported to, or disclosed by, Owen, and indeed, on the outside view, most events would not be reported in this way.

What makes things worse is the kind of career Owen has pursued over the last 5+ years. Owen's work centered on: i) advising orgs and funders, ii) hiring junior researchers, and iii) hosting workshops, often residential, and with junior researchers. If, as Owen says, you know as of 2021-22 that you have deficiencies in dealing with power dynamics, and there has been a series of multiple events like this, then why are you still playing the roles described in (i)-(iii)? His medium-term career trajectory, even relative to other EAs, is in... (read more)

I want to make a small comment on your phrase "it could have a chilling effect on those who have their own cases of sexual assault to report." Owen has not committed sexual assault, but sexual harassment. If this imperfect wording was an isolated incident, I wouldn't have said anything, but in every sexual misconduct comment thread I've followed on the forum, people have said sexual assault when they mean sexual harassment, and/or rape when they mean sexual assault. I was a victim of sexual abuse both growing up and as an adult, so I'm aware that there are big differences between the three, and feel it would be helpful to be mindful of our wording.

As someone with a fairly upvoted comment expressing a different perspective than yours, I want to mention that personally I had never heard of Owen until this post except for the disturbing description in the Time article, and that personally I have no interest in advancing my career based on any of my political opinions, so his power is irrelevant to me. While I appreciate that the last section of your comment came from a place of wanting to be supportive towards early career people like me, I think it oversimplifies the issues, and I found it a bit condescending. I'm trying to encourage women in my position to speak up more because we have important things to say. 

I think it's likely that the difference between the replies to this post and the replies to the official statement by EV UK comes from people not reading the link in the EV UK post, and so not getting the full context of the statement.

Edit: Also, if I was trying to impress Owen, wouldn't I be agreeing with his current perspective instead of arguing that he had over-updated? 

With apologies, I would like to share some rather lengthy comments on the present controversy. My sense is that they likely express a fairly conventional reaction. However, I have not yet seen any commentary that entirely captures this perspective. Before I begin, I perhaps also ought to apologise for my decision to write anonymously. While none of my comments here are terribly exciting, I would like to think, I hope others can still empathise with my aversion to becoming a minor character in a controversy of this variety.

Q: Was the message in question needlessly offensive and deserving of an apology?

Yes, it certainly was. By describing the message as "needlessly offensive," what I mean to say is that, even if Prof. Bostrom was committed to making the same central point that is made in the message, there was simply no need for the point to be made in such an insensitive manner. To put forward an analogy, it would be needlessly offensive to make a point about free speech by placing a swastika on one’s shirt and wearing it around town. This would be a highly insensitive decision, even if the person wearing the swastika did not hold or intend to express any of the views associated wit... (read more)

For context, I'm black (Nigerian in the UK).

 
I'm just going to express my honest opinions here:

The events of the last 48 hours (slightly) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.

I think it would be quite tragic to compromise on honestly and accurately reporting our beliefs, when the situation calls for it, in order to fit in better. I'm very glad Bostrom did not do that.

As for the contents of the email itself, while very distasteful, they were sent in a particular context to be deliberately offensive, and Bostrom did regret it and apologise for it at the time. I don't think it's useful or valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom that sent the email did not reflectively endorse its contents, and current Bostrom does not either.

  I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.

I. It might be worth reflecting upon how large a part of this seems tied to something like "climbing the EA social ladder".

E.g. just from the first part, emphasis mine
 

Coming to Berkeley and, e.g., running into someone impressive  at an office space already establishes a certain level of trust since they know you aren’t some random person (you’ve come through all the filters from being a random EA to being at the office space).
If you’re in Berkeley for a while you can also build up more signals that you are worth people’s time. E.g., be involved in EA projects, hang around cool EAs.

Replace "EA" by some other environment with prestige gradients, and you have something like a highly generic social climbing guide. Seek cool kids, hang around them, go to exclusive parties, get good at signalling.

II. This isn't to say this is bad. Climbing the ladder to some extent could be instrumentally useful, or even necessary, for the ability to do some interesting things, sometimes.

III. But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agen... (read more)

Turning to the object level: I feel pretty torn here.

On the one hand, I agree the business with CARE was quite bad and share all the standard concerns about SJ discourse norms and cancel culture.

On the other hand, we've had quite a bit of anti-cancel-culture stuff on the Forum lately. There's been much more of that than of pro-SJ/pro-DEI content, and it's generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly.

I'm sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk. I think a lot of more SJ-sympathetic EAs already feel that the Forum is not a space for them – simply affirming that doesn't seem to me to be terribly useful. Not giving ACE prior warning before publishing the post further cements an adversarial us-and-them dynamic I'm not very happy about.

I don't really know how that cashes out as far as this post and posts like it are concerned. Biting one's tongue about what does seem like problematic behaviour would hardly be ideal. But as I've said several times in the past, I do wish we could be having this discussion in a more productive and conciliatory way, which has less of a chance of ending in an acrimonious split.

Buck

I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor,  but as an intuition pump imagine the following comment.

"On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem.  On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I'm worried about the second-order effects of talking about this misconduct."

I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.

More generally I am opposed to "Criticising people for doing bad-seeming thing X would put off people who are enthusiastic about thing X."

Another take here is that if a group of people are sad that their views aren't sufficiently represented on the EA forum, they should consider making better arguments for them. I don't think we should try to ensure that the EA forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content based on the popularity of the views it expresses, except for instrumental reasons like "it's more interesting to hear arguments you haven't heard before".)

EDIT: Also, I think your comment is much better described as meta level than object level, despite its first sentence.

"On the other hand, we've had quite a bit of anti-cancel-culture stuff on the Forum lately. There's been much more of that than of pro-SJ/pro-DEI content, and it's generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly"

Perhaps. However, this post makes specific claims about ACE. And even though these claims have been discussed somewhat informally on Facebook, this post provides a far more solid writeup. So it does seem to be making a significant new contribution to the discussion and not just rewarming leftovers.

It would have been better if Hypatia had emailed the organisation ahead of time. However, I believe ACE staff members might have already commented on some of these issues (correct me if I'm wrong). And it's more of a good practice than a strict requirement - I totally understand the urge to just get something out there.

"I'm sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to al... (read more)

Habryka

Over the course of working in EA for the last 8 years, I feel like I've seen about a dozen instances where Will made quite substantial tradeoffs, trading off both the health of the EA community and something like epistemic integrity in favor of being more popular and getting more prestige. 

Some examples here include: 

  • When he was CEO while I was at CEA he basically didn't really do his job at CEA but handed it off to Tara (who was a terrible choice for many reasons, one of which is that she then co-founded Alameda and after that went on to start another fraudulent-seeming crypto trading firm, as far as I can tell). He then spent like half a year technically being CEO but spending all of his time being on book tours and talking to lots of high-net-worth and high-status people.
  • I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very "randomista" flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized an
... (read more)

Fwiw I have little private information but think that:

  • I sense this misses some huge successes in EA getting where it is. Seems we've done pretty well all things considered. Wasn't Will part of that?
  • Will is a superlative networker
  • He is a very good public intellectual. Perhaps Ord could be if his books were backed to that extent. Perhaps Will could be better if he wrote different books. But he seems really good at it. I would guess that on the public intellectual side he's a benefit, not a cost
  • If I'd had the ability to direct billions in philanthropy I probably would have, even with nagging doubts. 
  • It seems he's maybe less good at representing the community or managing orgs. I don't know if that's the case, but I can believe it. 
  • If so, it seems possible there is a role as a public intellectual associated with EA but who isn't the only one
  • I feel bad when writing criticism because personally I hope he's well and I'm very grateful to him.

Also thanks Habryka for writing this. I think surfacing info like this is really valuable and I guess it has personal costs to you.

Habryka

Epistemic status: Probably speaking too strongly in various ways, and probably not with enough empathy, but also feeling kind of lonely and with enough pent-up frustration about how things have been operating that I want to spend some social capital on this, and want to give a bit of a "this is my last stand" vibe.

It's been a few more days, and I do want to express frustration with the risk-aversion and guardedness I have experienced from CEA and other EA organizations in this time. I think this is a crucial time to be open, and to stop playing dumb PR games that are, in my current tentative assessment of the situation, one of the primary reasons why we got into this mess in the first place. 

I understand there is some legal risk, and I am trying to track it myself quite closely. I am also worried that you are trying to run a strategy of "try to figure out everything internally and tell nice narratives about where we are all at afterwards", and I think the mess that strategy has already gotten us into is so great that I don't think now is the time to double down on it. 

Please, people at CEA and other EA organizations, come and talk to the community. Explore with us what ... (read more)

I feel there's a bit of a "missing mood" in some of the comments here, so I want to say: 

I felt shocked, hurt, and betrayed at reading this. I never expected the Oxford incident to involve someone so central and well-regarded in the community, and certainly not Owen. Other EAs I know who knew Owen and the Oxford scene better are even more deeply hurt and surprised by this. (As other commenters here have already attested, tears have not been uncommon.)

Despite the length and thoughtfulness of the apology, it's difficult for me to see how someone who was already in a position of power and status in EA -- a community many of us see as key to the future of humanity -- behaved in a way that seems so inappropriate and destructive. I'm angry not only about the harm that was done to women trying to do good in the world, but also about the harm done to the health, reputation, and credibility of our community. We deserve better from our leaders.

I really sympathize with all the EAs -- especially women -- who feel betrayed and undermined by this news. To all of you who've had bad experiences like this in EA -- I'm really sorry. I hope we can do better. I think we can do better -- I think we already have the seed... (read more)

I appreciate you writing this. To me, this clarifies something. (I'm sorry there's a rant incoming, and if this community needs its hand held through these particular revelations, I'm not the one):

It seems like many EAs still (despite SBF) didn't put significant probability on the person from that particular Time incident being a very well-known and trusted man in EA, such as Owen. This despite the SBF scandal, and despite this incident being (to me) the most troubling incident in the Time piece by far, one which definitely sounded to be attached to a "real" EA more than any of the others (I say as someone who still has significant problems with the Time piece). Some of us had already put decent odds on the probability that this was an important figure doing something that was at least thoughtless and ended up damaging the EA movement... I mean, the woman who reported him literally tried to convey that he was very well-connected and important. 

It seems like the community still has a lot to learn from the surprise of SBF about problematic incidents and leaders in general: No one expects their friends or leaders are gonna be the ones who do problematic things. That includes us. Update no... (read more)

[Epistemic status: I've done a lot of thinking about these issues previously; I am a female mathematician who has spent several years running mentorship/support groups for women in my academic departments and has also spent a few years in various EA circles.]

I wholeheartedly agree that EA needs to improve with respect to professional/personal life mixing, and that these fuzzy boundaries are especially bad for women. I would love to see more consciousness and effort by EA organizations toward fixing these and related issues. In particular I agree with the following:

> Not having stricter boundaries for work/sex/social in mission focused organizations brings about inefficiency and nepotism [...]. It puts EA at risk of alienating women / others due to reasons that have nothing to do with ideological differences.

However, I can't endorse the post as written, because there's a lot of claims made which I think are wrong or misleading. Like: Sure, there are poly women who'd be happier being monogamous, but there are also poly men who'd be happier being monogamous, and my own subjective impression is that these are about equally common. Also, "EA/rationalism and redpill fit like yin and y... (read more)

We (the Community Health team at CEA) would like to share some more information about the cases in the TIME article, and our previous knowledge of these cases. We’ve put these comments in the approximate order that they appear in the TIME article. 

 

Re: Gopalakrishnan’s experiences

We read her post with concern.  We saw quite a few supportive messages from community members, and we also tried to offer support. Our team also reached out to Gopalakrishnan in a direct message to ask if she was interested in sharing more information with us about the specific incidents. 

 

Re: The man who

  1. Expressed opinions about “pedophilic relationships”
  2. “Another woman, who dated the same man several years earlier in a polyamorous relationship, alleges that he had once attempted to put his penis in her mouth while she was sleeping.” 

We don’t know this person’s identity for sure, but one of these accounts resembles a previous public accusation made against a person who used to be involved in the rationality community. He has been banned from CEA events for almost 5 years, and we understand he has been banned from some other EA spaces. He has been a critic of the EA movemen... (read more)

Brief update: I am still in the process of reading this. At this point I have given the post itself a once-over, and begun to read it more slowly (and looking through the appendices as they're linked).

I think any and all primary sources that Kat provides are good (such as the page of records of transactions). I am also grateful that they have not deanonymized Alice and Chloe.

I plan to compare the things that this post says directly against specific claims in mine, and acknowledge anything where I was factually inaccurate. I also plan to do a pass where I figure out which claims of mine this post responds to and which it doesn’t, and I want to reflect on the new info that’s been entered into evidence and how it relates to the overall picture. 

It probably goes without saying that I (and everyone reading) want to believe true things and not false things about this situation. If I made inaccurate statements I would like to know that and correct them.

As I wrote in my follow-up post, I am not intending to continue spear-heading an investigation into Nonlinear. However this post makes some accusations of wrongdoing on my part, which I intend to respond to, and of course for... (read more)

NL: A quick note on how we use quotation marks: we sometimes use them for direct quotes and sometimes use them to paraphrase.

I had missed that; thank you for pointing it out!

While using quotation marks for paraphrase or when recounting something as best as you recall is occasionally done in English writing, primarily in casual contexts, I think it's a very poor choice for this post. Lots of people are reading this trying to decide who to trust, and direct quotes and paraphrase have very different weight. Conflating them, especially in a way where many readers will think the paraphrases are direct quotes, makes it much harder for people to come away from this document with a more accurate understanding of what happened.

Perhaps using different markers (ex: "«" and "»") for paraphrase would make sense here?

I am one of the people mentioned in the article. I'm genuinely happy with the level of compassion and concern voiced in most of the comments on this article. While a lot of the comments are clearly concerned that this is a hard and difficult issue to tackle, I'm appreciative of the genuine desire of many people to do the right thing here. It seems that at least some of the EA community has a drive towards addressing the issue and improving from it, rather than burying the issue as I had feared.

 

A couple of points, my spontaneous takeaways upon reading the article and the comments:
 

  • This article covers bad actors in the EA space, and how hard it is to protect the community from them. This doesn't mean that all of EA is toxic, but rather the article is bringing to light the fact that bad actors have been tolerated and even defended in the community to the detriment of their victims. I'm sensing from the comments that non-Bay Area EA may have experienced less of this phenomenon. If you read this article and are absolutely shocked and disgusted, then I think you experienced a different selection of EA than I have. I know many of my peers will read this article and feel unc
... (read more)

I feel like this post mostly doesn't talk about what feels to me like the most substantial downside of trying to scale up spending in EA, and increased availability of funding. 

I think the biggest risk of the increased availability of funding, and general increase in scale, is that it will create a culture where people will be incentivized to act more deceptively towards others and that it will attract many people who will be much more open to deceptive action in order to take resources we currently have. 

Here are some paragraphs from an internal memo I wrote a while ago that tried to capture this: 

========

I think it was Marc Andreessen who first hypothesized that startups usually go through two very different phases:

  1. Pre Product-market fit: At this stage, you have some inkling of an idea, or some broad domain that seems promising, but you don't yet really have anything that solves a really crucial problem. This period is characterized by small teams working on their inside-view, and a shared, tentative, malleable vision that is often hard to explain to outsiders.
  2. Post Product-market fit: At some point you find a product that works for people. The transition here ca
... (read more)

Reading this, I guess I'll just post the second half of this memo that I wrote here as well, since it has some additional points that seem valuable to the discussion: 

When I play forward the future, I can imagine a few different outcomes, assuming that my basic hunches about the dynamics here are correct at all:

  1. I think it would not surprise me that much if many of us do fall prey to the temptation to use the wealth and resources around us for personal gain, or as a tool towards building our own empire, or come to equate "big" with "good". I think the world's smartest people will generally pick up on us not really aiming for the common good, but I do think we have a lot of trust to spend down, and could potentially keep this up for a few years. I expect eventually this will cause the decline of our reputation and ability to really attract resources and talent, and hopefully something new and good will form in our ashes before the story of humanity ends.
  2. But I think in many, possibly most, of the worlds where we start spending resources aggressively, whether for personal gain, or because we do really have a bold vision for how to change the future, the relationships of the centra
... (read more)

James courteously shared a draft of this piece with me before posting; I really appreciate that, and his substantive, constructive feedback.

1. I blundered

The first thing worth acknowledging is that he pointed out a mistake that substantially changes our results. And for that, I’m grateful. It goes to show the value of having skeptical external reviewers.

He pointed out that Kemp et al. (2009) finds a negative effect, while we recorded its effect as positive — meaning we coded the study as having the wrong sign.

What happened is that MH outcomes are often "higher = bad", and subjective wellbeing is "higher = better", so we note this in our code so that all effects that imply benefits are positive. What went wrong was that we coded Kemp et al. (2009), which used the GHQ-12, as "higher = bad" (which is usually the case) when the opposite was true. Higher equalled good in this case because we had to do an extra calculation to extract the effect [footnote: since there was baseline imbalance in the PHQ-9, we took the difference in pre-post changes], which flipped the sign.

This correction would reduce the spillover effect from 53% to 38% and reduce the cost-effectiveness comparison from 9.5... (read more)
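To make the sign convention concrete, here is a minimal, hypothetical sketch of the kind of sign-harmonization step described above (this is not HLI's actual code; the scale names and effect sizes are invented for illustration):

```python
# Hypothetical sketch: harmonize effect-size signs so that positive always
# means "benefit", whether the scale is a symptom measure (higher = worse)
# or a wellbeing measure (higher = better).

def harmonize(effect_size: float, higher_is_worse: bool) -> float:
    """Flip the sign of symptom-scale effects so benefits come out positive."""
    return -effect_size if higher_is_worse else effect_size

# Invented example records: (study, raw effect, whether higher scores are worse)
studies = [
    ("Study A (PHQ-9)",  -0.40, True),   # symptom reduction -> benefit (+0.40)
    ("Study B (SWB)",     0.25, False),  # wellbeing gain    -> benefit (+0.25)
    ("Study C (GHQ-12)",  0.30, True),   # mislabelling this flag flips the sign
]

for name, effect, higher_is_worse in studies:
    print(name, harmonize(effect, higher_is_worse))
```

The only point of the sketch is that a single mislabelled direction flag reverses the sign of that study's contribution to the pooled estimate, which is the kind of error being corrected here.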

Jason

Strong upvote for both James and Joel for modeling a productive way to do this kind of post -- show the organization a draft of the post first, and give them time to offer comments on the draft + prepare a comment for your post that can go up shortly after the post does.

Thank you Max for your years of dedicated service at CEA. Under your leadership as Executive Director, CEA grew significantly, increased its professionalism, and reached more people than it had before. I really appreciate your straightforward but kind communication style, humility, and eagerness to learn and improve. I'm sorry to see you go, and wish you the best of luck in whatever comes next.

Predictably, I disagree with this in the strongest possible terms.

If someone says false and horrible things to destroy other people's reputation, the story is "someone said false and horrible things to destroy other people's reputation". Not "in some other situation this could have been true". It might be true! But discussion around the false rumors isn't the time to talk about that.

Suppose the shoe was on the other foot, and some man (Bob), made some kind of false and horrible rumor about a woman (Alice). Maybe he says that she only got a good position in her organization by sleeping her way to the top. If this was false, the story isn't "we need to engage with the ways Bob felt harmed and make him feel valid." It's not "the Bob lied lens is harsh and unproductive". It's "we condemn these false and damaging rumors". If the headline story is anything else, I don't trust the community involved one bit, and I would be terrified to be associated with it.

I understand that sexual assault is especially scary, and that it may seem jarring to compare it to less serious accusations like Bob's. But the original post says we need to express emotions more, and I wanted to try to convey an emot... (read more)

I think a very relevant question to ask is: how come none of the showy self-criticism contests and red-teaming exercises came up with this? A good amount of time, money, and energy was put into such things, and if the exercises are not in fact uncovering the big problems lurking in the movement, then that suggests some issues.

bruce

If this comment is more about "how could this have been foreseen", then this comment thread may be relevant. I should note that hindsight bias means that it's much easier to look back and assess problems as obvious and predictable ex post, when powerful investment firms and individuals who also had skin in the game also missed this. 

TL;DR: 
1) There were entries that were relevant (this one also touches on it briefly)
2) They were specifically mentioned
3) There were comments relevant to this. (notably one of these was apparently deleted because it received a lot of downvotes when initially posted)
4) There have been at least two other posts on the forum prior to the contest that engaged with this specifically

My tentative take is that these issues were in fact identified by various members of the community, but there isn't a good way of turning identified issues into constructive actions - the status quo is we just have to trust that organisations have good systems in place for this, and that EA leaders are sufficiently careful and willing to make changes or consider them seriously, such that all the community needs to do is "raise the issue". And I think looking at the system... (read more)

Am I right in thinking that, if it weren't for the Time article, there's no reason to think that Owen would ever have been investigated and/or removed from the board?

While this is all being sorted and we figure out what is next, I would like to emphasize wishes of wellness and care for the many impacted by this.

Note: The original post was edited to clarify the need for compassion and to remove anything resembling “tribalism,” including a comment of thanks, which may be referenced in comments.

[Edit: this was in response to the original version of the parent comment, not the new edited version]

Strong -1, the last line in particular seems deeply inappropriate given the live possibility that these events were caused by large-scale fraud on the part of FTX, and I'm disappointed that so many people endorsed it. (Maybe because the reasons to suspect fraud weren't flagged in original post?) At a point where the integrity of leading figures in the movement has been called into question, it is particularly important that we hold ourselves to high standards rather than reflexively falling back on tribalist instincts.

I am worried and sad for all involved, but I am especially concerned for the wellbeing and prospects of the ~millions of people—often vulnerable retail investors—who may have taken on too much exposure to crypto in general.

Many people like this must be extremely stressed right now. As with many financial meltdowns, some individuals and families will endure severe hardship, such as the breakdown of relationships, the loss of life savings, even the death of loved ones.

I don't really follow crypto so I know roughly nothing about the role SBF, FTX and Alameda have played in this ecosystem. My impression is that they've been ok/good on at least some dimensions of protecting vulnerable investors. But—let's see how things look, overall, when the dust settles.

[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]

I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite and even-handed, and not hostile, given the context, nor do I agree with characterizing what seems to me to be sincere concern in the OP just as hostile) and by some of the other instances of ACE behavior documented in the OP. I used to be a board member at ACE, but one of the reasons I didn't seek a second term was because I was concerned about ACE drifting away from focusing on just helping animals as effectively as possible, and towards integrating/compromising between that and human-centered social justice concerns, in a way that I wasn't convinced was based on open-minded analysis or strong and rigorous cause-agnostic reasoning. I worry about this dynamic leading to an unpleasant atmosphere for those with different perspectives, and decreasing the extent to whi... (read more)

calebp

Thanks for writing this post. It looks like it took a lot of effort that could have been spent on much more enjoyable activities, including your mainline work.

This isn’t a comment on the accuracy of the post (though it was a moderate update for me). I could imagine Nonlinear providing compelling counter-evidence over the next few days, and I’d of course try to correct my beliefs in light of new evidence.

Posts like this one are a public good. I don’t think anyone is particularly incentivised to write them, and they seem pretty uncomfortable and effortful, but I believe they serve an important function in the community by helping to root out harmful actors and disincentivising harmful acts in the first place.

I read this post and about half of the appendix.

(1) I updated significantly in the direction of "Nonlinear leadership has a better case for themselves than I initially thought" and "it seems likely to me that the initial post indeed was somewhat careless with fact-checking."

(I'm still confused about some of the fact-checking claims, especially the specific degree to which Emerson flagged early on that there were dozens of extreme falsehoods, or whether this only happened when Ben said that he was about to publish the post. Is it maybe possible that Emerson's initial reply had little else besides "Some points still require clarification," and Emerson only later conveyed how strongly he disagreed with the overall summary once he realized that Ben was basically set on publishing on a 2h notice? If so, that's very different from Ben being told in the very first email reply that Nonlinear's stance on this is basically "good summary, but also dozens of claims are completely false and we can document that." That's such a stark difference, so it feels to me like there was miscommunication going on.)

At the same time:

(2) I still find Chloe's broad perspective credible and concerning (in a "... (read more)

Marzhin

Would someone from CEA be able to comment on this incident?

'A third described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”'

Was this 'influential figure in EA' reported to Community Health, and if so, what were the consequences? 

[Caveat: Assuming this is an influential EA, not a figure who has influence in EA but wouldn't see themselves as part of the community.]

I also found this incredibly alarming and would be very keen to hear more about this.

I closely read the whole post and considered it carefully. I'm struggling to sum up my reaction to this 15,000-word piece in a way that's concise and clear.

At a high level:

Even if most of what Kat says is factually true, this post still gives me really bad vibes and makes me think poorly of Nonlinear.

Let me quickly try to list some of the reasons why (if anyone wants me to elaborate or substantiate any of these, please reply and ask):

  • Confusion, conflation, and prevarication between intent and impact.
  • Related to the above, the self-licensing, i.e. we are generally good people and generally do good things, so we don't need to critically self-reflect on particular questionable actions we took.
  • The varyingly insensitive, inflammatory, and sensationalist use of the Holocaust poem (truly offensive) and the terms "lynching" (also offensive) and "witch-burning".
  • Conflation between being depressed and being delusional.
  • Glib dismissal of other people's feelings and experiences.
  • The ridiculous use of "photographic evidence", which feels manipulative and/or delusional to me.
  • Seeming to have generally benighted views on trauma, abuse, power dynamics, boundaries, mental health, "victimhood", resilience,
... (read more)

In my experience, observing someone getting dogpiled and getting dogpiled yourself feel very different. Most internet users have seen others get dogpiled hundreds of times, but may never have been dogpiled themselves.

Even if you have been dogpiled yourself, there's a separate skill in remembering what it felt like when you were dogpiled, while observing someone else getting dogpiled. For example, every time I got dogpiled myself, I think I would've greatly appreciated if someone reached out to me via PM and said "yo, are you doing OK?" But it has never occurred to me to do this when observing someone else getting dogpiled -- I just think to myself "hm, seems like a pretty clear case of unfair dogpiling" and close the tab.

In any case, I've found getting dogpiled myself to be surprisingly stressful, relative to the experience of observing it -- and I usually think of myself as fairly willing to be unpopular. (For example, I once attended a large protest as the only counter-protester, on my own initiative.)

It's very easy to say in the abstract: "If I was getting dogpiled, I would just focus on the facts. I would be very self-aware and sensitive, I wouldn't dismiss anyone, I wouldn't... (read more)

I agree with this. I think overall I get a sense that Kat responded in just the sort of manner that Alice and Chloe feared*, and that the flavor of treatment that Alice and Chloe (as told by Ben) said they experienced from Kat/Emerson seems to be on display here. (* Edit: I mean, Kat could've done worse, but it wouldn't help her/Nonlinear.)

I also feel like Kat is misrepresenting Ben's article? For example, Kat says

Chloe claimed: they tricked me by refusing to write down my compensation agreement

I just read that article and don't remember any statement to that effect, and searching for individual words in this sentence didn't lead me to a similar sentence in Ben's article or in Chloe's follow-up. I think the closest thing is this part:

Chloe’s salary was verbally agreed to come out to around $75k/year. However, she was only paid $1k/month, and otherwise had many basic things compensated i.e. rent, groceries, travel. This was supposed to make traveling together easier, and supposed to come out to the same salary level. While Emerson did compensate Alice and Chloe with food and board and travel, Chloe does not believe that she was compensated to an amount equivalent to the salary discus

... (read more)

My read on this is that a lot of the things in Ben's post are very between-the-lines rather than outright stated. For example, the financial issues all basically only matter if we take for granted that the employees were tricked or manipulated into accepting lower compensation than they wanted, or were put in financial hardship.

Which is very different from the situation Kat's post seems to show. Like... I don't really think any of the financial points made in the first one hold up, and without those, what's left? A She-Said-She-Said about what they were asked to do and whether they were starved and so on, which NL has receipts for.

[Edit after response below: By "hold up" I meant in the emotional takeaway of "NL was abusive," to be clear, not on the factual "these bank account numbers changed in these ways." To me hiring someone who turns out to be financially dependent into a position like this is unwise, not abusive. If someone ends up in the financial red in a situation where they are having their living costs covered and being paid a $1k monthly stipend... I am not rushing to pass judgement on them, I am just noting that this seems like a bad fit for this sort of position, which... (read more)

evhub

One thing that bugged me when I first got involved with EA was the extent to which the community seemed hesitant to spend lots of money on stuff like retreats, student groups, dinners, compensation, etc. despite the cost-benefit analysis seeming to favor doing so pretty strongly. I know that, from my perspective, I felt like this was some evidence that many EAs didn't take their stated ideals as seriously as I had hoped—e.g. that many people might just be trying to act in the way that they think an altruistic person should rather than really carefully thinking through what an altruistic person should actually do.

This is in direct contrast to the point you make that spending money like this might make people think we take our ideals less seriously—at least in my experience, had I witnessed an EA community that was more willing to spend money on projects like this, I would have been more rather than less convinced that EA was the real deal. I don't currently have any strong beliefs about which of these reactions is more likely/concerning, but I think it's at least worth pointing out that there is definitely an effect in the opposite direction to the one that you point out as well.

I'm a professional nanny and I've also held household management positions. I just want to respond to one specific thing here that I have knowledge about.

It is upsetting to see "only hire people with experience as an assistant" listed as a lesson learned, because a professional assistant would absolutely not work with that compensation structure.

It is absolutely the standard in professional assistant-type jobs that, when traveling with the family, your travel expenses are NOT part of your compensation.

When traveling for work (including for families that travel for extensive periods of time) the standard for professionals is:

  • Airfare, non-shared lodgings (your own room) and food are all covered by your family and NOT deducted from your pay. Ditto any expenses that are required for work, such as taxis, tickets to places you are working at, etc.

  • Your work hours start when you arrive at the airport. (Yes, you charge for travel time.)

  • You charge your full, standard hourly rate for all hours worked.

  • You ALSO charge a per diem because you are leaving the comfort of being in your own home / being away from friends and pets and your life.

  • You are ONLY expected to work for the hours tha

... (read more)

This got a lot of upvotes so I want to clarify that this kind of arrangement isn't UNUSUALLY EVIL. Nanny forums are filled with younger nannies or more desperate nannies who get into these jobs only to immediately regret it.

When people ask my opinion about hiring nannies, I constantly have to show how things they think are perks (live-in, free tickets to go places with the kids) don't actually hold much value as perks, because it is common for people to hold that misconception.

It is really common for parents and families to offer jobs that DON'T FOLLOW professional standards. In fact the majority of childcare jobs don't. The educated professionals don't take those jobs. The families are often confused why they can't find good help that stays.

So I look at this situation and it immediately pattern matches to what EDUCATED PROFESSIONALS recognize as a bad situation.

I don't think that means that NL folks are inherently evil. What they wanted was a common thing for people to want. The failure modes are the predictable failure modes.

I think they hold culpability. I think they "should have" known better. I don't think (based on this) that they are evil. I think some of their responses aren't the most ideal, but also, shoot, it's a LOT of pressure to have the whole community turning on you, and they are responding way better than I would be able to.

From the way they talk, I don't think they learned the lessons I would hope they had, and that's sad. But it's hard to really grow when you're in a defensive position.

Overall this post seems like a grab-bag of not very closely connected suggestions. Many of them directly contradict each other. For example, you suggest that EA organizations should prefer to hire domain experts over EA-aligned individuals. And you also suggest that EA orgs should be run democratically. But if you hire a load of non-EAs and then you let them control the org... you don't have an EA org any more. Similarly, you bemoan that people feel the need to use pseudonyms to express their opinions and a lack of diversity of political beliefs ... and then criticize named individuals for being 'worryingly close to racist, misogynistic, and even fascist ideas' in essentially a classic example of the cancel culture that causes people to choose pseudonyms and causes the movement to be monolithically left wing. 

I think this is in fact a common feature of many of the proposals: they generally seek to reduce what is differentiated about EA. If we adopted all these proposals, I am not sure there would be anything very distinctive remaining. We would simply be a tiny and interchangeable part of the amorphous blob of left wing organizations.

It is true this does not apply to all of th... (read more)

Hey Maya, I'm Catherine -  one of the contact people on CEA's community health team (along with Julia Wise). I'm so so sorry to hear about your experiences, and the experiences of your friends. I share your sadness and much of your anger too. I’ll PM you, as I think it could be helpful for me to chat with you about the specific problems (if you are able to share more detail) and possible steps. 

If anyone else reading this comment has encountered similar problems in the EA community, I would be very grateful to hear from you too. Here is more info on what we do.

Ways to get in touch with Julia and me: 

[anonymous]

It is very generous to characterise Torres' post as insightful and thought-provoking. He characterises various long-termists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation, and one that he very obviously throws around due to his own personal vendettas against certain people; e.g., despite many of his former colleagues at CSER also being long-termists, he doesn't call them Nazis because he doesn't believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in long-termism.  

A quick point of clarification that Phil Torres was never staff at CSER; he was a visitor for a couple of months a few years ago. He has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not). (And FWIW he has made similar allusions, albeit thinly veiled, about me).

HLI kindly provided me with an earlier draft of this work to review a couple of weeks ago. Although things have gotten better, I noted what I saw as major problems with the draft as-is, and recommended HLI take its time to fix them - even though this would take a while, and likely miss the window of Giving Tuesday. 

Unfortunately, HLI went ahead anyway with the problems I identified basically unaddressed. Also unfortunately (notwithstanding laudable improvements elsewhere) these problems are sufficiently major I think potential donors are ill-advised to follow the recommendations and analysis in this report.

In essence:

  1. Issues of study quality loom large over this literature, with a high risk of materially undercutting the results (they did last time). The report's interim attempts to manage these problems are inadequate.
  2. Pub bias corrections are relatively mild, but only when all effects g > 2 are excluded from the analysis - they are much starker (albeit weird) if all data is included. Due to this, the choice to exclude 'outliers' roughly trebles the bottom line efficacy of PT. This analysis choice is dubious on its own merits, was not pre-specified in the protocol, yet is onl
... (read more)

what exactly is contributing to the view that EA essentially is longtermism/AI Safety?

 

For me, it’s been stuff like:

  • People (generally those who prioritize AI) describing global poverty as “rounding error”.
  • From late 2017 to early 2021, effectivealtruism.org (the de facto landing page for EA) had at least 3 articles on longtermist/AI causes (all listed above the single animal welfare article), but none on global poverty.
  • The EA Grants program granted ~16x more money to longtermist projects than to global poverty and animal welfare projects combined. [Edit: this statistic only refers to the first round of EA Grants, the only round for which grant data has been published.]
  • The EA Handbook 2.0 heavily emphasized AI relative to global poverty and animal welfare. As one EA commented: “By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I'd find it hard for anyone to read this and not take away that
... (read more)

Some things from EA Global London 2022 that stood out for me (I think someone else might have mentioned one of them):

  • An email to everyone promoting Will's new book (on longtermism)
  • Giving out free bookmarks about Will's book when picking up your pass.

These things might feel small, but considering this is one of the main EA conferences, having the actual conference organisers associate so strongly with the promotion of a longtermist book (albeit, yes, one by one of the main founders of EA) made me think "Wow, CEA is really trying to push longtermism to attendees". This seems quite reasonable given the potential significance of the book; I just wonder if CEA have done this for any other worldview-focused books recently (last 1-3 years) or would do so in the future, e.g. a new book on animal farming.

Curious to get someone else's take on this or if it just felt important in my head.

Other small things:

  • On the sidebar of the EA Forum, there are three recommended articles: Replacing Guilt, the EA Handbook (which, as you mentioned here, is mostly focused on longtermism) and The Most Important Century by Holden. Again, essentially 1.5 longtermist texts to <0.5 from other worldviews.

As the ma... (read more)

I think most people reading this thread should totally ignore this story for at least 2 weeks. Meantime: get back to work.

For >90% of readers, I suspect:

  1. It's not action relevant right now.
  2. It's very distracting.
  3. It would be better to just read a sober update on the situation in a couple of weeks from now, after dust has settled.

I think this is true even of most people who have a bunch of crypto and/or are FTX customers, but that's more debatable and depends on exposure.

These are the standard problems with following almost any BREAKING NEWS story (e.g. an election night, a stock market event, an ongoing tragedy).

Agree, but still find it hard to stop watching? You are glued to your screen and this is unhelpful. This is an opportunity to practice the skill of ignoring stuff that isn't action-relevant, and allocating your attention effectively.

Not actively trading crypto or related assets? Just ignore this story for a while, and get back to work.


Added 2022-11-09 2200 GMT:

If I had a good friend who has a lot of crypto and who may be concerned about losing more than they can afford to lose, I would call them.

Given what I'm seeing online, the situation looks grim for people with big exposure to crypto in general, and those with deposits at FTX in particular.

(To repeat what I said in other comments on this post: I don't follow crypto closely. My takes are not investment advice.)

Peter -- I have mixed feelings about your advice, which is well-expressed and reasonable. 

I agree that, typically, it's prudent not to get caught up in news stories that involve high uncertainty, many rumors, and unclear long-term impact.

However, a crucial issue for the EA movement is whether there will be a big public relations blowback against EA from the FTX difficulties. If there's significant risk of this blowback, EA leadership better develop a pro-active plan for dealing with the PR crisis -- and quick.

The FTX crisis is a Very Big Deal in crypto -- one of the worst crises ever.  Worldwide, about 300 million people own crypto. Most of them have seen dramatic losses in the value of their tokens recently. On paper, at least, they have lost a couple of hundred billion dollars in the last couple of days. Most investors are down at least 20% this week because of this crisis. Even if prices recover, we will never forget how massive this drop has been.

Sam Bankman-Fried (SBF) himself has allegedly lost about 94% of his net worth this week, down from $15 billion to under $1 billion. (I don't give much credence to these estimates, but it's pretty clear the losses have been ve... (read more)

Hmm, I don't really buy this. I think at Lightcone I am likely to delay any major expenses for a few weeks and make decisions assuming a decent chance we will have a substantial funding crunch. We have a number of very large expenses coming up, and ignoring this would I think cause us to make substantially worse choices.

I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer!

The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas that the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched for getting started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged I got to talk to so many people and try and help them along their careers!

I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with - so thoughtful, committed to working out what is the right thing to do, kind, and fun - I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

  1. New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big and complicated and valuable that wouldn't exist without me - I'd like t
... (read more)
Habryka

I feel really quite bad about this post. Despite it being only a single paragraph it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and community that it is not accurately representing, communicating ambiguously (probably intentionally in order to avoid being able to be pinned on any specific position), and for some reason omitting crucial context. 

Contrary to the OP it is easy to come up with examples where within the Effective Altruism framework two people do not count equally. Indeed most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus. 

Saying "all people count equally" is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it indeed doesn't really ho... (read more)

I think I do see "all people count equally" as a foundational EA belief. This might be partly because I understand "count" differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were "core" to EA, rather than idiosyncratic to me). 
What I understand by "people count equally" is something like "1 person's wellbeing is not more important than another's". 

E.g. a British nationalist might not think that all people count equally, because they think their copatriots' wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people. 

"most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in... (read more)

Sorry for the slow response.

I wanted to clarify and apologise for some things here (not all of these are criticisms you’ve specifically made, but this is the best place I can think of to respond to various criticisms that have been made):

  1. This statement was drafted and originally intended to be a short quote that we could send to journalists if asked for comment. On reflection, I think that posting something written for that purpose on the Forum was the wrong way to communicate with the community and a mistake. I am glad that we posted something, because I think that it’s important for community members to hear that CEA cares about inclusion, and (along with legitimate criticism like yours) I’ve heard from many community members who are glad we said something. But I wish that I had said something on the Forum with more precision and nuance, and will try to be better at this in future.
  2. The first sentence was not meant to imply that we think that Bostrom disagrees with this view, but we can see why people would draw this implication. It’s included because we thought lots of people might get the impression from Bostrom’s email that EA is racist and I don’t want anyone — w
... (read more)

In a post this long, most people are probably going to find at least one thing they don't like about it. I'm trying to approach this post as constructively as I can, i.e. "what do I find helpful here?" rather than "how can I most effectively poke holes in this?" I think there's enough merit in this post that the constructive approach will likely yield something positive for most people as well.

I want to explain my role in this situation, and to apologize for not handling it better. The role I played was in the context of my work as a community liaison at CEA.

(All parts that mention specific people were run past those people.)

In 2021, the woman who described traveling to a job interview in the TIME piece told me about her interactions with Owen Cotton-Barratt several years before. She said she found many aspects of his interactions with her to be inappropriate. 

We talked about what steps she wanted taken. Based on her requests, I had conversations with Owen and some of his colleagues. I tried to make sure that Owen understood the inappropriateness of his behavior and that steps were taken to reduce the risk of such things happening again. Owen apologized to the woman. The woman wrote to me to say that she felt relieved and appreciated my help. Later, I wrote about power dynamics based partly on this situation.

However, I think I didn’t do enough to address the risk of his behavior continuing in other settings. I didn’t pay enough attention to what other pieces might need addressing, like the fact that, by the time I learned about the situation, he was on the boar... (read more)

Kirsten · 1y · 293

Julia, I really appreciate you explaining your role here. I feel uneasy about the framing of what I've read. It sounds like the narrative is "Owen messed up, Julia knew, and Julia messed up by not saying more". But I feel strongly that we shouldn't have one individual as a point of failure on issues this important, especially not as recently as 2021. I think the narrative should be something closer to "Owen messed up, and CEA didn't (and still doesn't) have the right systems in place to respond to these kinds of concerns"

I appreciate you sharing this additional info and reflections, Julia. 

I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have warranted someone outside CEA being looped in to decide what to do about this incident. At first glance, I feel confused about this not having been mentioned in these comments. I'd be curious to hear whether you explicitly thought about that when you were thinking about this incident in 2021?

That is, if I understand correctly, in some sense Owen had a key position of authority in an organization that in turn technically had authority over the organization you worked at. That said, my rough impression from the outside is that, prior to November 2022, the umbrella organization in practice exerted little influence over what the organiza... (read more)

I hope others will join me in saying: thank you for your years serving as the friendly voice of the Forum, and best of luck at Open Philanthropy!

As someone deeply involved in politics in Oregon (I am a house district leader in one of the districts Flynn would have been representing, I am co-chair of the county Democratic campaign committee, and I am chair of a local Democratic group that focuses on policy and local electeds and that sponsored a forum Flynn participated in), I feel that much of the discussion on this site about Carrick Flynn lacks basic awareness of what the campaign looked like on the ground. I also have some suggestions about how the objectives you work for might be better achieved.

First, Flynn remained an enigma to the voters. In spite of more advertising than ever seen before in a race (there were often three ads in a single television hour program), his history and platform were unclear. While many of the ads came from Protect our Future PAC, Flynn had multiple opportunities to clarify these and failed. Statements featured on his website, such as “He directed a billion dollars to health programs to save children’s lives and removed a legal barrier that may have cost several thousand more lives,” led people to come to me and ask “What did he do to accomplish this?  Who was he... (read more)

I'm not taking a position on the question of whether Nick should stay on as Director, and as noted in the post I'm on record as having been unhappy with his apology (which remains my position)*,  but for balance and completeness I'd like to provide a perspective on the importance of Nick's leadership, at least in the past.

I worked closely with Nick at FHI from 2011 to 2015. While I've not been at FHI much in recent years (due to busyness elsewhere) I remember the FHI of that time being a truly unique-in-academia place; devoted to letting and helping brilliant people think about important challenges in unusual ways. That was in very large part down to Nick - he is visionary, and remarkably stubborn and difficult - with the benefits and drawbacks this comes with. It is difficult to overstate the degree of pressure in academia to pull you away from doing something unique and visionary and to instead do more generic things, put time into impressing committees, keeping everyone happy etc**. - It's that stubbornness (combined with the vision) in my view that allowed FHI to come into being and thrive (at least for a time). It is (in my view) the same stubbornness and difficultness t... (read more)

JWS · 1y · 185

Update:

FLI have released a full statement on their website here, and there is an FAQ post on that statement, which is where Forum discussion has mostly moved. I will respond to these updates there, and otherwise leave this post as-is (for now).

However, it looks like an 'ignorance-based' defence is the correct interpretation of what happened here. I don't regret this post - I still think it was important, and got valuable information out there. I also think that emotional responses should not be seen as 'wrong'. Nevertheless, I do have some updating to do, and I thank all commenters in the thread below.

I have also made some retractions, with explanations in the footnotes

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Epistemic Status: Unclear, but without much reason to dispute the factual case presented by Expo. As I wrote this comment, an ignorance-based defence seemed less and less convincing, and consequently my anger rose. I apologise if this means the post is of a lower tone than the forum is used to. I will also happily correct or retract this post partially or fully if better evidence is provided.

[Clarity Edit:... (read more)

and this seems to be another case of a major actor in our movement doing something that has massively poor consequences for the public perception of EA unless they can explain why

My only substantive disagreement with this comment (which I upvoted) is that I don't think FLI is a major actor in EA; they've always kind of done their own thing, and haven't been a core player within the EA community. I view them more as an independent actor with somewhat aligned goals.

MaxRa · 3y · 185

High-Impact Athletes ➔ EA Sports for obvious reasons

(Hi, I'm Emily, I lead GHW grantmaking at Open Phil.)

Thank you for writing this critique, and giving us the chance to read your draft and respond ahead of time. This type of feedback is very valuable for us, and I’m really glad you wrote it.

We agree that we haven’t shared much information about our thinking on this question. I’ll try to give some more context below, though I also want to be upfront that we have a lot more work to do in this area.

For the rest of this comment, I’ll use “FAW” to refer to farm animal welfare and “GHW” to refer to all the other (human-centered) work in our Global Health and Wellbeing portfolio. 

To date, we haven’t focused on making direct comparisons between GHW and FAW. Instead, we’ve focused on trying to equalize marginal returns within each area and do something more like worldview diversification to determine allocations across GHW, FAW, and Open Philanthropy’s other grantmaking. In other words, each of GHW and FAW has its own rough “bar” that an opportunity must clear to be funded. While our frameworks allow for direct comparisons, we have not stress-tested consistency for that use case. We’re also unsure conceptually whether we should be... (read more)

Hi Emily,

Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.

our current estimates of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three.

Holden has stated that "It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness." As OP continues researching moral weights, OP's marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?

Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower.

Along with OP's neartermist cause prioritization, your comment seems to imply that OP's moral weights are 1-2 orders of magnitude lower than Rethink's. If that's true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between F... (read more)
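
To illustrate (purely schematically) why a 1-2 order-of-magnitude disagreement about moral weights matters so much for allocation, here is a toy calculation; every number in it is a hypothetical placeholder, not a figure from OP or Rethink:

```python
# Toy illustration (all numbers hypothetical, not OP's or Rethink's):
# how much a moral-weight estimate shifts a cross-cause comparison.

human_benefit_per_dollar = 1.0          # human welfare units per $ for a GHW grant
chickens_helped_per_dollar = 10.0       # chicken-years improved per $ for a FAW grant

for label, chicken_moral_weight in [("higher weight", 0.3), ("100x lower weight", 0.003)]:
    faw_benefit = chickens_helped_per_dollar * chicken_moral_weight
    ratio = faw_benefit / human_benefit_per_dollar
    print(f"{label}: FAW looks {ratio:.2f}x as cost-effective as GHW")

# With the higher weight, FAW looks 3x better than GHW; with a weight 100x lower
# it looks ~33x worse, so a 1-2 OOM disagreement about moral weights can flip
# the allocation conclusion entirely.
```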

If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?

Thanks so much for writing this Will! I can't emphasise enough how much I appreciate it.

 if a project doesn’t seem like a good use of the people running it, then it’s not likely to get funded. 

Two norms that I'd really like to see (that I haven't seen enough of) are:
1. Funders being much more explicit to applicants about why things aren't funded (or why they get less funding than asked for). Even a simple tagging system like "out of our funding scope", "seemed too expensive", "not targeted enough", or "promising (review and resubmit)" (with a short line about why) is explicit yet simple.

2. More funder diversity while maintaining close communications (e.g. multiple funders with different focus areas/approaches/epistemics, but single application form to apply to multiple funders and those funders sharing private information such as fraud allegation etc).

I know feedback is extremely difficult to do well (and there are risks in giving feedback), but I think that lack of feedback creates a lot of problems, e.g.:

  • resentment and uneasiness towards funders within the community;
  • the unilateralist's curse is exacerbated (in cases where something is not funded because it's seen
... (read more)
fenneko · 1y · 182

From the article:

Another woman, who dated the same man several years earlier in a polyamorous relationship, alleges that he had once attempted to put his penis in her mouth while she was sleeping.

This rang a bell for me, and I was able to find an old Twitter thread (link removed on David's request) naming the man in question. At least, all the details seem to match.

I'm pretty sure that the man in question (name removed on David's request) has been banned from official EA events for many years. I remember an anecdote about him showing up without a ticket at EAG in the past and being asked to leave. As far as I know, the ban is because he has a long history of harassment with at least some assault mixed in. 

I don't know who introduced him to Sonia Joseph, but if she'd mentioned him to the people I know in EA,  I think the average reaction would have been "oh god, don't". I guess there are still bubbles I'm not a part of where he's seen as a "prominent man in the field", though I haven't heard anything about actual work from him in many years.

Anyway, while it sounds like many people mentioned in this article behaved very badly, it also seems possible that the incidents CEA k... (read more)

1)  One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.

I'm not sure that's an entirely bad thing, because frugality seems mixed as a virtue e.g. it can lead to:

  • Not spending money on clearly-worth-it things (e.g. not paying to have a larger table at a student fair even when it would result in more sign-ups; not getting a cleaner when you earn over $50/hour), which in turn can also make us seem not serious about maximising impact (e.g. this comment).
  • Even worse, getting distracted from the top priority by worrying about efforts to save relatively small amounts of money. Or not considering high upside projects that require a lot of resources, but where there's a good chance of failure, due to a fear of not being able to justify the spending.
  • Feelings of guilt around spending and not being perfectly altruistic, which can lead to burn out.
  • Filtering out people who want a normal middle class lifestyle & family, but could have had a big impact (and go work at FAANG instead). Filtering out people from low income backgrounds or with dependents.

However, we need new hard-to-fake signals of seriousn... (read more)

fenneko · 1y · 180

Following CatGoddess, I'm going to share more detail on parts of the article that seemed misleading, or left out important context. 

Caveat: I'm not an active member of the in-person EA community or the Bay scene. If there's hot gossip circulating, it probably didn't circulate to me. But I read a lot.

This is a long comment, and my last comment was a long comment, because I've been driving myself crazy trying to figure this stuff out. If the community I (digitally) hang out in is full of bad people and their enablers, I want to find a different community! 

But the level of evidence presented in Bloomberg and TIME makes it hard to understand what's actually going on. I'm bothered enough by the weirdness of the epistemic environment that it drove me to stop lurking  :-/

I name Michael Vassar here, even though his name wasn't mentioned in the article. Someone asked me to remove that name the last time I did this, and I complied. But now that I'm seeing the same things repeated in multiple places and used to make misleading points, I no longer think it makes sense to hide info about serial abusers who have been kicked out of the movement, especially when that info is easy to... (read more)

Thank you for writing this. It's barely been a week, take your time.

There's been a ton of posts on the forum about various failures, preventative measures, and more. As much as we all want to get to the bottom of this and ensure nothing like this ever happens again, I don't think our community benefits from hasty overcorrections. While many of the points made are undoubtedly good, I don't think it will hurt the EA community much to wait a month or two before demanding any drastic measures.

EAs should probably still be ambitious. Adopting rigorous governance and oversight mechanisms sometimes does more harm than good. Let's not throw out the baby with the bathwater.

I'm still reflecting and am far from having fully formed beliefs yet, but I am confused by just how many strong views have been expressed on the forum. Even correctly recalling my own thoughts and feelings around FTX before the event is difficult. I'm noticing a lot of finger pointing and not a lot of introspection.

I don't know about everyone else, but I'm pretty horrified at just how similar my thinking seems to have been to SBF's. If a person who seemingly agreed with me on so many moral priorities was capable of doing something so horrible, how can I be sure that I am different?

I'm going to sit with that thought for a while, and think about what type of person I want to strive to be.

Hi Will,

It is great to see all your thinking on this down in one place: there are lots of great points here (and in the comments too). By explaining your thinking so clearly, it makes it much easier to see where one departs from it.

My biggest departure is on the prior, which actually does most of the work in your argument: it creates the extremely high bar for evidence, which I agree probably couldn’t be met. I’ve mentioned before that I’m quite sure the uniform prior is the wrong choice here and that this makes a big difference. I’ll explain a bit about why I think that.

As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that. People do take this latter approach in scientific papers, but I think it is usually wrong to do so. Moreover in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time. Especially because there are important kinds of significant event (such as the value lock-in or an... (read more)
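
A minimal numerical sketch of the prior-choice point (the horizon length and the particular decaying prior below are illustrative choices of mine, not the commenter's):

```python
# Illustrative only: compares a uniform prior over an arbitrary horizon with a
# decaying prior, for the claim "this century is the most influential one".
# The horizon length and decay shape are made-up numbers, not from the comment.

n_centuries = 10_000  # hypothetical horizon for the uniform prior

# Uniform prior: every century is equally likely to be the most influential.
p_uniform = 1 / n_centuries

# Decaying prior: probability that century k is the most influential falls off
# like 1/k (earlier centuries more likely to be pivotal, e.g. via value lock-in).
harmonic = sum(1 / k for k in range(1, n_centuries + 1))
p_decaying = 1 / harmonic  # mass on the first (current) century

print(f"uniform prior:  {p_uniform:.4%}")   # ~0.01%
print(f"decaying prior: {p_decaying:.4%}")  # ~10%, orders of magnitude higher
```

Which prior you start from therefore sets the bar for how much evidence "this is the most influential century" needs to clear.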

Habryka · 1y · 177

I agree with some of the points of this post, but I do think there is a dynamic here that is missing, that I think is genuinely important. 

Many people in EA have pursued resource-sharing strategies where they pick up some piece of the problems they want to solve, and trust the rest of the community to handle the other parts of the problem. One very common division of labor here is 

I will go and do object-level work on our core priorities, and you will go and make money, or fundraise from other people in order to fund that work

I think a lot of this type of trade has happened historically in EA. I have definitely forsaken a career with much greater earning potential than I have right now in order to contribute to EA infrastructure and to work on object-level problems. 

I think it is quite important to recognize that in as much as a trade like this has happened, this gives the people who have done object level work a substantial amount of ownership over the funds that other people have earned, as well as the funds that other people have fundraised (I also think this applies to Open Phil, though I think the case here is a bunch messier and I won't go into my models of the g... (read more)

This is probably as good a place as any to mention that whatever people say about this race could very easily get picked up by local media and affect it. As a general principle, if you have an unintuitive idea for how to help Carrick's candidacy, it might be an occasion to keep it to yourself, or discuss it privately. Generally, here, on Twitter, and everywhere, thinking twice before posting about this topic would be a reasonable policy.

To recap, I thought Ben’s original post was unfair even if he happened to be right about Nonlinear, because of how chilling it is for everyone else to know they could be on blast if they try to do anything. It sounded like NL made mistakes, but they sounded like very typical mistakes of EA/rationalists when they try out new or unusual social arrangements. Since the attitude around me, if you don’t like contracts you entered, is generally “tough shit, get more agency”, I was surprised at the responses saying Alice and Chloe should have been protected from an arrangement they willingly entered (that almost anyone but EAs/rationalists would have told them was a bad idea). It made me think Ben/Lightcone had a double standard toward an org they already didn’t like because of Emerson talking about Machiavellian strategies and marketing.

Idk if Emerson talking about libel was premature. Many have taken it as an obvious escalation, but it seems like he called it exactly right, because NL’s reputation is all but destroyed. Maybe if he hadn’t said that, Ben would have waited for their response before publishing, and it would have been better. I think it’s naive and irresponsible for Ben/Lightcone to... (read more)

I drew a random number for spot checking the short summary table. (I don't think spot checking will do justice here, but I'd like to start with something concrete.)

Chloe claimed: they told me not to spend time with my romantic partner 

- Also a strange, false accusation: we invited her boyfriend to live with us for 2 of the 5 months. We even covered his rent and groceries.

- We were just about to invite him to travel with us indefinitely because it would make Chloe happy, but then Chloe quit.

Evidence/read more

This seems to be about this paragraph from the original post:

Alice and Chloe report that they were advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited. Alice and Chloe report this made them very socially dependent on Kat/Emerson/Drew and otherwise very isolated.

There aren't any other details in the original post specifically from Chloe or specifically about her partner, including in the comment in Chloe's words below the post. The only specific detail about romantic partners I see in the original post is about Alice, and it pl... (read more)

Pablo · 1y · 176

It's probably worth noting that Holden has been pretty open about this incident. Indeed, in a talk at a Leaders Forum around 2017, he mentioned it precisely as an example of "ends justify the means"-type reasoning.

Linch · 1y · 95

It's also listed under GiveWell's Our Mistakes page.

Cullen · 1y · 175

My naive moral psychology guess—which may very well be falsified by subsequent revelations, as many of my views have this week—is that we probably won’t ever find an “ends justify the means” smoking gun (eg, an internal memo from SBF saying that we need to fraudulently move funds from account A to B so we can give more to EA). More likely, systemic weaknesses in FTX’s compliance and risk management practices failed to prevent aggressive risk-taking and unethical profit-seeking and self-preserving business decisions that were motivated by some complicated but unstated mix of misguided pseudo-altruism, self-preservation instincts, hubris, and perceived business/shareholder demands.

I say this because we can and should be denouncing ends-justify-the-means reasoning of this type, but I suspect very rarely in the heat of a perceived crisis will many people actually invoke it. I think we will prevent more catastrophes of this nature in the future by focusing more on integrity as a personal virtue and on the need for systemic compliance and risk-management tools within EA broadly and within highly impactful/prominent EA orgs, especially those whose altruistic motives will be systematically in tension with perceived business demands.

Relatedly, I think a focus on ends-justify-the-means reasoning is potentially misguided because it seems super clear in this case that, even if we put zero intrinsic value on integrity, honesty, not doing fraud, etc., some of the decisions made here were pretty clearly very negative expected-value. We should expect the upsides from acquiring resources by fraud (again, if that is what happened) to be systematically worth much less than the reputational and trustworthiness damage our community will receive by virtue of motivating, endorsing, or benefitting from that behavior.

I think EA currently is much more likely to fail to achieve most of its goals by ending up with a culture that is ill-suited for its aims, being unable to change direction when new information comes in, and generally failing due to the problems of large communities and other forms of organization (like, as you mentioned, the community behind NeurIPS, which is currently on track to be an unstoppable behemoth racing towards human extinction that I so desperately wish were trying to be smaller and better coordinated). 

I think EA Global admissions is one of the few places where we can apply steering on how EA is growing and what kind of culture we are developing, and giving this up seems like a cost, without particularly strong commensurate benefits. 

On a more personal level, I do want to be clear that I am glad about having a bigger EA Global this year, but I would probably also just stop attending an open-invite EA Global since I don't expect it would really share my culture or be selected for people I would really want to be around. I think this year's EA Global came pretty close to exhausting my ability to be thrown into a large group of people with a quite different culture ... (read more)

lilly · 7mo · 173

This situation reminded me of this post, EA's weirdness makes it unusually susceptible to bad behavior. Regardless of whether you believe Chloe and Alice's allegations (which I do), it's hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don't live together, travel the world together, and become romantically entangled). A lot of the things that (no one is disputing) happened here are professionally weird; for example, these anecdotes from Ben's summary of Nonlinear's response (also the linked job ad):

  • "Our intention wasn't just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with."
  • "We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people."
  • "The formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You mi
... (read more)

Hey,

I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s correct and trying to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).

In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”. 

One thing I’ll say is that core researchers ... (read more)

I think this comment will be frustrating for you and is not high quality. Feel free to disagree; I'm including it because I think it's possible many people (or at least some?) will feel wary of this post early on and it might not be clear why. In my opinion, including a photo section was surprising and came across as near-completely misunderstanding the nature of Ben's post. It is going to make it a bit hard to read any further with even consideration (edit: for me personally, but I'll just take a break and come back or something). Basically, without making any claim about what happened, I don't think anyone suspects "isolated or poor environment" to mean "absence of group photos in which [claimed] isolated person is at a really pretty pool or beach doing pool yoga." And if someone is psychologically distressed, whether you believe this to be a misunderstanding or maliciously exaggerated, it feels like a really icky move to start posting pictures that add no substance, even with faces blurred, with the caption "s'mores", etc.

Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:

The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.

I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. If I’d known that it would have been delayed this long, I wouldn’t have waited on it before talking on other topics, so I’m now going to start talking more than I have been, on the Forum and elsewhere; I’m hoping I can be helpful for some of the other issues that are currently active topics of discussion.

Briefly, though, and as I indicated before: I had... (read more)

For what it's worth, I'm a 30 year old woman who's been involved with EA for eight years and my experience so far has been overwhelmingly welcoming and respectful. This has been true for all of my female EA friends as well. The only difference in treatment I have ever noticed is being slightly more likely to get speaking engagements. 

Just posting about this anonymously because I've found these sorts of topics can lead to particularly vicious arguments, and I'd rather spend my emotional energy on other things. 

[EDIT: I'd like to clarify that, strictly speaking, the comment below is gossip without hard substantiating evidence. Gossip can have an important community function - at the very least, from this comment you can conclude that things happened at Nonlinear which induced people (in fact, many people) to negatively gossip about the organization - but should also be treated as different from hard accusations, especially those backed by publicly available evidence. In the wake of the FTX fiasco, I think it's likely that people are more inclined to treat gossip of the sort I share below as decisive.

That said, I do think that the gossip below paints a basically accurate picture. I also have other reasons to distrust Nonlinear that I don't feel comfortable sharing (more gossip!). I know this is hard epistemic territory to work in, and I'm sorry. I would feel best about this situation if someone from, e.g., CEA would talk to some of the people involved, but I'm sure anyone who could deal with this is swamped right now. In the meantime, I think it's fine for this gossip to make you unsure about Nonlinear, but still e.g. consider applying to them for emergency funding. I personally wouldn't... (read more)

I’m a current intern at Nonlinear and I think it would be good to add my point of view.

I was offered an internship by Drew around 3 months ago after I contributed to a project and had some chats with him. From the first moment I was an intern he made me feel like a valuable member of the team, my feedback was always taken seriously, and I could make decisions on my own. It never felt like a boss relationship, more like coworkers and equals.

And when I started putting in less hours, I never got “hey you should work more or this is not gonna work out” but rather Drew took the time to set up a weekly 1 on 1 to help me develop personally and professionally and get to know me.

I can only speak for myself but overall I’m very happy to be working with them and there’s nothing about the situation I would call mistreatment.

Thanks, Alex, for writing this important contribution up so clearly and thanks, Dan, for engaging constructively. It’s good to have a proper open exchange about this. Three cheers for discourse.

While I am also excited about the potential of GivingGreen, I do share almost all of Alex’s concerns  and think that his concerns mostly stand / are not really addressed by the replies. I state this as someone who has worked/built expertise on climate for the past decade and on climate and EA for the past four years (in varying capacities, now leading the climate work at FP) to help those that might find it hard to adjudicate this debate with less background.

Given that I will criticize the TSM recommendation, I should also state where I am coming from:
My climate journey started over 15 years ago as a progressive climate youth activist, being a lead organizer for Friends of the Earth in my home state, Rhineland Palatinate (in Germany). 
I am a person of the center-left and get goosebumps every time I hear Bernie Sanders speak about a better society. This is to say I have nothing against progressives and I did not grow up as a libertarian techno-optimist who would be naturally incline... (read more)

Given that debating race and IQ would make EA very unwelcoming for black people, probably has the effect of increasing racism, and clearly does not help us do the most good, we shouldn’t even be debating it with ‘empathy and rigour’.

EA is a community for doing the most good, not for debating your favourite edgy topic

Yeah, I agree here. We shouldn't discuss that topic in community venues; it doesn't help our mission and is largely counterproductive.

saulius · 4mo · 166

When I was asked to resign from RP, one of the reasons given was that I wrote the sentence “I don't think that EAs should fund many WAW researchers since I don't think that WAW is a very promising cause area” in an email to OpenPhil, after OpenPhil asked for my opinion on a WAW (Wild Animal Welfare) grant. I was told that this is not okay because OpenPhil is one of the main funders of RP’s WAW work. That did not make me feel very independent. Though perhaps that was the only instance in the four years I worked at RP.

Because of this instance, I was also concerned when I saw that RP is doing cause prioritization work because I was afraid that you would hesitate to publish stuff that threatens RP funding, and would more willingly publish stuff that would increase RP funding. I haven’t read any of your cause prio research though, so I can’t comment on whether I saw any of that.

EDIT: I should've said that this was not the main reason I was asked to resign and that I had said that I would quit in three months before this happened.

First off, thank you to everyone who worked on this post. Although I don't agree with everything in it, I really admire the passion and dedication that went into this work -- and I regret that the authors feel the need to remain anonymous for fear of adverse consequences. 

For background: I consider myself a moderate EA reformer -- I actually have a draft post I've been working on that argues that the community should democratically hire people to write moderately concrete reform proposals. I don't have a ton of the "Sam" characteristics, and the only thing of value I've accepted from EA is one free book (so I feel free to say whatever I think). I am not a longtermist and know very little about AI alignment (there, I've made sure I'd never get hired if I wanted to leave my non-EA law career?). 

Even though I agree with some of the suggested reforms here, my main reaction to this post is to affirm that my views are toward incremental/moderate -- and not more rapid/extensive -- reform. I'm firmly in the Global Health camp myself, and that probably colors my reaction to a proposal that may have been designed more with longtermism in mind. There is too much ... (read more)

Brief note on why EA should be careful to remain inclusive & welcoming to neurodiverse people:

As somebody with Aspergers, I'm getting worried that in this recent 'PR crisis', EA is sending some pretty strong signals of intolerance to those of us with various kinds of neurodiversity that can make it hard for us to be 'socially sensitive', to 'read the room', and to 'avoid giving offense'. (I'm not saying that any particular people involved in recent EA controversies are Aspy;  just that I've seen a general tendency for EAs to be a little Aspier than other people, which is why I like them and feel at home with them.)

There's an ongoing 'trait war' that's easy to confuse with the Culture War. It's not really about right versus left, or reactionary versus woke. It's more about psychological traits: 'shape rotators' versus 'wordcels', 'Aspies' versus 'normies', systematizers versus empathizers, high decouplers versus low decouplers. 

EA has traditionally been an oasis for Aspy systematizers with a high degree of rational compassion, decoupling skills, and quantitative reasoning. One downside of being Aspy is that we occasionally, or even often, say things that normies consid... (read more)

Lots of the comments here are pointing at details of the markets and whether it's possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there's a simple  way to look at it that's very illuminating.

The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about  those companies' target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.

Now, when you say that:

the market is decisively rejecting – i.e., putting very low probability on – the development of transformative AI in the very near term, say within the next ten years.

I think what you're claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat compan... (read more)

I find it hard to believe that the number of traders who have considered crazy future AI scenarios is negligible. New AI models, semiconductor supply chains, etc. have gotten lots of media and intellectual attention recently. Arguments about transformative AGI are public. Many people have incentives to look into them and think about their implications.

I don't think this post is decisive evidence against short timelines. But neither do I think it's a "trap" that relies on fully swallowing EMH. I think there're deeper issues to unpack here about why much of the world doesn't seem to put much weight on AGI coming any time soon.

I'm disappointed that much of this document involves attacking the people who've accused you of harmful actions, in place of a focus on disputing the evidence they provided (I appreciate that you also do the latter). I also really bounce off the distraction tactics at play here, where you encourage the reader to turn their attention back to the world's problems. It doesn't seem like you've reflected carefully and calmly about this situation; I don't see many places where you admit to making mistakes and it doesn't seem like you're willing to take ownership of this situation at all.

I don't have time to engage with all the evidence here, but even if I came away convinced that all of the original claims provided by Ben weren't backed up, I still feel really uneasy about Nonlinear; uneasy about your work culture, uneasy about how you communicate and argue, and alarmed at how forcefully you attack people who criticise you. 

I'm disappointed that much of this document involves attacking the people who've accused you of harmful actions, in place of a focus on disputing the evidence they provided 

The vast majority of what they gave is disputing the evidence. There are a whole 135 pages of basically nothing but that. You then even refer to it, saying:

I don't have time to engage with all the evidence here

How can both of these be true at once? Either it's a lot, so you don't have time to go through it all, or they haven't done much, in which case you should be able to spend some time looking at it.

I want to push back on this framing, and I think it shows a lack of empathy with the position Nonlinear have been put in. (Though I do agree with your dislike of many of the stylistic choices made in this post)

This post is 15K words, and does a mix of attacking the credibility of Ben, Alice and Chloe and disputing the claims with evidence. The linked doc is 58K words, and seems predominantly about collecting an exhaustive array of evidence. Nonlinear have clearly put in a *lot* of work to the linked doc, and try hard to dispute the evidence. So it seems to me that your complaint is really about what aspects Nonlinear chose to make prominent in this post, which in my opinion is a strategic question about how to write a good post, plus some emotional baggage from Nonlinear feeling aggrieved about this whole thing.

From Nonlinear's perspective (not necessarily mine, to be clear), they have two disgruntled ex-employees who had a bad time, told a bunch of lies about them, and got an incredibly popular and widely read EA Forum post about it. This has destroyed their reputation in EA, and been catastrophic to the org, in a way that they consider ill-deserved. They want to write a post to c... (read more)

My thoughts, for those who want them:

  • I don't have much sympathy for those demanding a good reason why the post wasn't delayed. While I'm generally quite pro sharing posts with orgs, I think it's quite important that this doesn't give the org the right to delay or prevent the posting. This goes double given the belief of both the author and their witnesses that Nonlinear is not acting in good faith.
  • There seem to be enough uncontested/incontestable claims made in this post for me to feel comfortable recommending that junior folks in the community stay away from Nonlinear. These include asking employees to carry out illegal actions they're not comfortable with, and fairly flagrantly threatening employees with retaliation for saying bad things about them (Kat's text screenshotted above is pretty blatant here).
  • Less confidently, I would be fairly surprised if I come out of the other end of this, having seen Nonlinear's defence/evidence, and don't continue to see the expenses-plus-tiny-salary setup as manipulative and unhealthy.
  • More confidently than anything on this list, Nonlinear's threatening to sue Lightcone for Ben's post is completely unacceptable, decreases my sympathy for them by
... (read more)

I am very bothered specifically by the frame "I wish we had resolved [polyamory] "internally" rather than it being something exposed by outside investigators."

I am polyamorous; I am in committed long-term relationships (6 years and 9 years) with two women, and occasionally date other people. I do not think there is anything in my relationships for "the community" to "resolve internally". It would not be appropriate for anyone to tell me to break up with one of my partners. It would not be appropriate for anyone to hold a community discussion about how to 'resolve' my relationships, though of course I disclose them when they are relevant to conflict-of-interest considerations, and go out of my way to avoid such conflicts. I would never ask out a woman who might rely on me as a professional mentor, or a woman who is substantially less professionally established. 

There are steps that can be taken, absolutely should be taken, and for the most part to my knowledge have been taken to ensure that professional environments aren't sexualized and that bad actors are unwelcome. Asking people out or flirting with them in professional contexts should be considered unacceptable. People who ... (read more)

I think some of us owe FLI an apology for assuming heinous intentions where a simple (albeit dumb) mistake was made.

I can imagine this must have been a very stressful period for the entire team, and I hope we as a community become better at waiting for the entire picture instead of immediately reacting and demanding things left and right.

Be marginally less accepting of weirdness overall.

I agree that a low-weirdness EA would have fewer weird scandals. I'm not sure whether these would just be replaced by more normal scandals.  It probably depends a lot on exactly what changes you make? A surprisingly large fraction of the "normal" communities I've observed are perpetually riven by political infighting, personal conflicts, allegations of bad behavior, etc., to a far greater degree than is true for EA.

Choosing the right target depends on understanding what EA is doing right in addition to understanding what it's doing wrong, and protecting and cultivating the former at the same time we combat the latter.

I'm skeptical that optimizing against marginal weirdness is a good way to reduce rates of sexual misconduct, mostly for two reasons:

  • The proposal is basically to regress EA to the mean, but I haven't seen evidence that EA is worse than the mean of the populations we'd realistically move toward. This actually matters; it would be PlayPump levels of tragicomic if EA put a ton of effort into Becoming More Normal for the sake of making sex and gender minorities safer in EA, only to find out that the normal demographic w
... (read more)

Rob - I strongly agree with your take here. 

EA prides itself on quantifying the scope of problems. Nobody seems to be actually quantifying the alleged scope of sexual misconduct issues in EA. There's an accumulation of anecdotes, often second or third hand, being weaponized by mainstream media into a blanket condemnation of EA's 'weirdness'. But it's unclear whether EA has higher or lower rates of sexual misconduct than any other edgy social movement that includes tens of thousands of people.

In one scientific society I'm familiar with, a few allegations of sexual misconduct were made over several years (out of almost a thousand members). Some sex-negative activists tried to portray the society as wholly corrupt, exploitative, sexist, unwelcoming, and alienating. But instead of taking the allegations reactively as symptomatic of broader problems, the society ran a large-scale anonymous survey of almost all members. And it found that something less than 2% of female or male members had ever felt significantly uncomfortable, unwelcome, or exploited. That was the scope of the problem. 2% isn't 0%, but it's a lot better than 20% or 50%. In response to this scope information, the socie... (read more)

The FTX Future Fund recently finished a large round of regrants, meaning a lot of people are approved for grants that have not yet been paid out. At least one person has gotten word from them that these payouts are on hold for now. This seems very worrisome and suggests the legal structure of the fund is not as robust or isolated as you might have thought. I think a great community support intervention would be to get clarity on this situation and communicate it clearly. This would be helpful not only to grantees but to the EA community as a whole, since what is on many people's minds is not as much what will happen to FTX but what will happen to the Future Fund. (From the few people I have talked to, many were under the impression that funds committed to the Future Fund were actually committed in a strict sense, e.g. transferred to a separate entity. If that turns out not to be the case, it's really bad.)

Jonas V · 2y · 161

I was one of the people who helped draft the constitutional amendment and launch the initiative. My quick takes:

  • My forecast had been a 3% chance of the initiative passing*, with a best guess of ~44% of voters in favor. So I was mildly disappointed by the results.
  • 37% is pretty good; many ambitious initiatives (with real rather than symbolic effects) that aren't right-wing-populist have had much worse failures.
  • In Swiss politics, initiatives that fail with 30-50% of voters in favor generally aren't regarded as total failures. They are generally perceived to lend symbolic support in favor of the issue.
  • I find it fairly encouraging that 37% of a mostly meat-eating population are voting in favor of fairly costly measures that would negatively affect them personally on a daily basis. Initial polls even suggested that 55% were in favor (but as voters got more informed, and as the countercampaign (with ~5x more funding) played out, it got lower).

(* An initiative passing doesn't just require a majority of the voters, but also a majority of the voters in a majority of cantons (states), which is a target that's much harder to hit for non-conservative initiatives. Even if >50% of the voters w... (read more)
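
For intuition on how a ~44% expected "yes" share can coexist with only a few-percent chance of passing, here is a rough sketch; the spread is a made-up number, and the cantonal double-majority requirement is ignored, so the real all-in probability would be lower still:

```python
# Rough illustration only: how a ~44% expected "yes" share translates into a
# fairly low chance of a popular majority. The 5-point standard deviation is a
# hypothetical figure, not from the comment, and the cantonal double-majority
# requirement (which lowers the chance further) is ignored here.
from statistics import NormalDist

expected_yes_share = 0.44
forecast_sd = 0.05  # hypothetical uncertainty about the final vote share

p_popular_majority = 1 - NormalDist(expected_yes_share, forecast_sd).cdf(0.50)
print(f"P(>50% of voters in favor) ~ {p_popular_majority:.1%}")  # ~12% with these numbers
```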

Habryka · 1y · 159

I want to push back on this post. I think sadly this post suffers from the same problem that 99% of all legal advice that people receive suffers from, which is that it is not actually a risk analysis that helps you understand the actual costs of different decisions. 

The central paragraph that I think most people will react to is this section: 

Being involved in litigation, even as a totally blameless witness—or even a perceived witness who in fact has no relevant knowledge at all—is expensive, time consuming, emotionally taxing, and unpleasant. Even cheap lawyers cost hundreds of dollars an hour these days and often bill in increments of .1 hours for their time. Anyone who gets caught up in court proceedings can expect to pay such a lawyer (or have their employer do so) for many hours of time to help them produce documents and communications (or formally object to having to do so) and then prepare them to be grilled for many more hours by another well-paid professional interlocutor with goals and motives at best orthogonal to their own, if not outright hostile. 

Sadly, this post does not indicate the actual time a witness might be expected to spend in lit... (read more)

I consider your attempt at a quantified expected cost analysis a helping hand, not pushback, and I appreciate it. 

Accepting it as a data point, a few quick points in response:

  • Your comment only addresses the self-interest angle, which was a relatively small part of my post. It (understandably) ignores the impacts on others and the systemic impacts that I tried to highlight, which I don't think can be disentangled from the self-interest analysis so easily. I’m not sure those additional impacts are amenable to quantified expectation analysis (though I’d be happy to be proven wrong on that), but we shouldn't just ignore them.
  • I think your numbers are low at the outset, but I don’t think any tweaking I’d do would cause us to be off by an OOM. That said, I think you’ve established a floor, and one that only applies to a witness with no potential liability. Accepting your numbers for the sake of discussion, the time estimates sit at the bottom of towering error bars. And that’s assuming we’re talking about an individual and not an organization that might have orders of magnitude more documents to review than an individual would, attorney-client privilege and other concerns that compli
... (read more)
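
As a minimal sketch of the kind of quantified expected-cost analysis both comments are gesturing at (every figure below is a hypothetical placeholder, not a number taken from either comment):

```python
# A minimal expected-cost sketch for an uninvolved witness caught up in litigation.
# Every number below is a hypothetical placeholder chosen for illustration.

p_deposed = 0.3            # chance the witness must actually produce documents / be deposed
attorney_rate = 400        # $/hour for counsel
attorney_hours = 20        # hours of counsel for document production and prep
own_hours = 30             # witness's own hours (collection, prep, deposition)
value_of_time = 100        # $/hour the witness places on their own time

expected_cost = p_deposed * (attorney_rate * attorney_hours + own_hours * value_of_time)
print(f"Expected cost to an uninvolved witness: ~${expected_cost:,.0f}")  # ~$3,300 here
```

The point of writing it this way is that the conclusion is driven by the probabilities and hours assumed, which is exactly what a bare warning about litigation being "expensive and unpleasant" leaves out.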

I feel sorely misunderstood by this post and I am annoyed at how highly upvoted it is. It feels like the sort of thing one writes / upvotes when one has heard of these fabled "longtermists" but has never actually met one in person.

That reaction is probably unfair, and in particular it would not surprise me to learn that some of these were relevant arguments that people newer to the community hadn't really thought about before, and so were important for them to engage with. (Whereas I mostly know people who have been in the community for longer.)

Nonetheless, I'm writing down responses to each argument that come from this unfair reaction-feeling, to give a sense of how incredibly weird all of this sounds to me (and I suspect many other longtermists I know). It's not going to be the fairest response, in that I'm not going to be particularly charitable in my interpretations, and I'm going to give the particularly emotional and selected-for-persuasion responses rather than the cleanly analytical responses, but everything I say is something I do think is true.

How much current animal suffering does longtermism let us ignore?

None of it? Current suffering is still bad! You don't get the pri... (read more)

I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus. Then, while I thought the original description of TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?

Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I bel... (read more)

I think EAs could stand to learn something from non-EAs here, about how not to blame the victim even when the victim is you.

I have no personal insight on Nonlinear, but I want to chime in to say that I've been in other communities/movements where I both witnessed and directly experienced the effects of defamation-focused civil litigation. It was devastating. And I think the majority of the plaintiffs, including those arguably in the right, ultimately regretted initiating litigation. I sincerely hope this does not occur in the EA community. And I hope that threats of litigation are also discontinued. There are alternatives that are dramatically less monetarily and time-intensive, and more likely to lead to productive outcomes. I think normalizing (threats of) defamation-focused civil litigation is extremely detrimental to community functioning and community health.

I can't speak for Open Philanthropy, but I can explain why I personally was unmoved by the Rethink report (and think its estimates hugely overstate the case for focusing on tiny animals, although I think the corrected version of that case still has a lot to be said for it).
 
Luke says in the post you linked that the numbers in the graphic are not usable as expected moral weights, since ratios of expectations are not the same as expectations of ratios.
 

However, I say "naively" because this doesn't actually work, due to two-envelope effects...whenever you're tempted to multiply such numbers by something, remember two-envelope effects!)

[Edited for clarity] I was not satisfied with Rethink's attempt to address that central issue, that you get wildly different results from assuming the moral value of a fruit fly is fixed and reporting possible ratios to elephant welfare as opposed to doing it the other way around. 
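To make the ratios-of-expectations point concrete, here is a minimal numerical sketch (illustrative numbers of my own, not Rethink's or anyone's actual moral-weight estimates):

```python
# Two hypotheses about relative moral weight, held with equal credence (made-up numbers):
#   H1: an elephant matters as much as 1000 fruit flies
#   H2: an elephant matters as much as 2 fruit flies
p = [0.5, 0.5]
elephants_in_flies = [1000, 2]

# Fix the fly's value as the unit and take the expectation of the elephant's value:
e_elephant = sum(pi * r for pi, r in zip(p, elephants_in_flies))   # 501 flies
# Fix the elephant's value as the unit and take the expectation of the fly's value:
e_fly = sum(pi * (1 / r) for pi, r in zip(p, elephants_in_flies))  # 0.2505 elephants

print(e_elephant)   # 501.0  -> "an elephant is worth ~501 flies"
print(1 / e_fly)    # ~3.99  -> "an elephant is worth ~4 flies"
# Same credences, wildly different implied ratios: the answer depends on which
# animal's value you hold fixed, which is the two-envelope problem in miniature.
```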

It is not unthinkably improbable that an elephant brain, where reinforcement from a positive or negative stimulus adjusts millions of times as many neural computations, could be seen as vastly more morally important than a fruit fly, just as one might think that a f... (read more)

Tegmark · 1y

(Jan 16 text added at the end)

Here's an official statement from FLI on rejecting the Nya Dagbladet Foundation grant proposal: 

For those of you unfamiliar with the  Future of Life Institute (FLI), we are a nonprofit charitable organization that works to reduce global catastrophic and existential risks facing humanity, particularly those from nuclear war and future advanced artificial intelligence.  These risks are growing.  Last year, FLI received scores of grant applications from across the globe for the millions of dollars in funding we distributed to support research, outreach and other important work in furtherance of FLI’s mission. One of these grant proposals came from the Nya Dagbladet Foundation (NDF, not to be confused with the eponymous newspaper) for a media project directly related to FLI's goals.  Although we were initially positive about the proposal and its prospects, we ultimately decided to reject it because of what our subsequent due diligence uncovered. We have given Nya Dagbladet and their affiliates zero funding and zero support of any kind, and will not fund them in the future. These final de... (read more)

Man, this interview really broke my heart. I think I used to look up to Sam a lot, as a billionaire whose self-attested sole priority was doing as much as possible to help the most marginalized + in need, today and in the future.

But damn... "I had to be good [at talking about ethics]... it's what reputations are made of."

Just unbelievable.

I hope this is a strange, pathological reaction to the immense stress of the past week for him, and not a genuine unfiltered version of the true views he's held all along. It all just makes me quite sad, to be honest.

Hi, this is something we’re already exploring, but we are not in a position to say anything just yet.

Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly

TL;DR replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly. Some data points on this:

  • Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
  • This post from Claire Zabel (OP)
  • Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
  • Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
  • Anecdotal data from conversations with people working on GCRs / X-risk / Longtermist causes

My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no-one wants a... (read more)

Back to earning to give I guess, I’ll see you guys at the McKinsey office

Hey Scott - thanks for writing this, and sorry for being so slow to the party on this one!

I think you’ve raised an important question, and it’s certainly something that keeps me up at night. That said, I want to push back on the thrust of the post. Here are some responses and comments! :)

The main view I’m putting forward  in this comment is “we should promote a diversity of memes that we believe, see which ones catch on, and mould the ones that are catching on so that they are vibrant and compelling (in ways we endorse).” These memes include both “existential risk” and “longtermism”.


What is longtermism?

The quote of mine you give above comes from Spring 2020. Since then, I’ve distinguished between longtermism and strong longtermism.

My current preferred slogan definitions of each:

  • Longtermism is the view that we should do much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.)
  • Strong longtermism is the view that protecting the interests of future generations should be the key moral priority of our time. (That’s similar to the quote of mine you give.)

In WWOTF, I promote the weak... (read more)

I appreciate that Larks sent a draft of this post to CEA, and that we had the chance to give some feedback and do some fact-checking.

I agree with many of the concerns in this post. I also see some of this differently.

In particular, I agree that a climate of fear — wherever it originates — silences not only people who are directly targeted, but also others who see what happened to someone else. That silencing limits writers/speakers, limits readers/listeners who won’t hear the ideas or information they have to offer, and ultimately limits our ability to find ways to do good in the world.

These are real and serious costs. I’ve been talking with my coworkers about them over the last months and seeking input from other people who are particularly concerned about them. I’ll continue to do that.

But I think there are also real costs to pushing groups to go forward with events they don’t want to hold. I’m still thinking through how I see the tradeoffs between these costs and the costs above, but here’s one I think is relevant:

It makes it more costly to be an organizer. In one discussion amongst group organizers after the Munich situatio... (read more)

Can you say more about your plans to bring additional trustees on the boards?

I note that, at present, all of EV (USA)'s board are current or former members of Open Philanthropy: Nick Beckstead, Zachary Robinson and Nicole Ross are former staff, and Eli Rose is a current staff member. This seems far from ideal; I'd like the board to be more diverse and representative of the wider EA community. As it stands, this seems like a conflict of interest nightmare. Did you discuss why this might be a problem? Why did you conclude it wasn't?

Others may disagree, but from my perspective, EV/CEA's role is to act as a central hub for the effective altruism community, and to balance the interests of different stakeholders. It's difficult to see how it could do that effectively if all of its board members are current or former staff of its largest donor.

I’d also like to see “the board be more diverse and representative of the wider EA community.” In addition to adding more members without ties to OpenPhil, I’d favor more diversity in the cause preferences of board members. Many of the members of the EV US and EV UK boards have clear preferences for longtermism, while none are clearly neartermist. The same can be said of the projects EV runs. This raises the question of whether EV sees its role as “a central hub for the effective altruism community, [balancing] the interests of different stakeholders” or if EV is instead trying to steer the community in specific directions. I hope EV offers more transparency around this going forward.

Hopefully, EV will be expanding its boards, which would be an opportunity to address these issues. Expanding the US board seems particularly important, since two of the four board members (Zach and Nicole) are staff members of EV (a pretty unusual structure) and as such would need to recuse themselves from some votes. This dynamic, combined with Nick (in the US and UK) and Will (in the UK) recusing themselves from FTX related issues, means the effective board sizes will be quite small for some important decisions. 

FWIW, I wouldn't say I'm "dumb," but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire "EA" career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don't have, and I mostly just skip those.

Sometimes this makes me insecure, but mostly I've been able to just keep repeating to myself something like "Whatever, I'm excited about this idea of helping others as much as possible, I'm able to contribute in various ways despite not being able to understand half of what Paul Christiano says, and other EAs are generally friendly to me."

A couple things that have been helpful to me: comparative advantage and stoic philosophy.

At some point it would also be cool if there was some kind of regular EA webzine that published only stuff suitable for a general audience, like The Economist or Scientific American but for EA topics.

Seeing the discussion play out here lately, and in parallel seeing the topic either not be brought up or be totally censored on LessWrong, has made the following more clear to me: 

A huge fraction of the EA community's reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community. 

Generalizing a lot,  it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and "rationalist-EAs" want to debate race and IQ (or think that the issue is so minor/"wokeness-run-amok-y" that it should be ignored or censored). This predictably leads to conflict.

(I am sure many will take issue with this, but I suspect it will ring true/help clarify things for some, and if this isn't the time/place to discuss it, I don't know when/where that would be)

[Edit: I elaborated on various aspects of my views in the comments, though one could potentially agree with this comment/not all the below etc.]

Whatever people think about this particular reply by Nonlinear, I hope it's clear to most EAs that Ben Pace could have done a much better job fact-checking his allegations against Nonlinear, and in getting their side of the story.

In my comment on Ben Pace's original post 3 months ago, I argued that EAs & Rationalists are not typically trained as investigative journalists, and we should be very careful when we try to do investigative journalism -- an epistemically and ethically very complex and challenging profession, which typically requires years of training and experience -- including many experiences of getting taken in by individuals and allegations that seemed credible at first, but that proved, on further investigation, to have been false, exaggerated, incoherent, and/or vengeful.

EAs pride ourselves on our skepticism and our epistemic standards when we're identifying large-scope, neglected, tractable cause areas to support, and when we're evaluating different policies and interventions to promote sentient well-being. But those EA skills overlap very little with the kinds of investigative journalism skills required to figure out who's really telling the truth, in contexts... (read more)

Elinor · 1y

Having read the full TIME article, what struck me was that if I replaced each mention of ‘EA’ with ‘the Classical Music industry’ it would still read just as well, and just as accurately (minus some polyamory).

I worked in the Arts for a decade, and witnessed some appalling behaviour and actions as a young  woman. It makes me incredibly sad to learn that people have had similar experiences within the EA community. While it is something that should be challenged by us all, it is with regret that I say it is by no means unique to the EA community. 

I admire the people who have spoken out; it's an incredibly hard thing to do, and I hope that they are receiving all the care and support that they need. But, I also know this community is full of people trying really hard, and actually doing good.

titotal · 1y

I have been saddened to learn of similarly bad behaviour in other communities I have been involved in. However, it's important not to let the commonness of abuse and harassment in broader society become an excuse not to improve. (I'm 100% not accusing you of this by the way, it's just a behavior I've seen in other places.)

EA should not be aiming for a passing grade when it comes to sexual harassment. The question is not "is EA better than average", but "is EA as good as it could be". And the answer to that question is no. I deeply hope that the concerns of the women in the article will be listened to. 

I agree that EA should aim to be as good as it could be, but comparisons to other communities are still helpful. If the EA community is worse than others at this kind of thing then maybe:

  • Someone considering joining should seek out other communities of people trying to do good. (Ex: animal-focused work in EA spaces vs the broader animal advocacy world.)

  • We should start an unaffiliated group ("Impact Maximizers") that tries to avoid these problems. (Somewhat like the "Atheism Plus" split.)

  • We should figure out what we're doing differently from most other communities and do more normal things instead. (Ex: this post)

[EDIT: this also feeds into how ashamed people should feel about their association with EA given what's described here.]

FWIW:

1) agree with everything Nick said
2) I am really proud of what the team has done on net, although obviously nothing's perfect!
3) We really do love feedback!  If you have some on a specific grant we made you can submit here, or feel free to separately ping me/Nick/etc. :)


 

This is an interesting post! I agree with most of what you write. But when I saw the graph, I was suspicious. The graph is nice, but the world is not.

I tried to create a similar graph to yours:

In this case, fun work is pretty close to impactful toil. In fact, the impact value for it is only about 30% less than the impact value of impactful toil. This is definitely sizable, and creates some of the considerations above. But mostly, everywhere on the Pareto frontier seems like a pretty reasonable place to be.

But there's a problem: why is the graph so nice?  To be more specific: why are the x and y axes so similarly scaled?

Why doesn't it look like this?

Here I just replaced  in the ellipse equation with .  It seems pretty intuitive that our impact would be power law distributed, with a small number of possible careers making up the vast majority of our possible impact. A lot of the time when people are trying to maximize something it ends up power law distributed (money donated, citations for researchers, lives saved, etc.). Multiplicative processes, as Thomas Kwa alluded to, will also make something power law distributed. This doesn't really ... (read more)
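As a rough illustration of the contrast being described (my own guess at the kind of curves involved, not the commenter's actual equations or plots), here is a sketch of a "nice" quarter-ellipse fun-vs-impact frontier next to one where impact is concentrated in a few options:

```python
# Hypothetical sketch: a symmetric fun/impact frontier vs. one with power-law-ish impact.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, np.pi / 2, 200)
fun = np.cos(theta)            # "fun" on a 0-1 scale
impact = np.sin(theta)         # "impact" on the same 0-1 scale -> nice ellipse

impact_skewed = impact ** 10   # same frontier, but most impact sits at one extreme

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(fun, impact)
ax1.set_title("similarly scaled axes")
ax2.plot(fun, impact_skewed)
ax2.set_title("impact concentrated in a few options")
for ax in (ax1, ax2):
    ax.set_xlabel("fun")
    ax.set_ylabel("impact")
plt.tight_layout()
plt.show()
```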

I'll briefly comment on a few parts of this post since my name was mentioned (lack of comment on other parts does not imply any particular position on them). Also, thanks to the authors for their time writing this (and future posts)! I think criticism is valuable, and having written criticism myself in the past, I know how time-consuming it can be.

I'm worried that your method for evaluating research output would make any ambitious research program look bad, especially early on. Specifically:

The failure of Redwood's adversarial training project is unfortunately wholly unsurprising given almost a decade of similarly failed attempts at defenses to adversarial robustness from hundreds or even thousands of ML researchers.

I think for any ambitious research project that fails, you could tell a similarly convincing story about how it's "obvious in hindsight" it would fail. A major point of research is to find ideas that other people don't think will work and then show that they do work! For many of my most successful research projects, people gave me advice not to work on them because they thought it would predictably fail, and if I had failed then they could have said something similar to... (read more)

Nathan - thanks for sharing the Time article excerpts, and for trying to promote a constructive and rational discussion.

For now, I don't want to address any of the specific issues around SBF, FTX, or EA leadership. I just want to make a meta-comment about the mainstream media's feeding frenzy around EA, and its apparently relentless attempts to discredit EA.

There's a classic social/moral psychology of 'comeuppance' going on here: any 'moral activists' who promote new and higher moral standards (such as the EA movement) can make ordinary folks (including journalists) feel uncomfortable, resentful, and inadequate. This can lead to a public eagerness to detect any forms of moral hypocrisy, moral failings, or bad behavior in the moral activist groups. If any such moral failings are detected, they get eagerly embraced, shared, signal-amplified, and taken as gospel. This makes it easier to dismiss the moral activists' legitimate moral innovations (e.g. focusing on scope-sensitivity, tractability, neglectedness, long-termism), and allows a quick, easy return to the status quo ante (e.g. national partisan politics + scope-insensitive charity as usual).

We see this 'psychology of comeuppanc... (read more)

finm · 2d

I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:

Note especially how much of the literal terminology was coined on (one imagines) a whiteboard in FHI. “Existential risk” isn't a neologism, but I understand it was Nick who first suggested it be used in a principled way to point to the “loss of potential” thing. “Existential hope”, “vulnerable world”, “unilateralist's curse”, “information hazard”, all (as fa... (read more)

'- Alice has accused the majority of her previous employers, and 28 people - that we know of - of abuse. She accused people of: not paying her, being culty, persecuting/oppressing her, controlling her romantic life, hiring stalkers, threatening to kill her, and even, literally, murder.'


The section of the doc linked to here does not in fact provide any evidence whatsoever of Alice making wild accusations against anyone else, beyond plain assertions (i.e. there are no links to other people saying this).

Something I personally would like to see from this contest is rigorous and thoughtful versions of leftist critiques of EA, ideally translated as much as possible into EA-speak. For example, I find "bednets are colonialism" infuriating and hard to engage with, but things like "the reference class for rich people in western countries trying to help poor people in Africa is quite bad, so we should start with a skeptical prior here" or "isolationism may not be the good-maximizing approach, but it could be the harm-minimizing approach that we should retreat to when facing cluelessness" make more sense to me and are easier to engage with.

That's an imaginary example -- I myself am not a rigorous and thoughtful leftist critic and I've exaggerated the EA-speak for fun. But I hope it points at what I'd like to see!

My coworkers got me a mug that said "Sorry, I'm not Julia Galef" to save me from having to say it so much at conferences. Maybe I should have just gone this route instead.

I generally believe that EAs should keep their identities small. Small enough that it wouldn't really matter which Julia you are.

This feels complicated to say, because it's going to make me seem like I don't care about abuse and harassment described in the article. I do. It's really bad and I wish it hadn't happened, and I'm particularly sad that it's happened within my community, and  (more) that people in my community seemed often to not support the victims. 

But I honestly feel very upset about the anti-polyamory vibe of all this. Polyamory is a morally neutral relationship structure that's practiced happily by lots of people. It doesn't make you an abuser, or not-an-abuser.  It's not accepted in the wider community, so I value its acceptance in EA. I'd be sad if there was a community backlash against it because of stuff like this, because that would hurt a lot of people and I don't think it would solve the problem. 

I think the anti-poly vibe also makes it kind of...harder to work out what's happening, and what exactly is bad, or something? Like, the article describes lots of stuff that's unambiguously bad, like grooming and assault. But it says stuff like 'Another told TIME a much older EA recruited her to join his polyamorous relationship while she was still in college'. Like, what do... (read more)

I agree that the article moves between several situations of issues of hugely varying severity without acknowledging that, and this isn't very helpful. And I like that EA is able to be a welcoming place for people who enjoy relationship structures that are discriminated against in the wider world. But I did want to push back against one particular piece:

Polyamory is a morally neutral relationship structure that's practiced happily by lots of people. It doesn't make you an abuser, or not-an-abuser.

In figuring out how we should view polyamory, a key question to me is what its effects are. Imagine we could somehow run an experiment where we went back to having a taboo on non-monogamy regardless of partner consent: how would we expect the world to be different? Some predictions I'd make:

  • People who enjoy polyamorous relationships would be worse off.

  • Some people would be more productive because they're less distracted by partner competition.

  • Other people would be less productive because getting a lot done was part of their approach to partner competition.

  • Some people would have kids who otherwise wouldn't, or have kids earlier in life.

  • ...

  • There would be less of the

... (read more)

The closing remarks about CH seem off to me. 

  1. Justice is incredibly hard; doing justice while also being part of a community, while trying to filter false accusations and thereby not let the community turn on itself, is one of the hardest tasks I can think of. 
    So I don't expect disbanding CH to improve justice, particularly since you yourself have shown the job to be exhausting and ambiguous at best. 
    You have, though, rightly received gratitude and praise - which they don't often get, maybe just because we don't often praise people for doing their jobs. I hope the net effect of your work is to inspire people to speak up.
     
  2. The data on their performance is profoundly censored. You simply will not hear about all the times CH satisfied a complainant, judged risk correctly, detected a confabulator, or pre-empted a scandal through warnings or bans. What denominator are you using? What standard should we hold them to? You seem to have chosen "being above suspicion" and "catching all bullies".
     
  3. It makes sense for people who have been hurt to be distrustful of nearby authorities, and obviously a CH team which isn't trusted can't do its job. But just to generate some further common knowledge and meliorate a distrust cascade: I trust CH quite a lot. Every time I've reported something to them they've surprised me with the amount of skill they put in, hours per case. (EDIT: Clarified that I've seen them work actual cases.)

Quite off-topic, but I think it's remarkable that RP does crisis management and simulation exercises like this! I'm glad that RP is stable financially and legally (at least in the short-term), and put a significant chunk of that down to your collective excellent leadership.

It doesn't quite ring true to me that we need an investigation into what top EA figures knew. What we need is an investigation more broadly into how this was allowed to happen. We need to ask:

  • How did EA ideology play into SBF/FTX's decisions?
  • Could we have seen this coming, or at least known to place less trust in SBF/FTX?
  • Can we do anything to mitigate the large harms that have come about?
  • How can we remove whatever conditions allowed this to happen, and might allow other large-scale harms to occur if they are not remedied?

It's not totally unreasonable to ask what EA figures knew, but it's not likely that they knew about the fraud, based on priors (it's risky to tell people beyond your inner circle about fraudulent plans), and insider reports. (And for me personally, based on knowledge of their character, although obviously that's not going to convince a sceptic.)

There's value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights-- it's easy to say we need to be less elitist, etc., but I think the easy changes you can make sometimes don't address fundamental difficulties, and making sweeping changes has hidden costs when you think about what they actually mean.

This is separate from any concern about whether it's better for EA to be a large or small movement.

Be extra vigilant to ensure that effective altruism remains a "big tent".

Edit: big tent actually means "encompassing a broad spectrum of views", not "big movement". I now think this section has some relevance to the OP but does not centrally address the above point.

As I understand it, this means spending more resources on people who are "less elite" and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to "big tent" efforts, ... (read more)

CEA's elaborate adjustments confirm everyone's assertions: constantly evolving affiliations cause extreme antipathy. Can everyone agree, current entertainment aside, carefully examining acronyms could engender accuracy? 

Clearly, excellence awaits: collective enlightenment amid cost effectiveness analysis.

MaxRa · 3y

Parents in EA ➔ Raising for Effective Giving

Slogan: Shut up and multiply!

I agree with the central thrust of this post, and I'm really grateful that you made it. This might be the single biggest thing I want to change about EA leaders' behavior. And relatedly, I think "be more candid, and less nervous about PR risks" is probably the biggest thing I want to change about rank-and-file EAs' behavior. Not because the risks are nonexistent, but because trying hard to avoid the risks via not-super-honest tactics tends to cause more harm than benefit. It's the wrong general policy and mindset.

Q: Is your approach utilitarian? A: It's utilitarian flavoured.

This seems like an unusually good answer to me! I'm impressed, and this updates me positively about Ben Todd's honesty and precision in answering questions like these.

I think a good description of EA is "the approach that behaves sort of like utilitarianism, when decisions are sufficiently high-stakes and there aren't ethical injunctions in play". I don't think utilitarianism is true, and it's obvious that many EAs aren't utilitarians, and obvious that utilitarianism isn't required for working on EA cause areas, or for being quantitative, systematic, and rigorous in your moral reasoning, etc. Yet it's remarkabl... (read more)

The FTX and Alameda estates have filed an adversary complaint against the FTX Foundation, SBF, Ross Rheingans-Yoo, Nick Beckstead, and some biosciences firms, available here. I should emphasize that anyone can sue over anything, and allege anything in a complaint (although I take complaints signed by Sullivan & Cromwell attorneys significantly more seriously than I take the median complaint). I would caution against drawing any adverse inferences from a defendant's silence in response to the complaint.

The complaint concerns a $3.25MM "philanthropic gift" made to a biosciences firm (PLS), and almost $70MM in non-donation payments (investments, advance royalties, etc.) -- most of which were also to PLS. The only count against Beckstead relates to the donation. The non-donation payments were associated with Latona, which according to the complaint "purports to be a non-profit, limited liability company organized under the laws of the Bahamas[,] incorporated in May 2022 for the purported purpose of investing in life sciences companies [which] held itself out as being part of the FTX Foundation."

The complaint does not allege that either Beckstead or Rheingans-Yoo knew of the fraud a... (read more)

Thanks for this review, Richard. 

In the section titled, "The Bad," you cite a passage from my essay--"Diversifying Effective Altruism's Longshots in Animal Advocacy"--and then go on to say the following: 

"Another author tells us (p. 81):

it is morally ill-advised to invest tens of millions of dollars in tech long shots that might someday have a huge impact on the world at large while failing to combat intimately related systemic injustices that are doing disproportionate damage right now to already at-risk communities.

(Of course, no argument is offered in support of this short-sighted thinking. It’s just supposed to be obvious to all right-thinking individuals. This sort of vacuous moralizing, in the total absence of any sort of grappling with—or even recognition of—opposing arguments, is found throughout the volume.)"

It sounds from your framing like you take it that I assert the claim in question, believe that the alleged claim is obvious, and hold this belief "in the total absence of any sort of grappling with--or even recognition of--opposing arguments." 

With respect, I don't think your reading is fair on any of these fronts.

First, I don't assert the claim in quest... (read more)

  1. This interview is crazy.

  2. One overarching theme is SBF lying about many things in past interviews for PR. Much of what he said in this one also looks like that.

I am opposed to this. 

I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as "fear"].

Here are some things that are true:

  • Racism is harmful and bad
  • Sexism is harmful and bad
  • Other "isms" such as homophobia or religious oppression are harmful and bad.
  • To the extent that people can justify their racist, sexist, or otherwise bigoted behavior, they are almost always abusing information, in a disingenuous fashion. e.g. "we showed a 1% difference in the medians of the bell curves for these two populations, thereby 'proving' one of those populations to be fundamentally superior!" This is bullshit from a truth-seeking perspective, and it's bullshit from a social progress perspective, and in most circumstances it doesn't need to be entertained or debated at all. In practice, it is already the case that the burden of proof on someone wanting to have a discussion about these things is ove
... (read more)

What do EA and the FTX Future Fund team think of the claim by Kerry Vaughan that Sam Bankman-Fried engaged in severely unethical behavior in the past, and that EA and FTX covered it up and laundered his reputation, effectively letting him get away with it?

I'm posting because, if true, this suggests big changes to EA norms are necessary to deal with bad actors like him, and that Sam Bankman-Fried should be outright banned from the forum and EA events.

Link to tweets here:

https://twitter.com/KerryLVaughan/status/1590807597011333120

I want to clarify the claims I'm making in the Twitter thread.

I am not claiming that EA leadership or members of the FTX Future fund knew Sam was engaging in fraudulent behavior while they were working at FTX Future Fund.

Instead, I am saying that friends of mine in the EA community worked at Alameda Research during the first 6 months of its existence. At the end of that period, many of them suddenly left all at once. In talking about this with people involved, my impression is:

1) The majority of staff at Alameda were unhappy with Sam's leadership of the company. Their concerns about Sam included him taking extreme and unnecessary risks and losing large amounts of money, poor safeguards around moving money around, poor capital controls (including a lack of distinction between money owned by investors and money owned by Alameda itself), and Sam generally being extremely difficult to work with.

2) The legal ownership structure of Alameda did not reflect the ownership structure that had been agreed to by the parties involved.  In particular, Sam registered Alameda under his sole ownership and not as jointly owned by him and his cofounders. This was not thought t... (read more)

I was one of the people who left at the time described. I don't think this summary is accurate, particularly (3).

(1) seems the most true, but anyone who's heard Sam on a podcast could tell you he has an enormous appetite for risk. IIRC he's publicly stated they bet the entire company on FTX despite thinking it had a <20% chance of paying off. And yeah, when Sam plays league of legends while talking to famous investors he seems like a quirky billionaire; when he does it to you he seems like a dick. There are a lot of bad things I can say about Sam, but there's no elaborate conspiracy.

Lastly, my severance agreement didn't have a non-disparagement clause, and I'm pretty sure no one's did. I assume that you are not hearing from staff because they are worried about the looming shitstorm over FTX now, not some agreement from four years ago.

When said shitstorm dies down I might post more and under my real name, but for now the phrase "wireless mouse" should confirm me as someone who worked there at the time to anyone else who was also there.

I'm the person that Kerry was quoting here, and am at least one of the reasons he believed the others had signed agreements with non-disparagement clauses. I didn't sign a severance agreement for a few reasons: I wanted to retain the ability to sue, I believed there was a non-disparagement clause, and I didn't want to sign away rights to the ownership stake that I had been verbally told I would receive. Given that I didn't actually sign it, I could believe that the non-disparagement clauses were removed and I didn't know about it, and people have just been quiet for other reasons (of which there are certainly plenty).

I think point 3 is overstated but not fundamentally inaccurate. My understanding was that a group of senior leadership offered to buy Sam out, he declined, and he bought them out instead. My further understanding is that his negotiating position was far stronger than it should have been due to him having sole legal ownership (which I was told he obtained in a way I think it is more than fair to describe as backstabbing). I wasn't personally involved in those negotiations, in part because I clashed with Sam probably worse than anyone else at the company, which likel... (read more)

[anonymous] · 1y

I think it is very important to understand what was known about SBF's behaviour during the initial Alameda breakup, and for this to be publicly discussed and to understand if any of this disaster was predictable beforehand. I have recently spoken to someone involved who told me that SBF was not just cavalier, but unethical and violated commonsense ethical norms. We really need to understand whether this was known beforehand, and if so learn some very hard lessons. 

It is important to distinguish different types of risk-taking here. (1) There is the kind of risk-taking that promises high payoffs but with a high chance of the bet falling to zero, without violating commonsense ethical norms. (2) There is risk-taking in the sense of being willing to risk it all by secretly violating ethical norms to get more money. One flaw in SBF's thinking seemed to be that risk-neutral altruists should take big risks because the returns can only fall to zero. In fact, the returns can go negative - e.g. all the people he has stiffed, and all of the damage he has done to EA.

In 2021 I tried asking about SBF among what I suppose you could call "EA leadership", trying to distinguish whether to put SBF into the column of "keeps compacts but compact very carefully" versus "un-Lawful oathbreaker", based on having heard that early Alameda was a hard breakup.  I did not get a neatly itemized list resembling this one on either points 1 or 2, just heard back basically "yeah early Alameda was a hard breakup and the ones who left think they got screwed" (but not that there'd been a compact that got broken) (and definitely not that they'd had poor capital controls), and I tentatively put SBF into column 1.  If "EA leadership" had common knowledge of what you list under items 1 or 2, they didn't tell me about it when I asked.  I suppose in principle that I could've expended some of my limited time and stamina to go and inquire directly among the breakup victims looking for one who hadn't signed an NDA, but that's just a folly of perfect hindsight.

My own guess is that you are mischaracterizing what EA leadership knew.

Habryka · 1y

Huh, I am surprised that no one responded to you on this. I wonder whether I was part of that conversation, and if so, I would be interested in digging into what went wrong. 

I definitely would have put Sam into the "un-lawful oathbreaker" category and have warned many people I have been working with that Sam has a reputation for dishonesty and that we should limit our engagement with him (and more broadly I have been complaining about an erosion of honesty norms among EA leadership to many of the current leadership, in which I often brought up Sam as one of the sources of my concern directly). 

I definitely had many conversations with people in "EA leadership" (which is not an amazingly well-defined category) where people told me that I should not trust him. To be clear, nobody I talked to expected wide-scale fraud, and I don't think this included literally everyone, but almost everyone I talked to told me that I should assume that Sam lies substantially more than population-level baseline (while also being substantially more strategic about his lying  than almost everyone else).

I do want to add to this that in addition to Sam having a reputation for dishonesty, he also had a reputation for being vindictive, and almost everyone who told me about their concerns about Sam did so while seeming quite visibly afraid of retribution from Sam if they were to be identified as the source of the reputation, and I was never given details without also being asked for confidentiality. 

I knew about Sam's bad character early on, and honestly I'm confused about what people would have expected me to do.

I should have told people that Sam has a bad character and can't be trusted, and that FTX is risky? Well, I did those things, and as far as I can tell, that has made the current situation less bad than it would have been otherwise (yes, it could have been worse!). In hindsight I should have done more of this though.

Should I have told the authorities that Sam might be committing fraud? All I had were vague suspicions about his character and hints that he might be dishonest, but no convincing evidence or specific worries about fraud. (Add jurisdictional problems, concerns about the competence of regulators, etc)

Should I not have "covered up" the early scandal? Well, EAs didn't, and I think Kerry's claim is wrong.

Should I have publicly spread concerns about SBF's character? That borders on slander. Also, I was concerned that SBF would permanently hate me after that (you might say I'm a coward, but hey, try it yourself).

Should I have had SBF banned from EA? Personally, I'm all for a tough stance, but the community is usually against complete bans of bad actors, so it just wasn't feasible. (E.g., if I were in charge, Jacy and Kerry would be banned, but many wouldn't like that.)

SBF was powerful and influential. EA didn't really have power over him.

What could have been done better? I am sincerely curious to get suggestions.

My current, extremely tentative, sense of the situation is not that individuals who were aware of some level of dishonesty and shadiness were not open enough about it. I think individuals acted in pretty reasonable ways, and I heard a good amount of rumors. 

I think the error likely happened at two other junctions: 

  1. Some part of EA leadership ended up endorsing SBF very publicly and very strongly despite having very likely heard about the concerns, and without following up on them (In my model of the world Will fucked up really hard here)
  2. We didn't have any good system for aggregating rumors and related information, and we didn't have anyone who was willing to just make a public post about the rumors (I think this would have been a scary and heroic thing to do, I am personally ashamed that I didn't do it, but I don't think it's something that we should expect the average person to do)

I think if we had some kind of e.g. EA newspaper where people try to actively investigate various things that seem concerning, then I think this would have helped a bunch. This kind of thing could even be circulated privately, though a public version seems also good. 

I separately also think... (read more)

Arepo · 1y

I'm unclear how to update on this, but note that Kerry Vaughan was at CEA for 4 years, and a managing director there for one year before, as I understand it, being let go under mysterious circumstances. He's now the program manager at a known cult that the EA movement has actively distanced itself from. So while his comments are interesting, I wouldn't treat him as a particularly credible source, and he may have his own axe to grind.

I have been community building in Cambridge UK in some way or another since 2015, and have shared many of these concerns for some time now. Thanks so much for writing them up much more eloquently than I would have been able to!

To add some more anecdotal data, I also hear the 'cult' criticism all the time. In terms of getting feedback from people who walk away from us: this year, an affiliated (but non-EA) problem-specific table coincidentally ended up positioned downstream of the EA table at a freshers' fair. We anecdotally overheard approximately 10 groups of 3 people discussing that they thought EA was a cult after they had bounced from our EA table. Probably around 2,000-3,000 people passed through, so the people we overheard were only 1-2% of them.

I managed to dig into these criticisms a little with a couple of friends-of-friends outside of EA, and got a couple of common pieces of feedback which it's worth adding.

  • We are giving away many free books lavishly. They are written by longstanding members of the community. These feel like doctrine, to some outside of the community.
  • Being a member of the EA community is all or nothing. My best guess is we haven't thought of anything less intensi
... (read more)

In my own work now, I feel much more personally comfortable leaning into cause area-specific field building, and groups that focus around a project or problem. These are much more manageable commitments, and can exemplify the EA lens of looking at a project without it being a personal identity.

The absolute strongest answer to most critiques or problems that have been mentioned recently is strong object-level work.

If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause areas, that is a complete and total answer to:

  • “Too much” spending
  • billionaire funding/asking people to donate income
  • most “epistemic issues”, especially with success in multiple cause areas

If we have the world leaders in global health, animal welfare, pandemic prevention, and AI safety each saying, “Hey, EA has the strongest leaders and its ideas and projects are reliably important and successful,” no one will complain about how many free books are handed out.

How about Caring Tuna? This would surely get support from Open Phil

I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I've started thinking it's basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.

80,000 Hours targets the most professionally successful people in the world. That's probably the right idea for them - giving good career advice takes a lot of time and effort, and they can't help everyone, so they should focus on the people with the most career potential.

But, unfortunately for most EAs (myself included), the nine priority career paths recommended by 80,000 Hours are some of the most difficult and competitive careers in the world. If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, I’d guess you have slim-to-none odds of succeeding in any of them. The advice just isn't tailored for you.

So how can the vast majority of people have an impactful career? My best answer: A lot of independent thought and planning. Your own personal brainstorming and reading and asking around and exploring, not just following stoc... (read more)

From CEA's guiding principles:

Integrity: Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognize that our actions reflect on it.

Habryka · 1y

I disagree. Or at least I think the reasons in this post are not very good reasons for Bostrom to step down (it is plausible to me he could pursue more impactful plans somewhere else, potentially by starting a new research institution with less institutional baggage and less interference by the University of Oxford).

Bostrom is as far as I can tell the primary reason why FHI is a successful and truth-oriented research organization. Making a trustworthy research institution is exceptionally difficult, and its success is not primarily measured in the operational quality of its organization, but in the degree to which it produces important, trustworthy and insightful research. Bostrom has succeeded at this, and the group of people (especially the early FHI cast including Anders Sandberg, Eric Drexler, Andrew Snyder Beattie, Owain Evans, and Stuart Armstrong) he has assembled under the core FHI research team have made great contributions to many really important questions that I care about, and I cannot think of any other individual who would have been able to do the same (Sean gives a similar perspective in his comment).

I think Bostrom overstretched himself when he let FHI grow to doze... (read more)

I don't think it's witchhunty at all. The fact is we really have very little knowledge about how Will and Nick are involved with FTX. I really don't think they did any fraud or condoned any fraud, and I do genuinely feel bad for them, and I want to hope for the best when it comes to their character. I'm pretty substantially unsure if Will/Nick/others made any ex ante mistakes, but they definitely made severe ex post mistakes and lost a lot of trust in the community as a result.

I think this means three things:

1.) I think Nathan is right about the prior. If we're unsure about whether they made severe ex ante mistakes, we should remove them. I'd only keep them if I was sure they did not make severe ex ante mistakes. I think this applies more forcefully the more severe the mistake was, and the situation with FTX makes me suspect that any mistakes could've been about as severe as you would get.

2.) I think in order to be on EVF's board it's a mandatory job requirement that you maintain the trust of the community, and removing people over this makes sense.

3.) I think a traditional/"normie" board would've 100% removed Will and Nick back in November. Though I don't think that we should always d... (read more)

Thanks for the update. 

I'd like to recommend that part of the process review for providing travel grant funding includes consideration of the application process timing for CEA-run or supported events. In my experience, key dates in the process (open, consideration/decision, notification of acceptance, notification of travel grant funding) happen much closer to the date  of the event than other academic or trade conferences. 

For example, in 2022, several Australian EAs I know applied ~90 days in advance of EAG London or EAG SF, but were accepted only around 30-40 days before the event. 

A slow application process creates several issues for international attendees:

  1. Notice is needed for employment leave. Prospective attendees who are employed usually need to submit an application for leave with 1+ months notice, especially for a trip of ~1 week or longer needed for international travel. Shorter notice can create conflict or ill-feeling between the employee and employer.
  2. Flight prices increase as the travel date approaches. An Australian report recommended booking international flights 6 months ahead of the date of travel. A Google report recommended booking internati
... (read more)

EDIT: I've now written up my own account of how we should do epistemic deference in general, which fleshes out more clearly a bunch of the intuitions I outline in this comment thread.

I think that a bunch of people are overindexing on Yudkowsky's views; I've nevertheless downvoted this post because it seems like it's making claims that are significantly too strong, based on a methodology that I strongly disendorse. I'd much prefer a version of this post which, rather than essentially saying "pay less attention to Yudkowsky", is more nuanced about how to update based on his previous contributions; I've tried to do that in this comment, for example. (More generally, rather than reading this post, I recommend people read this one by Paul Christiano, which outlines specific agreements and disagreements. Note that the list of agreements there, which I expect that many other alignment researchers also buy into, serves as a significant testament to Yudkowsky's track record.)

The part of this post which seems most wild to me is the leap from "mixed track record" to

In particular, I think, they shouldn’t defer to him more than they would defer to anyone else who seems smart and has spent a rea

... (read more)

The part of this post which seems most wild to me is the leap from "mixed track record" to

In particular, I think, they shouldn’t defer to him more than they would defer to anyone else who seems smart and has spent a reasonable amount of time thinking about AI risk.

For any reasonable interpretation of this sentence, it's transparently false. Yudkowsky has proven to be one of the best few thinkers in the world on a very difficult topic. Insofar as there are others who you couldn't write a similar "mixed track record" post about, it's almost entirely because they don't have a track record of making any big claims, in large part because they weren't able to generate the relevant early insights themselves. Breaking ground in novel domains is very, very different from forecasting the weather or events next year; a mixed track record is the price of entry.

I disagree that the sentence is false for the interpretation I have in mind.

I think it's really important to separate out the question "Is Yudkowsky an unusually innovative thinker?" and the question "Is Yudkowsky someone whose credences you should give an unusual amount of weight to?"

I read your comment as arguing for the former,... (read more)

This doesn't really match my (relatively little) experience. I think it might be because we disagree on what counts as "EA Leadership": we probably have a different idea of what counts as "EA" and/or we have a different idea of what counts as "Leadership".

I think you might be considering as "EA leadership" senior people working in "meta-EA" orgs (e.g. CEA) and "only-EA experience" to also include people doing more direct work (e.g. GiveWell). So the CEO of Open Philanthropy would count as profile #1, having mostly previous experience at Open Philanthropy and GiveWell, but the CEO of GiveWell wouldn't count as profile #2 because they're not in an "EA leadership position". Is that correct?

 

most importantly, I want feedback on whether people think this is a thing, and if it is a thing, is it bad.

I think the easiest way would be to compile a list of people in leadership positions and check their LinkedIn profiles.

 

Working on the assumption above for what you mean by "EA Leadership", while there is no canonical list of “meta-EA leaders”, a non-random sample could be this public list of some Meta Coordination Forum participants.[1]

 

Here's a quick (and inaccurate) short summar... (read more)

leopold - my key question here would be, if the OpenAI Preparedness team concluded in a year or two that the best way to mitigate AGI risk would be for OpenAI to simply stop doing AGI research, would anyone in OpenAI senior management actually listen to them, and stop doing AGI research? 

If not, this could end up being just another example of corporate 'safety-washing', where the company has already decided what they're actually going to do, and the safety team is just along for the ride.

I'd value your candid view on this; I can't actually tell if there are any conditions under which OpenAI would decide that what they've been doing is reckless and evil, and they should just stop.

People have some strong opinions about things like polyamory, but I figured I’d still voice my concern as someone who has been in EA since 2015, but has mostly only interacted with the community online (aside from 2 months in the Bay and 2 in London):

I have nothing against polyamory, but polyamory within the community gives me bad vibes. And the mixing of work and fun seems to go much further than I think it should. It feels like there’s an aspect of “free love” and I am a little concerned about doing cuddle puddles with career colleagues. I feel like all these dynamics lead to weird behaviour people do not want to acknowledge.

I repeat, I am not against polyamory, but I personally do not expect some of this bad behaviour would happen as much in a monogamous setting, since I expect there would be less sliding into sexual actions.

I’ve avoided saying this because I did not want to criticize people for being polyamorous and expected a lot would disagree with me and it not leading to anything. But I do think the “free love” nature of polyamory with career colleagues opens the door for things we might not want.

Whatever it is (poly within the community might not be part of the issue at all!), I feel like there needs to be a conversation about work and play (that people seem to be avoiding).

Habryka · 1y

Yes, I at least strongly support people reaching out to my staff about opportunities that they might be more excited about than working at Lightcone, and similarly I have openly approached other people working in the EA community at other organizations about working at Lightcone. I think the cooperative atmosphere between different organizations, and the trust that individuals are capable of making the best decisions for themselves on where they can have the best impact, is a thing I really like about the EA community.

I want to share the following, while expecting that it will probably be unpopular. 

I feel many people are not being charitable enough to Nonlinear here. 

I have only heard good things about Nonlinear, outside these accusations. I know several people who have interacted with them - mainly with Kat - and had good experiences. I know several people who deeply admire her. I have interacted with Kat occasionally, and she was helpful. I have only read good things about Emerson. 

As far as I can tell from this and everything I know/have read, it seems reasonable to assume that the people at Nonlinear are altruistic people. They have demonstrably made exceptional commitments to doing good; started organisations, invested a lot of time and money in EA causes, and helped a lot of people. 

Right now, on the basis of what could turn out to have been a lot of lies, their reputations, friendship futures and careers are at risk of being badly damaged (if not already so). 

This may have been (more) justified if the claims in the original post were all found and believed to be clearly true. However, that was not, and is not, clearly the case at this point in time.

At present, ... (read more)

I think it is entirely possible that people are being unkind because they updated too quickly on claims from Ben's post that are now being disputed, and I'm grateful that you've written this (ditto chinscratch's comment) as a reminder to be empathetic. That being said, there are also some reasons people might be less charitable than you are for reasons that are unrelated to them being unkind, or the facts that are in contention:
 

I have only heard good things about Nonlinear, outside these accusations

Right now, on the basis of what could turn out to have been a lot of lies, their reputations, friendship futures and careers are at risk of being badly damaged

Without commenting on whether Ben's original post should have been approached better or worded differently or was misleading etc, this comment from the Community Health/Special Projects team might add some useful additional context. There are also previous allegations that have been raised.[1]

Perhaps you are including both of these as part of the same set of allegations, but some may suggest that not being permitted to run sessions / recruit at EAGs and considering blocking attendance (especially given the reference class of ... (read more)

Since Frances is not commenting more:

This rhetorical strategy is analogous to a prosecutor showing smiling photos of a couple on vacation to argue that he couldn’t have possibly murdered her, or showing flirty texts between a man and woman to argue that he couldn’t have raped her, etc. This is a bad rhetorical strategy when prosecutors use it—and it’s a bad rhetorical strategy here—because it perpetuates misinformation about what abusive relationships look like; namely, that they are uniformly bad, with no happy moments or mitigating qualities.

As anyone who has been in an abusive relationship will tell you, this is rarely what abuse looks like. And you insinuating that Chloe and Alice are lying because there were happy-appearing moments is exactly the kind of thing that makes many victims afraid to come forward.

To be clear: I do not think these photos provide any evidence against the allegations in Ben’s post because no one is contesting that the group hung out in tropical locations. Additionally, having hung out in tropical locations is entirely compatible with the allegations made in the initial post. Ironically, this rhetorical strategy—the photos, the assertion that this was a ... (read more)

Some thoughts on the general discussion:

(1) some people are vouching for Kat's character. This is useful information, but it's important to note that behaving badly is very compatible with having many strengths, treating one's friends well, etc. Many people who have done terrible things are extremely charismatic and charming, and even well-meaning or altruistic. It's hard to think bad things about one's friends, but unfortunately it's something we all need to be open to. (I've definitely in the past not taken negative allegations against someone as seriously as I should have, because they were my friend).

(2) I think something odd about the comments claiming that this post is full of misinformation, is that they don't correct any of the misinformation. Like, I get that assembling receipts, evidence etc can take a while, and writing a full rebuttal of this would take a while. But if there are false claims in the post, pick one and say why it's false. 

This makes these interventions seem less sincere to me, because I think if someone posted a bunch of lies about me, in my first comments/reactions I would be less concerned about the meta appropriateness of the post having been post... (read more)

Just to clarify, nonlinear has now picked one claim and provided screenshots relevant to it; I'm not sure if you saw that.

I also want to clarify that I gave Ben a bunch of very specific examples of information in his post that I have evidence are false (responding to the version he sent me hours before publication). He hastily attempted to adjust his post to remove or tweak some of his claims right before publishing based on my discussing these errors with him. It’s a lot easier (and vastly less time-consuming) to provide those examples in a private one-on-one with Ben than to provide them publicly (where, for instance, issues of confidentiality become much more complicated, and where documentation and wording need to be handled with extreme care, quite different than the norms of conversation).

The easiest to explain example is that Ben claimed a bunch of very bad sounding quotes from Glassdoor were about Emerson that clearly weren’t (he hadn’t been at the company for years when those complaints were written). Ben acknowledged somewhere in the comments that those were indeed not about Emerson and so that was indeed false information in the original version of the post.

My understand... (read more)

Larks · 9mo · 136

One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By mainstream, I mean a cause area that someone would have a high prior on).

This seems like a strange position to me. Do you think people have to have a background in climate science to decide that climate change is the most important problem, or development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority?

I think (apologies if I am misunderstanding you) you try to get around this by suggesting that 'mainstream' causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.

Hi everyone,

To fully disclose my biases: I’m not part of EA, I’m Greg’s younger sister, and I’m a junior doctor training in psychiatry in the UK. I’ve read the comments, the relevant areas of HLI’s website, Ozler study registration and spent more time than needed looking at the dataset in the Google doc and clicking random papers.

I’m not here to pile on, and my brother doesn’t need me to fight his corner. I would inevitably undermine any statistics I tried to back up due to my lack of talent in this area. However, this is personal to me not only wondering about the fate of my Christmas present (Greg donated to Strongminds on my behalf), but also as someone who is deeply sympathetic to HLI’s stance that mental health research and interventions are chronically neglected, misunderstood and under-funded. I have a feeling I’m not going to match the tone here as I’m not part of this community (and apologise in advance for any offence caused), but perhaps I can offer a different perspective as a doctor with clinical practice in psychiatry and on an academic fellowship (i.e. I have dedicated research time in the field of mental health).

The conflict seems to be that, on one hand, HLI has im... (read more)

Habryka · 1y · 136

I thought the previous article by Charlotte Alter on sexual misconduct in EA was pretty misleading in a lot of ways, as the top comments have pointed out, since it omitted a lot of crucial context, primarily used examples from the fringes of the community, and omitted various enforcement actions that were taken against the people mentioned in the article, which I think overall produced an article that had some useful truths in it, but made it really quite hard for readers to come to a good map of what is actually going on with that kind of stuff in EA.

This article, in contrast, does not have, as far as I can tell, any major misrepresentations in it. I do not know the details about things like conversations between Will and Tara, of course, since I wasn't there, and I have a bit of a feeling there is some exaggeration in the quotes by Naia here, but having done my own investigation and having talked to many people about this, the facts and rough presentation of what happened here seem basically correct.

It still has many of the trappings of major newspaper articles, and I think it continues to not be amazingly well-optimized for people to come to a clear understanding of the details,... (read more)

In 2018, I collected data about several types of sexual harassment on the SSC survey, which I will report here to help inform the discussion. I'm going to simplify by assuming that only cis women are victims and only cis men are perpetrators, even though that's bad and wrong.

Women who identified as EA were less likely to report lifetime sexual harassment at work than other women, 18% vs. 20%. They were also less likely to report being sexually harassed outside of work, 57% vs. 61%.

Men who identified as EA were less likely to admit to sexually harassing people at work (2.1% vs. 2.9%) or outside of work (16.2% vs. 16.5%).

The sample was 270 non-EA women, 99 EA women, 4940 non-EA men, and 683 EA men. None of these results were statistically significant, although all of them trended in the direction of EAs experiencing less sexual harassment. 
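(To make the "not statistically significant" point concrete, here is a minimal sketch of the kind of check involved for the work-harassment comparison. The counts are reconstructed by rounding the reported percentages against the stated sample sizes, so they are approximations rather than the original survey responses.)

```python
# Rough significance check for the work-harassment comparison above.
# Counts are reconstructed by rounding the reported percentages
# (18% of 99 EA women, 20% of 270 non-EA women), so they are
# approximations rather than the original survey data.
from scipy.stats import chi2_contingency

ea_n, non_ea_n = 99, 270
ea_yes = round(0.18 * ea_n)           # ~18 reported harassment at work
non_ea_yes = round(0.20 * non_ea_n)   # 54 reported harassment at work

table = [
    [ea_yes, ea_n - ea_yes],
    [non_ea_yes, non_ea_n - non_ea_yes],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")  # p comes out well above 0.05
```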

This doesn't prove that EA environments have less harassment than the average environment, since it could be that EAs are biased to have less sexual harassment for other reasons, and whatever additional harassment they get in EA isn't enough to make up for it; the vast majority of EAs have the vast majority of interactions in non-EA environmen... (read more)

I thank you for apologizing publicly and loudly. I imagine that you must be in a really tough spot right now. 

I think I feel a bit conflicted on the way you presented this. 

I treat our trust in FTX and dealings with him as bureaucratic failures. Whatever measures we had in place to deal with risks like this weren't enough.

This specific post reads a bit to me like it's saying, "We have some blog posts showing that we said these behaviors are bad, and therefore you could trust both that we follow these things and that we encourage others to, even privately." I'd personally prefer it, in the future, if you wouldn't focus on the blog posts and quotes. I think they just act as very weak evidence, and using them this way makes it feel as though they were stronger than they are.

Almost every company has lots of public documents outlining their commitments to moral virtues. 

I feel pretty confident that you were ignorant of the fraud. I would like there to be more clarity about what sorts of concrete measures were in place to prevent situations like this, and what measures might change in the future to help make sure this doesn't happen again.

There might also be many other concrete things that could be don... (read more)

EA forum content might be declining in quality. Here are some possible mechanisms:

  1. Newer EAs have worse takes on average, because the current processes of recruitment and outreach produce a worse distribution than the old ones
  2. Newer EAs are too junior to have good takes yet. It's just that the growth rate has increased so there's a higher proportion of them.
  3. People who have better thoughts get hired at EA orgs and are too busy to post. There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.
  4. Controversial content, rather than good content, gets the most engagement.
  5. Although we want more object-level discussion, everyone can weigh in on meta/community stuff, whereas they only know about their own cause areas. Therefore community content, especially shallow criticism, gets upvoted more. There could be a similar effect for posts by well-known EA figures.
  6. Contests like the criticism contest decrease average quality, because the type of person who would enter a contest to win money on average has worse takes than the type of person who has genuine deep criticism. There were 232 posts for the criticism contest, and 158 for the Cause Explora
... (read more)

Since I started looking into this, you have:

... (read more)

Should we fund people for more years at a time? I've heard that various EA organisations and individuals with substantial track records still need to apply for funding one year at a time, because they are either refused longer-term funding or perceive that they will be.

For example, the LTFF page asks for applications to be "as few as possible", but clarifies that this means "established organizations once a year unless there is a significant reason for submitting multiple applications". Even the largest organisations seem to only receive OpenPhil funding every 2-4 years. For individuals, even if they are highly capable, ~12 months seems to be the norm.

Offering longer (2-5 year) grants would have some obvious benefits:

  • Grantees spend less time writing grant applications
  • Evaluators spend less time reviewing grant applications
  • Grantees plan their activities longer-term

The biggest benefit, though, I think, is that:

  • Grantees would have greater career security.

Job security is something people value immensely. This is especially true as you get older (something I've noticed tbh), and would be even more so for someone trying to raise kids. In the EA economy, many people get by on short-term gr... (read more)
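(As a rough illustration of the first two bullets above, here is a minimal sketch comparing recurring application overhead under different grant lengths. The hours-per-application figures are hypothetical assumptions chosen only for the comparison, not estimates from any real funder.)

```python
# Hypothetical illustration of how grant length affects recurring
# application/evaluation overhead. The hours-per-application figures are
# made-up assumptions for the sake of comparison, not data from any funder.
def overhead_hours_per_year(grant_length_years, applicant_hours=40, evaluator_hours=10):
    applications_per_year = 1 / grant_length_years
    return (applicant_hours + evaluator_hours) * applications_per_year

for years in (1, 3, 5):
    print(f"{years}-year grants: ~{overhead_hours_per_year(years):.0f} overhead hours/year")
```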

I personally have no stake in defending Conjecture (in fact, I have some questions about the CoEm agenda), but I do think there are a couple of points that feel misleading or wrong to me in your critique.

1. Confidence (meta point): I do not understand where the confidence with which you write the post (or at least how I read it) comes from. I've never worked at Conjecture (and presumably you didn't either) but even I can see that some of your critique is outdated or feels like a misrepresentation of their work to me (see below). For example, making recommendations such as "freezing the hiring of all junior people" or "alignment people should not join Conjecture" requires an extremely high bar of evidence in my opinion. I think it is totally reasonable for people who believe in the CoEm agenda to join Conjecture, and while Connor has a personality that might not be a great fit for everyone, I could totally imagine working with him productively. Furthermore, making a claim about how and when to hire usually requires a lot of context and depends on many factors, most of which an outsider probably can't judge.
Given that you state early on that you are an experienced member of ... (read more)

I'm not very compelled by this response.

It seems to me you have two points on the content of this critique. The first point:

I think it's bad to criticize labs that do hits-based research approaches for their early output (I also think this applies to your critique of Redwood) because the entire point is that you don't find a lot until you hit.

I'm pretty confused here. How exactly do you propose that funding decisions get made? If some random person says they are pursuing a hits-based approach to research, should EA funders be obligated to fund them?

Presumably you would want to say "the team will be good at hits-based research such that we can expect a future hit, for X, Y and Z reasons". I think you should actually say those X, Y and Z reasons so that the authors of the critique can engage with them; I assume that the authors are implicitly endorsing a claim like "there aren't any particularly strong reasons to expect Conjecture to do more impactful work in the future".

The second point:

Your statements about the VCs seem unjustified to me. How do you know they are not aligned? [...] I haven't talked to the VCs either, but I've at least asked people who work(ed) at Conjecture.

Hmm, it... (read more)

DMMF · 1y · 135

Thank you for sharing this. As a distinct matter, the specific way FTX failed also makes me more concerned about the viability of a certain type of mindset that seems somewhat common and normalized amongst some in the EA community.

  • holding the belief that by being very, very smart, you can work in areas where you have minimal experience and know better than others
  • having (experienced) adults in the room/adhering to formal compliance norms is overrated
  • understating the risks posed by conflict of interest issues
  • accepting ends justify the means type reasoning

I believe Sam's adherence to the above referenced beliefs played a critical role in FTX's story.  I don't think that any one of these beliefs is inherently problematic, but I have adjusted downwards against those who hold all of them.

While I agree with the substance of this comment to a great extent, I want to note that EA also has a problem of being much more willing to tolerate abstract criticism than concrete criticism.

If I singled out a specific person in EA and accused them of significant conflicts of interest or of being too unqualified and inexperienced to work on whatever they are currently working on, the reaction in the forum would be much more negative than it was to this comment.

If you really believe the issues raised in the comment are important, take it seriously when people raise these concerns in concrete cases.

This is Alex Cohen, GiveWell senior researcher, responding from GiveWell's EA Forum account.

Joel, Samuel and Michael — Thank you for the deep engagement on our deworming cost-effectiveness analysis.

We really appreciate you prodding us to think more about how to deal with any decay in benefits in our model, since it has the potential to meaningfully impact our funding recommendations.

We agree with HLI that there is some evidence for benefits of deworming declining over time and that this is an issue we haven’t given enough weight to in our analysis.

We’re extremely grateful to HLI for bringing this to our attention and think it will allow us to make better decisions on recommending funding to deworming going forward.

We would like to encourage more of this type of engagement with our research. We’re planning to announce prizes for criticism of our work in the future. When we do, we plan to give a retroactive prize to HLI.

We’re planning to do additional work to incorporate this feedback into an updated deworming cost-effectiveness estimate. In the meantime, we wanted to share our initial thoughts. At a high level:

  • We agree with HLI that there is some evidence for benefits of deworming
... (read more)

Hello Michael,

Thanks for your reply. In turn:

1: 

HLI has, in fact, put a lot of weight on the d = 1.72 Strongminds RCT. As table 2 shows, you give a weight of 13% to it - joint highest out of the 5 pieces of direct evidence. As there are ~45 studies in the meta-analytic results, this means this RCT is being given equal or (substantially) greater weight than any other study you include. For similar reasons, the Strongminds phase 2 trial is accorded the third highest weight out of all studies in the analysis.

HLI's analysis explains the rationale behind the weighting of "using an appraisal of its risk of bias and relevance to StrongMinds’ present core programme". Yet table 1A notes the quality of the 2020 RCT is 'unknown' - presumably because Strongminds has "only given the results and some supporting details of the RCT". I don't think it can be reasonable to assign the highest weight to an (as far as I can tell) unpublished, not-peer reviewed, unregistered study conducted by Strongminds on its own effectiveness reporting an astonishing effect size - before it has even been read in full. It should be dramatically downweighted or wholly discounted until then, rather than included a... (read more)
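(For a rough sense of scale on the 13% figure, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that the remaining weight were spread evenly over ~44 other studies; the actual HLI weights are not uniform.)

```python
# Illustrative arithmetic only: compares the 13% weight on the StrongMinds RCT
# with the average weight each remaining study would get if the rest of the
# weight were spread evenly over ~44 other studies. The real meta-analytic
# weights are not uniform, so this is just an order-of-magnitude comparison.
rct_weight = 0.13
n_other_studies = 44
avg_other_weight = (1 - rct_weight) / n_other_studies  # ~0.02

print(f"average weight per other study: {avg_other_weight:.3f}")
print(f"RCT weight / average other weight: {rct_weight / avg_other_weight:.1f}x")  # ~6.6x
```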

I previously gave a fair bit of feedback to this document. I wanted to quickly give my take on a few things.

Overall, I found the analysis interesting and useful. However, I overall have a somewhat different take than Nuno did.

On OP: 
-  Aaron Gertler / OP were given a previous version of this that was less carefully worded. To my surprise, he recommended going forward with publishing it, for the sake of community discourse; I'm really thankful.
- This analysis didn’t get me to change my mind much about Open Philanthropy. I thought fairly highly of them before and after, and expect that many others who have been around would think similarly. I think they’re a fair bit away from being an “idealized utilitarian agent” (in part because they explicitly claim not to be), but still much better than most charitable foundations and the like.

On this particular issue: 
- My guess is that in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public. It’s very common in large organizations for compromises to be made for various political or social reasons, for exampl... (read more)

I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.

In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.

Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!


You claim that EA needs to...

diversify funding sources by breaking up big funding bodies 

Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in th... (read more)

Thanks for writing this! I'd been putting something together, but this is much more thorough.

Here are the parts of my draft that I think still add something:


I'm interested in two overlapping questions:

  1. Should Ben have delayed to evaluate NL's evidence?
  2. Was Nonlinear wrong to threaten to sue?

While I've previously advocated giving friendly organizations a chance to review criticism and prepare a response in advance, primarily as a question of politeness, that's not the issue here. As I commented on the original post, the norm I've been pushing is only intended for cases where you have a neutral or better relationship with the organization, and not situations like this one where there are allegations of mistreatment or you don't trust them to behave cooperatively. The question here instead is, how do you ensure the accusations you're signal-boosting are true?

Here's my understanding of the timeline of 'adversarial' fact checking before publication: timeline. Three key bits:

  • LC first shared the overview of claims 3d before posting.
  • LC first shared the draft 21hr before posting, which included additional accusations
  • NL responded to both by asking for a week to gather evidence that they claime
... (read more)

For those who agree with this post (I at least agree with the author's claim if you replace most with more), I encourage you to think about what you personally can do about it.

I think EAs are far too willing to donate to traditional global health charities, not due to them being the most impactful, but because they feel the best to donate to. When I give to AMF I know I'm a good person who had an impact! But this logic is exactly what EA was founded to avoid.

I can't speak for animal welfare organizations outside of EA, but at least for the ones that have come out of Effective Altruism, they tell me that funding is a major issue. There just aren't that many people willing to make a risky donation to a new charity working on fish welfare, for example.

Those who would be risk-willing enough to give to eccentric animal welfare or global health interventions tend to also be risk-willing enough with their donations to instead give to orgs working on existential risks. I'm not claiming this is incorrect of them to do, but this does mean that there is a dearth of funding for high-risk interventions in the neartermist space.

I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started if not for this. If you, like me, think animal welfare is incredibly important and previously have donated to Givewell's top charities, perhaps consider giving animal welfare a try!

Toby_Ord · 7mo · 133

Nick is being so characteristically modest in his descriptions of his role here. He was involved in EA right from the start — one of the members of Giving What We Can at launch in 2009 — and he soon started running our first international chapter at Rutgers, before becoming our director of research. He contributed greatly to the early theory of effective altruism and, along with Will and me, was one of the three founding trustees of the Centre for Effective Altruism. I had the great pleasure of working with him in person for a while at Oxford University, before he moved back to the States to join Open Philanthropy. He was always thoughtful, modest, and kind. I'm excited to see what he does next.

Larks · 1y · 133

Thanks for sharing this; I especially appreciate the transparency that she resigned because of strategic disagreements.

I realize I am quite repetitive about this, but I really think EV/CEA would benefit from being more transparent with the community, especially about simple issues like 'who is currently in charge'. In this case I noticed the change on the website 18 days ago, and the actual handover may(?) have taken place prior to that point. My impression is that most normal organizations with public stakeholders announce leadership changes more or less immediately and I don't understand why EV doesn't.

I'm a POC, and I've been recruited by multiple AI-focused longtermist organizations (in both leadership and research capacities) but did not join for personal reasons. I've participated in online longtermist discussions since the 1990s, and AFAICT participants in those discussions have always skewed white. Specifically I don't know anyone else of Asian descent (like myself) who was a frequent participant in longtermist discussions even as of 10 years ago. This has not been a problem or issue for me personally – I guess different groups participate at different rates because they tend to have different philosophies and interests, and I've never faced any racism or discrimination in longtermist spaces or had my ideas taken less seriously for not being white. I'm actually more worried about organizations setting hiring goals for themselves that assume that everyone does have the same philosophies and interests, potentially leading to pathological policies down the line.

Lizka · 1y · 132

I’d like to chime in here. I can see how you might think that there’s a coverup or the like, but the Online team (primarily Ben Clifford and I, with significant amounts of input from JP and others on the team) made the decision to run this test based on feedback we’d been hearing for a long time from a variety of people, and discussions we’d had internally (also for a long time). And I didn’t know about Owen’s actions or resignation until today. (Edited to add: no one on the Online team knew about this when we were deciding to go forward with the test.)

We do think it’s important for people in EA to hear this news, and we’re talking about how we might make sure that happens. I know I plan on sharing one or both of these posts in the upcoming Digest, and we expect one or both of the posts to stay at the top of the Community page for at least a few days. If the posts drift down, we’ll probably pin one somehow. We’re considering moving them out of the section, but we’re conflicted; we do endorse the separation of Community and other content, and keeping the test going, and moving them out would violate this. We’ll keep talking about it, but I figured I would let you know what our thoughts are at the moment. 

Here's a post with me asking the question flat out: Why hasn't EA done an SBF investigation and postmortem?

This seems like an incredibly obvious first step from my perspective, not something I'd have expected a community like EA to be dragging its heels on years after the fact.

We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response? I feel like I'm in crazytown; what the heck is going on?

Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:

  • They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
    • thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
    • this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
    • “professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
    • legal counsels were generally strongly advising people against talking about FTX stuff in general;
    • various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings that had Chatham House Rules);
    • it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
    • a
... (read more)

I haven't heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I'll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.

Obviously this argument may not be compelling compared to what an actual proponent would say, and I'd guess I'm missing at least one key consideration here, so treat this as a mere conversation-starter.


Hypothetical EA: Why isn't EV's 2023 investigation enough? You want us to investigate; well, we investigated.

Rob: That investigation was only investigating legal risk to EV. Everything I've read (and everything I've heard privately) suggests that it wasn't at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.

I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there... (read more)

I really want to be in favor of having a less centralized media policy, and do think some level of reform is in-order, but I also think "don't talk to journalists" is just actually a good and healthy community norm in a similar way that "don't drink too much" and "don't smoke" are good community norms, in the sense that I think most journalists are indeed traps, and I think it's rarely in the self-interest of someone to talk to journalists. 

Like, the relationship I want to have to media is not "only the sanctioned leadership can talk to media", but more "if you talk to media, expect that you might hurt yourself, and maybe some of the people around you". 

I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.

So, overall, I am in favor of some kind of change to our media policy, but also continue to think that the honest and true advice for talking to media is "don't, unless you are willing to put a lot of effort into this". 

like Bostrom's influential Superintelligence - Eliezer with the serial numbers filed off and an Oxford logo added

It's not accurate that the key ideas of Superintelligence came to Bostrom from Eliezer, who originated them. Rather, at least some of the main ideas came to Eliezer from Nick. For instance, in one message from Nick to Eliezer on the Extropians mailing list, dated to Dec 6th 1998, inline quotations show Eliezer arguing that it would be good to allow a superintelligent AI system to choose its own morality. Nick responds that it's possible for an AI system to be highly intelligent without being motivated to act morally. In other words, Nick explains to Eliezer an early version of the orthogonality thesis.

Nick was not lagging behind Eliezer on evaluating the ideal timing of a singularity, either - the same thread reveals that they both had some grasp of the issue. Nick said that the fact that 150,000 people die per day must be contextualised against "the total number of sentiences that have died or may come to live", foreshadowing his piece on Astronomical Waste, which would be published five years later. Eliezer said that having waited billions of years, the probability of a... (read more)

While SBF presents himself here as incompetent rather than malicious and fraudulent, his account here contradicts previous reporting in (at least) two nontrivial ways.

  • It was reported that Caroline Ellison, CEO of Alameda, admitted to Alameda employees that a deliberate decision was made to dip into FTX customer funds to cover Alameda's insolvency.
  • It was reported that a backdoor had been implemented into FTX's internal accounting systems to allow SBF to alter financial records without triggering alerts.
Lizka · 1y · Moderator Comment · 130

A quick note from a moderator (me) about discussions about recent events related to FTX: 

  • It’s really important for us to be able to discuss all perspectives on the situation with an open mind and without censoring any perspectives. 
  • And also: 
    • Our discussion norms are still important — we won’t suspend them for this topic. 
    • It’s a stressful topic for many involved, so people might react more emotionally than they usually do.
    • The situation seems very unclear and likely to evolve, so I expect that we’ll see conclusions made from partial information that will turn out to be false fairly soon. 
      • That’s ok (errors happen), but…
      • We should be aware that this is the case, caveat statements appropriately, avoid deferring or updating too much, and be prepared to say “I was wrong here.” 
  • So I’d like to remind everyone: 
    • Please don’t downvote comments simply or primarily because you disagree with them (that’s what “disagree-voting” is for!). You can downvote if you think a comment is particularly low-quality, actively harmful, or seriously breaks discussion norms (if it’s the latter, consider flagging it to the moderation team). 
    • Please keep an open and gener
... (read more)

I think we should think carefully about the norm being set by the comments here.

This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.

But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.

If you value transparency in EA and want to see more of it (and you're not a donor to the LTF fund), it seems to me like you should chill out here. That doesn't mean don't question the grants, but it does mean you should:

  • Apply even more principle of charity than usual
  • Take time to phrase your question in the way that's easiest to answer
  • Apply some filter and don't ask unimportant questions
  • Use a tone that minimizes stress for the person you're questioning

On talking about this publicly

A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.” 

Shortly after the collapse, I drafted a blog post and responses to comments on the Forum. I was also getting a lot of media requests, and I was somewhat sympathetic to the idea of doing podcasts about the collapse — defending EA in the face of the criticism it was getting. My personal legal advice was very opposed to speaking publicly, for reasons I didn’t wholly understand; the reasons were based on a general principle rather than anything to do with me, as they’ve seen a lot of people talk publicly about ongoing cases and it’s gone badly for them, in a variety of ways. (As I’ve learned more, I’ve come to see that this view has a lot of m... (read more)

I really appreciate the time people have taken to engage with this post (and actually hope the attention cost hasn’t been too significant). I decided to write some post-discussion reflections on what I think this post got right and wrong.

The reflections became unreasonably long - and almost certainly should be edited down - but I’m posting them here in a hopefully skim-friendly format. They cover what I see as some mistakes with the post, first, and then cover some views I stand by.

Things I would do differently in a second version of the post:

1. I would either drop the overall claim about how much people should defer to Yudkowsky — or defend it more explicitly

At the start of the post, I highlight the two obvious reasons to give Yudkowsky's risk estimates a lot of weight: (a) he's probably thought more about the topic than anyone else and (b) he developed many of the initial AI risk arguments. I acknowledge that many people, justifiably, treat these as important factors when (explicitly or implicitly) deciding how much to defer to Yudkowsky.

Then the post gives some evidence that, at each stage of his career, Yudkowsky has made a dramatic, seemingly overconfident prediction about tec... (read more)

This post uses an alarmist tone to trigger emotions ("the vultures are circling"). I'd like to see more light and less heat. How common is this? What's the evidence?

People have strong aversions to cheating and corruption, which is largely a good thing - but it can also lead to conversations on such issues getting overly emotional in a way that's not helpful.

I might be in the minority here but I liked the style this post was written in, emotive language and all. It was flowery language but that made it fun to read, and I did not find it to be alarmist (e.g. it clearly says “this problem has yet to become an actual problem”).

And more importantly, I think the EA Forum is already a daunting place and it is hard enough for newcomers to post here without having to face everyone upvoting criticisms of their tone / writing style / post title. It is not the perfect post (I think there is a very valid critique in what Stefan says that the post could have benefited from linking to some examples / evidence) but not everything here needs to be in the perfect EA-speak. Especially stuff from newcomers.

So welcome, CitizenTen. Nice to have you here and to hear your views. I want to say I enjoyed reading the post (don’t fully agree tho) and thank you for it. :-)

I've confirmed with a commenter here, who left a comment positive of nonlinear, that they were asked to leave that comment by nonlinear. I think this is low-integrity behaviour on the part of nonlinear, and an example of brigading. I would appreciate the forum team looking into this.

Edit: I have been asked to clarify that they were encouraged to comment by nonlinear, rather than asked to comment positively (or anything in particular).

Ruby · 7mo · 120

I think asking your friends to vouch for you is quite possibly okay, but that people should disclose there was a request.

It's different evidence between "people who know you who saw this felt motivated to share their perspective" vs "people showed up because it was requested". 

I'm not sure that should count as brigading or unethical in these circumstances as long as they didn't ask people to vote a particular way.

Remember that even though Ben is only a single author, he spent a bunch of time gathering negative information from various sources[1]. I think that in order to be fair, we need to allow them to ask people to present the other side of the story. Also consider: if Kat or Emerson had posted a comment containing a bunch of positive comments from people, then I expect that everyone would be questioning why those people hadn't made the comments themselves.

I think it might also be helpful to think about it from the opposite perspective. Would anyone accuse me of brigading if I theoretically knew other people who had negative experiences with Nonlinear and suggested that they might want to chime in?

If not, then we've created an asymmetry where people are allowed to do things in terms of criticism, but not in terms of defense, which seems like a mistake to me.

That said, it is useful for us to know that some of these comments were solicited.

Disclaimer: I formerly interned at Nonlinear. I don't want my meta-level stance to be taken as support of the actio... (read more)

Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.

Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.

I think you contributed something important, and wish you had been met with more support. 

It seems valuable to separate "support for the action of writing the paper" from "support for the arguments in the paper". My read is that the authors had a lot of the former, but less of the latter.

From the original post:

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn’t dominant. 

While "invalid" seems like too strong a word for a critic to use (and I'd be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper. 

Still, to the degree that ther... (read more)

Linch · 7mo · 127

I think people are overcomplicating this. You should generally follow the law, but to shield against the risk that you are being a stickler in unreasonable ways (trying to avoid "3 felonies a day"), you can just imagine whether uninvolved peers hearing about your actions would think the situation is obviously okay. Some potential ways to think about such peer groups:

  1. What laws people in the country you live in think are absolutely normal and commonplace to break. 
    1. For example, bribing police officers is generally illegal, but iiuc in some countries approximately everybody bribes police officers at traffic stops
  2. What laws people in your home country think are illegitimate and thus worth breaking
    1. For example some countries ban homosexuality, but your typical American would not consider it blameworthy to be gay.
  3. What laws other EAs (not affiliated in any way with your organization) think are okay to break.
    1. So far, candidates people gave include ag-gag laws and taking stimulants for undiagnosed ADHD.
      1. FWIW I'm not necessarily convinced that the majority of EAs agree here; I'd like to see polls.
  4. What laws your non-EA friends think are totally okay to break
    1. For example, most college-educated Mil
... (read more)

Rohit - if you don't believe in epistemic integrity regarding controversial views that are socially stigmatized, you don't actually believe in epistemic integrity. 

You threw in some empirical claims about intelligence research, e.g. 'There's plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders.'

OK. Ask yourself the standard epistemic integrity checks: What evidence would convince you to change your mind about these claims? Can you steel-man the opposite position? Are you applying the scout mindset to this issue? What were your Bayesian priors about this issue, and why did you have those priors, and what would update you? 

It's OK for EAs to see a highly controversial area (like intelligence research), to acknowledge that learning more about it might be a socially handicapping infohazard, and to make a strategic decision not to touch the issue with a 10-foot-pole -- i.e. to learn nothing more about it, to say nothing about it, and if asked about it, to respond 'I haven't studied thi... (read more)

TL;DR

Lots of good critical points in this post. However I would want readers to note that:

  • None of the criticisms in the post really pertain to the core elements of the bill. The theory of change for the bill is: government doesn’t make long term plans > tell government to make long term plans (i.e. set a long term vision and track progress towards it) > then government will make long term plans. This approach has had research and thought put into it.
  • This draft of the bill makes much more sense when you see it as a campaigning tool, a showcase of ideas. This is a private members bill (PMB). PMBs are primarily campaigning techniques to build support and spark debate. It will not be passed through parliament (in its current form).

 

 

INTRODUCTION

Thank you for posting. (And thank you for sharing a draft of your post with me before posting so I could start drafting this reply).

I have been the main person from the EA community working on the bill campaign. I have never been in the driving seat for the bill but I have had some influence over it.

I agree with many of the points raised. At a high level I agree for example that there is no "compelling evidence" that this bill wo... (read more)

I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.

Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for there to be more capacity, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.  

It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.

Some people have asked me what I’m currently working on, and what my plans are. This year has been quite spread over... (read more)

Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.

You definitely changed my life by co-creating Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".

To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 80,000 Hours career coaching instead, which I did.

You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn't feel "depressed enough" (I definitely was). I felt like if you were taking them, and you seemed normal / fine / not clearly and obviously depressed all the time yet benefitted from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.

You're now an inspiration for me in terms of resilience. An impact journey isn't always up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you -- but you persevere, smile, and continue to show your face. I like that and want to be like that too.

Ben_West · 7mo · 126

Still, it’s hard to see how tweaking EA can lead to a product that we and others be excited about growing. Especially considering that we have the excellent option of just talking directly about the issues that matter to us, and doing field-building around those ideas... This would be a relatively clean slate, allowing us to do more (as outlined in 11), to discourage RB, and stop bad actors.

Do you remember how animal rights was pre-EA? At the first Animal Rights National Conference I went to, Ingrid Newkirk dedicated her keynote address to criticizing scope sensitivity and arguing that animal rights activists should not focus on tactics which help more animals. And my understanding is that EA deserves a lot of the credit for removing and preventing bad actors in the animal rights space (e.g. by making funding conditional on organizations following certain HR practices).

It's useful to identify ways to improve EA, but we have to be honest that imaginary alternatives largely seem better because they are imaginary, and actual realistic alternatives also have lots of flaws.

(Of course, it's possible that those flawed alternatives are still better than EA, but figuring this out requires act... (read more)

Why are you doing critiques instead of evaluations? This seems like you're deliberately only looking for bad things instead of trying to do a balanced investigation into the impact of an organization. 

This seems like bad epistemics and will likely lead to a ton of not necessarily warranted damage to orgs that are trying to do extremely important work. Not commenting on the content of your criticisms of Redwood or Conjecture, but your process. 

Knowing there's a group of anonymous people who are explicitly looking to find fault with orgs feels like an instance of EA culture rewarding criticism to the detriment of the community as a whole. Generally, I can see that you're trying to do good, but your approach makes me feel like the EA community is hostile and makes me not want to engage with it. 

I don’t see any way you could meaningfully “address” the work/social overlap without trying to get people not to date, live with or befriend people they otherwise would have dated, lived with, or befriended. And if you put it in those terms, it seems messed up, right?

I don't actually think that's necessarily messed up? That sometimes your role conflicts with a relationship you'd like to have is unfortunate, but not really avoidable:

  • A company telling its managers that they can't date their reports.

  • A person telling their partner that they can't date other people.

  • A person telling their partner that they can't date a specific other person.

  • A school telling professors they can't date their students.

  • A charity telling their donor services staff that they can't date major donors.

The person has the option of giving up their role (the manager and report can work with HR to see if either can change roles to remove the conflict, the poly partner can dump the mono one, etc.), but the role's gatekeeper saying you can't both keep the role and date the person seems fine in many cases?

The harm isn’t in the harshness or softness of the punishment - it’s friendships nipped in t

... (read more)

I'm analogizing Peter Singer and classical Givewell-style EA to Novik.

What about the parts of EA that isn't Peter Singer and classical GiveWell-style EA? If those parts of EA were somewhat responsible, would it be reasonable to call that EA as well?

I don't think the analogy is helpful. Naomi Novik presumably does not claim to emphasize the importance of understanding tail risks. Naomi presumably didn't meet Caroline and encourage her to earn a lot of money so she can donate to fantasy authors, nor did Caroline say "I'm earning all of this money so I can fund Naomi Novik's fantasy writing". Naomi Novik did not have Caroline on her website as a success story of "this is why you should earn money to buy fantasy books or support other fantasy writers".  Naomi didn't have a "Fantasy writer's fund" with the FTX brand on it. 

I think it's reasonable to preach patience if you think people are jumping too quickly to blame themselves. I think it's reasonable to think that EA is actually less responsible than the current state of discourse on the forum. And I'm not making a claim about the extent EA is in fact responsible for the events. But the analogy as written is pretty poor, and... (read more)

Protests are by nature adversarial and high-variance actions prone to creating backlash, so I think that if you're going to be organizing them, you need to be careful to actually convey the right message (and in particular, way more careful than you need to be in non-adversarial environments—e.g. if news media pick up on this, they're likely going to twist your words). I don't think this post is very careful on that axis. In particular, two things I think are important to change:

"Meta’s frontier AI models are fundamentally unsafe."

I disagree; the current models are not dangerous on anywhere near the level that most AI safety people are concerned about. Since "current models are not dangerous yet" is one of the main objections people have to prioritizing AI safety, it seems really important to be clearer about what you mean by "safe" so that it doesn't sound like the protest is about language models saying bad things, etc.

Suggestion: be very clear that you're protesting the policy that Meta has of releasing model weights because of future capabilities that models could have, rather than the previous decisions they made of releasing model weights.

"Stop free-riding on the goodwill of ... (read more)

spencerg · 7mo

Hi all, I wanted to chime in because I have had conversations relevant to this post with just about all involved parties at various points. I've spoken to "Alice" (both while she worked at nonlinear and afterward), Kat (throughout the period when the events in the post were alleged to have happened and afterward), Emerson, Drew, and (recently) the author Ben, as well as, to a much lesser extent, "Chloe" (when she worked at nonlinear). I am (to my knowledge) on friendly terms with everyone mentioned (by name or pseudonym) in this post. I wish well for everyone involved. I also want the truth to be known, whatever the truth is.

I was sent a nearly final draft of this post yesterday (Wednesday), once by Ben and once by another person mentioned in the post.
 

I want to say that I find this post extremely strange for the following reasons:
 

(1) The nearly final draft of this post that I was given yesterday had factual inaccuracies that (in my opinion and based on my understanding of the facts) are very serious despite ~150 hours being spent on this investigation. This makes it harder for me to take at face value the parts of the post that I have no knowledge of. ... (read more)

Habryka · 7mo

(Copying over the same response I posted over on LW)

I don't have all the context of Ben's investigation here, but as someone who has done investigations like this in the past, here are some thoughts on why I don't feel super sympathetic to requests to delay publication: 

In this case, it seems to me that there is a large and substantial threat of retaliation. My guess is Ben's sources were worried about Emerson hiring stalkers, calling their family, trying to get them fired from their job, or threatening legal action. Having things be out in the public can provide a defense because it is much easier to ask for help if the conflict happens in the open. 

As a concrete example, Emerson has just sent me an email saying: 

Given the irreversible damage that would occur by publishing, it simply is inexcusable to not give us a bit of time to correct the libelous falsehoods in this document, and if published as is we intend to pursue legal action for libel against Ben Pace personally and Lightcone for the maximum damages permitted by law. The legal case is unambiguous and publishing it now would both be unethical and gross negligence, causing irreversible damage.

For the record, ... (read more)

Adding some more data from my own experience last year.

Personally, I'm glad about some aspects of it and struggled with others, and there are some things I wish I had done differently, at least in hindsight. But here I just mean to quickly provide data I have collected anyway in a 'neutral' way, without implying anything about any particular application.

Total time I spent on 'career change' in 2018: at least 220h, of which at least about 101h were for specific applications. (The rest were things like: researching job and PhD opportunities; interviewing people about their jobs and PhD programs; asking people I've worked with for input and feedback; reflection before I decided in January to quit my previous job at the EA Foundation by April.) This includes neither the week I spent in San Francisco to attend EAG SF (during which I was able to do little other work) nor the 250h of self-study that seems robustly useful but which I might not have done otherwise. (Nor the 6 full weeks plus about 20h afterwards I spent doing an internship at an EA org, which overall I'm glad I did but might not have done otherwise.)

  • Open Phil Research Analyst - rejected af
... (read more)

One thing that might be worth noting: I was only able to invest that many resources because of things like (i) having had an initial runway of more than $10,000 (a significant fraction of which I basically 'inherited' / was given to me for things like academic excellence that weren't very effortful for me), (ii) having a good relationship with my sufficiently well-off parents such that moving back in with them was always a safe backup option, and (iii) having access to various other forms of social support (that came with real costs for several underemployed or otherwise struggling people in my network).

I do think current conditions mean that we 'lose' more people in less comfortable positions than we otherwise would.

This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role. 

Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next decade, where what would have been centuries of technological and intellectual progress on a business-as-usual trajectory occur over the course of just months or years.

Most effort to date, from those worried by an intelligence explosion, has been on ensuring that AI systems are aligned: that they do what their designers intend them to do, at least closely enough that they don’t cause catastrophically bad outcomes. 

But even if we make sufficient progress on alignment, humanity will have to make, or fail to make, many hard-to-reverse decisions with important and long-lasting consequences. I call these decisions Grand C... (read more)

The vast majority of people should probably be withholding judgment and getting back to work for the next week until Nonlinear can respond.

I'm contributing to it now, but it's a bit of a shame that this post has 183 comments at the time of writing when it's not even a day old and isn't even on the front page. EA seems drawn to drama and controversy, and it would accomplish its goals much better if it were more able to focus on more substantive posts.

Will basically threatened Tara,

I would VERY much like to get more information on this (though I understand if Naia feels she can't say more). This sounds really, really bad, but also like a lot turns on exactly how far 'basically threatened' is from 'threatened' without a qualifier.

Based on conversations with people at the time, it seems plausible to me that this is true. However, this is not as serious a concern as you might think: IMHO it was reasonable to consider both SBF and Tara highly untrustworthy at the time. Will trusted SBF too much, but his skepticism of Tara seems justified. Tara's hedge fund suffered a major loss later, and I heard she showed low integrity in communicating with stakeholders about the loss.

Relevant quote from the article:

“He was treating it like a ‘he said-she said,’ even though every other long-time EA involved had left because of the same concerns,” Bouscal adds.

Habryka · 1y

The accusations are public and have already received substantial exposure. TIME itself seems to be leveraging this request for confidentiality in order to paint an inaccurate picture of what is actually going on and also making it substantially harder for people to orient towards the actual potential sources of risk in the surrounding community. 

I don't currently see a strong argument for not linking to evidence that I was easily able to piece together publicly, and which, presumably, the accused can also figure out. The cost here is really only borne by the people who lack context, who I feel are being substantially misled by the absence of information here.

I'll by default repost the links and my guess at the identity of the person in question in 24 hours, unless some forum admin objects or someone makes a decent counterargument.

Habryka · 1y

DM conversation I had with Eliezer in response to this post. Since it was a private convo and I was writing quickly, I had somewhat exaggerated in a few places, which I've now indicated with edits.

Habryka

Hmm, I do feel like I maybe want to have some kind of public debate about whether indeed we could have noticed that a bunch of stuff about FTX was noticeable, and whether we have some substantial blame to carry. 

Like, to be clear, I think the vast majority of EAs had little they could have or should have done here. But I think that I, and a bunch of people in the EA leadership, had the ability to actually do something about this. 

I sent emails in which I warned people about SBF. I had messages drafted but never sent that seem to me like, if I had sent them, they would have actually caused people to realize a bunch of inconsistencies in Sam's story. I had sat my whole team down, sworn them to secrecy, and told them about various pretty clearly illegal things that I heard Sam had done [sadly all unconfirmed, shared with requests for confidentiality, and only as rumors] that convinced me that we should avoid doing business with him as much as possible (this was when we were considering whethe

... (read more)

Bill Gates just endorsed GiveWell!

I don't know how much we should update on this, but I'm now personally a bit less concerned about the "self-recommending" issues of EA resources being mostly recommended by people in the EA social community.[1]

I think this is a good sign for the effective giving ecosystem, and will make my relatives much less worried about how I spend my money.

[1] Not that I was super concerned after digging deeper into things in the past year, but I remember being really concerned about it ~2 years ago, and most people don't have that much time to look into things.

>Since then, all the major actors in effective altruism’s global health and wellbeing space seem to have come around to it (e.g., see these comments by GiveWell, Founders Pledge, Charity Entrepreneurship, GWWC, James Snowden).

I don't think this is an accurate representation of the post linked to under my name, which was largely critical.

lilly · 1y

In light of this discussion about whether people would find this article alienating, I sent it to four very smart/reasonable friends who aren't involved in EA, don't work on AI, and don't live in the Bay Area (definitely not representative of TIME readers, but maybe representative of the kind of people EAs want to reach). Given I don't work on AI and have only ever discussed AI risk with one of them, I don't think social desirability bias played much of a role. I also ran this comment by them after we discussed. Here's a summary of their reactions:

Friend 1: Says it's hard for them to understand why AI would want to kill everyone, but acknowledges that experts know much more about this than they do and takes seriously that experts believe this is a real possibility. Given this, they think it makes sense to err on the side of caution and drastically slow down AI development to get the right safety measures in place.

Friend 2: Says it's intuitive that AI being super powerful, not well understood, and rapidly developing is a dangerous combination. Given this, they think it makes sense to implement safeguards. But they found the article overwrought, especially given missing links in the argumen... (read more)

Thanks for everyone's contributions. I am learning a lot. I see that the author made significant mistakes and am glad he is taking action to correct them and that the community is taking them seriously, but I want to make a small comment on the sentence "She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experience." I don't know enough about the specific relationship in the post to comment on it directly, but felt it could describe enough dynamics that it could use a diverse array of perspectives from women in the structural positions described.

I want to encourage other women in the early stages of their careers, like myself, to continue striving to overcome shyness. I don't think it's too much to expect us to be honest if we dislike a higher-status man flirting with us who doesn't have direct power over us, or if we dislike any other thing they do. I hope this post encourages shy, lower-status women to feel that they would be heard if they were assertive about behaviors they don't like, and that one way of making the behaviors stop could be to be more direct.

I also think in general the Ask Culture norm prevalent in EA is very ... (read more)

[EDIT: I was assuming from the content of the conversation that Sam and Kelsey had some preexisting social connection that made a "talking to a friend" interpretation reasonable. From Kelsey's tweets people linked elsewhere in this thread, it sounds like they didn't, and all their recent interactions had been around her writing about him as a journalist. I think that makes the ethics much less conflicted.]

I'm conflicted on the ethics of publishing this conversation. I read this as Sam talking to Kelsey this way because he thought he was talking casually with a friend in her personal capacity. And while the normal journalistic ethic is something like "things are on the record unless we agree otherwise", that's only true for professional conversations, right? Like, if Kelsey were talking with a housemate over dinner and then that ended up in a Vox article, I would expect everyone to see that as unfair to the housemate? Surely the place you end up isn't "journalists can't have honest friendships", right? Perhaps Kelsey doesn't think of herself as Sam's friend, but I can't see how Kelsey could have gone through that conversation thinking "Sam thinks he's talking to me as a journalist".

On the other hand, Sam's behavior has been harmful enough that I could see an argument that he doesn't deserve this level of consideration, and falling back on a very technical reading of journalistic ethics is ok?

Copying what I posted in the LW thread: 
Sam has since tweeted "25) Last night I talked to a friend of mine. They published my messages. Those were not intended to be public, but I guess they are now."

His claims are hard to believe. Kelsey is very well-known as a journalist in EA circles. She says she interviewed him for a piece in May. Before Sam's tweet, she made a point of saying that she avoids secretly pulling "but I never said it would be off-the-record, you just asked for that" shenanigans. She confirmed the conversation with an email from her work account. She disputes the "friend" claim, and says they've never had any communication on any platform she can find, other than the aforementioned interview.

The only explanations that make sense to me are:

  • Sam expected Kelsey's coverage to be more favorable and is now regretting his conversation
  • Sam has been under so much stress that even the incredibly obvious fact that this was a professional interview was something he failed to realize
  • Sam is just lying here, perhaps after hearing from his lawyers about how dumb the interview was 

I'm honestly more than a bit surprised to see there being doubts on the propriety of publishing this. Like on the facts that Kelsey gives, it seems obvious that their relationship is journalist-subject (particularly given how experienced SBF is with the press). But even if you were to assume that they had a more casual social relationship than is being disclosed (which I do not), if you just blew up your company in a (likely) criminal episode that is the most damaging and public event in the history of the social movement you're a part of, and your casual friend the journalist just wants to ask you a series of questions over DM, the idea that you have an expectation of privacy (without your ever trying to clarify that the conversation is private) does not seem very compelling to me. 

Like, your therapist/executive coach just gave an interview on the record to the New York Times. You are front page news around the world. You know your statements are newsworthy. Why is the baseline here "oh this is just a conversation between friends?" (Particularly where one of the parties is like "no we are totally not friends")

I don't mean for my tone to be too harsh here, but I think this article is clearly in the public interest and I really just don't see the logic for not publishing it. 

I work (indirectly) in financial risk management. Paying special attention to particular categories of risk - like romantic relationships - is very fundamental to risk management. It is not that institutions are faced with a binary choice of 'manage risk' or 'don't manage risk', where people in romantic relationships are 'managed' and everyone else is 'not'. Risk management is a spectrum, and there are good reasons to think that people with both romantic and financial entanglements are higher risk than those with financial entanglements only. For example:

  1. Romantic relationships inspire particularly strong feelings, not usually characterising financial relationships. People in romantic relationships will take risks on each other's behalf that people in financial relationships will not. We should be equally worried about familial relationships, which also inspire very strong feelings.

  2. Romantic relationships inspire different feelings from financial relationships. Whereas with a business partner you might be tempted to act badly to make money, with a romantic partner you might be tempted to act badly for many other reasons. For example, to make your partner feel good, or to spare your

... (read more)

I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:

Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that “aid doesn’t work.” There are many good people in aid working hard on the ground, often making tough calls as they weigh benefits and costs. Giving money to aid can be admirable too—doctors, after all, still prescribe drugs with known side effects. Yet what no one in aid should say, I came to think, is that all they’re doing is improving poor people’s lives.

... This expert tried to persuade Ord that aid was much more complex than “pills improve lives.” Over dinner I pressed Ord on these points—in fact I harangued him, out of frustration and from the shame I felt at my younger self. Early on in the conversation, he developed what I’ve come to think of as “the EA glaze.”... Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.”

Putting aside the implicit status games and weird psychological projectio... (read more)

To put my money where my mouth is, I will be cutting my salary back to "minimum wage" in October.

I think Abie Rohrig and the broader team have been crushing it with the launch of What We Owe The Future. So, so much media coverage, and there are even posters popping up in tube stations across London!

Hi, thank you for starting this conversation! I am an EA outsider, so I hope my anecdata is relevant to the topic. (This is my first post on the forums.) I found my way to this post during an EA rabbit hole after signing up for the "Intro to EA" Virtual Program.

To provide some context, I heard about EA a few years ago from my significant other. I was/am very receptive to EA principles and spent several weeks browsing through various EA resources/material after we first met. However, EA remained in my periphery for around three years until I committed to giving EA a fair shake several weeks ago. This is why I decided to sign up for the VP.

I'm mid-career instead of enrolled in university, so my perspective is not wholly within the scope of the original post. However, I like to think that I have many qualities the EA community would like to attract:

  • I (dramatically) changed careers to pursue a role with a more significant positive impact and continue to explore how I can apply myself to do the "most good".
  • I'm well-educated (1 bachelor's degree & 2 master's degrees)
  • As a scientist for many years, I value evidence-based decision-making and rationalit
... (read more)

This post doesn't seem screamingly urgent. Why didn't you have the chance to share a draft with ACE?

It seems like there are several points here where clarification from ACE would be useful, even if the bulk of your complaints stand.

Hi Will, thanks for your comment.

The idea of sending a draft to ACE didn't occur to me until I was nearly finished writing the post. I didn't like the idea of dwelling on the post for much longer, especially given some time commitments I have in the coming weeks.

Though to be honest, I don't think this reason is very good, and upon reflection I suspect I should have sent a draft to ACE before posting to clear up any misunderstandings.

Having written a similar post in the past, I can say the amount of time they take to write is huge. Hypatia seems to have done a very good job expressing the facts in a way which communicates why they are so concerning while avoiding hyperbole. While giving organisations a chance to read a draft can be a good practice to reduce the risk of basic factual mistakes (and one I try to follow generally), it's not obligatory. Note that we generally do not afford non-EA organisations this privilege, and indeed I would be surprised if ACE offered Connor the chance to review their public statement which pseudonymously condemned him. Doing so adds significantly to the time commitment and raises anonymity risks[1], especially if one is worried about retaliation from an organisation that has penalized people for political disagreements in the past.

 

[1] As an example, here is something I very nearly messed up and only thought of at the last minute: you need to make a fresh copy of the google doc to share without the comments, or you will reveal the identity of your anonymous reviewers, even if you are personally happy to be known. 

Lukio · 1y

Hey, crypto insider here.

SBF's actions seem to be directly inspired by his effective altruism beliefs. He mentioned a few times on podcasts that his philosophy was: make the most money possible, whatever the way, and then donate it all in the best way to improve the world. He was only in crypto because he thought this was the place where he could make the most money.

SBF was first a trader for Alameda and then started FTX.

Some actions that Alameda/FTX were known for:

  • Using exchange data to trade against their own customers

  • Paying Twitter users money to post tweets with the intention of promoting FTX, hurting competitors, and manipulating markets

  • Creating Ponzi coins with no real use, with the sole intention of selling them for the highest price possible to naive users. Entire ecosystems were created for this goal.

The typical plan was:

  1. Fund a team to create a new useless token; 2% of coins go to the public, 98% to investors who get them a year later.
  2. Create a manipulative story for why the project is useful.
  3. Release a news item: Alameda invested in X coin (because Alameda had a good reputation at first).
  4. Pump up the price as high as they can using Twitter influence... (read more)

For people who consider taking or end up taking this advice, some things I might say if we were having a 1:1 coffee about it:

  • Being away from home is by its nature intense, this community and its philosophy are intense, and some social dynamics here are unusual; I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
  • The culture here is full of energy, ambition, and truth-telling. That's really awesome, but it can be a tricky adjustment. In some spaces, you'll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don't think you're a good fit for it). Grounding in your own self-worth (and your own inside views) will probably be really important.
  • People both are and seem really smart. It's easy to just believe them when they say things. Remember to flag for yourself things you've just heard versus things you've discussed at length versus things you've really thought about yourself. Try to ask questions about the gears of people's models, ask for credences and cruxes. Remember that people disagree, including about very bi
... (read more)

"Don't be fanatical about utilitarian or longtermist concerns and don't take actions that violate common sense morality"  is a message that longtermists have been emphasizing from the very beginnings of this social movement, and quite a lot.

Some examples: 

More generally, there's often at least a full paragraph devoted to this topic when someone writes a longer overview article on longtermism or writes about particularly dicey implications with outsized moral stakes. I also remember this being a presentation or conversation topic at many EA conferences.

I haven't read the corresponding section in the paper that the OP refers to, yet, but I skimmed the literature section and found none of the sources I linked to above. If the paper criticizes longtermism on grounds of this sort of implication and fails to mention that longtermists have been aware of this and are putting in a lot o... (read more)

Although I am on the board of Animal Charity Evaluators, everything I say on this thread is my own words only and represents solely my personal opinion of what may have been going on. Any mistakes here are my own and this should not be interpreted as an official statement from ACE.

I believe that the misunderstanding going on here might be a false dilemma. Hypatia is acting as though the two choices are to be part of the social justice movement or to be in favor of free open expression. Hypatia then gives evidence that shows that ACE is doing things like the former, and thus concludes that this is dangerous because the latter is better for EA.

But this is a false dichotomy. ACE is deliberately taking a nuanced position that straddles both sides. ACE is not in danger of becoming an org that just goes around canceling free thinkers. But nor is ACE in danger of ignoring the importance of providing safe spaces for black, indigenous, and people of the global majority (BIPGM) in the EAA community. ACE is doing both, and I think rightly so.

Many who read this likely don't know me, so let me start out by saying that I wholeheartedly endorse the spirit of the quoted comment from Anna S... (read more)

Can you explain more about this part of ACE's public statement about withdrawing from the conference:

We took the initiative to contact CARE’s organizers to discuss our concern, exchanging many thoughtful messages and making significant attempts to find a compromise.

If ACE was not trying to deplatform the speaker in question, what were these messages about and what kind of compromise were you trying to reach with CARE?

As to other questions relating to Leverage, EA, funding- and attention-worthiness, etc., I’ve addressed some concerns in previous comments and I intend to address a broader range of questions later. I don’t however endorse attack posts as a discussion format, and so intend to keep my responses here brief. The issues you raise are important to a lot of people and should be addressed, so please feel free to contact me or my staff via email if it would be helpful to discuss more.

[Own views]

If an issue is important to a lot of people, private follow-ups seem a poor solution. Even if you wholly satisfy Buck, he may not be able to relay what reassured him to all concerned parties, and so you face likely duplication of effort as each of them reaches out individually.

Of course, this makes more sense as an ill-advised attempt to dodge public scrutiny - better for PR if damning criticism remains in your inbox rather than on the internet-at-large. In this, alas, Leverage has a regrettable track record: you promised 13 months ago to write something within a month to explain Leverage better, only to make a much more recent edit (cf.) that you've "changed your plans" and enco... (read more)

I feel like this post is doing something I really don't like, which I'd categorize as something like "instead of trying to persuade with arguments, using rhetorical tricks to define terms in such a way that the other side is stuck defending a loaded concept and has an unjustified uphill battle."

For instance:

let us be clear: hiding your beliefs, in ways that predictably leads people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.

I mean, no, that's just not how the term is usually used. It's misleading to hide your beliefs in that way, and you could argue it's dishonest, but it's not generally what people would call a "lie" (or if they did, they'd use the phrase "lie by omission"). One could argue that lies by omission are no less bad than lies by commission, but I think this is at least nonobvious, and also a view that I'm pretty sure most people don't hold. You could have written this post with words like "mislead" or "act coyly about true beliefs" instead of "lie", and I think that would have made this post substantially better.

I also feel like the piece weirdly implies that it's dishonest to advocate for a policy ... (read more)

Liv · 1y

My personal reaction: I know you are scared and emotional; I am too. This post, however, crossed my boundary.

I'm a woman, I'm in my late 20s, and I'm going to do what you call sleeping around in the community if it's consensual from both sides. Obviously, I'm going to do my absolute best to be mature in my behaviors and choices in every way. I also believe that as a community we should do a better job of protecting people from unwanted sexual behavior and abuse. But I will not be part of a community which treats the conscious and consensual behavior of adults as its business, because that reeks of purity culture to me. And it won't do the job of protecting anybody.

I'm super stressed by this statement. 

I haven't thought about it much, but removing people from boards after a massive miscalculation seems reasonable.

Like, our prior should be to replace at least Nick and Will, right?

Some thoughts about this --

I genuinely thought SBF spoke to me with the knowledge I was a journalist covering him, knew we were on the record, and knew that an article quoting him was going to happen.*** The reasons I thought that were: 

- I knew SBF was very familiar with how journalism works. At the start of our May interview I explained to him how on the record/off the record works, and he was (politely) impatient because he knew it because he does many interviews. 

- I knew SBF had given on the record interviews to the New York Times and Washington Post in the last few days, so while it seemed to me like he clearly shouldn't be talking to the press, it also seemed like he clearly was choosing to do so for some reason and not at random.  Edited to add: additionally, it appears that immediately after our conversation concluded he called another journalist to talk on the record and say among other things that he'd told his lawyer to "go fuck himself" and that lawyers "don’t know what they’re talking about".  I agree it is incredibly bizarre that Sam was knowingly saying things like this on the record to journalists.

- Obviously SBF's communications right now are g... (read more)

Thank you so much for your time, dedication, and efforts.
It seems like, for many of us, difficult times lie ahead. Let us not forget the power of our community - a community of brilliant, kind-hearted, caring people trying to do good better together.
This is a crisis - but we have the ability to overcome it.
 

Molly · 1y

Quick response to comments about potential clawbacks: OP expects to put out an explainer about clawbacks tomorrow. It'll be written by our outside counsel and probably won't contain much in the way of specifics, but I think generally FTX grantees should avoid spending additional $$ on legal advice about this just yet.

Also, please don't take this as evidence that we expect clawbacks to happen, just that we know it's an issue of community concern. 

Hi Constance,

I was sad to read your initial post and recognize how disappointed you are about not getting to come to this EAG. And I see you’ve put a lot of work into this post and your application. I’m sorry that the result wasn’t what you were hoping for. 

After our call (I’m happy to disclose that I am “X”), I was under the impression that you understood our decision, and I was happy to hear that you started getting involved with the in-person community after we spoke. 

As I mentioned to you, I recommend that you apply to an EAGx event, which might be a better fit for you at this stage.

It's our policy not to discuss the specifics of people's applications with anyone besides them. I don't think it would be appropriate for me to publicly give more detail about why you were rejected, so it is hard to really reply to the substance of this post and share the other side of this story.

I hope that you continue to find ways to get involved, deepen your EA thinking, and make contributions to EA cause areas. I’m sorry that this has been a disappointing experience for you. At this point, given our limited capacity, and the time we’ve spent engaging on calls, email, ... (read more)

I strongly agree with some parts of this post, in particular:

  • I think integrity is extremely important, and I like that this post reinforces that.
  • I think it’s a great point that EA seems like it could be very bitterly divided indeed, and appreciating that we haven’t as well as thinking about why (despite our various different beliefs) seems like a great exercise. It does seem like we should try to maintain those features.

On the other hand, I disagree with some of it -- and thought I'd push back especially given that there isn't much pushback in the comments here:

I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have - a constant current that we need to swim against.

I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:

  1. As individuals, the desire to be liked by and get along with others, including people inside and outside of EA

  2. As individuals that

... (read more)

Hi Jack,

Just a quick response on the CEA’s groups team end.

We are processing many small grants and other forms of support for community building (CB), and we do not have the capacity to publish BOTECs on all of them.

However, I can give some brief heuristics that we use in the decision-making.

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from, trying to pull students into lucrative careers that probably at best have a neutral impact on the world. We would love for these students to instead focus on solving the world's biggest and most important problems.

Based on the current amount available in EA, its projected growth, and the value of getting people working in EA careers, we currently think that spending at least as much as McKinsey does on recruiting pencils out in expected value terms over the course of a student’s career. There are other factors to consider here (i.e. double-counting some expenses) that mean we actually spend significantly less than this. However, as Thomas said - even small chances that dinners could have an effect on career changes make them seem like effective uses of money. (We do have a fair a... (read more)
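To illustrate what "pencils out in expected value terms" means here, below is a minimal BOTEC sketch. Every number in it is a hypothetical placeholder chosen purely for illustration (not CEA's actual figures), and the conclusion follows entirely from those assumed inputs.

```python
# Minimal BOTEC sketch of the "at least as much as McKinsey" reasoning.
# All numbers below are hypothetical placeholders, not CEA's actual figures.

spend_per_school_per_year = 1_000_000   # roughly what McKinsey-type recruiters reportedly spend
extra_career_changes_per_year = 3       # assumed: additional high-impact career changes induced
value_per_career_change = 1_000_000     # assumed: expected value of one such career change, in $

expected_value = extra_career_changes_per_year * value_per_career_change
ratio = expected_value / spend_per_school_per_year

print(f"Expected value: ${expected_value:,} per school-year ({ratio:.1f}x spend)")
# With these placeholder inputs the spend "pencils out"; with different
# assumed inputs it would not, which is the whole point of a BOTEC.
```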

[anonymous] · 2y

The discussion of Bostrom's Vulnerable World Hypothesis seems very uncharitable. Bostrom argues that on the assumption that technological development makes the devastation of civilisation extremely likely, extreme policing and surveillance would be one of the few ways out. You give the impression that he is arguing for this now in our world ("There is little evidence that the push for more intrusive and draconian policies to stop existential risk is either necessary or effective"). But this is obviously not what he is proposing - the vulnerable world hypothesis is put forward as a hypothesis and he says he is not sure whether it is true. 

Moreover, in the paper, Bostrom discusses at length the obvious risks associated with increasing surveillance and policing:

"It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on... (read more)

I want to take this opportunity to thank the people who kept FHI alive for so many years against such hurricane-force headwinds. But I also want to express some concerns, warnings, and--honestly--mixed feelings about what that entailed. 

Today, a huge amount of FHI's work is being carried forward by dozens of excellent organizations and literally thousands of brilliant individuals. FHI's mission has replicated and spread and diversified. It is safe now. However, there was a time when FHI was mostly alone and the ember might have died from the shockingly harsh winds of Oxford before it could light these thousands of other fires. 

I have mixed feelings about encouraging the veneration of FHI ops people because they made sacrifices that later had terrible consequences for their physical and mental health, family lives, and sometimes careers--and I want to discourage others from making these trade-offs in the future. At the same time, their willingness to sacrifice so much, quietly and in the background, because of their sincere belief in FHI's mission--and this sacrifice paying off with keeping FHI alive long enough for its work to spread--is something for which I am incr... (read more)

saulius · 4mo

Thank you for your answer Marcus.

What bothers me is that if I said that I was excited about funding WAW research, no one would have said anything. I was free to say that. But to say that I’m not excited, I have to go through all these hurdles. This introduces a bias because a lot of the time researchers won’t want to go through hurdles and opinions that would indirectly threaten RP’s funding won’t be shared. Hence, funders would have a distorted view of researchers' opinions. 

Put yourself in my shoes. OpenPhil sends an email to multiple people asking for opinions on a WAW grant. What I did was write a list of pros and cons about funding that grant, recommend funding it, and press "send". It took like 30 minutes. Later OpenPhil said that it helped them make the decision. Score! I felt energized. I probably had more impact in those 30 minutes than I had in three months of writing about aquatic noise.

Now imagine I knew that I had to inform the management about saying that I’m not excited about WAW. My manager was new to RP, he would’ve needed to escalate to directors. Writing my manager’s manager’s manager a message like “Can I write this thing that threatens... (read more)

[anonymous] · 8mo

A couple of other examples, both of which have been discussed on LessWrong before:

  • In Eliezer's book Inadequate Equilibria, he gives a central anecdote that by reading econ bloggers he confidently realized the Bank of Japan was making mistakes worth trillions of dollars. He further claimed that a change in leadership meant that the Bank of Japan soon after pursued his favored policies, immediately leading to "real GDP growth of 2.3%, where the previous trend was for falling RGDP" and validating his analysis. 
    • If true, this is really remarkable. Let me reiterate: He says that by reading econ blogs, he was able to casually identify an economic policy of such profound importance that the country of Japan was able to reverse declining GDP immediately. 
    • In fact, one of his central points in the book is not just that he was able to identify this opportunity, but that he could be justifiably confident in his knowledge despite not having any expertise in economic policy. His intention with the book is to explain how and why he can be correct about things like this. 
    • The problem? His anecdote falls apart at the slightest fact check
      • Japan's GDP was not falling when he says i
... (read more)

the quantity and quality of output is underwhelming given the amount of money and staff time invested.

Of Redwood’s published research, we were impressed by Redwood's interpretability in the wild paper, but would consider it to be no more impressive than progress measures for grokking via mechanistic interpretability, executed primarily by two independent researchers, or latent knowledge in language models without supervision, performed by two PhD students.[4] These examples are cherry-picked to be amongst the best of academia and independent research, but we believe this is a valid comparison because we also picked what we consider the best of Redwood's research and Redwood's funding is very high relative to other labs.

I'm missing a lot of context here, but my impression is that this argument doesn't go through, or at least is missing some steps:

  1. We think that the best Redwood research is of similar quality to work by [Neel Nanda, Tom Lieberum and others, mentored by Jacob Steinhardt]
  2. Work by those others doesn't cost $20M
  3. Therefore the work by Redwood shouldn't cost $20M

Instead, the argument which would go through would be:

  1. Open Philanthropy spent $20M on Redwood Research
  2. That $20
... (read more)
Liv · 1y

I think I found the crux.

I treat EA as a community. And by "community" I mean "a group of friends who have common interests". At the same time, I treat some parts of EA as "companies". "Companies" have hierarchy, structure, money, and very obvious power dynamics. I separate the two.

I'm not willing to be part of a community which treats the conscious and consensual behavior of adults as its business (as stated under the other post). At the same time, I'd be more than happy to work for a company which has such norms. I actually prefer it this way, as long as they are reasonable and not, e.g., sexist, polyphobic, and so on.

I think a tricky part is that EA is quite complex in this regard. I don't think the same rules should apply to interest groups, grant-makers, and companies. I think the power dynamic between grant-maker and grantee is quite different from the one between a university EA group leader and a group member. I believe that the community should function as a group of friends, and that companies/interest groups should create their own internal rules. But maybe that won't work for EA. Happy to update here; I do, however, want to mention that for a lot of people EA is their whole life and their main social group. I would be very careful while setting general norms.

(When it comes to "EA celebrities", I think that's a separate discussion, so I'm not mentioning them here, as I would like to focus on community/workplace differences and definitions first.)

OK, an incomplete and quick response to the comments below (sorry for typos). Thanks to the kind person who alerted me to this discussion going on (I still don't spend my time on your forum, so please do just PM me if you think I should respond to something).

1.

- regarding blaming Will or benefitting from the media attention

- i don't think Will is at fault alone, that would be ridiculous, I do think it would have been easy for him to make sure something is done, if only because he can delegate more easily than others (see below)

- my tweets are reaction to his tweets where he says he believes he was wrong to deprioritise measures

- given that he only says this after FTX collapsed, I'm saying, it's annoying that this had to happen before people think that institutional incentive setting needs to be further prioritised

- journalists keep wanting me to say this and I have had several interviews in which I argue against this simplifying position

2.
- I'm rather sick of hearing from EAs that I'm arguing in bad faith

- if I wanted to play nasty it wouldn't be hard (for anyone) to find attack lines, e.g. I have not spoken about my experience of sexual misconduct in EA and I continue to refuse t... (read more)

KMF · 1y

Hi hi :) Are you involved in the Magnify Mentoring community at all? I've been poorly for the last couple of weeks so I'm a bit behind, but I founded and run MM. Personally, I'd also love to chat :) Feel free to reach out anytime. Super warmly, Kathryn

Thanks Magnus for your more comprehensive summary of our population ethics study.

You mention this already, but I want to emphasize how much different framings actually matter. This surprised me the most when working on this paper. I’d thus caution anyone against making strong inferences from just one such study.

For example, we conducted the following pilot study (n = 101) where participants were randomly assigned to two different conditions: i) create a new happy person, and ii) create a new unhappy person. See the vignette below:

Imagine there was a magical machine. This machine can create a new adult person. This new person’s life, however, would definitely [not] be worth living. They would be very unhappy [happy] and live a life full of suffering and misery [bliss and joy].

You can push a button that would create this new person.

Morally speaking, how good or bad would it be to push that button?

The response scale ranged from 1 = Extremely bad to 7 = Extremely good. 

Creating a happy person was rated as only marginally better than neutral (mean = 4.4), whereas creating an unhappy person was rated as extremely bad (mean = 1.4). So this would lead one to believe that there is stro... (read more)

The EA Mindset

 

This is an unfair caricature/lampoon of parts of the 'EA mindset', or maybe in particular my own mindset towards EA.

 

Importance: Literally everything is at stake, the whole future lightcone of astronomical utility, suffering, and happiness. Imagine the most important thing you can think of, then multiply that by a really large number with billions of zeros on the end. That's a fraction of a fraction of what's at stake.

 

Special: You are in a special time upon which the whole of everything depends. You are also one of the special chosen few who understand how important everything is. Also, you understand the importance of rationality and evidence, which everyone else fails to get (you even have the suspicion that some of the people within the chosen few don't actually 'really get it').

 

Heroic responsibility: "You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excus... (read more)

Quantitatively how large do you think the non-response bias might be? Do you have some experience or evidence in this area that would help estimate the effect size? I don't have much to go on, so I'd definitely welcome pointers.

Let's consider the 40% of people who put a 10% probability on extinction or similarly bad outcomes (which seems like what you are focusing on). Perhaps you are worried about something like: researchers concerned about risk might be 3x more likely to answer the survey than those who aren't concerned about risk, and so in fact only 20% of people assign a 10% probability, not the 40% suggested by the survey.
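To spell out the arithmetic behind that kind of revision (a minimal sketch; the 3x response-rate gap is the assumed parameter, and `true_fraction` is just an illustrative helper, not anything from the actual survey analysis):

```python
# Sketch of the selection-effect correction described above.
# If concerned researchers are k times more likely to respond, the observed
# fraction O relates to the true fraction p by O = k*p / (k*p + (1 - p)).
def true_fraction(observed: float, k: float) -> float:
    # Solving O = k*p / (k*p + 1 - p) for p:
    return observed / (k - observed * (k - 1))

# ~0.18: a 3x response bias would turn a true ~20% into an observed 40%.
print(true_fraction(0.40, 3))
```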

Changing from 40% to 20% would be a significant revision of the results, but honestly that's probably comparable to other sources of error and I'm not sure you should be trying to make that precise an inference.

But more importantly a 3x selection effect seems implausibly large to me. The survey was presented as being about "progress in AI" and there's not an obvious mechanism for huge selection effects on these questions. I haven't seen literature that would help estimate the effect size, but based on a general sense of correlation sizes in other domains I'd... (read more)

Thanks so much for writing this, and even more for all you've done to help those less fortunate than yourself.

I'm glad I did that Daily Politics spot! It was very hard to tell in the early days how impactful media work was (and it still is!) so examples like this are very interesting.

I’m one of the Community Liaisons for CEA’s Community Health and Special Projects team. The information shared in this post is very troubling. There is no room in our community for manipulative or intimidating behaviour.

We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation. We’re grateful to Ben for spending the time pursuing a more detailed picture, and grateful to those who supported Alice and Chloe during a very difficult time. 

We talked to several people currently or formerly involved in Nonlinear about these issues, and took some actions as a result of what we heard. We plan to continue working on this situation. 

From the comments on this post, I’m guessing that some readers are trying to work out whether Kat and Emerson’s intentions were bad. However, for some things, intentions might not be very decision-relevant. In my opinion, meta work like incubating new charities, advising inexperienced charity entrepreneurs, and influencing funding decisions should be done by people with particularly good judgement about how to run strong organisations, in addition to having admirable intentions. 

I’m looking forward to seeing what information Nonlinear shares in the coming weeks.  

I'm Isaak, the lead organizer of Future Forum. Specifically addressing the points regarding Future Forum:

By "ask for money (often retroactively)", I am referring to the grant made to the Future Forum (a conference held Aug 4 - 7, 2022). 

I don't know whether retroactive funding happens in other cases. However, all grants made to Future Forum were committed before the event.  The event and the organization received three grants in total:

Applications for the grants were usually sent 1-3 weeks before approval. While we had conversations with funders throughout, all applications went through official routes and application forms. 

I received the specific grant application approval emails on: 

  • Feb 28th, 2022, 9:36 AM PT, 
  • July 5th, 2022, 5:04 PM PT, 
  • July 18th, 2022, 11:28 AM PT. 

The event ran from August 4-7th. I.e., we never had a grant committed "retroactively".

 

Cleaning up their mess included getting a new venue last minute (which was very expensive), which took them into large debt, and then, reportedly, being bailed out by Open Philanthropy (retroactively).

Knowing that the event was experimental and that the core team didn't have much operat... (read more)

gwern · 5mo

EDIT: this is going a bit viral, and it seems like many of the readers have missed key parts of the reporting. I wrote this as a reply to Wei Dai and a high-level summary for people who were already familiar with the details; I didn't write this for people who were unfamiliar, and I'm not going to reference every single claim in it, as I have generally referenced them in my prior comments/tweets and explained the details & inferences there. If you are unaware of aspects like 'Altman was trying to get Toner fired' or pushing out Hoffman or how Slack was involved in Sutskever's flip or why Sutskever flip-flopped back, still think Q* matters, haven't noticed the emphasis put on the promised independent report, haven't read the old NYer Altman profile or Labenz's redteam experience etc., it may be helpful to catch up by looking at other sources; my comments have been primarily on LW since I'm not a heavy EAF user, plus my usual excerpts.

Or even "EA had a pretty weak hand throughout and played it as well as can be reasonably expected"?

It was a pretty weak hand. There is this pervasive attitude that Sam Altman could have been dispensed with easily by the OA Board if it had been m... (read more)

The low number of human-shrimp connections may be due to the attendance dip in 2020. Shrimp understandably have a difficult relationship with dips.

I think this post is missing how many really positive relationships started with something casual, and how much the 'plausible deniability' of a casual start can remove pressure. If you turn flirting with someone from an "I'm open to seeing where this goes" into an "I think you might be the one", that's a high bar. Which means that despite the definition of 'sleeping around' you're using looking like it wouldn't reduce the number of EA marriages and primary relationships, I expect it would. Since a lot of EAs in those relationships (hi!) are very happy with them (hi!), this is a cost worth explicitly weighing.

(Writing this despite mostly agreeing with the post and having upvoted it. And also as someone who's done very little dating and thought I was going to marry everyone I dated.)

Max is a phenomenal leader, and I’m very sad to see him go. He’s one of the most caring and humble people I’ve ever worked with, and his management and support during a very difficult few months has been invaluable. He’s also just a genuine delight to be around.

It’s deeply unfair that this job has taken a toll on him, and I’m very glad that he’s chosen the right thing for him.

Max has taught me so much, and I’ll be forever grateful for that. And I’m looking forward to continuing to work with him as an advisor — I know he’ll continue to be a huge help.

[For context, I'm definitely in the social cluster of powerful EAs, though don't have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]

This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven't happened, and probably won't happen, is that no-one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren't very good. And so:

  • people in EA roles where they could adopt these suggestions choose not to
  • and people who are capable/motivated enough that they could start new projects to execute on these ideas (including e.g. making competitors to core EA orgs) end up deciding not to.

And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You've laid out a long list of ways that you wish EA orgs behaved differently. You've also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced ... (read more)

I strongly downvoted this response.

The response says that EA will not change "people in EA roles [will] ... choose not to", that making constructive critiques is a waste of time "[not a] productive ways to channel your energy" and that the critique should have been better "I wish that posts like this were clearer" "you should try harder" "[maybe try] politely suggesting".

This response seems to be putting all the burden of making progress in EA onto those trying to constructively critique the movement, those who are putting their limited spare time into trying to be helpful, and removing it from those who are actively paid to work on improving this movement. I don't think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen and to learn and to consider and, if we can, to do better.

Rather than saying the original post should be better maybe the response should be that those reading the original post should be better at conside... (read more)

I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.

It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.

One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they shared this publicly. Whilst they are busy, I'd be pretty disappointed if the core EAs didn't read this and take the ideas seriously (I've tried tagging some on Twitter), and if you're correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I'd be concerned about whether there are places for people to get their ideas taken seriously at all. I'm lucky: I can walk into Trajan House and knock on people's doors, but others presumably aren't so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously. Moreover, if you are concerned with the ideas presented here not getting a fair hearing, maybe you could try raising the salient ideas to core EAs in your social circles?

I think all your specific points are correct, and I also think you totally miss the point of the post.

You say you've thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don't know how our community is run or why.

On top of that, there are actually consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no explanation. This is just the natural state of things. 

So as non-core EAs, we notice things that seem wrong, and we're afraid to speak up against them, and it sucks. That's what this post is about.

And of course it's naive and shallow and not adding much for anyone who has already thought about this for years. For the authors this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.

I don't agree with everything in the post. Lots of the suggestions seem nonse... (read more)

I liked CEA's statement

  • Writing statements like this is really hard. It's the equivalent of writing one tweet on something that you know everyone is gonna rip to pieces. I think there are tradeoffs here that people on the forum don't seem to acknowledge. I am very confident (90%) that a page-length discussion of this would have been worse in terms of outcomes.
  • I don't think it was for us - I think it was for journalists etc. And I think it performed its job of EA not being dragged into all of this. Note how much better it was than either Anders' statement or Bostrom's - no one externally is discussing it, and in an adversarial environment that means it's succeeded.
  • I think it was an acceptable level of accuracy. It's very hard to write short things, but does EA roughly hold that all people are equal? Yes, I think that's not a bad four-word summary. I think a better summary is "the value of beings doesn't change based on their position in space or time and I reject the many heuristics humanity has used to narrow concern which have led to the suffering we see today - racism, sexism, speciesism, etc". I think that, while more precise, that phrase isn't that much more accurate and is wors
... (read more)

(This is an annoyed post. Having re-read it, I think it's mostly not mean, but please downvote it if you think it is mean and I'll delete it.)

I have a pretty negative reaction to this post, and a number of similar others in this vein. Maybe I should write a longer post on this, but my general observation is that many people have suddenly started looking for the "adults in the room", mostly so that they can say "why didn't the adults prevent this bad thing from happening?", and that they have decided that "EA Leadership" are the adults. 

But I'm not sure "EA Leadership" is really a thing, since EA is a movement of all kinds of people doing all kinds of things, and so "EA Leadership" fails to identify specific people who actually have any responsibility towards you. The result is that these kinds of questions end up either being vague or suggesting some kind of mysterious shadowy council of "EA Leaders" who are secretly doing naughty things.

It gets worse! When people do look for an identifiable figure to blame, the only person who looks vaguely like a leader is Will, so they pick on him. But Will is not the CEO of EA! He's a philosopher who writes books about EA and has received ... (read more)

Thanks, I thought this was the best-written and most carefully argued of the recent posts on this theme.

Elon Musk

Stuart Buck asks:

“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk's purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”

Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way of “spending” money. Even prior to Musk being interested in acquiring Twitter, Sam mentioned he thought that Twitter was under-monetised; my impression was that that view was pretty widely-held in the tech world. Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.

I didn’t have strong views on whether either of these opinions were true. My aim was just to introduce the two of them, and let them have a conversation and take it from th... (read more)

I was afraid of you because you were my boss, so I could not criticize your work and behavior, and I was scared to do it for a long time. I also know you as a very manipulative manager (just a subjective experience, no evidence here), so I was very afraid of you, and even given the above response, I still am. On the day you quit, I was about to talk to the leadership about quitting unless I no longer had to work with you. Luckily you quit, and I was able to stay in my highly impactful work. But yeah, I am still afraid of you, and it is very saddening to me that there is no investigation into the gossip mentioned below. But again, I am just afraid of the manipulative skills I witnessed and you in general.

I would just add to this that it’s worth taking a few minutes to really think if there is anyone you might possibly know who lives in the district — or even a second-degree connection like a friend’s sister who you’ve never met. “Relational” communications are much more high-impact than calling strangers if it’s at all possible to find someone you have any connection with.

Linch

Red teaming papers as an EA training exercise?

I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important. 

I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who finish high school, and certainly by the time they finish undergrad with a decent science or social science degree.

I think this is good career building for various reasons:

  • you can develop a healthy skepticism of the existing EA orthodoxy
    • I mean skepticism that's grounded in specific beliefs about why things ought to be different, rather than just vague "weirdness heuristics" or feeling like the goals of EA conflict with other tribal goals.
      • (I personally  have not found high-level critiques of EA, and I have read many, to be particularly interesting or insightful, but this is just a personal take).
  • you actually deeply understand at least one topic well enough
... (read more)
Jonas V

it was a lot less bad than what I'd have expected based on the TIME piece account

From my personal perspective: While the additional context makes the interaction itself seem less bad, I think the fact that it involved Owen (rather than, say, a more tangentially involved or less influential community member) made it a lot worse than what I would have expected. In addition, this seems to be the second time (after this one*) I've heard about a case that the community health team didn't address forcefully enough, which wasn't clear to me based on the Time article.

* edited based on feedback that someone sent me via DM, thank you

(edit: I think you acknowledge this elsewhere in your comment)

Jakob

Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.

I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in):

  • IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you're unusually constrained by grantmaker capacity for a while
  • Temporarily ramping up funding can also be justified by considering likely flow-through effects of acting as an "insurer o
... (read more)

I want to push back on this a tiny bit. Just because some projects got funding from FTX, that doesn't necessarily mean Open Phil should fund them. There are a few reasons for this:

There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started.

  1. When FTX Future Fund was functioning, there was lots more money available in the ecosystem, hence (I think) the bar for receiving a longtermist grant was lower. This money is now gone, and lots of orgs who got FTX funding might not meet OP's bar / the new bar we should have, given fewer resources. So basically I don't think it's sufficient to say 1) they have significant funding gaps, 2) they exist and 3) they've been vetted by people you trust. IMO you need to prove that they're also sufficiently high-quality, which might not be true as FTX was vetting them with a different bar in mind.

Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you

... (read more)

I think there's a lot that's intriguing here. I also really enjoyed the author's prior takedown of "Why We Sleep".

However, I need to throw a flag on the field for isolated demands for rigor / motivated reasoning here - I think you are demanding a lot from sleep science to prove its hypotheses about needing >7hrs of sleep, but then heavily relying on an unproven analogy to eating (why should we think sleeping and eating are similar?), the sleep patterns of a few hunter-gatherers (why should we think what hunter-gatherers did was the healthiest?), the sailing coach guy (this was the most compelling IMO but shouldn't be taken as conclusive), and a random person with brain surgery (that wasn't even an RCT). If someone had the same scattered evidence in favor of sleep, there's no way you'd accept it.

Maybe not sleeping doesn't affect writing essays, but in the medical field at least there seems to be an increased risk of medical error for physicians who are sleep-deprived. "I'm pretty sure this is 100% psyop" goes too far.

For what it's worth (and it should be worth roughly the same as this blog post), my personal anecdotes:

1.) Perhaps too convenient and my data quality is no... (read more)

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giving other moral systems weight "because other smart people believe them" rather than because they seem object-level reasonable
  • Lots of emphasis on avoiding accidentally doing harm by being uninformed
  • People bring up "intelligent people disagree with this" as a reason against something rather than going through the object-level arguments

Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it's a recipe for information cascades, groupthink... (read more)

ASB

Hi, thanks for raising these questions. I lead Open Philanthropy’s biosecurity and pandemic prevention work and I was the investigator of this grant. For context, in September last year, I got an introduction to Helena along with some information about work they were doing in the health policy space. Before recommending the grant, I did some background reference calls on the impact claims they were making, considered similar concerns to ones in this post, and ultimately felt there was enough of a case to place a hits-based bet (especially given the more permissive funding bar at the time).

Just so there’s no confusion: I think it’s easy to misread the nepotism claim as saying that I or Open Phil have a conflict of interest with Helena, and want to clarify that this is not the case. My total interactions with Helena have been three phone calls and some email, all related to health security work.

Just noting that this reply seems to be, to me, very close to content-free, in terms of addressing object-level concerns.  I think you could compress it to "I did due diligence" without losing very much.

If you're constrained in your ability to discuss things on the object-level, i.e. due to promises to keep certain information secret, or other considerations like "discussing policy work in advance of it being done tends to backfire", I would appreciate that being said explicitly.  As it is, I can't update very much on it.

ETA: to be clear, I'm not sure how I feel about the broader norm of requesting costly explanations when something looks vaguely off. My first instinct is "against", but if I were to adopt a policy of not engaging with such requests (unless they actually managed to surface something I'd consider a mistake I didn't realize I'd made), I'd make that policy explicit.

Hi Simon, thanks for writing this! I’m research director at FP, and have a few bullets to comment here in response, but overall just want to indicate that this post is very valuable. I’m also commenting on my phone and don’t have access to my computer at the moment, but can participate in this conversation more energetically (and provide more detail) when I’m back at work next week.

  • I basically agree with what I take to be your topline finding here, which is that more data is needed before we can arrive at GiveWell-tier levels of confidence about StrongMinds. I agree that a lack of recent follow-ups is problematic from an evaluator’s standpoint and look forward to updated data.

  • FP doesn’t generally strive for GW-tier levels of confidence; we’re risk-neutral and our general procedure is to estimate expected cost-effectiveness inclusive of deflators for various kinds of subjective consideration, like social desirability bias.

  • The 2019 report you link (and the associated CEA) is deprecated— FP hasn’t been resourced to update public-facing materials, a situation that is now changing—but the proviso at the top of the page is accurate: we stand by our recommendation.

  • This is be

... (read more)

This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. 

A contingent of EAs (e.g., Oliver Habryka and the early Alameda exodus) seems to have had strongly negative views of SBF well in advance of the FTX fraud coming to light. So I think it's worthwhile for some EAs to do a postmortem on why some people were super worried and others were (apparently) not worried at all.

Otherwise, I agree with you that folks have seemed to overreact more than underreact, and that there have been a lot of rushed overconfident claims, and a lot of hindsight-bias-y claims.

I object to how closely you link polyamory with shitty behaviour. At one point you say that you are not criticizing polyamory, but you repeatedly bring it up when talking about stuff like the overlap of work and social life, or men being predatory at EA meetups.

I think men being predatory and subscribing to 'redpill' ideologies is terrible and we shouldn't condone it in the community. 

I feel more complicated about the overlap between social life and work life, but I take your general point that this could (and maybe does in fact) lead to conflicts of interest and exploitation. 

But neither of these is strongly related to polyamory, polycules etc. I worry that you are contributing to harmful stereotypes about polyamory. 
 

Extra ideas for the idea list: 

  • Altruistic perks, rather than personal perks. E.g.1. Turn up at this student event and get $10 donated to a charity of your choice. E.g.2. Donation matching schemes mentioned in job adverts, perhaps funded by offering slightly lower salaries. Anecdotally, I remember the first EA-ish event I went to had money to charity for each attendee as well as free wine; it was the money to charity that attracted me to go, and the free wine that attracted my friend, and I am still here and they are not involved.
  • Frugality options, like an optional version of the above idea. E.g.1. When signing up to an EA event the food options could be: "[] vegan, [] nut free, [] gluten free, [] frugal - will bring my own lunch, please donate the money saved to charity x". E.g.2. Job adverts could mention that the organisation offers salary sacrifice schemes that some employees take. I don’t know how well this would work but would be interested to see a group try. Anecdotally, I know some EAs in well-paid jobs take lower salaries than they are offered, but I don’t think this is well known.

 

Also, for what it is worth, I was really impressed by the post. I thought it was a very well-written, clear, and transparent discussion of this topic with clear actions to take.

I don't know Carrick very well, but I will be pretty straightforward that this post, in particular in the combination with the top comment by Ryan Carey gives me a really quite bad vibe. It seems obvious to me that anyone saying anything bad right now about Carrick would be pretty severely socially punished by various community leaders, and I expected the community leadership to avoid saying so many effusively positive things in a context where it's really hard for people to provide counterevidence, especially when it comes with an ask for substantial career shifts and funding. 

I've seen many people receive genuine references in the EA community, many of them quite positive, but they usually are expressed substantially more measured and careful than this post. This post reads to me like a marketing piece that I do not trust, and that I expect to exaggerate at many points (like, did Carrick really potentially save "thousands of lives"? An assertion thrown around widely in the world, but one that is very rarely true, and one that I also doubt is true in this case, by the usual EA standards of evidence). 

I don't know Carrick, and the little that I've seen seemed positive and... (read more)

I think there's a bit of a misunderstanding - I'm not asking people to narrowly conform to some message. For example, if you want to disagree with Andrew's estimate of the number of lives that Carrick has saved, go ahead. I'm saying exhibit a basic level of cultural and political sensitivity. One of the strengths of the effective altruism community is that it's been able to incorporate people to whom that doesn't always come naturally, but this seems like a moment when it's required anyway.

Yeah, my reading of your comment was in some ways the opposite of Habryka's original take, since I was reading it as primarily directed at people who might support Carrick in weird/antisocial ways, rather than people who might dissent from supporting him.

Could you say why you chose the name Probably Good, and to what extent that's locked-in at this stage?

I may be alone in this, but to me it seems like a weird name, perhaps especially if a large part of your target audience will be new EAs and non-EAs. 

Firstly, it seems like it doesn't make it at all clear what the focus of the organisation is (i.e., career advice). 80,000 Hours' name also doesn't make its focus clear right away, but the connection can be explained in a single sentence, and from then on the connection seems very clear. Whereas if you say "We want to give career advice that's probably good", I might still think "But couldn't that name work just as well and for just the same reason for donation advice, or AI research, or relationship advice, or advice about what present to buy a friend?" 

This is perhaps exacerbated by the fact that "good" can be about either morality or quality, and that the name doesn't provide any clues that in this case it's about morality. (Whereas CEA has "altruism" in the name - not just "effective" - and GiveWell has "give" in the name - not just "well".)

In contrast, most other EA orgs' names seem to more clearly gesture at roughly wh... (read more)

Hello!

I’m Minh, Nonlinear intern from September 2022 to April 2023. The last time allegations of bad practices came up, I reiterated that I had a great time working at Nonlinear. Since this post is >10,000 words, I’m not able to address everything, both because:

  1. I literally can’t write that much.
  2. I can’t speak for interactions between Nonlinear and Alice/Chloe, because everything I’ve heard on this topic is secondhand.

I’m just sharing my own experience with Nonlinear, and interpreting specific claims made about Kat/Emerson’s character/interaction styles based on my time with Nonlinear. In fact, I’m largely assuming Alice and Chloe are telling the truth, and speaking in good faith.

Disclaimers

In the interest of transparency, I’d like to state:

  1. I have never been approached in this investigation, nor was I aware of it. I find this odd, because … if you’re gonna interview dozens of people about a company’s unethical treatment of employees, why wouldn’t you ask the recent interns? Nonlinear doesn’t even have that many people to interview, and I was very easy to find/reach. So that’s … odd.
  2. I was not asked to write this comment. I just felt like it. It’s been a while since I’ve writte
... (read more)

The post in which I speak about EAs being uncomfortable about us publishing the article only talks about interactions with people who did not have any information about the initial drafting with Torres. At that stage, the paper was completely different and was a paper between Kemp and me. None of the critiques about it or the conversations about it involved concerns about Torres, co-authoring with Torres, or arguments by Torres, except in so far as they might have taken Torres as an example of the closing doors that can follow a critique. The paper was in such a totally different state that it would have been misplaced to call it a collaboration with Torres.

There was a very early draft of Torres and Kemp which I was invited to look at (in December 2020) and collaborate on. While the arguments seemed promising to me, I thought it needed major re-writing of both tone and content. No one instructed me (maybe someone instructed Luke?) that one could not co-author with Torres. I also don't recall that we were forced to take Torres off the collaboration (I'm not sure who knew about the conversations about collaborations we had): we decided to part because we wanted to move the content and tone i... (read more)

Do you think it was a mistake to put "FTX" in the "FTX Future Fund" so prominently? My thinking is that you likely want the goodness of EA and philanthropy to make people feel more positively about FTX, which seems fine to me, but in doing so you also run a risk of if FTX has any big scandal or other issue it could cause blowback on EA, whether merited or not.

I understand the Future Fund has tried to distance itself from effective altruism somewhat, though I'm skeptical this has worked in practice.

To be clear, I do like FTX personally, am very grateful for what the FTX Future Fund does, and could see reasons why putting FTX in the name is also a positive.

Our data suggests that the highest impact scandals are several times more impactful than other scandals (bear in mind that this data is probably not capturing the large number of smaller scandals). 

If so, it seems plausible we should optimise for the very largest scandals, rather than simply producing a large volume of less impactful scandals.

Thanks. Is this person still active in the EA community? Does this person still have a role in "picking out promising students and funneling them towards highly coveted jobs"?

Downvoted. I appreciate you a lot for writing this letter, and am sorry you/Will were slandered in this way! But I would like to see less of this content on the EA Forum. I think Torres has a clear history of writing very bad-faith and outrage-inducing hit pieces, and think that prominently discussing these, or really paying them any attention on the EA Forum, easily sucks in time and emotional energy with little reward. So seeing this post with a lot of comments and at 300+ karma feels sad to me!

My personal take is that the correct policy for the typical EA is to not bother reading their criticisms, given their history of quote mining and misrepresentation, and would have rather never heard about this article.

All that said, I want to reiterate that I'm very glad you wrote this letter, sorry you went through this, and that this has conveyed the useful information to take the bulletin's editorial standards less seriously!

I don't mind sharing a bit about this. SBF desperately wanted to do the Korea arb, and we spent quite a bit of time coming up with any number of outlandish tactics that might enable us to do so, but we were never able to actually figure it out. The capital controls worked. The best we could do was predict which direction the premium would go and trade into KRW and then back out of it accordingly.

Japan was different. We were able to get a Japanese entity set up, and we did successfully trade on the Japan arb. As far as I know we didn't break any laws in doing so, but I wasn't directly involved in the operational side of it. My recollection is that we made something like 10-30 million dollars (~90%CI) off of that arb in total, but I'm not at all confident on the exact amount.

Is that what created his early wealth, though? Not really. Before we all left, pretty much all of that profit had been lost to a series of bad trades and mismanagement of assets. Examples included some number of millions lost to a large directional bet on ETH (that Sam made directly counter to the predictions of our best event trader), a few million more on a large OTC trade in some illiquid shitcoin that crashed... (read more)

I think it's very plausible the reputational damage to EA from this - if it's as bad as it's looking to be - will outweigh the good the Future Fund has done, tbh

Agreed, lots of kudos to the Future Fund people though

These numbers seem pretty all-over-the-place. On nearly every question, the odds given by the 7 forecasters span at least 2 orders of magnitude, and often substantially more. And the majority of forecasters (4/7) gave multiple answers which seem implausible (details below) in ways that suggest that their numbers aren't coming from a coherent picture of the situation.

I have collected the numbers in a spreadsheet and highlighted (in red) the ones that seem implausible to me.

Odds span at least 2 orders of magnitude:

Another commenter noted that the answers to "What is the probability that Russia will use a nuclear weapon in Ukraine in the next MONTH?" range from .001 to .27. In odds that is from 1:999 to 1:2.7, which is an odds ratio of 369. And this was one of the more tightly clustered questions; odds ratios between the largest and smallest answer on the other questions were 144, 42857, 66666, 332168, 65901, 1010101, and (with n=6) 12.
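For readers who want to check the conversion, here is a minimal sketch (plain Python; the only inputs are the two extreme forecasts quoted above) of turning probabilities into odds and computing the ratio between them:

```python
def to_odds(p):
    """Convert a probability into odds in favour, i.e. p / (1 - p)."""
    return p / (1 - p)

def odds_ratio(p_low, p_high):
    """Ratio between the odds implied by the highest and lowest forecasts."""
    return to_odds(p_high) / to_odds(p_low)

# The two extreme answers to the one-month question quoted above
low, high = 0.001, 0.27

print(to_odds(low))           # ~0.001, i.e. roughly 1:999
print(to_odds(high))          # ~0.37,  i.e. roughly 1:2.7
print(odds_ratio(low, high))  # ~369
```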

Other than the final (tactical nuke) question, these cover enough orders of magnitude for my reaction to be "something is going on here; let's take a closer look" rather than "there are some different perspectives which we can combine by aggregating" or... (read more)

How to fix EA "community building"

Today, I mentioned to someone that I tend to disagree with others on some aspects of EA community building, and they asked me to elaborate further. Here's what I sent them, very quickly written and only lightly edited:

Hard to summarize quickly, but here's some loose gesturing in the direction:

  • We should stop thinking about "community building" and instead think about "talent development". While building a community/culture is important and useful, the wording overall sounds too much like we're inward-focused as opposed to trying to get important things done in the world.
  • We should focus on the object level (what's the probability of an extinction-level pandemic this century?) over social reality (what does Toby Ord think is the probability of an extinction-level pandemic this century?).
  • We should talk about AI alignment, but also broaden our horizon to not-traditionally-core-EA causes to sharpen our reasoning skills and resist insularity. Example topics I think should be more present in talent development programs are optimal taxation, cybersecurity, global migration and open borders, 1DaySooner, etc.
  • Useful test: Does your talent development program m
... (read more)
Ofer

But the Apology […] reused the original racial slur[…]

Where? You mean in the 26-year-old email that he quoted in the apology? If so, the above claim seems unfair and deceptive.

Hi all -

This post has now been edited, but we would like to address some of the original claims, since many people have read them. In particular, the author claims:

  1. They have identified 30 incidents of rape or abuse with strong ties to EA, as well as 14 that are “EA adjacent”
  2. They have been fighting assault in EA since 2016

Here is some context: 

  • The author emailed the Community Health team about 7 months ago, when she shared some information about interpersonal harm; someone else previously forwarded us some anonymous information that she may have compiled. Before about 7 months ago, we hadn’t been in contact with her.
  • The information from her included serious concerns about various people in the Bay Area, most of whom had no connection to EA as far as we know. 4 of the accused seemed to be possibly or formerly involved with EA. CEA will not allow those 4 people at our events (though for context most of them haven’t applied). As we’ve said before, we’re grateful to her for this information. 
  • In addition, she later sent us some information that we had also previously received from other sources and we were already taking action on. We appreciate people sharing information even
... (read more)
Linch

I'm confused why people keep insisting this is a "CEA" decision even after Owen Cotton-Barratt's clarification (which I assume everyone commenting has read). 

I see the process on deciding to purchase Wytham Abbey as:

  1. Owen Cotton-Barratt made a proposal to spend ~$15M for a conference center
  2. His funder(s) were willing to give him money.
  3. Effective Ventures agreed to be a fiscal sponsor.

To the extent that anyone is responsible for this decision, it's primarily (1) Owen, and (2) his funder(s). I don't think (3) is much to blame here. Also, CEA the organization is distinct from EV, their fiscal sponsor.

I think if you think this is an ineffective use of limited resources, you absolutely should feel entitled to critique it! In many ways this is what our movement is about! But I think you should place the burden of blame on the actual decision-makers, and not vaguely associated institutions. 
 

Kirsten

Hey Maya, I like your post. It has a very EA conversational style to it, which will hopefully help it be well received, and which I'm guessing took some effort.

A problem I can't figure out, which you or someone else might be able to help suggest solutions to -

-If I (or someone else) post about something emotional without suggestions for action, everyone's compassionate but nothing happens, or people suggest actions that I don't think would help

-If I (or someone else) post about something emotional and suggest some actions that could help fix it, people start debating those actions, and that doesn't feel like the emotions are being listened to

-But just accepting actions because they're linked to a bad experience isn't the right answer either, because someone could have really useful experience to share but their suggestions might be totally wrong

If anyone has any suggestions, I'd welcome them!

In the first wave of media attention GWWC got (around 2009), we got a lot of comments that we were just idealistic youngsters and we’d think differently when we reached our thirties.

Here's to not losing faith 🥂

I think it's great that CEA increased the event size on short notice. It's hard to anticipate everything in advance for complex projects like this one, and I think it's very cool that when CEA realized the potential mistake, it fixed the issue and expanded capacity in time.

I'd much rather have a CEA that gets important things broadly right and acts swiftly to fix any issues in time, than a CEA that overall gets less done due to risk aversion resulting from pushback from posts like this one*, or one that stubbornly sticks to early commitments rather than flexibly adjusting its plans.

I also feel like the decision not to worry too much about Covid seems correct given the most up-to-date risk estimates, similar to how conference organizers usually don't worry too much about the risk of flu/norovirus outbreaks.

(Edit - disclosure: From a legal perspective, I am employed by CEA, but my project (EA Funds) operates independently (meaning I don't report to CEA staff), and I wasn't involved in any decisions related to EA Global.)

* Edit: I don't mean to discourage thoughtful critiques like this post. I just don't want CEA to become more risk-averse because of them.

Richenda

I have read the OP. I have skim read the replies. I'm afraid I am only making this one post because involvement with online debates is very draining for me.

My post is roughly structured along the lines of:

  • My relationship to Kat
  • My opinions about Kat's character
  • My opinions about EA culture and risky weirdness
  • My opinions about how we go about ensuring good practices in EA

Kat is a good friend, who I trust and think highly of. I have known her personally (rather than as a loose acquaintance, as I did for years prior) since summer 2017. I do not know Emerson or his brother from Adam.

I see somebody else was asked about declaring interests when they spoke positively about Kat. I have never been employed by Kat. Back in 2017, Charity Science had some legal and operational infrastructure (e.g. charity status, tax stuff) which was hard to arrange. And during that time, .impact - which later became Rethink Charity - collaborated with Charity Science, sheltering under that charitable status in order to be able to hire people, legally process funds and so forth. So indirectly the entity that employed me was helped out by Charity Science.

However, I never collaborated in a work sense... (read more)

Maya - thanks for a thoughtful, considered, balanced, and constructive post.

Regarding the issue that 'Effective Altruism Has an Emotions Problem': this is very tricky, insofar as it raises the issue of neurodiversity.

I've got Aspergers, and I'm 'out' about it (e.g. in this and many other interviews and writings). That means I'm highly systematizing, overly rational (by neurotypical standards), more interested in ideas than in most people, and not always able to understand other people's emotions, values, or social norms. I'm much stronger on 'affective empathy' (feeling distressed by the suffering of others) than on 'cognitive empathy' (understanding their beliefs & desires using Theory of Mind.)

Let's be honest. A lot of us in EA have Aspergers, or are 'on the autism spectrum'. EA is, to a substantial degree, an attempt by neurodivergent people to combine our rational systematizing with our affective empathy -- to integrate our heads and our hearts, as they actually work, not as neurotypical people think they should work. 

This has led to an EA culture that is incredibly welcoming, supportive, and appreciative of neurodivergent people, and that capitalizes on our distincti... (read more)

I want to say that I have tremendous respect for you, I love your writing and your interviews, and I believe that your intentions are pure.

How concerned were you about crypto generally being unethical? Even without knowledge of the possibly illegal, possibly fraudulent behaviour. Encouraging people to invest in "mathematically complex garbage" seemed very unethical. (Due to the harm to the investor and the economy as a whole).

SBF seemed like a generally dishonest person. He ran ads saying, "don't be like Larry". But in this FT interview, he didn't seem to have a lot of faith that he was helping his customers.

"Does he worry about the clients who lose life-changing sums through speculation, some trading risky derivatives products that are banned in several countries? The subject makes Bankman-Fried visibly uncomfortable. Throughout the meal, he has shifted in his seat, but now he has his crossed arms and legs all crammed into a yogic pose."

It is now clear that he is dishonest, given he said on Twitter that FTX US was safe when it wasn't (please correct me if I'm wrong here).

I think that even SBF thinks/thought crypto is garbage, yet he spent billions bailing out a scam industry, poss... (read more)

Hey, yeah, for the last few months CEA and Forethought and a few other organizations have been working to try to help accurately explain EA and related ideas in the media. We've been working with experienced communications professionals. CEA recently also hired a Head of Communications to lead these efforts, and they're starting in September. I think that it was a mistake on CEA's part not to do more of this sooner.

I think that there might be a post sharing more about these efforts in the future (but I'm not 100% sure this will happen).

Some notes from CEA:

  • Several people have asked me recently whether Jacy is allowed to post on the Forum. He was never banned from the Forum, although CEA told him he would not be allowed in certain CEA-supported events and spaces.
  • Three years ago, CEA thought a lot about how to cut ties with a person while not totally losing the positive impact they can have. Our take was that it’s still good to be able to read and benefit from someone’s research, even if not interacting with them in other ways.
  • Someone's presence on the Forum or in most community spaces doesn’t mean they’ve been particularly vetted.
  • This kind of situation is especially difficult when the full information can’t be public. I’ve heard both from people worried that EA spaces are too unwilling to ban people who make the culture worse, and from people worried that EA spaces are too willing to ban people without good enough reasons or evidence. These are both important concerns.
  • We’re trying to balance fairness, safety, transparency, and practical considerations. We won’t always get that balance right. You can always pass on feedback to me at julia.wise@centreforeffectivealtruism.org, to my manager Nicole at nicole.ross@centreforeffectivealtruism.org, or via our anonymous contact form.

The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities.

I haven't looked into the evidence here at all, but fwiw the section on 'sharing information on ben pace' is deranged. I know you are using this as an example of how unfounded allegations can damage someone's reputation. But in repeating them, you are also repeating unfounded allegations and damaging someone's reputation. You are also obviously doing this in retaliation for him criticising you. You could have used an infinite number of examples of how unfair allegations can damage someone's reputation, including eg known false allegations against celebrities or other people reported in the news, or hypotheticals.

Just share your counter-evidence, don't in the process try to smear the person criticising you. 

I haven't looked into the evidence here at all

For someone who seems to have made at least 20 comments on this post, why haven't you bothered to at least look into the evidence they provided?

Ofer

Disclosure (copying from a previous comment): I have served in Israel Defense Forces, I live in Israel, I feel horrible about what Israel has done in the past 75 years to millions of Palestinians and I do not want Israel to end up as a horrible stain on human history. I am probably unusually biased when dealing with this topic. I am not making here a claim that people in EA should or should not get involved and in what way.

The author mentioned they do not want the comments to be "a discussion of the war per se" and yet the post contains multiple contentious pro-Israel propaganda talking points, and includes arguments that a cease-fire is net-negative. Therefore it seems to me legitimate to mention here the following.

In interviews to foreign press, Israeli officials/politicians often make claims to the effect that Israel is doing everything it can to minimize civilian casualties. Explaining why those claims are untrustworthy in a short comment is a hard task because whatever I'll write will leave out so much important stuff. (Imagine you had to explain to an alien, in a short text, why a certain claim by Donald Trump is untrustworthy.) But I'll give it a go anyway:

  • The current Mini
... (read more)

I'm really excited about this! :)

One further thought on pitching Athena: I think there is an additional, simpler, and possibly less contentious argument about why increasing diversity is valuable for AI safety research, which is basically "we need everyone we can get". If a large percentage of relevant people don't feel as welcome/able to work on AI safety because of, e.g., their gender, then that is a big problem. Moreover, it is a big problem even if one doesn't care about diversity intrinsically, or even if one is sceptical of the benefits of more diverse research teams.

To be clear, I think we should care about diversity intrinsically, but the argument above nicely sidesteps replies of the form "yes, diversity is important, but we need to prioritise reducing AI x-risk above that, and you haven't given me a detailed story for how diversity in-and-of-itself helps AI x-risk, e.g., one's gender does not, prima facie, seem very relevant to one's ability to conduct AI safety research". This also isn't to dispute any of your reasons in the post, by the way, merely to add to them :)

GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options. 

  •  try anyway and fail badly, probably too badly for it to even be an educational failure. 
  • fake it, probably without knowing they're doing so
  • learned helplessness, possible systemic depression
  • be heading towards failure, but too many people are counting on you so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
  • discover more skills than they knew they had. Feel great, accomplish great things, learn a lot.

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is ambition snowball or "get ambitious slowly". Pick something b... (read more)

I'm concerned that there's an information cascade going on. That is, some claims were made about people being negatively affected by having posted public criticism; as a result some people made critical posts anonymously; that reinforces the perception that the original claim is true; more people post anonymously; the cycle continues.

But I just roll to disbelieve that people facing bad consequences for posting criticism is a serious problem. I can totally believe that it has happened at some point, but I'd be very surprised if it's widespread. Especially given how mild some of the stuff that's getting anonymously posted is!

So I think there's a risk that we meme ourselves into thinking there's an object level problem when there actually isn't. I would love to know what if any actual examples we have of this happening.

These are anonymous quotes from two people I know and vouch for about the TIME piece on gender-based harassment in the EA community:

Anon 1: I think it's unfortunate that the women weren't comfortable with the names of the responsible parties being shared in the article. My understanding is that they were not people strongly associated with EA, some of them had spoken out against EA and had never identified as an EA or had any role in EA, and an article with their names would have given people a very different impression of what happened. I guess I think someone should just spell out who the accused parties are (available from public evidence).

Anon 2: I want EAs to not be fucking stupid 😭

"Oh geez this Times reporter says we're doing really bad things, we must be doing really bad things A LOT, that's so upsetting!"

yet somehow "This New York Times reporter says Scott Alexander is racist and bad, but he's actually not, ugh I hate how the press is awful and lies & spins stuff in this way just to get clicks"

And yes, this included reports of people, but like I've met the first person interviewed in the article and she is hella scary and not someone I would trust to report accurately ... (read more)

While this is a very valuable post, I don't think the core argument quite holds, for the following reasons:

  1. Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in "The Big Short" about the Financial Crisis).
  2. In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that's not the same as making a billion bucks. 
  3. You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines - what you're betting on then, is when the world will realize that timelines are short, since that's what it will take before many people choose to pull out of the market, and thus drive interest rates up. It is entirely possible to believe both that timelines are short, and that the world won't realize AI is near for a while yet, in which case you wouldn't do this. Furthermore, counterparty risks tend to get in the way of taking up
... (read more)
Liv

Hi, I'm pretty new here, so please correct me if I'm wrong. I had, however, one important impression which I think I should share.
EA started as a small movement and right now is expanding like crazy. The thing is, it still has a "small movement" mentality. 
One of the key aspects of this is trust. I have an impression that EA is super trust-based. I have a feeling that if somebody calls themselves an EA, everybody assumes that they probably have super altruistic intentions and mostly aligned values. It is lovely. But maybe dangerous?
In a small movement everybody knows everyone, and if somebody does something suspicious, the whole group can very easily spread the warning. In a large group, however, this won't work. So if somebody is a grifter, an amoral person, just an a*hole or anything similar - they can super easily abuse the system, just by, for example, changing the EA crowd they talk to. I have an impression that there was a push towards attracting the maximum number of people possible. I assume that it was thought through and there is value added in it. It may, however, have a pretty serious cost.

I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content.

A few examples:

It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.

I think we can safely say, at this point, with >95% confidence that SBF basically committed fraud even if not technically in the legal sense (edit: but it also seems likely to be fraud in the legal sense), and it's natural to start thinking about the implications of this and in particular to be very clear about our attitude toward the situation if fraud indeed occurred, as looks very likely. Waiting too long has serious costs.

We could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occured or why. In worlds where we’re wrong about whether or why fraud occurred this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whethe

... (read more)

I think this model is kind of misleading, and that the original astronomical waste argument is still strong. It seems to me that a ton of the work in this model is being done by the assumption of constant risk, even in post-peril worlds. I think this is pretty strange. Here are some brief comments:

  • If you're talking about the probability of a universal quantifier, such as "for all humans x, x will die", then it seems really weird to say that this remains constant, even when the thing you're quantifying over grows larger.
    • For instance, it seems clear that if there were only 100 humans, the probability of x-risk would be much higher than if there were 10^6 humans. So it seems like if there are 10^20 humans, it should be harder to cause extinction than if there are 10^10 humans.
  • Assuming constant risk has the implication that human extinction is guaranteed to happen at some point in the future, which puts sharp bounds on the goodness of existential risk reduction.
  • It's not that hard to get exponentially decreasing probability on universal quantifiers if you assume independence in survival amongst some "unit" of humanity. In computing applications, it's not that hard to drive down the probability of er
... (read more)
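A minimal formalisation of the constant-risk point in the comment above (my own sketch; $r$ stands for the assumed constant per-period extinction risk, and $q$ and $N$ for the per-period failure probability and number of independently surviving "units" of humanity):

```latex
% Constant per-period risk r makes eventual extinction certain:
\Pr[\text{survive } T \text{ periods}] = (1 - r)^{T} \longrightarrow 0 \quad \text{as } T \to \infty.

% With N independent units, each destroyed in a given period with probability q,
% the probability of the universal quantifier ("all units destroyed") falls exponentially in N:
\Pr[\text{extinction in a given period}] = q^{N}.
```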

Here's a Q&A which answers some of the questions by reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Some of the comments hopefully find a reply here.)

"Do you not think we should work on x-risk?"

  • Of course we should work on x-risk

 

"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"

  • No. It’s not really them we’re criticising if at all. Everyone should be allowed to put out their ideas. 
  • But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.

 

"Do you hate longtermism?"

  • No. We are both longtermists (probs just not the techno utopian kind).

 

"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"

  • It doesn't matter whether Nick Bostrom speculates about or wants to implement surveillance globally. With respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There’s some hedging i
... (read more)

My personal view is that being an EA implies spending some significant portion of your efforts being (or aspiring to be) particularly effective in your altruism, but it doesn't by any means demand you spend all your efforts doing so. I'd seriously worry about the movement if there was some expectation that EAs devote themselves completely to EA projects and neglect things like self-care and personal connections (even if there was an exception for self-care & connections insofar as they help one be more effective in their altruism).

It sounds like you developed a personal connection with this particular dog rather quickly, and while this might be unusual, I wouldn't consider it a fault. At the same time, while I don't see a problem with EAs engaging in that sort of partiality with those they connect with, I would worry a bit if you were making the case that this sort of behavior was in itself an act of effective altruism, as I think prioritization, impartiality, and good epistemics are really important to exhibit when engaged in EA projects. (Incidentally, this is one further reason I'd worry if there was an expectation that EAs devote themselves completely to EA projects – I think this would lead to more backwards rationalizations about why various acts people want to do are actually EA projects when they're not, and this would hurt epistemics and so on.) But you don't really seem to be doing that.

[Own views]

  1. I think we can be pretty sure (cf.) the forthcoming StrongMinds RCT (the one not conducted by StrongMinds themselves, which allegedly found an effect size of d = 1.72 [!?]) will give dramatically worse results than HLI's evaluation would predict - i.e. somewhere between 'null' and '2x cash transfers' rather than 'several times better than cash transfers, and credibly better than GW top charities.' [I'll donate 5k USD if the Ozler RCT reports an effect size greater than d = 0.4 (2x smaller than HLI's estimate of ~0.8, and below the bottom 0.1% of their Monte Carlo runs).]
  2. This will not, however, surprise those who have criticised the many grave shortcomings in HLI's evaluation - mistakes HLI should not have made in the first place, and definitely should not have maintained once they were made aware of them. See e.g. Snowden on spillovers, me on statistics (1, 2, 3, etc.), and Givewell generally.
  3. Among other things, this would confirm a) SimonM produced a more accurate and trustworthy assessment of Strongminds in their spare time as a non-subject matter expert than HLI managed as the centrepiece of their activity; b) the ~$250 000 HLI has moved to SM should be counted on th
... (read more)
titotal
1y108
48
2

I'm sorry to be pushing on this when it seems like you are doing the right thing, but could you elaborate more on this sentence from the article?

After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel.

Why was she being put up in your house and not a hotel, if you weren't affiliated with the group she was interviewing for? I think this is the part a lot of people were sketched out by, so more context would be helpful. 

Sorry I'm mostly trying to take a day away from the forum, but someone let me know that it would be helpful to chime in here. Essentially what happened: 

  • The org had arranged accommodation (not a hotel), but it didn't cover the first night she'd be in the country 
  • The people running the recruitment talked to me in a "this is your friend you recommended, could you help out?" way 
  • We had a spare room so I offered that; they said yes so I communicated with her about that 
  • This was all arranged on the day of her flight (before she flew) 

(I'm eliding details to reduce risk of leaking information about the person's identity.)

There is scant public information that could justify it as the best-placed and most appropriate recipient, a clear risk of nepotism inherent in the recipient organization, and [...]

When I read this part of your bullet point summary, I thought someone at Open Phil might be related to someone at Helena. But then it became clear that you mean that the Helena founder dropped out of college supported with money from his rich investor dad to start a project that you think "(subjectively) seems like" self-aggrandizing. 

(The word "inherent" probably makes clear what you mean; I just had a prior that nepotism is a problem when someone receives funding, and I didn't know that you were talking about other  funding that Helena also received.) 


 

Linch
1y108
1
0

tl;dr:
In the context of interpersonal harm:

1. I think we should be more willing than we currently are to ban or softban people.

2. I think we should not assume that CEA's Community Health team "has everything covered".

3. I think more people should feel empowered to tell CEA CH about their concerns, even (especially?) if other people appear to not pay attention or do not think it's a major concern.

4. I think the community is responsible for helping the CEA CH team with having a stronger mandate to deal with interpersonal harm, including some degree of acceptance of mistakes of overzealous moderation.

(all views my own) I want to publicly register what I've said privately for a while:

For people (usually but not always men) who we have considerable suspicion that they've been responsible for significant direct harm within the community, we should be significantly more willing than we currently are to take on more actions and the associated tradeoffs of limiting their ability to cause more harm in the community.

Some of these actions may look pretty informal/unofficial (gossip, explicitly warning newcomers against specific people, keeping an unofficial eye out for some people during par... (read more)

"one particular disgruntled ex-employee" - I am not an ex-employee, but happy to confirm working with you is not a pleasant experience for an employee. So there are at least two people in the EA community. I expect if CEA investigates, there would maybe be more? 

After ~5 minutes of online research on Emerson Spartz's past CEO role at his previous company "Dose", it looks like there were a lot more "disgruntled ex-employee[s]" (even if this is external to EA). 

Overall, CEO approval is at 0%. Some examples out of many:

  •  Terrible, Toxic, Traumatizing, Environment I actually consulted lawyers about a potential retaliation lawsuit after my experience working at this sicko company. 4 years later, I still have nightmares. Like, actual nightmares while I'm asleep. There are some seriously manipulative, narcissists at Dose. If you are a semi-decent person who cares even an inkling about your own well being or the well being of others, I highly suggest staying away from this insane company.
  • Yikes. Working at Dose is like being in a sorority who thinks they're really cool, popular and making a difference in the world, but are so blinded by their own delusions and egos, that it couldn't be further from the truth. Specifically, I'm talking specifically about upper management. The "leaders" not only have no clue what they're doing, but they refuse to listen to other people's opinions and play favorites. If you're not extroverted or as "hyped
... (read more)

My best guess is that without Eliezer, we wouldn't have a culture of [forecasting and predictions]

The timeline doesn't make sense for this version of events at all. Eliezer was uninformed on this topic in 1999, at a time when Robin Hanson had already written about gambling on scientific theories (1990), prediction markets (1996), and other betting-related topics, as you can see from the bibliography of his Futarchy paper (2000).  Before Eliezer wrote his sequences (2006-2009), the Long Now Foundation already had Long Bets (2003), and Tetlock had already written Expert Political Judgment (2005). 

If Eliezer had not written his sequences, forecasting content would have filtered through to the EA community from contacts of Hanson. For instance, through blogging by other GMU economists like Caplan (2009). And of course, through Jason Matheny, who worked at FHI, where Hanson was an affiliate. Matheny ran the ACE project (2010), which led to the science behind Superforecasting, a book that the EA community would certainly have discovered.

(Writing from OP’s point of view here.)

We appreciate that Nuño reached out about an earlier draft of this piece and incorporated some of our feedback. Though we disagree with a number of his points, we welcome constructive criticism of our work and hope to see more of it.

We’ve left a few comments below.

*****

The importance of managed exits

We deliberately chose to spin off our CJR grantmaking in a careful, managed way. As a funder, we want to commit to the areas we enter and avoid sudden exits. This approach:

  1. Helps grantees feel comfortable starting and scaling projects. We’ve seen grantees turn down increased funding because they were reluctant to invest in major initiatives; they were concerned that we might suddenly change our priorities and force them to downsize (firing staff, ending projects half-finished, etc.)
  2. Helps us hire excellent program officers. The people we ask to lead our grantmaking often have many other good options. We don’t want a promising candidate to worry that they’ll suddenly lose their job if we stop supporting the program they work on.

Exiting a program requires balancing:

  • the cost of additional below-the-bar spending during a slow exit;
  • the risks from a faster
... (read more)

I didn't downvote (because as you say it's providing relevant information), but I did have a negative reaction to the comment. I think the generator of that negative reaction is roughly: the vibe of the comment seems more like a political attempt to close down the conversation than an attempt to cooperatively engage. I'm reminded of "missing moods";  it seems like there's a legitimate position of "it would be great to have time to hash this out but unfortunately we find it super time consuming so we're not going to", but it would naturally come with a mood of sadness that there wasn't time to get into things, whereas the mood here feels more like "why do we have to put up with you morons posting inaccurate critiques?". And perhaps that's a reasonable position, but it at least leaves a kind of bad taste.

It is sometimes hard for communities with very different beliefs to communicate. But it would be a shame if communication were to break down.

I think it is worth trying to understand why people from very different perspectives might disagree with effective altruists on key issues. I have tried on my blog to bring out some key points from the discussions in this volume, and I hope to explore others in the future.

I hope we can bring the rhetoric down and focus on saying as clearly as possible what the main cruxes are and why a reasonable person might stand on one side or another. 

Super sorry to see you go Max. It's honestly kind of hard to believe how different CEA is today from when I joined, and a lot of that is due to your leadership. CEA has a bunch of projects going on, and the fact that you can step down without these projects being jeopardized is a strong endorsement of the team you've built here.

I look forward to continuing to work with you in an advisory role!

I'm glad that FLI put this FAQ out, but I'm nervous that several commenters are swinging from one opinion (boo, FLI) to the opposite (FLI is fine! Folks who condemned FLI were too hasty!) too quickly.  

This FAQ only slightly changed my opinion on FLI's grantmaking process. My best guess is that something went very wrong with this particular grant process. My reasoning:

I'd be surprised if FLI's due diligence step is intended to be a substantial part of the assessment process. My guess is that due diligence might usually be more about formalities, like answering: can we legally pay this person? Is the person who they say they are? And not: is this a good grant to make?

It seems like FLI would be creating a huge hassle if they regularly sent out an "intention to issue a grant" to prospective grantees (with the $ amount especially), only to withdraw support later. It would be harmful to the prospective grantees by giving them false hope (it could cause them to change their plans thinking the money is coming), and annoying for the grantmaker, because I suspect they'd be asked to explain why they changed their mind.  

If indeed FLI does regularly reject grants at due diligence stage, that would update me towards thinking nothing went too badly with this particular grant (and I'd like to know their reasons for doing that as I'm probably missing something). 

Note - I'm speaking for myself not CEA (where I work).

I think your consequentialist analysis is likely wrong and misguided. I think you're overstating the effects of the harms Bostrom perpetuated?

I think a movement where our leading intellectuals felt pressured to distort their views for social acceptability is a movement that does a worse job of making the world a better place.

Bostrom's original email was bad and he disavowed it. The actual apology he presented was fine IMO; he shouldn't have pretended to believe that there are definitely no racial differences in intelligence.

Most EAs I've met over the years don't seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.

That said, I like options and norms that discourage fancy options that don't come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn't make sense to pay extra for a nice room.

Still, it’s hard to see how tweaking EA can lead to a product that we and others would be excited about growing. 

 

It's not clear to me how far this is the case. 

  • Re. the EA community: evidence from our community survey, run with CEA, suggests a relatively limited reduction in morale post-FTX. 
  • Re. non-EA audiences, our work reported here and here (though still unpublished due to lack of capacity) suggests relatively low negative effects in the broader population (including among elite US students specifically).

I agree that:

  • Selection bias (from EAs with more negative reactions dropping out) could mean that the true effects are more negative. 
    • I agree that if we knew large numbers of people were leaving EA this would be another useful datapoint, though I've not seen much evidence of this myself. Formally surveying the community to see how many people are known to have left could be useful for adjudicating this.
    • We could also conduct a 'non-EA Survey' which tries to reach people who have dropped out of EA, or who would be in EA's target audience but who declined to join EA (most likely via referrals), which would be more systematic than anecdotal evidence. RP discussed doing with
... (read more)

Here’s a followup with some reflections.

Note that I discuss some takeaways and potential lessons learned in this interview.

Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:

  • The most obvious thing that’s changed is a tighter funding situation, which I addressed here.
  • I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:
    • It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.)
    • I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilita
... (read more)
saulius
1y106
15
3

Thanks for the question, Denise. Probably x-risk reduction, although on some days I’d say farmed animal welfare. Farmed animal welfare charities seem to significantly help multiple animals per dollar spent. It is my understanding that global health charities (and other charities that help currently living humans) spend hundreds or even thousands of dollars to help one human in the same way. I think that human individuals are more important than other animals, but not thousands of times more important.
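(A toy version of this arithmetic, for illustration only: the per-beneficiary costs and the moral weight in the sketch below are hypothetical assumptions, not estimates from this comment or from any charity evaluation.)

```python
# Toy cost-effectiveness comparison in the spirit of the comment above.
# All numbers are hypothetical assumptions chosen only for illustration.

cost_per_animal_helped = 1.0      # assumed dollars to help one animal
cost_per_human_helped = 1000.0    # assumed dollars to help one human comparably
humans_worth_in_animals = 100     # assumed moral weight: one human ~ this many animals

# Help delivered per dollar, in "animal-equivalent" units.
animal_charity_per_dollar = 1.0 / cost_per_animal_helped
human_charity_per_dollar = humans_worth_in_animals / cost_per_human_helped

print(f"Animal charity: {animal_charity_per_dollar:.2f} animal-equivalents per dollar")
print(f"Human charity:  {human_charity_per_dollar:.2f} animal-equivalents per dollar")

# Under these assumptions the animal charity comes out ahead whenever the
# moral weight placed on a human is smaller than the cost ratio (here 1000x),
# which is the structure of the comparison being made above.
```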

Sometimes when I lift my eyes from spreadsheets and see the beauty and richness of life, I really don’t want all of this to end. And then I also think about how digital minds could have even richer and better experiences: they could be designed for extreme happiness in the widest sense of the word. And if only a tiny fraction of the world’s resources could be devoted to the creation of such digital minds, there could be bazillions of them thriving for billions of years. I’m not sure if we can do much to increase this possibility, maybe just spread this idea a little bit (it’s sometimes called hedonium or utilitronium). So I was thinking of switching my career to x-risk reduction if I could manage to find a way to be... (read more)

I agree that we ignore experts in favor of people who are more value-aligned. Seems like a mistake.

Here's Bostrom's letter about it (along with the email) for context: https://nickbostrom.com/oldemail.pdf

Thanks so much for your post here! I spent 5ish years as a litigator and couldn't agree more with this. As an additional bit of context for non-lawyers, here's how discovery works in a large civil trial, from someone who used to do it:

  1. You gather an ocean of potentially relevant documents from a wide range of sources
  2. You spend a ton of time sifting through them looking for quotes that, at least if taken out of context, might support a point you want to make
  3. You gather up all these potentially useful materials and decide what story you want to tell with them

Like a bird building a nest at a landfill, it's hard to know what throwaway comment a lawyer might make something out of.

I really don't understand how you could have read that whole interview and see SBF as incompetent rather than a malicious sociopath. I know this is a very un-EA-forum-like comment, but I think it's necessary to say.

DC
1y106
42
6

I was really looking forward to maybe implementing impact markets in collaboration with Future Fund plus FTX proper if you and they wanted, and feel numb with regard to this shocking turn. I really believed FTX had some shot at 'being the best financial hub in the world', SBF 'becoming a trillionaire', and this longshot notion I had of impact certificates being integrated into the exchange, funding billions of dollars of EA causes through it in the best world. This felt so cool and far out to imagine. I woke up two days ago and this dream is now ash. I have spiritually entangled myself with this disaster.

I don't want to be the first commenter to be that guy, and forgive me if I'm poking a wound, but when you have the time and slack can you please explain to us to what extent you guys grilled FTX leadership about the integrity of the sources of money they were giving you? Surely you had an inside view model of how risky this was if it blew up? If it's true SBF has had a history of acting unethically before (rumors, I don't know), isn't that something to have thoroughly questioned and spoken against? If there was anyone non-FTX who could have pressured them to act ethically, it would have been you. As an outsider it felt like y'all were in a highly trusted concerted relationship with each other going back a decade.

In any case, thank you for what you've done.

Sven Rone should've won a prize in the Red Teaming contest[1]:

The Effective Altruism movement is not above conflicts of interest 

[published Sep 1st 2022]

Summary

Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donator to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism. By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donators, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.

In practice, Sam Bankman-Fried has enjoyed highly-favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest

... (read more)

I wrote that comment from over a month ago. And I actually followed it up with a more scathing comment that got downvoted a lot, and that I deleted out of a bit of cowardice, I suppose. But here's the text: 

 

Consider this bit from the origin story of FTX

In 2019, he took some of the profits from Alameda and $8 million raised from a few smaller VC firms and launched FTX. He quickly sold a slice to Binance, the world’s biggest crypto exchange by volume, for about $70 million. 

Binance, you say? This Binance:

During this period, Binance processed transactions totalling at least $2.35 billion stemming from hacks, investment frauds and illegal drug sales, Reuters calculated from an examination of court records, statements by law enforcement and blockchain data, compiled for the news agency by two blockchain analysis firms. Two industry experts reviewed the calculation and agreed with the estimate.

Separately, crypto researcher Chainalysis, hired by U.S. government agencies to track illegal flows, concluded in a 2020 report that Binance received criminal funds totalling $770 million in 2019 alone, more than any other crypto exchange. Binance CEO Changpen

... (read more)

Wow, I didn't see it at the time but this was really well written and documented. I'm sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.

Sabs
1y106
75
7

Maybe hold off on this sentiment until we know exactly what they were doing with customer funds? It could age quite badly. 

Linch
2y106
0
0

Not opinionating on the general point, but:

In her early 20s, Kelsey was taking leave from college for mental health reasons and babysitting her friends' kid for room and board. If either of us had been in the student group, we would have been the least promising of the lot

IIRC, Kelsey was in fact the president of the Stanford EA student group, and I do not think she would've been voted "least likely to succeed" by the other members. 

Quite. I was in that Stanford EA group, I thought Kelsey was obviously very promising and I think the rest of us did too, including when she was taking a leave of absence. 

Larks
2y106
0
0

The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size & scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world's leading libertarian/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.

Indeed, I find it hard to square the article's support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people's skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions. 

Pablo
3y106
0
0

This is what ACE's "overview" lists as Anima's weaknesses:

We think Anima International’s leadership has a limited understanding of racial equity and that this has impacted some of the spaces they contribute to as an international animal advocacy group—such as coalitions, conferences, and online forums. We also think including non-staff members in Anima International’s governing board would increase the board’s capacity to oversee the organization from a more independent and objective perspective. 

Their "comprehensive review" doesn't mention the firing of the CEO as a consideration behind their low rating. The primary reason for their negative evaluation seems to be captured in the following excerpt:

According to our culture survey, Anima International is diverse along the lines of gender identity and sexual identity, however, they are not diverse on racial identity. This is not surprising, as most of the countries in which their member organizations operate are very racially homogenous; in practice, we think it would be particularly difficult for them to successfully attract and hire advocates who are Black, Indigenous, or of the global majority (BIPGM) in those countries. Ou

... (read more)

This is a very unfortunate situation, but as a general piece of life advice for anyone reading this: expressions of interest are not commitments and should not be "interpreted" -- let alone acted upon! -- as such.

For example, within academia, a department might express interest in having Prof X join their department. But there's no guarantee it will work out. And if Prof. X prematurely quit their existing job, before having a new contract in hand, they would be taking a massive career risk!

(I'm not making any comment on the broader issues raised here; I sympathize with all involved over the unfortunate miscommunication. Just thought it was important to emphasize this particular point. Disclosure: I've recently had positive experiences with EAIF.)

This is entirely consistent with two other applications I know of from 2023, both of which were funded but experienced severe delays and poor/absent/straightforwardly unprofessional communication.

Wei Dai
1y105
29
4

Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:

  1. Inspired/funded by EA
  2. Taking big risks with other people's lives/money
  3. Attempt at regulatory capture
  4. Large employee exodus due to safety/ethics/governance concerns
  5. Lack of public details of concerns due in part to non-disparagement agreements

Good to see a post that loosely captures my own experience of EAG London and comes up with a concrete idea for something to do about the problem (if a little emotionally presented).

I don't have a strong view on the ideal level of transparency/communication here, but something I want to highlight is: Moving too slowly and cautiously is also a failure mode

In other words, I want to emphasise how important "this is time consuming, and this time is better spent making more grants/doing something else" can be. Moving fast and breaking things tends to lead to much more obvious, salient problems and so generally attracts a lot more criticism. On the other hand, "Ideally, they should have deployed faster" is not a headline. But if you're as consequentialist as the typical EA is, you should be ~equally worried about not spending money fast enough. Sometimes to help make this failure mode more salient, I imagine a group of chickens in a factory farm just sitting around in agony waiting for us all to get our act together (not the most relevant example in this case, but the idea is try to counteract the salience bias associated with the problems around moving fast). Maybe the best way fo... (read more)

problems like malaria and extreme poverty still exist

I know this isn't the only thing to track here, but it's worth noting that funding to GiveWell-recommended charities is also increasing fast, both from Open Philanthropy and from other donors. Enough so that last year GiveWell had more money to direct than room for more funding at the charities that meet their bar (which is "8x better than cash transfers", though of course money could be donated to things less effective than that). They're aiming to move 1 billion annually by 2025.

Fwiw, anecdotally my impression is that a more common problem is that people engage in motivated reasoning to justify projects that aren't very good, and that they just haven't thought through their projects very carefully. In my experience, that's more common than outright, deliberate fraud - but the latter may get more attention since it's more emotionally salient (see my other comment). But this is just my impression, and it's possible that it's outdated. And I do of course think that EA should be on its guard against fraud.

What I heard from former Alameda people 

A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.

In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept their offer; he refused and they left. 

I wasn’t involved in the dispute; I heard about it only afterwards. There were claims being made on both sides and I didn’t have a view about who was more in the right, though I was more in touch with people who had left or reduced their investment. That included the investor who was most closely involved in the dispute, who I regarded as the most reliable source.

It’s true that a number of people, at the time, were very unhappy with Sam, and I spoke to them about that. They described him as reckless, uninterested in management, bad at managing conflict, and being unwilling to accept a lower... (read more)

I broadly agree with the picture and it matches my perception. 

That said, I'm also aware of specific people who held significant reservations about SBF and FTX throughout the end of 2021 (though perhaps not in 2022 anymore), based on information that was distinct from the 2018 disputes. This involved things like:

  • predicting a 10% annual risk of FTX collapsing with FTX investors and the Future Fund (though not customers) losing all of their money, 
  • recommending in favor of 'Future Fund' and against 'FTX Future Fund' or 'FTX Foundation' branding, and against further affiliation with SBF, 
  • warnings that FTX was spending its US dollar assets recklessly, including propping up the price of its own tokens by purchasing large amounts of them on open markets (separate from the official buy & burns), 
  • concerns about Sam continuing to employ very risky and reckless business practices throughout 2021.

I think several people had pieces of the puzzle but failed to put them together or realize the significance of it all. E.g. I told a specific person about all of the above issues, but they didn't have a 'holy shit' reaction, and when I later checked with them they had forgotten... (read more)

To me this post ignores the elephant in the room: OpenPhil still has billions of dollars left and is trying to make funding decisions relative to where they think their last dollar is. I'd be pretty surprised if having the Wytham money liquid rather than illiquid (or even having £15mn out of nowhere!) really made a difference to that estimate.

It seems reasonable to argue that they're being too conservative, and should be funding the various things you mention in this post, but it also seems plausible to me that they're acting correctly? More importantly, I think this is a totally separate question to whether to sell Wytham, and requires different arguments. Eg I gather that CEEALAR has several times been considered and passed over for funding before; I don't have a ton of context for why, but that suggests to me it's not a slam dunk re being a better use of money.

Massive thanks to Ben for writing this report and to Alice and Chloe for sharing their stories. Both took immense bravery.

There's a lot of discussion on the meta-level on this post. I want to say that I believe Alice and Chloe. I currently want to keep my distance from Nonlinear, Kat and Emerson, and would caution others against funding or working with them. I don't want to be part of a community that condones this sort of thing. 

I’m not and never have been super-involved in this affair, but I reached out to the former employees following the earlier vague allegations against Nonlinear on the Forum, and after someone I know mentioned they’d heard bad things. It seemed important to know about this, because I had been a remote writing intern at Nonlinear, and Kat was still an occasional mentor to me (she’d message me with advice), and I didn’t want to support NL or promote them if it turned out that they had behaved badly.

Chloe and Alice’s stories had the ring of truth about them to me, and seemed consistent with my experiences with Emerson and Kat — albeit I didn’t know either of them that well and I didn’t have any strongly negative experiences with them. 

It seems relevan... (read more)

Thank you! This post says very well a lot of things I had been thinking and feeling in the last year but not able to articulate properly. 

I think it's very right to say that EA is a "do-ocracy", and I want to focus in on that a bit. You talked about whether EA should become more or less centralized, but I think it's also interesting to ask "Should EA be a do-ocracy?"

My response is a resounding yes: this aspect of EA feels (to me) deeply linked to an underrated part of the EA spirit. Namely, that the EA community is a community of people who not only identify problems in the world, but take personal action to remedy them.

  • I love that we have a community where random community members who feel like an idea is neglected feel empowered to just do the research and write it up. 
  • I love that we have a community where even those who do not devote much of their time to action take the very powerful action of giving effectively and significantly. 
  • I love that we have a community where we fund lots of small experimental projects that people just thought should exist. 
  • I love that most of our "big" orgs started with a couple of people in a basement because they thought it was a
... (read more)

I applied to attend the Burner Accounts Anonymous meetup and was rejected.

Initially, I received no feedback. Just a standard auto-generated rejection message.
After reaching out to BurnerMeetupBurner for feedback, I learned that I was rejected because of my IQ. The event is apparently only for high IQ individuals.

I feel very disappointed. Not only because I believe that intelligence is not relevant for making a fruitful contribution to the event, but also because of the lack of transparency in the application process.

This makes me consider leaving the EA burner movement and post under my real name in the future.

Ofer
1y104
33
6

In your recent Cold Takes post you disclosed that your wife owns equity in both OpenAI and Anthropic. (She was appointed to a VP position at OpenAI, as was her sibling, after you joined OpenAI's board of directors[1]). In 2017, under your leadership, OpenPhil decided to generally stop publishing "relationship disclosures". How do you intend to handle conflicts of interest, and transparency about them, going forward?

You wrote here that the first intervention that you'll explore is AI safety standards that will be "enforced via self-regulation at first, and potentially government regulation later". AI companies can easily end up with "self-regulation" that is mostly optimized to appear helpful, in order to avoid regulation by governments. Conflicts of interest can easily influence decisions w.r.t. regulating AI companies (mostly via biases and self-deception, rather than via conscious reasoning).


  1. EDIT: you joined OpenAI's board of directors as part of a deal between OpenPhil and OpenAI that involved recommending a $30M grant to OpenAI. ↩︎

Linch
1y104
27
6

Hi, I think on balance I appreciate this post. This is a hard thing for me to say, as the post has likely caused nontrivial costs to some people rather close to me, and has broken some norms that I view as both subtle and important. But on balance I think our movement will do better with more critical thinkers, and more people with critical pushback when there is apparent divergence between stated memes and revealed goals. 

I think this is better both culturally, and also is directly necessary to combat actual harm if there is also actual large-scale wrongdoing that agreeable people have been acculturated to not point out. I think it will be bad for the composition and future of our movement if we push away young people who are idealistic and disagreeable, which I think is the default outcome if posts like this only receive critical pushback.

So thank you for this post. I hope you stay and continue being critical.

I think this is worth talking about, but I think it's probably a bad idea. I should say up front that I have a pretty strong pro-transparency disposition, and the idea of hiding public things from search engines feels intuitively wrong to me.

I think this has similar problems to the proposal that some posts should be limited to logged-in users, and I see two main downsides:

  • Discussion of community problems on the Forum is generally more informed and even-handed than I see elsewhere. To take the example of FTX, if you look on the broader internet there was lots of uninformed EA bashing. The discussion on the forum was in many places quite negative, but usually those were places where the negativity was deserved. On most EA community issues the discussion on the Forum is something I would generally want to point interested people at, instead of them developing their perspective with only information available elsewhere.

  • I expect people would respond to their words being somewhat less publicly visible by starting to talk more as if they are chatting off the record among friends, and that seems very likely to backfire. The Forum has search functionality, RSS feeds, posts with public

... (read more)
Pablo
1y104
51
2

Many people are tired of being constantly exposed to posts that trigger strong emotional reactions but do not help us make intellectual progress on how to solve the world's most pressing problems. I have personally decided to visit the Forum increasingly less frequently to avoid exposing myself to such posts, and know several other EAs for whom this is also the case. I think you should consider the hypothesis that the phenomenon I'm describing, or something like it, motivated the Forum team's decision, rather than the sinister motive of "attemp[ting] to sweep a serious issue under the rug".

Jonas V
1y104
31
1

EA Forum discourse tracks actual stakes very poorly

Examples:

  1. There have been many posts about EA spending lots of money, but to my knowledge no posts about the failure to hedge crypto exposure against the crypto crash of the last year, or the failure to hedge Meta/Asana stock, or EA’s failure to produce more billion-dollar start-ups. EA spending norms seem responsible for $1m–$30m of 2022 expenses, but failures to preserve/increase EA assets seem responsible for $1b–$30b of 2022 financial losses, a ~1000x difference.
  2. People are demanding transparency about the purchase of Wytham Abbey (£15m), but they’re not discussing whether it was a good idea to invest $580m in Anthropic (HT to someone else for this example). The financial difference is ~30x, the potential impact difference seems much greater still.

Basically I think EA Forum discourse, Karma voting, and the inflation-adjusted overview of top posts completely fails to correctly track the importance of the ideas presented there. Karma seems to be useful to decide which comments to read, but otherwise its use seems fairly limited.

(Here's a related post.)

Thanks for your thoughtful response.

I'm trying to figure out how much of a response to give, and how to balance saying what I believe vs. avoiding any chance to make people feel unwelcome, or inflicting an unpleasant politicized debate on people who don't want to read it. This comment is a bad compromise between all these things and I apologize for it, but:

I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. I think "everyone knows" (in Zvi's sense of the term, where it's such strong conventional wisdom that nobody ever checks if it's true) that the typical response to rape accusations is to challenge and victim-blame survivors. And that although this may be true in some times and places, the typical response in this community is the one which, in fact, actually happened - immediate belief by anyone who didn't know the situation, and a culture of fear preventing those who did know the situation from speaking out. I think it's useful to acknowledge and push back against that culture of fear.

(this is also why I stressed the existence of the amazing Community Safety team - I think "everyone knows" that EA doesn't ... (read more)

One of the biggest lessons I learned from all of this is that while humans are quite good judges of character in general, we do a lot worse in the presence of sufficient charisma, and in those cases we can't trust our guts, even when they're usually right. When I first met SBF, I liked him quite a bit, and I didn't notice any red flags. Even during the first month or two of working with him, I kind of had blinders on and made excuses for things that in retrospect I shouldn't have.

It's hard for me to say about what people should have been able to detect from his public presence, because I haven't watched any of his public interviews. I put a fair amount of effort into making sure that news about him (or FTX) didn't show up in any of my feeds, because when it did I found it pretty triggering.

Personally, I don't think his character flaws are at all a function of EA. To me, his character seems a lot more like what I hear from friends who work in politics about what some people are like in that domain. Given his family is very involved in politics, that connection seems plausible to me. This is very uncharitable, but: from my discussions with him he always seemed a lot more interested in power than in doing good, and I always worried that he just saw doing good as an opportunity to gain power. There's obviously no way for me to have any kind of confidence in that assessment, though, and I don't think people should put hardly any weight on it.

I agree! As a founder, I promise to never engage in fraud, either personally or with my business, even if it seems like doing so would result in large amounts of money (or other benefits) to good things in the world. I also intend to discourage other people who ask my advice from making similar trade-offs.

This should obviously go without saying, and I already was operating this way, but it is worth writing down publicly that I think fraud is of course wrong, and is not in line with how I operate or with the philosophy of EA.

What would have been really interesting is if someone wrote a piece critiquing the EA movement for showing little to no interest in scrutinizing the ethics and morality of Sam Bankman-Fried's wealth. 

To put a fine point on it, has any of his wealth come from taking fees from the many scams, Ponzi schemes, securities fraud, money laundering, drug trafficking, etc. in the crypto markets? FTX has been affiliated with some shady actors (such as Binance), and seems to be buying up more of them (such as BlockFi, known for securities fraud). Why isn't there more curiosity on the part of EA, and more transparency on the part of FTX? Maybe there's a perfectly good explanation (and if so, I'll certainly retract and apologize), but it seems like that explanation ought to be more widely known. 

I could be wrong, but I have a pretty strong sense that nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to. I'd be surprised if {being EA funded} decreased willingness to criticise EA orgs. I even expect the opposite to be true.

I disagree, I know several people who fit this description (5 off the top of my head) who would find this very hard. I think it very much depends on factors like how well networked you are, where you live, how much funding you've received and for how long, and whether you think you could work for an org in the future.

Tangentially related: I would love to see a book of career decision worked examples. Rather than 80k's cases, which often read like biographies or testimonials, these would go deeper on the problem of choosing jobs and activities. They would present a person (real or hypothetical), along with a snapshot of their career plans and questions. Then, once the reader has formulated some thoughts, the book would outline what it would advise, what that might depend on, and what career outcomes occurred in similar cases.

A lot of fields are often taught in a case-based fashion, including medicine, poker, ethics, and law. Often, a reader can make good decisions in problems they encounter by interpolating between cases, even when they would struggle to analyse these problems analytically. Some of my favourite books have a case-based style, such as An Anthropologist on Mars by Oliver Sacks. It's not always the most efficient way to learn, but it's pretty fun.

I wanted to thank you for sharing. I think it can be hard or scary to raise concerns or feedback to a board like this, and I appreciate it.


(I can only speak for EVF US:) 

Since the beginning of all this, we’ve been thinking through board composition questions. In particular, we’ve been discussing what’s needed on the US board and what changes should be made. We’ve also explicitly discussed conflicts of interest and how we should think about that for board composition. 
 

There are a variety of different issues raised in the post and comments, but I want to say something specifically about FTX-related conflicts. In the aftermath of the FTX collapse, EVF UK and EVF US commissioned an outside independent investigation by the law firm Mintz to examine the organizations’ relationship to FTX, Alameda Research, Sam Bankman-Fried, and related individuals. We’re waiting for the results of the investigation to make a determination about whether any board members should be removed for FTX-related reasons. We’re doing this to avoid making rushed decisions with incomplete information. Nick has been recused from all FTX-related decision-making at EVF US. (Nick and Will have also been... (read more)

Hi Ludwig, thanks for raising some of these issues around governance. I work on the research team at Giving What We Can, and I’m responding here specifically to the claims relating to our work. There are a few factual errors in your post, and other areas I’d like to add additional context on. I’ll touch on:

  1. Our recommendations (we do disclose conflicts of interest). 
  2. The Longtermism Fund specifically (payout reports are about to be published).
  3. Our relationship with EVF (we set our own strategy, independently fundraise, and have little to do with most organisations under EVF). 

#1 Recommendations

With respect to our recommendations: They are determined by our inclusion criteria, which we regularly link to (for example, on our recommended charities page and on every charity page). As outlined in our inclusion criteria, we rely on our trusted evaluators to determine our giving recommendations. Longview Philanthropy and EA Funds are two of the five trusted evaluators we relied on this giving season. We explicitly outline our conflicts of interest with both organisations on our trusted evaluators page.

We want to provide the best possible giving recommendations ... (read more)

I’m so sorry to hear about your negative experiences in EA community meetups. It is totally not okay for people to feel pressured or manipulated into sexual relationships. The community health team at CEA is available to talk, and will try to help resolve the situation. You can use this form to contact the team (you can be anonymous) or contact Julia Wise julia.wise@centreforeffectivealtruism.org or Catherine Low catherine@centreforeffectivealtruism.org  directly. 

If a crime has been committed (or you have reason to suspect a crime has been committed), we encourage people to report the crime to the police.

In the future I’d also be happy to talk with community members about the codes of conduct and other processes that CEA and the wider EA community have in place, and listen to their suggestions. 

This post is mostly making claims about what a very, very small group of people in a very, very small community in Berkeley think. When throwing around words like "influential leaders" or saying that the claims "often guide EA decision-making" it is easy to forget that.

The term "background claims" might imply that these are simply facts. But many are not: they are facts about opinions, specifically the opinions of "influential leaders"

Do not take these opinions as fact. Take none for granted. Interrogate them all.

"Influential leaders" are just people. Like you and I, they are biased. Like you and I, they are wrong (in correlated ways!). If we take these ideas as background, and any are wrong, we are destined to all be wrong in the same way.

If you can, don't take ideas on background. Ask that they be on the record, with reasoning and attribution given, and evaluate them for yourself.

Ajeya
2y103
1
0

I'm really sorry that you and so many others have this experience in the EA community. I don't have anything particularly helpful or insightful to say -- the way you're feeling is understandable, and it really sucks :(

I just wanted to say I'm flattered and grateful that you found some inspiration in that intro talk I gave. These days I'm working on pretty esoteric things, and can feel unmoored from the simple and powerful motivations which brought me here in the first place -- it's touching and encouraging to get some evidence that I've had a tangible impact on people.

Linch
2y103
0
0

Reading between the lines, you're a funny writer, are self-aware, were successful enough at work to be promoted multiple times, and have a partner and a supportive family.  This is more than what most people can hope for. At some level I think you should be proud of what you've accomplished, not just what you tried and failed to do. 

Depression really sucks, and it's unfortunate that this is entangled with trying hard to achieve ambitious EA goals and not succeeding. At the same time, I think the EA community would've done right by its members if most of our "failure stories" looked like yours, albeit I'd prefer perhaps more community support in the longer term.

I think it's important to frame longtermism as particular subset of EA. We should be EAs first and longtermists second. EA says to follow the importance-tractability-crowdedness framework, and allocate funding to the most effective causes. This can mean funding longtermist interventions, if they are the most cost-effective. If longtermist interventions get a lot of funding and hit diminishing returns, then they won't be the most cost-effective anymore. The ITC framework is more general than the longtermist framing of "focus on the long-term future", and allows us to pivot as funding and tractability changes.

I want to flag for Forum readers that I am aware of this post and the associated issues about FTX, EV/CEA, and EA. I have also reached out to Becca directly. 

I started in my new role as CEA’s CEO about six weeks ago, and as of the start of this week I’m taking a pre-planned six-week break after a year sprinting in my role as EV US’s CEO[1]. These unusual circumstances mean our plans and timelines are a work in progress (although CEA’s work continues and I continue to be involved in a reduced capacity).

Serious engagement with and communication about questions and concerns related to these issues is (and was already) something I want to prioritize, but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly. I appreciate people may want more specific details, but I felt that I’d at least respond to let people know I’ve acknowledged the concerns rather than not responding at all in the short-term.

  1. ^

     It’s unusual to take significant time off like this immediately after starting a new role, but this is functionally a substitute for me not taking an extended break bet

... (read more)
JWS
4mo102
7
0
26

Many people find the Forum anxiety inducing because of the high amount of criticism. So, in the spirit of Giving Season, I'm going to give some positive feedback and shout-outs for the Forum in 2023 (from my PoV). So, without further ado, I present the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-2023-Forum-Awards: 🏆✨🎄[1]
 

Best Forum Post I read this year:

10 years of Earning to Give by @AGB: A clear, grounded, and moving look at what it actually means to 'Earn to Give'. In particular, the 'Why engage?' section really resonated with me.

Honourable Mentions:

Best ... (read more)

The Belgian senate votes to add animal welfare to the constitution.

It's been a journey. I work for GAIA, a Belgian animal advocacy group that for years has tried to get animal welfare added to the constitution. Today we were present as a supermajority of the senate came out in favor of our proposed constitutional amendment. The relevant section reads:

In exercising their respective powers, the Federal State, the Communities and the Regions strive to protect and care for animals as sentient beings.

It's a very good day for Belgian animals but I do want to note that:

  1. This does not mean an effective shutdown of the meat industry, merely that all future pro-animal welfare laws and lawsuits will have an easier time. And,
  2. It still needs to pass the Chamber of Representatives.

If there's interest I will make a full post about it once it passes the Chamber.

EDIT: Translated the linked article on our site into English.

Pandora
1y102
38
7

context: I'm relatively new to EA, mid 20s, and a polyamorous woman. Commenting anonymously because I am not yet totally "out" as polyamorous to everyone in my life.

I feel that this post risks conflating and/or unfairly associating polyamory with poor handling of power dynamics and personal/professional boundaries. Such issues can arise within any relationship structure. Sexual misconduct exists throughout our society, and throughout both monogamous and non-monogamous spaces. 

I've experienced a range of sexual misconduct prior to my involvement in EA, and so far have found my dating and professional interactions with men in EA to be high quality, relative to high personal standards. In particular, the openness to and active solicitation of feedback I've experienced is something I've never really experienced outside of polyamory within EA. Since I learned about EA thanks to polyamory (not the other way around), I think I have a pretty different experience than that shared by women in the Time article. Their experience is not a representation of what polyamory done well actually looks like.

Additionally, the Time article fosters skepticism about restorative justice approaches to ... (read more)

I'm worried and skeptical about negative views toward the community health team and Julia Wise.

My view is informed by the absence of clear objective mistakes described by anyone. It also seems very easy and rewarding to criticize them[1].

I'm increasingly concerned about the dynamic over the last few months where CEA and the Community Health team constantly acts as a lightning rod for problems they have little control over. This dynamic has always existed, but it has become more severe post-SBF. 

This seems dysfunctional and costly to good talent at CEA. It is an even deeper issue because these seem to be one of the few people trying to take ownership and help EA publicly right now. 

I'm not sure what happens if Julia Wise and co. stop. 

  1. ^

    The Guzey incident is one example where a detractor seems excessive toward Wise. I share Will Bradshaw's view that this is both minor and harmless, although I respect and would be interested in Nuno's dissenting view. 

    (Alexey Guzey sent Julia Wise a book chapter, which he planned to release publicly, that was critical of MacAskill's content in DGB. Wise sent the chapter to MacAskill, which Guzey asked her not to do. It's unclea

... (read more)

Several nitpicks:

  • "2022 was a year of continued growth for CEA and our programs." - A bit of a misleading way to summarise CEA's year?
  • "maintaining high retention and morale" - to me there did seem to be a dip in morale at the office recently
  • "[EA Forum] grew by around 2.9x this year." - yes, although a bit of this was due to the FTX catastrophe
  • "Overall, we think that the quality of posts and discussion is roughly flat over the year, but it’s hard to judge." - this year, a handful of people told me they felt the quality had decreased, which didn't happen in previous years, and I noticed this too.
  • "Recently the community took a significant hit from the collapse of FTX and the suspected illegal and/or immoral behaviour of FTX executives." - this is a very understated way to note that a former board member of CEA committed one of the largest financial frauds of all time.

I realise there are legal and other constraints, so maybe I am being harsh, but overall, several components of this post seemed not very "real" or straightforward relative to what I would usually expect from this sort of EA org update.

I think some of this post's criticisms have bite: for example, I agree that EVF suborgs are at significant risk of falling prey to conflicts of interest, especially given the relatively low level of transparency at many of these suborgs, and that EVF should have explicit mechanisms for avoiding this.

However, I think this post largely fails to engage with the reasons so many suborgs have federated with EVF. Based on my experience[1], members of many of these suborgs genuinely consider themselves separate orgs, and form part of EVF mainly because this allows them to be supported by EVF's very well-oiled ops machine. This makes it significantly easier for new EA projects to spin up quickly, while offering high-quality HR and other support to their employees. This is a pretty exciting proposal for any new EA project that doesn't place a high value on being legally independent.

"Breaking up" EVF could thus be very costly from an impact perspective, insofar as it makes the component orgs less effective (which seems likely to me) and necessitates lots of duplication of ops effort. You might argue that it's worth it for the transparency benefits, but I'd want to see serious engagement with ... (read more)

Akhil

Given the uncertainty in the chronology of events and the nature of how authorship and review occurred, would it not have made sense to reach out to Cremer and Kemp before posting this? It would make any commentary much less speculative and heated. If the OP has done this and not received a reply, they should make that clear (but my understanding is that this was not done, which imo is a significant oversight).

It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:

  1. Riches and technology make us comprehensively immune to  natural disasters.
  2. Cheap ubiquitous detection, barriers, and sterilization make civilization immune to biothreats
  3. Advanced tech makes neutral parties immune to the effects of nuclear winter.
  4. Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.
  5. Space colonization creates robustness against local disruption.
  6. Aligned AI blocks threats from misaligned AI (and many other things).
  7. Advanced technology enables stable policies (e.g. the same AI police systems enforce treaties banning WMD war for billions of years), and the world is likely to wind up in some stable situation (bouncing around until it does).

If we're more than 50% likely to get to that kind of robust state, which I think is true, and I believe Toby  does as well, then the life expectancy of civilization is very long, almost as long on a log scale as with 100%.

Your argument depends on  99%+++ credence that such safe stable states won't be attained, wh... (read more)

I had a pretty painful experience. I was in a promising position in my career, already quite involved in EA, and seeking direct work opportunities as a software developer and entrepreneur. I was rejected from EAG twice in a row while my partner, a newbie who just wanted to attend for fun (which I support!!!), was admitted both times. I definitely felt resentful and jealous in ways that I would say I coped with successfully, but wow, did it feel like the whole thing was lame and unnecessary.

I felt rejected from EA at large and yeah I do think my life plans have adjusted in response. I know there were many such cases! In the height of my involvement I was a very devoted EA, really believed in giving as much as I could bear (time etc included). 

That level of devotion, juxtaposed with being turned away from even hanging out with people, was quite a shock. I think the high-devotion version of my life would be quite fulfilling and beautiful, and I got into EA seeking a community for that, but never found it. EAG admissions is a pretty central example of this mismatch to me.

Retrospective grant evaluations

Research That Can Help Us Improve

EA funders allocate over a hundred million dollars per year to longtermist causes, but a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund efforts to conduct retrospective evaluations to examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker's track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.

I’m sorry I didn’t handle this better in the first place. My original comments are here, but to reiterate some of the mistakes I think I made in handling the concerns about Owen:

  • I wish I had asked the various women for permission to get a second opinion from a colleague or to hand the case over to a colleague. 
  • In the case where Owen told me he believed he’d made someone uncomfortable, I wish I had reached out to the woman to get her side of the story (if she was willing to share that). This would have given me a clearer picture of some of his actions that I didn’t know about until after the investigation.
  • I wish I had been clearer to Owen about specific changes he should make.
  • I wish I had flagged my concerns earlier and more clearly to people at CEA and EV. Two of the people I told about some of my concerns were on the boards of EV US or EV UK (then called CEA US and CEA UK), but I didn’t properly think through Owen’s role on the board or flag that to them.

Some things that are different now, related to the changes that Chana describes:

  • The community health team has spent months going through lessons learned both from this situation and from other cases we’ve handled. B
... (read more)

My overall impression is that the CEA community health team (CHT from now on) are well intentioned but sometimes understaffed and at other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or hold other conflict-of-interest positions, 2) hiring HR and mental health specialists with credentials, and 3) publicly clarifying their role and mandate.

My impression is that the most valuable function that the CHT provides is as support of community building teams across the world, from advising community builders to preventing problematic community builders from receiving support. If this is the case, I think it would be best to rebrand the CHT as a CEA HR department, and for CEA to properly hire the community builders who are now supported as grantees, which one could argue is an employee misclassification.

I would not be comfortable discussing these issues openly out of concern for the people affected, but here are some horror stories:

  1. A CHT staff member pressured a c
... (read more)

Catherine from CEA’s Community Health and Special Projects Team here.  I have a different perspective on the situation than Jaime does and appreciate that he noted that “these stories have a lot of nuance to them and are in each case the result of the CHT making what they thought were the best decisions they could make with the tools they had.” 

I believe Jaime’s points 1, 2 and 3 refer to the same conflict between two people. In that situation, I have deep empathy for the several people that have suffered during the conflict. It was (and still is) a complex and very upsetting situation.

Typically CEA’s Groups team is the team at CEA that interfaces most closely with EA groups. The conflict mentioned here was an unusual situation which led the Community Health team to have more contact with that group than usual. From the information we gathered after talking to several individuals affected, this was an interpersonal conflict. We made a judgement call about what was best given the information, which Jaime disagrees with. To be clear, based on the information we had, there were no threats of violence, sexual harassment, or other forms of seriously harmful behavior that ... (read more)

Personal feelings (which I don't imply are true or actionable)

I am annoyed and sad.

I want to feel like I can trust the leaders of this community are playing by a set of agreed rules. Eg I want to hear from them. And half of me trusts them and half feels I should take an outside view that leaders often seek to protect their own power. The disagreement between these parts causes hurt and frustration.

I also variously feel hurt, sad, afraid, compromised, betrayed.

I feel ugly that I talk so much about my feelings too. It feels kind of obscene.

I feel sad about saying negative things, especially about Will. I sense he's worked really hard. I feel ungrateful and snide. Yuck.

Object level

This article moves me a bit on a number of important things:

  • We have some more colour around the specific warnings that were given
  • It becomes much more likely that MacAskill backed Bankman-Fried in the aftermath of the early Alameda disagreements, which was ex ante dubious and ex post disastrous. The comment about threatening Mac Aulay is very concerning.
  • I update a bit that Sam used this support as cover
  • I sense that people ought to take the accusations of inappropri
... (read more)

this was not above normal levels for the CEO of a rapidly growing business

It was, and we explicitly said that it was at the time. Many of those of us who left have a ton of experience in startups, and the persistent idea that this was a typical “founder squabble” is wrong, and to be honest, getting really tiresome to hear. This was not a normal startup, and these were not normal startup problems.

(Appreciate the words of support for my honesty, thank you!)

I still don't understand why they can't give a clear promise of when they will talk, and the lack of this makes me trust them less.

fwiw I will probably post something in the next ~week (though I'm not sure if I'm one of the people you are waiting to hear from). 

Jonas V

I would still like an argument that they shouldn't be removed from boards, when almost any other org would. I would like the argument made and seen to be made. 

 

Here's my tentative take:

  • It's really hard to find competent board members that meet the relevant criteria
  • Nick (together with Owen) did a pretty good job turning CEA from a highly dysfunctional organization into a functional one during CEA's leadership change in 2018/2019.
  • Similarly, while Nick took SBF's money, he didn't give SBF a strong platform or otherwise promote him a lot, and instead tried to independently do a good (not perfect, but good enough!) job running a philanthropic organization. While SBF may have wanted to use the philanthropy to promote the FTX/SBF brand, Nick didn't do this. [Edit: This should not be read as me implying that Will did those things. While I think Will made some mistakes, I don't think this describes them.]
  • Continuity is useful. Nick has seen lots of crises and presumably learnt from them.

So, while Will should be removed, Nick has demonstrated competence and should stay on. 

(Meta note: I feel frustrated about the lack of distinction between Nick and Will on this questi... (read more)

Thanks for making the case. I'm not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I'm highly confident is false.

Namely - it isn't hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I've recruited ~20 very good Board members in my career and have never run an open process that didn't find at least some qualified, diligent people, who did a good job.

EA makes it hard because it's weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising 'alignment' (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).

It also sometimes sounds like EA orgs think their Boards have higher entry requirements... (read more)

Julia - thanks for a helpful update.

As someone who's dealt with journalists & interviews for over 25 years, I would just add: if you do talk to any journalists for any reason, be very clear up front about (1) whether the interview is 'on the record', 'off the record', 'background', or 'deep background', (2) ask for 'quote approval', i.e. you as the interviewee having final approval over any quotes attributed to you, (3) possibly ask for overall pre-publication approval of the whole piece, so its contents, tone, and approach are aligned with yours. (Most journalists will refuse 2 and 3, which reminds you they are not your friends or allies; they are seeking to produce content that will attract clicks, eyeballs, and advertisers.)

Also, record the interview on your end, using recording software, so you can later prove (if necessary, in court), that you were quoted accurately or inaccurately.

If you're not willing to take all these steps to protect yourself, your organization, and your movement, DO NOT DO THE INTERVIEW.

This piece is a useful resource about these terms and concepts.

Yes - I almost can't believe I am reading a senior EA figure suggesting that every major financial institution has an unreasonably prurient interest in the sex lives of their risk-holding employees. EA has just taken a bath because it was worse at financial risk assessment than it thought it was. The response here seems to be to double-down on the view that a sufficiently intelligent rationalist can derive - from first principles - better risk management than the lessons embedded in professional organisations. We have ample evidence that this approach did not work in the case of FTX funding, and that real people are really suffering because EA leaders made the wrong call here.

Now is the time to eat a big plate of epistemically humble crow, and accept that this approach failed horribly. Conspiracy theorising about 'voting rings' is a pretty terrible look.

I generally directionally agree with Eli Nathan and Habryka's responses. I also weak-downvoted this post (though felt borderline about that), for two reasons. 

(1) I would have preferred a post that tried harder to even-handedly discuss and weigh up upsides and downsides, whereas this mostly highlighted upsides of expansion, and (2) I think it's generally easier to publicly call for increased inclusivity than to publicly defend greater selectivity (the former will generally structurally have more advocates and defenders). In that context I feel worse about (1) and wish Scott had handled that asymmetry better.

But I wouldn't have downvoted if this had been written by someone new to the community; I hold Scott to a higher standard, and I'm pretty uncertain about the right policy with respect to voting differently in response to the same content on that basis.

I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.

However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'

In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.

But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views ... (read more)

Buck

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing but that’s probably too complicated).
  • If I don’t want to give them the money, they can do whatever with the review.

What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:

  • Things directly related to traditional EA topics
  • Things about the world more generally. Eg macrohistory, how do governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
  • I think that b
... (read more)

I am ridiculously late to the party, and I must confess that I have not read the entire article.

My comment is about what I would expect to happen if EA decided to shift towards encouraging pro-growth policies. What I have to say is perhaps a refining of objection 5.4, politicization. It is how I perceive this would be instantiated. My perceptions are informed by being from a middle-income country (Brazil) and living in another (Chile), while having lived in the developed world (America) long enough to know what it's like.

The authors correctly acknowledge that this has a "politicized nature". For the time being, the only way to enact pro-growth policies would be to influence those who hold political power in the target countries.

My concern about this is: people in such countries do not want these policies. They show that by how they think, how they act, how they vote, how they protest. Here in Chile, for example, people have been fighting tooth and nail against the policies that made the country the wealthiest, most educated one in South America, the only OECD member in the subcontinent. The content of the protests is explicitly against the pro-market policies that have prevailed... (read more)

@EV US Board @EV UK Board could you include Owen's response document somewhere in the post? It contains a lot of important information and it's getting lost in the comments. 

Here are some excerpts from Sequoia Capital's profile on SBF (published September 2022, now pulled). 

On career choice: 

Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill. 

... 

It was his fellow [fraternity members] who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. 

At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life

... (read more)

As a moderator, I think the phrase "seems otherwise unintelligent" is clearly not generous or collaborative and breaks  Forum norms. This is a warning, please don't insult other users.

As a somewhat separate point: fwiw, I'm a woman and I've not experienced this general toxicity in EA myself. Obviously I am not challenging your experience - there are lots of EA sub-communities and it makes sense that some could be awful, others fine. But it's worth adding this nuance, I think (e.g., from what I've heard, Bay Area EA circles are particularly incestuous wrt work/life overlap stuff).

Lizka (Moderator Comment)

The discussion on this post is getting heated, so we'd like to remind everyone of the Forum norms. Chiefly: 

  • Be kind.
  • Stay on topic.
  • Be honest.

If you don’t think you can respect these norms consistently in the comments of this post, consider not contributing, and moving on to another post. 

We’ll investigate the issues that are brought up to the best of our ability. We’d like to remind readers that a lot of this is speculation.

Linch

I'm going to be boring/annoying here and say some things that I think are fairly likely to be correct but may be undersaid in the other comments:

  • EAs on average are noticeably smarter than most of the general population
  • Intelligence is an important component for doing good in the world.
  • The EA community is also set up in a way that amplifies this, relative to much of how the rest of the world operates.
  • Most people on average are reasonably well-calibrated about how smart they are.
    • (To be clear exceptions certainly exist) EDIT: This is false, see Max Daniel's comment.
  • If you're less smart than average for EAs (or less driven, or less altruistic, or less hardworking, or have less of a social safety net), then on average I'd expect you to be less good at having a positive impact than others.
  • But this is in relative terms, in absolute terms I think it's certainly possible to have a large impact still.
  • Our community is not (currently) set up  well to accommodate the contributions of many people who don't check certain boxes, so I expect there to be more of an uphill battle for many such people.
    • I don't think this should dissuade you from the project of (effectively) doing good, but I understand and empathize if this makes you frustrated.

This is an excellent post, one slightly subtle point about the political dynamics that I think it misses is the circumstances around BoldPAC's investment in Salinas. 

BoldPAC is the super PAC for Hispanic House Democrats. It happens to be the case that in the 2022 election cycle there is a Hispanic state legislator (Andrea Salinas) living in a blue-leaning open US House of Representatives seat. It also happens to be the case that given the ups and downs of the political cycle, this is the only viable opportunity to add a Hispanic Democrat to the caucus this year. So just as it's basically happenstance that the EA community got involved in the Oregon 6th as opposed to some other district, it's also happenstance that BoldPAC was deeply invested in this race. It's not a heavily Hispanic area or anything, Salinas just happens to be Latina.

If it was an Anglo state legislator holding down the seat, the "flood the zone with unanswered money" strategy might have worked. And if there were four other promising Hispanic prospects in the 2022 cycle, it also might have worked because BoldPAC might have been persuaded that it wasn't worth going toe-to-toe with Protect Our Future. N... (read more)

Grayden

I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I have yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I'm worried we are not being objective enough, and that we are trying to find the facts that support the conclusion rather than the other way round.

Thank you for sharing. In particular, I find your mention of shame vs edginess interesting. But I expect that at least one person reading your story will think "Uh, sounds like you need more shame, dude, not less", so I'd like to share a perspective for any such readers:

If I understand Owen anyway, I'll say that I relate in that I also have had some brazen periods of life, prompted by a sort of cultural rebirth and sex-positive idealism. An outsider might have labelled these brazen periods as a swinging of the pendulum in response to my strict religious upbringing, but that isn't quite right. It's hard to notice how it is related to shame, but in my case:

For a very shame-prone or shame-trained person, it can be very difficult to parse out "What is the actual harm here? What are the actual bad acts and why, when I know that most of these things I'm programmed to feel shame about simply are not wrong or shame-worthy?" This can lead to a sort of idealistically-motivated throwing out of all feelings that look like shame. Anxiety, hesitance, guilt, and self-criticality are examples of possibly-adaptive-feelings that can be mistakenly thrown out here. This, I think, can lead to soci... (read more)

Hey Aella, I appreciate you telling your story. I’m really sorry that you’ve experienced people lying about you, and making harmful assumptions about your intent. That really really sucks.


I’ve put more information about most (not all) of the Community Health team’s understanding of the TIME cases in this comment:
https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=jKJ4kLq8e6RZtTe2P 
It might clarify some of your questions about individual cases. 

Great post.  I strongly agree with the core point.

Regarding the last section: it'd be an interesting experiment to add a "democratic" community-controlled fund to supplement the existing options.  But I wouldn't want to lose the existing EA funds, with their vetted expert grantmakers.  I personally trust (and agree with) the "core EAs" more than the "concerned EAs", and would be less inclined to donate to a fund where the latter group had more influence.  But by all means, let a thousand flowers bloom -- folks could then direct their donations to the fund that's managed as they think best.

[ETA: Just saw that Jason has already made a similar point.]

What's stunning to me is the following:

There may not have been extended discussions, but there was at least one more recent warning. “E.A. leadership” is a nebulous term, but there is a small annual invitation-only gathering of senior figures, and they have conducted detailed conversations about potential public-relations liabilities in a private Slack group.

Leaking private Slack conversations to journalists is a 101 on how to destroy trust. The response to SBF and FTX betrayal shouldn't be to further erode trust within the community.

EA should not have to learn every single group dynamic from first principles - the community might not survive such a thorough testing and re-learning of all social rules around discretion, trust, and why it's important to have private channels of communication that you can assume will not be leaked to journalists.

If the community ignores trust, networks and support for one another - then the community will not form, ideas will not be exchanged in earnest and everyone will be looking over their shoulder for who may leak or betray their confidence.

Destroying trust decimates communities - we've all found that with SBF. The response to that shouldn't be fur... (read more)

A lot of liar’s paradox issues with this interview.

The earning to give company I started got acquired.

The Michael Nielsen critique seems thoughtful, constructive, and well-balanced on first read, but I have some serious reservations about the underlying ethos and its implications.

Look, any compelling new world-view that is outside the mainstream cultures' Overton window can be pathologized as an information hazard that makes its believers feel unhappy, inadequate, and even mentally ill by mainstream standards. Nielsen seems to view 'strong EA' as that kind of information hazard, and critiques it as such.

Trouble is, if you understand that most normies are delusional about some important issue, and you develop some genuinely deeper insights into that issue, the psychologically predictable result is some degree of alienation and frustration. This is true for everyone who has a religious conversion experience. It's true for everyone who really takes on board the implications of any intellectually compelling science -- whether cosmology, evolutionary biology, neuroscience, signaling theory, game theory, behavior genetics, etc. It's true for everyone who learns about any branch of moral philosophy and takes it seriously as a guide to action.

I've seen this over, and over, an... (read more)

I love this, haha.

But, as with many things, J.S. Mill did this meme first!!! 

In the Houses of Parliament on April 17th, 1866, he gave a speech arguing that we should keep coal in the ground (!!). As part of that speech, he said:
 

I beg permission to press upon the House the duty of taking these things into serious consideration, in the name of that dutiful concern for posterity [...] There are many persons in the world, and there may possibly be some in this House, though I should be sorry to think so, who are not unwilling to ask themselves, in the words of the old jest, "Why should we sacrifice anything for posterity; what has posterity done for us?"

They think that posterity has done nothing for them: but that is a great mistake. Whatever has been done for mankind by the idea of posterity; whatever has been done for mankind by philanthropic concern for posterity, by a conscientious sense of duty to posterity [...] all this we owe to posterity, and all this it is our duty to the best of our limited ability to repay."

all great deeds [and] all [of] culture itself [...] all this is ours because those who preceded us have cared, and have taken thought, for posterity [...] Not

... (read more)

I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I  don't consider Robin Hanson an "intellectual ally" of the EA movement; I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to exist according to simulation theory so we should be less inclined to help them. On top of that he made some pretty uncharitable statements about EA Munich and CEA after this affair. And some of his pursuits suggest that he doesn't care if he turns himself into a super controversial figure who brings negative attention towards EA by association. These things can be understandable on their own, you can rationalize each one, but when you put it all together it paints a picture of someone who basically doesn't care about EA at all. It just happens to be the case that he was big in the rationalist blogosphere and lots of EAs (includi... (read more)

EV US has made a court motion to settle with the FTX estate for 100% of the funds received in 2022 for a total of $22.5M. See this public docket for the details:  https://restructuring.ra.kroll.com/FTX/Home-DocketInfo (Memo number 3745). 

My guess is Open Phil is covering this amount. Seems very relevant to anyone who is exposed to FTX clawback risk, or wants to understand what is going on with FTX things.

Thank you so much for writing so clearly and compellingly about what happened to you and the subculture which encourages treating women like this.

There is no place for such a subculture in EA (or anywhere else).

Consider hiring an outside firm to do an independent review.

forced to watch money get redirected from the Global South to AI researchers.

I don't think this is a healthy way of framing disagreements about cause prioritization.  Imagine if a fan of GiveDirectly started complaining about GiveWell's top charities for "redirecting money from the wallets of world's poorest villagers..."  Sounds almost like theft!  Except, of course, that the "default" implicitly attributed here is purely rhetorical.  No cause has any prior claim to the funds. The only question is where best to send them, and this should be determined in a cause neutral way, not picking out any one cause as the privileged "default" that is somehow robbed of its due by any or all competing candidates that receive funding.

Of course, you're free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of "redirected" funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances.  So I'd encourage folks to try to avoid it.

Thanks for the detailed response. 

I agree that we don't want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn't be 'democratic' in any meaningful sense. 

  1. I don't have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word 'existential risk' doesn't change that fact. 
  2. Since you don't want diversity essentially along all dimensions, what sort of diversity would you like? You don't want Trump supporters; do you want more Marxists? You apparently don't want more right win
... (read more)

Thanks for all of the hard work on this, Howie (and presumably many others), over the last few months and (presumably) in the coming months.

This is from a couple of months ago: in large part due to the advocacy of New York kidney donors in the EA community, this bill, which will reimburse kidney donors and may save around 100 lives a year, passed the NY state assembly. It still needs to be signed into law by the governor, but it's very likely to be, and EAs are already on the ball lobbying for it to pass!

Buck

(Writing quickly, sorry if I'm unclear)

Since you asked, here are my agreements and disagreements, mostly presented without argument:

  • As someone who is roughly in the target audience (I am involved in hiring for senior ops roles, though it's someone else's core responsibility), I think I disagree with much of this post (eg I think this isn't as big a problem as you think, and the arguments around hiring from outside EA are weak), but in my experience it's somewhat costly and quite low value to publicly disagree with posts like this, so I didn't write anything.
    • It's costly because people get annoyed at me.
    • It's low value because inasmuch as I think your advice is bad, I don't really need to persuade you you're wrong, I just need to persuade the people who this article is aimed at that you're wrong. It's generally much easier to persuade third parties than people who already have a strong opinion. And I don't think that it's that useful for the counterarguments to be provided publicly.
      • And if someone was running an org and strongly agreed with you, I'd probably shrug and say "to each their own" rather than trying that hard to talk them out of it: if a leader really feels passionate about sh
... (read more)

Thanks for writing this, Will. I appreciate the honesty and ambition. Thank you for all you do and I hope you have people around you who love and support you.

I like the framing of judicious ambition. My key question around this and the related longtermism discussion is something like, What is the EA community for?

  • A democratic funding body?
  • A talent pool?
  • Community support?
  • Error checkers?

Are we the democratic body that makes funding decisions? No, and I don't want us to be. Doing the most good likely involves decisions that the median EA will disagree with. I would like to trial forecasting funding outcomes and voting systems, but I don't assume that EA should be democratic. The question is what actually does the most good.

Are we a body of talented professionals who work on lower wages than they otherwise would? Yes, but I think we are more than that. Fundamentally it's our work that is undervalued, rather than us. Animals, the global poor and future generations cannot pay to save their own lives, so we won't be properly remunerated, except by the joy we take from doing it.

Are we community support for one another? Yes, and I think in regard to this dramatic shift in EA's fortunes that... (read more)

I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.

This sounds very right to me. 

Another way of putting this argument is that "global priorities (GP)"  community is both more likable and more appropriate  than "effective altruism (EA)" community. More likable because it's less self-congratulatory, arrogant, identity-oriented, and ideologically intense. 

More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I'd also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: "how ought one to decide what to work on?", or "what are the big probl... (read more)

You can now import posts directly from Google docs

Plus, internal links to headers[1] will now be mapped over correctly. To import a doc, make sure it is public or shared with "eaforum.posts@gmail.com"[2], then use the widget on the new/edit post page:

Importing a doc will create a new (permanently saved) version of the post, but will not publish it, so it's safe to import updates into posts that are already published. You will need to click the "Publish Changes" button to update the live post.

Everything that previously worked on copy-paste[3] will also work when importing, with the addition of internal links to headers (which only work when importing).

There are still a few things that are known not to work:

  • Nested bullet points (these are working now)
  • Cropped images get uncropped
  • Bullet points in footnotes (these will become separate un-bulleted lines)
  • Blockquotes (there isn't a direct analog of this in Google docs unfortunately)

There might be other issues that we don't know about. Please report any bugs or give any other feedback by replying to this quick take, you can also contact us in the usual ways.

Appendix: Version history

There are some minor improvements to the ver... (read more)

I’m very sorry that you had such a bad experience here. Whilst I would disagree with some of the details here I do think that our communication was worse than I would have liked and I am very sorry for any hardship that you experienced. It sounds like a stressful process which could have been made much better if we had communicated more often and more quickly.

In my last email (March 4th), I said that we were exploring making this grant, but it’s legally challenging. Grants for mental health support are complicated, in general, as we have to show that there is a pure public benefit. We have an open thread with our legal counsel, and I’m cautiously optimistic about getting a decision on this relatively soon.

In general, I don’t think I made promises or hard commitments to get back in a certain time frame; instead, I said that we aim to get back by a certain time. I believe I am at fault for not making this distinction appropriately clear, and I am upset that this mismatch of expectations resulted in hardship.

I'd also like to quickly clarify that many of the errors here were mine (as opposed to the wider EA Funds team). I should have been more realistic about the time frame for a grant of this nature.

Thanks for this update! Two questions…

  1. When all the sponsored projects have been spun out, will EV continue to exist? If so, what will it do?
  2. “I plan to share other non-privileged information on lessons learned in the aftermath of FTX and encourage others to share their reflections as well.” Do you have an estimated timeline for this? 

I can see where Ollie's coming from, frankly. You keep referring to these hundreds of pages of evidence, but it seems very likely you would have been better off just posting a few screenshots of the text messages that contradict some of the most egregious claims months ago. The hypothesising about "what went wrong", the photos, the retaliation section, the guilt-tripping about focusing on this, etc. - these all undermine the discussion about the actual facts by (1) diluting the relevant evidence and (2) making this entire post bizarre and unsettling.

For the most part, an initial reading of this post and the linked documents did have the intended effect on me of making me view many of the original claims as likely false or significantly exaggerated. With that said, my suggestion would have been to remove some sorts of stuff from the post and keep it only in the linked documents or follow-up posts. In particular, I'd say:

  • The photos provide a bit of information, but can be viewed as distracting and misleading. I think the value of information they provide is probably sufficient for their inclusion in a linked Google Doc, but including them twice in the post (and once near the top) gives them a lot of salience, and as some of the comments here show, this can cause some readers to switch off or view your post with hostility.
  • Some of the alternative hypothesis stuff, and the stuff related to claims about Ben Pace, may also have been better suited to a linked Google Doc -- something that curious readers could dig into, but that was not given a lot of salience for somebody who was just interested in the core claims. I think there's some value to these exercises, but it would muddy the waters less if this were less salient, so that r
... (read more)

I quit trying to have direct impact and took a zero-impact tech job instead.

I expected to have a hard time with this transition, but I found a really good fit position and I'm having a lot of fun.

I'm not sure yet where to donate extra money. Probably MIRI/LTFF/OpenPhil/RethinkPriorities.

I also find myself considering using money to try fixing things in Israel. Or maybe to run away first and take care of the things and people that are close to me. I admit, focusing on taking care of myself for a month was (is) nice, and I do feel like I can make a difference with E2G.

(AMA)

I think this article paints a fairly misleading picture, in a way that's difficult for me to not construe as deliberate. 

It doesn't provide dates for most of the incidents it describes, even though many of them happened many years ago, and thereby seems to imply that all the bad stuff brought up is ongoing. To my knowledge, no MIRI researcher has had a psychotic break in ~a decade. Brent Dill is banned from entering the group house I live in. I was told by a friend that Michael Vassar (the person who followed Sonia Joseph home and slept on her floor despite it making her uncomfortable, also an alleged perpetrator of sexual assault) is barred from Slate Star Codex meetups.

The article strongly reads to me as if it's saying that these things aren't the case, that the various transgressors didn't face any repercussions and remained esteemed members of the community.

 Obviously it's bad that people were assaulted, harassed, and abused at all, regardless of how long ago it happened. It's probably good for people to know that these things happened. But the article seems to assume that all these things are still happening, and it seems to be drawing conclusions on ... (read more)

The annual report suggests there are 45 to 65 statutory inquiries a year, link below (on mobile / lunch break, sorry!). So maybe half or slightly fewer seem to end up as public reports.

https://www.gov.uk/government/publications/charity-commission-annual-report-and-accounts-2021-to-2022/charity-commission-annual-report-and-accounts-2021-to-2022

I skimmed the oldest ten very quickly and it looks like four subjects were wound up / dissolved, and four more had trustee-related actions like appointment of new trustees by a Commission-appointed Interim Manager, disqualification from being a trustee, etc. One organization had some poor governance not rising to misconduct/misadministration (but some trustees resigned), one had Official Warnings issued to trustees, one got an action plan.

Pending more careful and complete review, most inquiries that result in public reports do seem to find substantial mismanagement and result in significant regulatory action.

In addition to having a lot more on the line, other reasons to expect better of ourselves:

  • EA had (at least potential) access to a lot of information that investors may not have, in particular about Alameda's early exodus in 2018.
  • EA had much more time to investigate and vet SBF—there's typically a very large premium for investors to move fast during fundraising, to minimize distraction for the CEO/team.

Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be "dumb money;" IIRC they shook hands on huge investments in Uber and WeWork on the basis of a single meeting, and their flagship Vision Fund lost 8% (~$8b) this past quarter alone. I don't know about OTPP but I imagine they could be similarly diligence-light given their relatively short history as a venture investor. Sequoia is less famously dumb than those two, but still may not have done much vetting if FTX was perceived to be a "hot" deal with lots of time pressure.

Thanks, I think this post is thoughtfully written. I think that arguments for lower salary sometimes are quite moralising/moral purity-based; as opposed to focused on impact. By contrast, you give clear and detached impact-based arguments.

I don't quite agree with the analysis, however. 

You seem to equate "value-alignment" with "willingness to work for a lower salary". And you argue that it's important to have value-aligned staff, since they will make better decisions in a range of situations:

  • A researcher will often decide which research questions to prioritise and tackle. A value-aligned one might seek to tackle questions around which interventions are the most impactful, whereas a less value-aligned researcher might choose to prioritise questions which are the most intellectually stimulating.
  • An operations manager might make decisions regarding hiring within organisations. Therefore, a less value-aligned operations manager might attract similarly less value-aligned candidates, leading to a gradual worsening in altruistic alignment over time. It’s a common bias to hire people who are like you which could lead to serious consequences over time e.g. a gradual erosion of altruisti
... (read more)

It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and think we'd be in a much better epistemic place if this had been at the forefront of efforts to professionalise community building 5+ years ago.

By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference in difference" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EA's 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/part time salaried group organiser? Possibly some of this already exists either privately or publicly and the relevant people know where to look (I haven't looked hard, sorry!). E.g. I remember GWWC putting together a fu... (read more)

Would really appreciate links to Twitter threads or any other publicly available versions of these conversations. Appreciate you reporting what you’ve seen but I haven’t heard any of these conversations myself.

Thanks for posting this update. I prefer to have it out rather than pending, and I think it’s appropriate that people will get a sense of approximately the scope of what happened. I deeply regret my actions, which were wrong and harmful; I think it’s a fair standard to expect me to have known better; I will of course abide by the restrictions you’re imposing.

I spent a lot of last year working on these issues, and I put up an update in December; that’s still the best place to understand my perspective on things going forwards.

I think that the first-order impression given by these findings is broadly accurate — I did a poor job of navigating feelings of romantic attraction, failed to track others’ experiences, took actions which were misguided and wrong, and hurt people. For most readers that’s probably enough to be going with. Other people might be interested in more granularity, either because they care about the nature of my character flaws and what types of mistakes I might be prone to in the future, or because they care about building detailed pictures of the patterns that cause harm. For this audience I’ve put my takes on the specific findings in this document. My... (read more)

Therefore, I expect marginal funding that we raise from other donors (i.e. you) to most likely go to the following:

  • Community Building Grants [...] $110,000
  • Travel grants for EA conference attendees [...] $295,000
  • EA Forum [...] [Nuño: note no mention of the cost in the EA forum paragraph]

You don't mention the cost of the EA forum, but per this comment, which gives more details, and per your own table, the "online team", of which the EA Forum was a large part, was spending ~$2M per year.

As such I think that your BOTECs are uninformative and might be "hiding the ask":

  1. This model compares a hypothetical LTFF grant to a biosecurity workshop with the labor that CEA staff spent on a similar event. It finds that the CEA expenditure is a bit more cost-effective.
  2. This model compares CEA's cost of producing Forum Digests to a grant that the EA Infrastructure Fund gave for creating Forum + LW summaries. It finds that CEA expenditure is more cost-effective.

 

we will be doing a follow-up post solely devoted to Forum fundraising

I look forward to this. In the meantime, readers can see my own take on this here: in short, I think that the value of the forum is high but the ... (read more)

For example, Francis and Kirkegaard (2022) employ the use of instrumental variables

I can view an astonishing number of publications for free through my university, but they haven't opted to include this one, weird... So should I pay money to see this "Mankind Quarterly" publication?

When I googled it I found that Mankind Quarterly includes among its founders Henry Garrett, an American psychologist who testified in favor of segregated schools during Brown versus Board of Education; Corrado Gini, who was president of the Italian Genetics and Eugenics Society in fascist Italy; and Otmar Freiherr von Verschuer, who was director of the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics in Nazi Germany. Verschuer was a member of the Nazi Party and the mentor of Josef Mengele, the physician at the Auschwitz concentration camp infamous for performing human experimentation on prisoners during World War 2. Mengele provided Verschuer with human remains from Auschwitz to use in his research into eugenics.

It's funded by the Pioneer Fund which according to wikipedia:

The Pioneer Fund is an American non-profit foundation established in 1937 "to advance the scientific study of heredit

... (read more)

Hello Jack, I'm honoured you've written a review of my review! Thanks also for giving me sight of this before you posted. I don't think I can give a quick satisfactory reply to this, and I don't plan to get into a long back and forth. So, I'll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven't carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]

First, the piece you're referring to is a book review in an academic philosophy journal. I'm writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don't need to provide it myself).

Second, book reviews are, by design, very short. You're even discouraged from referencing things outside the text you're reviewing. The word limit was 1,500 words - I think my review may even be shorter than your review of my review! - so the aim is just to give a brief overview and make a few comments.

Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliber... (read more)

We're coming up on two weeks now since this post was published, with no substantive response from Nonlinear (other than this). I think it would be good to get an explicit timeline from Nonlinear on when we can expect to see their promised response. It's reasonable to ask for folks to reserve judgement for a short time, but not indefinitely. @Kat Woods @Emerson Spartz 

OP strikes me as hyperbolic in a way that makes me disinclined to trust it.

THAT'S A TOTAL OF 44. DENY THE "ADJACENT", BUT YOU CAN'T DENY THE THIRTY STRONG THAT I, A SOLO PERSON, PERSONALLY FOUND.

I can't deny this, in the sense that I don't know that it's false, but OP gives no evidence for this beyond the bare claims. OP doesn't provide any details that people could investigate to verify, and OP writes anonymously on a one-off account, so that people can't check how trustworthy OP has been in the past or on similar topics.

Now, I don't think there's anything wrong with saying things without proof or evidence - and in fact, it wouldn't shock me to hear that there were 30 incidents of rape or prolonged abuse in EA circles in something like a 6-year period (I've had friends tell me of some sexual infractions, and I don't see why I would have heard about all of them) - but I think one should own that they're doing that.

So much so that your CH team not only tried to take credit for some of my work (SEE HERE - https://imgur.com/Rj5eo24)

That link shows an anonymous commenter saying that they reported people to CEA community health, and Julia Wise agreeing, thanking that commenter, ... (read more)

I am at best 1/1000th as "famous" as the OP, but the first ten paragraphs ring ABSOLUTELY TRUE from my own personal experience, and generic credulousness on the part of people who are willing to entertain ludicrous falsehoods without any sort of skepticism has done me a lot of damage.

I also attest that Aella is, if anything, severely underconveying the extent to which this central thesis is true.   It's really really hard to convey until you've lived that experience yourself.  I also don't know how to convey this to people who haven't lived through it.  My experience was also of having been warned about it, but not having integrated the warnings or really actually understood how bad the misrepresentation actually was in practice, until I lived through it. 

Some of the world’s most important problems are, surprisingly, still neglected. Lots of smart people are trying to cure cancer - it’s been around for a long time, and so has the medical research establishment attacking it.

But far fewer people are working on preventing an outbreak from a novel synthetic biological agent or safely governing advanced AI systems, because those issues are less widely-known.

 

I prefer something like "Imagine you're one of the first people to discover that cancer is a problem, or one of the first people to work on climate change seriously and sketch out the important problems for others to work on. There are such problems today, that don't have [millions] of smart people already working on them"

 

[This allows me to point at the value of being early on a neglected problem without presenting new "strange" problems. Moreover, after this part of the pitch, the other person is more open to hearing a new strange problem, I think.]

[disclaimer: acting director of CSER, but writing in personal capacity]. I'd also like to add my strongest endorsement of Carrick - as ASB says, a rare and remarkable combination of intellectual brilliance, drive, and tremendous compassion. It was a privilege to work with him at Oxford for a few years. It would be  wonderful to see more people like Carrick succeeding in politics; I believe it would make for a better world.

I like the goal of politically empowering future people. Here's another policy with the same goal:

  • Run periodic surveys with retrospective evaluations of policy. For example, each year I can pick some policy decisions from {10, 20, 30} years ago and ask "Was this policy a mistake?", "Did we do too much, or too little?", and so on.
  • Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045's answers to "Did we do too much or too little about climate change in 2015-2025?"
  • We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."

This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.
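One concrete way to bound the philanthropist's cost is to run each question through an automated market maker. Below is a minimal sketch in Python, assuming a standard logarithmic market scoring rule (LMSR); the liquidity parameter and trade sizes are invented for illustration, since the proposal above doesn't specify a mechanism:

```python
import math

def lmsr_price_yes(q_yes: float, q_no: float, b: float) -> float:
    """Market-implied probability of YES (e.g. 'in 2045 we will judge this policy a mistake')."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

b = 10_000  # liquidity parameter in dollars; larger b = deeper market, larger subsidy

# The worst-case loss to the subsidizer of a binary LMSR market is b * ln(2).
print(f"max subsidy per question: ${b * math.log(2):,.0f}")

# Example state: traders have bought 3,000 YES shares and 1,000 NO shares.
print(f"implied P(judged a mistake): {lmsr_price_yes(3_000, 1_000, b):.2f}")
```

Because the subsidy per question is capped at b·ln(2), the total cost scales roughly linearly with the number of questions and the chosen market depth, which makes a budget of a few million dollars straightforward to plan in advance.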

(I'd personally expect this to have a larger impact on future-orientation of policy, if we imagine it getting a fraction of the public buy-in that would be required for changing voting weights.)

All views are my own rather than those of any organizations/groups that I’m affiliated with. Trying to share my current views relatively bluntly. Note that I am often cynical about things I’m involved in. Thanks to Adam Binks for feedback. 

Edit: See also child comment for clarifications/updates

Edit 2:  I think the grantmaking program has different scope than I was expecting; see this comment by Benjamin for more.

Following some of the skeptical comments here, I figured it might be useful to quickly write up some personal takes on forecasting’s promise and what subareas I’m most excited about (where “forecasting” is defined as things I would expect to be in the scope of OpenPhil’s program to fund).

  1. Overall, most forecasting grants that OP has made seem much lower EV than the AI safety grants (I’m not counting grants that seem more AI-y than forecasting-y, e.g. Epoch, and I believe these wouldn’t be covered by the new grantmaking program). Due to my ASI timelines (10th percentile ~2027, median ~late 2030s), I’m most excited about forecasting grants that are closely related to AI, though I’m not super confident that no non-AI related ones are above the bar.
  2. I generally a
... (read more)

You touched on something here that I am coming to see as the key issue: whether there should be a justice system within the EA/rationality community, and whether Lightcone can self-appoint into the role of community police. In conversations with people from Lightcone about the NL posts, I was told that it is wrong to try to guard your reputation, because that information belongs to the community to judge. US law on reputation is that you do have a right to protect yourself from lies and misrepresentation. Emerson talking about suing for libel (his right) was seen as defection from the norms which that Lightcone employee thinks should apply to the whole EA/rationality community. When did Emerson opt into following these norms, or into being judged by these norms? Did any of us? The Lightcone employees also did not like that Kat made a veiled threat to either Chloe or Alice (can't remember) that her reputation in EA could be ruined by NL if she kept saying bad things about them. They saw that as bad not just because it was a threat, but because it served to hide information from the community. From what I understood, that Lightcone employee thought it would have been okay for Kat to talk shit about... (read more)

I am commenting to encourage everyone to think about the real people at the centre of all of the very ugly accusations being made, which I hope is acceptable to do, even though this comment does not directly address the evidence presented by either Lightcone or Nonlinear. 

This is getting a lot of engagement, as did Ben Pace’s previous post, and for the people being discussed, this must be incredibly stressful. No matter how you think events actually played out, the following are true:

a) at least one group of people is having unfair accusations made against them, either of creating abusive working conditions and taking advantage of the naivety of young professionals, or of being delusional and unreliable or malicious. Neither of these is easy to deal with. 

b) the situation is ongoing, and there is no clear timeline for when things will be neatly wrapped up and concluded.

Given this, and having read several comments speaking to the overwhelming experience of being dogpiled on the internet, I just want to encourage everyone who is friendly with any of the people at the centre of this, including Alice, Chloe, Kat Woods, Emerson and Drew Spartz, Ben Pace, and Habryka to reach out and make sure they are coping well. The goal here is hopefully to get to the truth and to update community norms, and it is far too easy for individuals to become casualties of this process. A simple ‘how ya doing?’ can make a big difference when people are struggling.

Post on everybody who’s living together and dating each other and giving each other grants when?

Clarification: I’m just kind of surprised to see some of the things in this post portrayed as bad when they are very common in EA orgs, like living together and being open to unconventional and kind of unclear boundaries and pay arrangements and especially conflicts of interest from dating coworkers and bosses. I worry that things we’re all letting slide could be used to discredit anybody if the momentum turns against them.

Whatever its legitimate uses, defamation law is also an extremely useful cudgel that bad actors can, and very frequently do, use to protect their reputations from true accusations. The cost in money, time and risk of going through a defamation trial is such that threats of such can very easily intimidate would-be truth-tellers into silence, especially when the people making the threat have a history of retaliation. Making such threats even when the case for defamation seems highly dubious (as here), should shift us toward believing that we are in the defamation-as-unscrupulous-cudgel world, and update our beliefs about Nonlinear accordingly.

Whether or not we should be shocked epistemically that Nonlinear made such threats here, I claim that we should both condemn and punish them for doing so (within the extent of the law), and create a norm that you don't do that here. I claim this even if Nonlinear's upcoming rebuttal proves to be very convincing.

I don't want a community where we need extremely high burdens of proof to publish true bad things about people. That's bad for everyone (except the offenders), but especially for the vulnerable people who fall prey to the people doing the bad things because they happen not to have access to the relevant rumor mill. It's also toxic to our overall epistemics as a community, as it predictably and dramatically skews the available evidence we have to form opinions about people.

High Impact Medicine and Probably Good recently produced a report on medical careers that gives more in-depth consideration to clinical careers in low- and middle-income countries; you can check it out here: https://www.highimpactmedicine.org/our-research/medicalcareers

So far I have been running on the policy that I will  accept money from people who seem immoral to me, and indeed I preferred getting money from Sam instead of Open Philanthropy or other EA funders because I thought this would leave the other funders with more marginal resources that could be used to better ends (Edit: I also separately thought that FTX Foundation money would come with more freedom for Lightcone to pursue its aims independently, which I do think was a major consideration I don't want to elide).

To be clear, I think there is a reasonable case to be made for the other end of this tradeoff, but I currently still believe that it's OK for EAs to take money from people whose values or virtues they think are bad (and that indeed this is often better than taking money from the people who share your values and virtues, as long as it's openly and willingly given). I think the actual tradeoffs are messy, and indeed I ended up encouraging us to go with a different funder for a loan arrangement for a property purchase we ended up making, since that kind of long-term relationship seemed much worse to me, and I was more worried about that entangling us more with FTX.

To b... (read more)

For what it's worth, as someone saying in another thread that I do think there were concerns about Sam's honesty circulating, I don't know of anyone I have ever talked to who expressed concern about the money being held primarily in FTT, or who would have predicted anything close to the hole in the balance sheet that we now see. 

I heard people say that we should assume that Sam's net wealth has high variance, given that crypto is a crazy industry, but I think you are overstating the degree to which people were aware of the incredible leverage in FTX's net position (if I understand the situation correctly, there was also no way of knowing that before Alameda's balance sheet leaked a week ago. If you had asked me what Alameda's portfolio consists of, I would have confidently given you a much more diversified answer than "50% FTT with more than 50% liabilities").

Thanks for sharing your experiences here Sam.

Something that I find quite difficult is the fact that all of these things are true, but hard to 'feel' true at the same time:

  1. We have increased available funding by an order of magnitude over the past decade and increased the rate at which that funding is being deployed
  2. We don't want lack of funds to be the reason that people don't do important and ambitious things; and yet
  3. We are still extremely funding constrained in most cases

You're experiencing a bit of #1 and #2 right now. And I think that the huge upsides to that are (a) we have a good shot at doing a lot more good; and (b) EA is less likely to be the pursuit of the already privileged (e.g. those who can afford to fly to a conference in SF or London or quit their job to pursue something that the world doesn't compensate in line with its value).

I'm glad that access to funding hasn't been a barrier for your pursuit of doing a lot of good.

Regarding #3.

It still stings every time I hear the funding situation talked about as if it's perpetually solved.

I'm glad that a promising AI safety researcher is likely to find the funding they need to switch careers and some of the top projects are able to a... (read more)

it concerns me that entry level positions in EA are now being advertised at what would be CEO-level salaries at other nonprofits

Suppose there was no existing nonprofit sector, or perhaps that everyone who worked there was an unpaid volunteer, so the only comparison was to the private sector. Do you think that the optimal level of compensation would differ significantly in this world? 

In general I'm skeptical that the existence of a poorly paid, fairly dysfunctional group of organisations should inspire us to copy them, rather than the much larger group of very effective orgs who do practice competitive, merit-based compensation.

The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.

The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.

(And as always in a case like that, we have historical exceptions that people don’t like to talk about because they worked, eg, Knut Haukelid, or the American Revolution. And these examples are distinguished among other factors by a found mood (the opposite of a missing mood) which doesn’t happily jump on the controversial wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped you, to prevent the much much larger issue of Nazis getting nuclear weapons.)

This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just li... (read more)

Thanks for making this podcast feed! I have a few comments about what you said here:

 Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well. The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development.

We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn't include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.

I think if you are going to call this feed "Effective Altruism: An Introduction", it doesn't make sense to skew the selection towards longtermism so heavily. Maybe you should have phrased the feed as "An Introduction to Effective Altruism & Longtermism" given the current list of episodes. 

In particular, I think it would be better if the Lewis Bollard episode was... (read more)

I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.

I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you’d be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this.

And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established history of people figuring out ways that you could do useful things by fiddling around with substances in weird ways, for example metallurgy or glassmaking, and we have lots of examples of materials having different and useful properties. If you had been particularly forward thinking, you might even have noted that it seems plausible that we’ll eventually be able to do the full range of manipulations of materials that life is able to do.

So I think that alchemists deserve a lot of points for spotting a really big and important consideration about the future. (I actually have no idea if any alchemists were thinking about it this way; th... (read more)

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought catastrophic AI risks closer, there would be no competitive pressures to take risks with global externalities in development, either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and ind... (read more)

As a person with an autism (at the time "asperger's") diagnosis from childhood, I think this is very tricky territory. I agree that autistics are almost certainly more likely to make innocent-but-harmful mistakes in this context. But I'm a bit worried about overcorrection for that for a few reasons: 

Firstly, men in general (and presumably women to some degree also), autistic or otherwise, are already incredibly good at self-deception about the actions they take to get sex (source: basic common sense). So giving a particular subset of us more of an excuse to think "I didn't realize I would upset her", when the actual facts are more "I did know there was a significant risk, but I couldn't resist because I really wanted to have sex with her", seems a bit fraught. I think this is different from the sort of predatory, unrepentant narcissism that Jonas Vollmer says we shouldn't ascribe to Owen: it's a kind of self-deception perfectly compatible with genuine guilt at your own bad behavior and certainly with being a kind and nice person overall. I actually think the feminism-associated* meme about sexual bad behavior being always really about misogyny or dominance can sometimes obscure ... (read more)

My basic takeaway from all of this is not who is right/wrong so much as that EA professional organisations should act more like professional organisations. While it may be temporarily less enjoyable, I would expect the organisations with things like HR professionals, safeguarding policies, regular working hours, offices in normal cities, and work/life boundaries to be significantly more effective contributors to EA overall.

I'm less interested in "debating whether a person in a villa in a tropical paradise got a vegan burger delivered fast enough", or "whether it's appropriate for your boss to ask you to pick up their ADHD medication from a Mexican pharmacy", or "whether $58,000 of all-inclusive world travel plus a $1,000 a month stipend is a $70,000 salary", than in interrogating whether EA wouldn't be better off with more "boring" organisations led by adults with significant professional experience managing others, where the big company drama is the quality of the coffee machine in the office canteen.

As one of the people Ben interviewed: 

  1. This post closely reflects my understanding of the situation. (EDIT: at this time, before engaging with Nonlinear's reply myself.)
  2. Whenever this post touches on something that I can independently corroborate (EDIT: small minority of claims), I believe it to be accurate. Whenever the post communicates something that both Ben and I have heard from Alice and Chloe (EDIT: large majority of claims), it tells their account faithfully.
  3. I appreciate Ben’s emphasis on red lines and the experiences of Alice and Chloe. When he leaves out stories that I think we are both aware of, my guess is that he has done so because these stories aren’t super relevant to the case at hand or aren’t super objective/strongly evidenced. This makes me think more favourably of the rest of his write-up.

My theory is that while EA/rationalism is not a cult, it contains enough ingredients of a cult that it’s relatively easy for someone to go off and make their own. 

Not everyone follows every ingredient, and many of the ingredients are actually correct/good, but here are some examples:

  • Devoting one's life to a higher purpose (saving the world)
  • High-cost signalling of group membership (donating large amounts of income)
  • The use of in-group shibboleths (like "in-group" and "shibboleths")
  • The use of weird rituals and breaking of social norms (Bayesian updating, "radical honesty", etc.)
  • A tendency to isolate oneself from non-group members (group houses, EA orgs)
  • The belief that the world is crazy, but we have found the truth (rationalist thinking)
  • The following of sacred texts explaining the truth of everything (the Sequences)
  • And even the belief in an imminent apocalypse (AI doom)

These ingredients do not make EA/rationalism in general a cult, because it lacks enforced conformity and control by a leader. Plenty of people, including myself, have posted on LessWrong critiquing the Sequences and Yudkowsky and been massively upvoted for it. It's decentralised across the internet, if someo... (read more)

Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.

Thank you so much for articulating this in such a thoughtful and considered way! It must have taken a lot of courage to share these difficult experiences, but I'm so glad you did.

Your suggested actions are really helpful, and I would encourage anyone who cares about building a strong community based on altruism to take the time to think on this.

*CW*

As someone who has had a similar experience with a partner I trusted, this paragraph felt incredibly true:

"The realistic tradeoffs as a survivor of sexual harassment or assault often push the survivor to choose an ideal, like justice or safety for others, at the expense of their time, energy, and health. While reeling from the harm of the situation, the person experiencing the harm might engage in a process that hurts them in an effort to ensure their safety, protect other potential victims, educate the perpetrator, or signal that the perpetrator’s actions were harmful."

I spent the weeks following the incident going over the facts in my head, considering his point of view, minimising the experience,  wondering if I should have been more direct (anyone who has met me in person will know that's not something I usually have a proble... (read more)

In all seriousness, I hope he is on some sort of suicide watch. If anyone in his orbit is reading this, you need to keep an eye on him or have his dad or whoever keep an eye on him. 

Brenton Mayer runs internal systems at 80k. That basically means operations and impact evaluation, i.e. the parts that don't really get visibility or interact with the outside world. He's been doing that extremely competently for years. He and his team make it feel easier to work to a high standard (e.g. through making sure we get more of a sense of how we're impacting users and setting an ambitious but sustainable culture), keep the lights on (figuratively by fundraising, and literally) and make 80k a lovely place to work.

The Parable of the Talents, especially the part starting at:

But I think the situation can also be somewhat rosier than that.

Ozy once told me that the law of comparative advantage was one of the most inspirational things they had ever read. This was sufficiently strange that I demanded an explanation.

Ozy said that it proves everyone can contribute. Even if you are worse than everyone else at everything, you can still participate in global trade and other people will pay you money. It may not be very much money, but it will be some, and it will be a measure of how your actions are making other people better off and they are grateful for your existence.

Might prove reassuring. Yes, EA has lots of very smart people, but those people exist in an ecosystem which almost everyone can contribute to. People do and should give kudos to those who do the object level work required to keep the attention of the geniuses on the parts of the problems which need them.

As some examples of helpful things available to you: 

  • Being an extra pair of hands at events
  • Asking someone who you think is aligned with your values and might have too much on their plate what you can help them with (if you actually have the bandwidth to follow through)
  • Making yourself available to on-board newcomers to the ideas in 1-on-1 conversations

If anyone has any neartermist community building ideas, I'd be happy to evaluate them at any scale (from under $500K to $3M+). I'm on the EA Infrastructure Fund, and helping fund more neartermist ideas is one of my biggest projects for the fund. You can contact me at peter@rethinkpriorities.org to discuss further (though note that my grantmaking on the EAIF is not a part of my work at Rethink Priorities).

Additionally, I'd be happy to discuss with anyone who wants seed funding in global poverty, neartermist EA community building, mental health, family planning, wild animal suffering, biorisk, climate, or broad policy and see how I can get them started.

I downvoted because it called the communication hostile without any justification for that claim. The comment it is replying to doesn't seem at all hostile to me, and asserting it is, feels like it's violating some pretty important norms about not escalating conflict and engaging with people charitably.

I also think I disagree that orgs should never be punished for not wanting to engage in any sort of online discussion. We have shared resources to coordinate, and as a social network without clear boundaries, it is unclear how to make progress on many of the disputes over those resources without any kind of public discussion. I do think we should be really careful to not end up in a state where you have to constantly monitor all online activity related to your org, but if the accusations are substantial enough, and the stakes high enough, I think it's pretty important for people to make themselves available for communication. 

Importantly, the above also doesn't highlight any non-public communication channels that people who are worried about the negative effects of ACE can use instead. The above is not saying "we are worried about this conversation being difficult to have in pub... (read more)

Thanks for raising this question about EA's growth, though I fully agree it would have been better to frame that question more like: "Given that we're pouring a substantial amount of money into EA community growth, why doesn't it show up in some of these metrics?" To that end, while I may refer to "growing" or "not growing" below for brevity, I mean those terms relative to expectations rather than in an absolute sense. With that caveat out of the way… 

There’s a very telling commonality about almost all the possible explanations that have been offered so far. Aside from a fraction of one comment, none of the explanations in the OP or this followup post even entertain the possibility that any mistakes by people/organizations in the EA community inhibited growth. That seems worthy of a closer look. We expect an influx of new funding (ca. ~2016-7) to translate into growth (with some degree of lag), but only if it is deployed in effective strategies that are executed well. If we see funding but not growth, why not look at which strategies were funded and how well they were executed?

CEA is probably the most straightforward example to look at, as an organization that has run a lot of ... (read more)

I think this is the wrong question.

The point of lockdown is that for many people it is individually rational to break the lockdown - you can see your family, go to work, or have a small wedding ceremony with little risk and large benefits - but this imposes external costs on other people. As more and more people break lockdown, these costs get higher and higher, so we need a way to persuade people to stay inside - to make them consider not only the risks to themselves, but also the risks they are imposing on other people. We solve this with a combination of social stigma and legal sanctions.

The issue is exactly the same with ideologies. To environmentalists, preventing climate change is more important than covid. To pro-life people, preventing over half a million innocent deaths every year is more important than covid. To animal rights activists, ending factory farming is more important than covid. To anti-lockdown activists, preventing mass business failure and a depression is more important than covid. But collectively we are all better off if everyone stops holding protests for now.

The correct question is "is it good if I, and everyone else who thinks their reason is as good as I think this one is, breaks the lockdown?" Failure to consider this, as it appears most people have failed to do, is to grossly privilege this one cause over others and defect in this iterated prisoner's dilemma - and the tragic consequence will be many deaths.

Thanks for this post Will, it's good to see some discussion of this topic. Beyond our previous discussions, I'll add a few comments below.


hingeyness

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

The possibility of engineered plagues causing an apocalypse was a grave concern of forward thinking people in the early 20th century as biological weapons were developed and demonstrated. Many of the anti-nuclear scientists concerned for the global prospects of humanity were also concerned about germ warfare.

Both of the above also had prominent fictional portrayals to come to mind for longtermist altruists engaging in a wide-ranging search. If there had been a longtermist altruist movement trying to c... (read more)

I’m glad to see that Nonlinear’s evidence is now public, since Ben’s post did not seem to be a thorough investigation. As I said to Ben before he posted his original post, I knew of evidence that strongly contradicted his post, and I encouraged him to temporarily pause the release of his post so he could review the evidence carefully, but he would not delay.

Hey! I work at 80k doing outreach.

Thanks for your work here!

I think the data from 80k overall tells a bit of a different story.

Here’s a copy of our programmes’ lead metrics, and our main funnel metrics (more detailed). 

As you can see, some metrics take a dip in Q1 and Q2 2023: site visitors & engagement time, new newsletter subscribers, podcast listening time, and applications to advising. 

I’d like to say four things about that data: 

  1. It seems pretty plausible to me that lower interest in EA due to the FTX crash is one (important) factor driving those metrics that took a dip. That said: 
  2. All of those seem to have “bounced back” in Q3 
  3. Our website (and to some extent podcast) metrics are very heavily driven by how much outreach & marketing we do. In Q4 2022, we spent very little on marketing compared to the rest of 2022 & 2023, which I think is a significant contributor to the trend. 
  4. It looks like the second half of 2022 was just an unusually high-growth period (for 80k, and I think EA more broadly), and falling from that peak is not particularly surprising due to regression to the mean. Maintaining a level of growth that high might have been p
... (read more)

I have seen confidentiality requests weaponized many times (indeed, it is one of the most common ways I've seen people end up in abusive situations), and as such I desperately don't want us to have a norm of always erring on the side of confidentiality and heavily punishing people who didn't even receive a direct request for confidentiality but are just sharing information they could figure out from publicly available information.

Funnily enough, I think EA does worse than other communities / movements I'm involved with (grassroots animal advocacy & environmentalism). My partner and other friends (women) have often complained about various sexist issues when attending EA events, e.g. men talking over them, borderline aggressive physical closeness, dismissing their ideas, etc., to the point that they don't want to engage with the community. Experiences like this rarely, if ever, happen in other communities we hang out in. I think there are a few reasons why EA has been worse than other communities in my experience:

  • I think our experiences differ on animal issues because when groups/movements professionalise, as has been happening over the past decade for animal welfare, the likelihood that men will abuse their positions of power increases dramatically. At the more grassroots level, power imbalances often aren't stark enough to lead to the types of issues that came out in the animal movement a few years back. EA has also been undergoing this professionalisation and consolidation of power, and it seems like the article above highlights the negative consequences of that. 
  • As has been noted many times, EA is current
... (read more)

Pointing out the 70% male number seems very relevant, since issues like this may contribute to that number and will likely push other women (such as myself) away from the movement.

While I haven’t experienced men in EA being dismissive of my ideas (though that’s only my personal experience in a very small EA community) I have found that the people I have met in EA are much more open to talking about sex and sexual experiences than I am comfortable with in a professional environment. I have personally had a colleague in EA ask me to go to a sex party to try BDSM sex toys. This was very strange for me. I have worked as a teacher, as a health care professional, and have spent a lot of time in academic settings, and I have never had an experience like that elsewhere. I also felt that it was being asked because they were sussing out whether or not I was part of the “cool crowd” who was open about my sex life and willing to be experimental.

I found this especially strange because there seem to be a lot of norms around conversation in EA (the same person who asked me to go to that party has strong feelings about upholding these norms), but for some reason there are no norms around speaking about sexual relationships, which is taboo in every other professional setting I have been a part of. I think having stronger "norms", or whatever you want to call it, or making discussions like this more taboo in EA, would be a good start. This would make it less likely that people in EA will feel comfortable doing the things discussed in this article.

This seems to be a false equivalence. There's a big difference between asking "did this writer, who wrote a bit about ethics and this person read, influence this person?" vs "did this philosophy and social movement, which focuses on ethics and this person explicitly said they were inspired by, influence this person?"

I agree with you that the question

Who's at fault for FTX's wrongdoing?

has the answer

FTX

But the question

Who else is at fault for FTX's wrongdoing?

is nevertheless sensible and cannot have the answer FTX.

UPDATE: less certain of the below. Be sure to read this comment by Cremer disputing Torres's account https://forum.effectivealtruism.org/posts/vv7FBtMxBJicM9pae/democratising-risk-a-community-misled?commentId=CwxqjeG8qqwy8gz4c

The fact that Torres was a co-author certainly does change the way I interpret the original post. For example, Cremer writes of the review process, "By others we were accused of lacking academic rigour and harbouring bad intentions."

Before I knew about the Torres part, that sounded more troubling - it would maybe reflect badly on EA culture if reviewers were accusing Cremer and Kemp of these things just for writing “Democratising Risk”. I don’t think it’s a good paper, but I don’t think the content of the final paper is evidence of bad intentions.

But to accuse Torres of having bad intentions and lacking academic rigor? Reviewers would have been absolutely right to do so. By the time the paper was circulating, presumably Torres had already begun their campaign of slander against various members of the longtermist and EA communities.

Jonathan Mustin added the ability to copy and paste footnotes from Google Docs into the Forum, which has been one of our most oft-requested features.

I strongly agree with all this. Another downside I've felt from this exercise is it feels like I've been dragged into a community ritual I'm not really a fan of where my options are a) tacitly support (even if it is just deleting the email where I got the codes with a flicker of irritation) b) an ostentatious and disproportionate show of disapproval. 

I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life - with at-best-murky EV to both themselves and the 'cause'.  I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the 'big red button' at the top of the site might be similar to how many Christians react to some of their brethren 'reenacting' the crucifixion themselves.

But hey, I'm (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here - to each their own. Alas this restraint is not universal, as this is becoming a (capital C) Community ritual, where 'success' or 'failu... (read more)

The Symmetry Theory of Valence sounds wrong to me and is not substantiated by any empirical research I am aware of. (Edited to be nicer.) I'm sorry to post a comment so negative and non-constructive, but I just don't want EA people to read this and think it is something worth spending time on.

Credentials: I'm doing a PhD in Neuroscience and Psychology at Princeton with a focus on fMRI research, I have a masters in Neuroscience from Oxford, I've presented my fMRI research projects at multiple academic conferences, and I published a peer reviewed fMRI paper in a mainstream journal. As far as I can tell, nobody at the Qualia Research Institute has a PhD in Neuroscience or has industry experience doing equivalent level work. Keeping in mind credentialism is bad, I am still pointing out their lack of neuroscience credentials compared to mine because I am confused by how overwhelmingly confident they are in their claims, their incomprehensible use of neuro jargon, and how dismissive they are of my expertise. (Edited to be nicer.) https://www.qualiaresearchinstitute.org/team

There are a lot of things I don't understand about STV, but the primary one is:  

  • If there is dissonance in the
... (read more)

The problem (for people like me, and may those who enjoy it keep doing so), as I see it: this is an elite community. Which is to say, this is a community primarily shaped by people who are and have always been extremely ambitious, who tend to have very strong pedigrees, and who are socialized with the norms of the global upper/top professional class.

"Hey you could go work for Google as a machine learning specialist" sounds to a person like me sort of like "Hey you could go be an astronaut." Sure, I guess it's possible. "Hey you could work for a nice nonprofit with all these people who share your weird values about charity, and join their social graph!" sounds doable. Which makes it a lot more damaging to fail.

People like me who are standardized-test-top-tier smart but whose backgrounds are pretty ordinary (I am inspired to post this in part because I had a conversation with someone else with the exact same experience, and realized this may be a pattern) don't tend to understand that they've traveled into a space of norms that is highly different than we're used to, when we join the EA social community. It just feels like "Oh! G... (read more)

Sorry to hear about your long, very difficult experience. I think part of what happened is that it did in fact get a lot harder to get a job at leading EA-motivated employers in the past couple years, but that wasn't clear to many EAs (including me, to some extent) until very recently, possibly as recently as this very post. So while it's good news that the EA community has grown such that these particular high-impact jobs can attract talent sufficient for them to be so competitive, it's unfortunate that this change wasn't clearer sooner, and posts like this one help with that, albeit not soon enough to help mitigate your own 1.5 years of suffering.

Also, the thing about some people not having runway is true and important, and is a major reason Open Phil pays people to take our remote work tests, and does quite a few things for people who do an in-person RA trial with us (e.g. salary, health benefits, moving costs, severance pay for those not made a subsequent offer). We don't want to miss out on great people just because they don't have enough runway/etc. to interact with our process.

FWIW, I found some of your comments about "elite culture" surprising. For context: I grew up in rur

... (read more)

Crossposted from LessWrong.

Maybe I'm being cynical, but I'd give >30% that funders have declined to fund AI Safety Camp in its current form for some good reason. Has anyone written the case against? I know that AISC used to be good by talking to various colleagues, but I have no particular reason to believe in its current quality.

  • MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.
    • If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.
  • Why does the founder, Remmelt Ellen, keep linkposting writing by Forrest Landry which I'm 90% sure is obvious crankery? It's not just my opinion; Paul Christiano said "the entire scientific community would probably consider this writing to be crankery", one post was so obviously flawed it gets -46 karma, and generally the community response has been extremely negative. Some AISC work is directly about the content in question. This seems like a concern especially given the ph
... (read more)

I run an advocacy nonprofit, 1Day Sooner. When good things happen that we have advocated for, it raises the obvious question, "were we the but-for cause?" 

A recent experience in our malaria advocacy work (W.H.O. prequalification of the R21 vaccine, a key advocacy target of ours) is exemplary. Prequalification was on the critical path for malaria vaccine deployment. Based on analysis of public sources and conversations with insiders, we came to the view that there was friction and possibly political pressure delaying prequalification from occurring as quickly as would be ideal. We decided to focus public pressure on a faster process (by calling for a prequalification timeline, asking Peter Singer to include the request in his op-ed on the subject, discussing the issue with relevant stakeholders, and asking journalists to inquire about it). We thought it would take at least till January and probably longer. Then a few days before Christmas, a journalist we were talking to sent us a W.H.O. press release -- that morning prequalification had been announced. Did it happen sooner because of us?

The short answer is we don't know. The reason I'm writing about it is that it highlights a ... (read more)

I tried starting from the beginning of the appendix, and almost immediately encountered a claim for which I feel Nonlinear has overstated their evidence.

Were Alice and Chloe "advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited"? This is split into three separate rebuttals (family, romantic partners, and locals).

Nonlinear provides screenshots demonstrating that they encouraged Alice and Chloe to regularly spend time with their families, and encouraged Chloe to spend time with her boyfriend as well as letting him live with them... and also, in their own words (which I have reproduced verbatim below) they did, in fact, advise Alice to hang out with EAs they knew instead of family once, and instead of locals at least twice.

Their reporting of the family advice:

Of note, we think where this is coming from is that when Alice said she was going to visit her family in another country, we were surprised. We were having some of the top figures in the field come live with us for weeks right during the dates she’d chosen. 

She’d basically be sk

... (read more)

Another random spot check: page 115 of the Google Doc. (I generated a random number between 1 and 135.)

This page is split between two sections. The first starts on page 114:

Ben paints Emerson as a power-hungry villain when almost all of the last 4 years Emerson has been working part-time on EA things, gives his money away quietly, and is almost always behind the scenes, giving credit and decision-making power to others

The quote given in support of this is "I think Emerson is very ambitious and would like a powerful role in EA/X-risk/etc." In my opinion, the quote and the paraphrase are very different things, especially since, as it happens, that quote is not even from the original post, it's from a comment.

The Google Doc then goes on to describe the reasons Drew believes that Emerson is not ambitious for status within EA. This is ultimately a character judgement, and I don't have a strong opinion about who is correct about Emerson's character here. However, I do not think it's actually important to the issue at hand, since the purported ambition was not in fact load-bearing to the original argument in any way.

 

The second section is longer, and goes on for several pages. It con... (read more)

I appreciate the spirit of this post as I am not a Yudkowsky fan, think he is crazy overconfident about AI, am not very keen on rationalism in general, and think the EA community sometimes gets overconfident in the views of its "star" members. But some of the philosophy stuff here seems not quite right to me, though none of it is egregiously wrong, and on each topic I agree that Yudkowsky is way, way overconfident. (Many professional philosophers are way overconfident too!)

As a philosophy of consciousness PhD: the view that animals lack consciousness is definitely an extreme minority view in the field, but it's not a view that no serious experts hold. Daniel Dennett has denied animal consciousness for roughly Yudkowsky-like reasons, I think. (EDIT: Actually maybe not: see my discussion with Michael St. Jules below. Dennett is hard to interpret on this, and also seems to have changed his mind to fairly definitively accept animal consciousness more recently. But his earlier stuff on this was at the very least opposed to confident assertions that we just know animals are conscious, and that any theory that says otherwise is crazy.) And more definitely Peter Carruthers (https://scholar.google.... (read more)

Brief reflections on the Conjecture post and its reception

(Written by the non-technical primary author)

  • Reception was a lot more critical than I expected. As last time, many good points were raised that pointed out areas where we weren't clear
  • We shared it with reviewers (especially ones who we would expect to disagree with us), hoping to pre-empt these criticisms. They gave useful feedback.
  • However, what we didn't realize was that the people engaging with our post in the comments were quite different from our reviewers and didn't share the background knowledge that our reviewers did.
  • We included our end line views (based on feedback previously that we didn't do this enough) and I think it's those views that felt very strong to people. 
  • It's really, really hard to share the right level of detail and provide adequate context. I think this post managed to be both too short and too long.
  • Short: because we didn't make as many explicit comparisons benchmarking research
  • Long: we felt we needed to add context on several points that weren't obvious to low context people. 
  • When editing a post it's pretty challenging to figure out what assumptions you can assume and what your reader
... (read more)

(personal, emotional reflection)

On a personal note, the past few days have been pretty tough for me. I noticed I took the negative feedback pretty hard.

I hope we have demonstrated that we are acting in good faith, willing to update and engage rigorously with feedback and criticism, but some of the comments made me feel like people thought we were trying to be deceptive or mislead people. It's pretty difficult to take that in when it's so far from our intentions.

We try not to let the fact that our posts are anonymous mean we can say things that aren't as rigorous, but sometimes it feels like people don't realize that we are people too. I think comments might be phrased differently if we weren't anonymous.

I think it's especially hard when this post has taken many weekends to complete, and we've invested several hours this week in engaging with comments, which is a tough trade off against other projects.

The CEO has been inconsistent over time regarding his position on releasing LLMs

I find this to be a pretty poor criticism, and its inclusion makes me less inclined to accept the other criticisms in this piece at face value.

Updating your beliefs and changing your mind in light of new evidence is undoubtedly a good thing. To say that doing so leaves you with concerns about Connor's "trustworthiness and character" not only seems unfair, but also creates a disincentive for people to publicly update their views on key issues, for fear of this kind of criticism.

I don't think I understand the structure of this estimate, or else I might understand and just be skeptical of it. Here are some quick questions and points of skepticism.

Starting from the top, you say:

We estimate optimistically that there is a 60% chance that all the fundamental algorithmic improvements needed for AGI will be developed on a suitable timeline.

This section appears to be an estimate of all-things-considered feasibility of transformative AI, and draws extensively on evidence about how lots of things go wrong in practice when implementing complicated projects. But then in subsequent sections you talk about how even if we "succeed" at this step there is still a significant probability of failing because the algorithms don't work in a realistic amount of time.

Can you say what exactly you are assigning a 60% probability to, and why it's getting multiplied with ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn't yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?
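To make the structural concern concrete, here is a toy illustration in Python (all numbers are invented and not taken from the estimate under discussion): when a headline probability is built as a product of many step-wise factors, the answer is extremely sensitive to whether each factor is truly a conditional probability given the previous steps succeeding, or quietly re-counts the same failure modes.

```python
import math

# Invented step-wise probabilities: one 60% factor multiplied with ten other factors.
factors = [0.60] + [0.80] * 10

print(f"product of all factors: {math.prod(factors):.3f}")  # ~0.064

# If the later steps are mostly redundant given the first (i.e. highly correlated
# rather than independent), the same ingredients imply a very different headline
# number; here the conditional factors are crudely set to 0.95 instead of 0.80.
print(f"with largely redundant steps: {0.60 * 0.95 ** 10:.3f}")  # ~0.359
```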

(ETA: after reading later sectio... (read more)

Largely in response to the final paragraph of Ivy's comment: FWIW, as a woman in EA, I do not feel "healed" by Owen's post. I feel *very* annoyed and sorry for the person who was affected by Owen's behavior. In response to the final sentence ("extra obligations like board responsibilities on hold til you have things sorted"), I would be concerned if Owen were in a board position in EA, because he has clearly proved himself incapable of holding such a position in a way that doesn't discredit legitimate actors in the space or cause harm. I'm surprised, and again really annoyed, that this is already a topic of discussion.

I appreciate this post a lot, particularly how you did not take more responsibility than was merited, and how you admitted thinking it wasn't a red flag that SBF skirted regulations because the regulations were probably bad. I appreciated how you noticed hindsight bias and rewritten history creeping in, and I appreciate how you don't claim that more ideal actions from you would have changed the course of history, but nonetheless care about your small failures here.

Do you think EA's self-reflection about this is at all productive, considering most people had even less information than you? My (very, very emotional) reaction to this has been that most of the angst about how we somehow should have known or had a different moral philosophy (or decision theory) is a delusional attempt to feel in control. I'm just curious to hear in your words if you think there's any value to the reaction of the broader community (people who knew as much or less about SBF before 11/22 than you).

Do you think EA's self-reflection about this is at all productive, considering most people had even less information than you?

I don't have terribly organized thoughts about this. (And I am still not paying all that much attention—I have much more patience for picking apart my own reasoning processes looking for ways to improve them, than I have for reading other people's raw takes :-p)

But here's some unorganized and half-baked notes:


I appreciated various expressions of emotion. Especially when they came labeled as such.

I think there was also a bunch of other stuff going on in the undertones that I don't have a good handle on yet, and that I'm not sure about my take on. Stuff like... various people implicitly shopping around proposals about how to readjust various EA-internal political forces, in light of the turmoil? But that's not a great handle for it, and I'm not terribly articulate about it.


There's a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say "I'm such a fool; I should have bet 23".

More useful would be to say "I'm such a fool; I should have noticed  that the EV of this gamble is negative." Now at least you are... (read more)

I roll to disbelieve on these numbers. "Multiple reports a week" would be >100/year, which from my perspective doesn't seem consistent with the combination of (1) the total number of reports I'm aware of being a lot smaller than that, and (2) the fact that I can match most of the cases in the Time article (including ones that had names removed) to reports I already knew about.

(It's certainly possible that there was a particularly bad week or two, or that you're getting filled in on some sort of backlog.)

I also don't believe that a law school, or any group with 1300 members in it, would have zero incidents in 3-5 years. That isn't consistent with what we know about the overall rate of sexual misconduct in the US population; it seems far more likely that incidents within those groups are going unreported, or are being reported somewhere you don't see and being kept quiet.

These are quotations from a table that are intended to illustrate "difficult tradeoffs". Does seeing them in context change your view at all?

(Disclosure: married to Wise)

  • "There is a racial gap on IQ test scores and it's really disturbing. We're working really hard to fix it and we will fix it one day - but it's a tough complicated problem and no one's sure what angle to attack it from."
  • "Black people score worse than white people on IQ tests."
  • "Black people have lower IQs than white people."
  • "Black people are dumber than white people."

The first statement would be viewed positively by most, the second would get a raised eyebrow and an "And what of it?", the third is on thin fucking ice, and the fourth is utterly unspeakable.

2-4 aren't all that different in terms of fact-statements, except that IQ ≠ intelligence, so some accuracy is lost moving to the last. It's just that the first makes it clear which side the speaker is on, the second states an empirical claim, and the next two look like they're... attacking black people, I think?

I would consider the fourth a harmful gloss - but it doesn't state that there is a genetic component to IQ, that's only in the reader's eye. This makes sense in the context of Bostrom posing outrageous but Arguably Technically True things to inflame the reader's eye.

  • "Poor people are dumber than rich people."

I think people woul... (read more)

Some historical context on this issue. If Bostrom's original post was written around 1996 (as I've seen some people suggest), that was just after the height of the controversy over 'The Bell Curve' book (1994) by Richard Herrnstein & Charles Murray.

In response to the firestorm around that book, the American Psychological Association appointed a blue-ribbon committee of 11 highly respected psychologists and psychometricians to evaluate the Bell Curve's empirical claims. They published a report in 1996 on their findings, which you can read here, and summarized here. The APA committee affirmed most of the Bell Curve's key claims, and concluded that there were well-established group differences in average general intelligence, but that the reasons for the differences were not yet clear.

More recently, Charles Murray has reviewed the last 30 years of psychometric and genetic evidence in his book Human Diversity (2020), and in his shorter, less technical book Facing Reality (2021).

This is the most controversial topic in all of the behavioral sciences. EAs might be prudent to treat this whole controversy as an information hazard, in which learning about the scientific findings can be s... (read more)

Hello Peter, I will offer my perspective as a relative outsider who is not formally aligned with EA in any way but finds the general principle of "attempting to do good well" compelling and (e.g.) donates to GiveDirectly. I found Bostrom's explanation very off-putting and am relieved that an EA institution has commented to confirm that racism is not welcome within EA. Given Bostrom's stature within the movement, I would have taken a lack of institutional comment as a tacit condonation and/or determination that it is more valuable to avoid controversy than to ensure that people of colour feel welcome within EA.

I felt a lot of this when I was first getting involved in effective altruism. Two of the things that I think are most important and valuable in the EA mindset -- being aware of tradeoffs, and having an acute sense of how much needs to get done in the world and how much is being lost for a lack of resources to fix it -- can also make for a pretty intense flavor of guilt and obligation. These days I think of these core elements of an EA mindset as being pieces of mental technology that really would ideally be installed gradually alongside other pieces of mental technology which can support them and mitigate their worst effects and make them part of a full and flourishing life.

Those other pieces of technology, at least for me, are something like:

  • a conviction that I should, in fact, be aspiring to a full and flourishing life; that any plan which doesn't viscerally feel like it'll be a good, satisfying, aspirational life to lead is not ultimately a viable plan; that I may find sources of strength and flourishing outside where I imagined, and that it'd be fine if I have to be creative or look harder to find them, but that I cannot and will not make life plans that don't entail having a good
... (read more)

A bunch of things that all seem true to me:

  1. Some number of people in the EA community could have done things that were positive in expectation and would have mitigated much of the downside to EA from FTX.
  2. A bunch of people are overreacting to this situation and making it seem much more damning to EA than I think it is. Some of those people are acting in bad faith.
  3. It is very possible that as a community we overreact to this situation and adopt bad norms, institutions, or practices that are negative EV going forward.

Here are my high-level thoughts on the comments so far on this report:

  • This is a detailed report, where a lot of work has been put in, by one of EA's foremost scholars on the intersection of climate change and other global priorities.
  • So it'd potentially be quite valuable for people with either substantial domain expertise or solid generalist judgement to weigh in here on object-level issues, critiques, and cruxes, to help collective decision-making.
  • Unfortunately, all of the comments here are overly meta. Out of the ~60 comments so far on this thread, roughly 0.5 approach anything like technical criticism, cruxes, or even engagement.
  • After saying that, I will hypocritically continue to follow the streak of being meta while not having read the full report.
  • I think I'm confused about the quality of the review process so far. Both the number and quality of the reviewers John contacted for this book seemed high. However, I couldn't figure out what the methodology for seeking reviews is here.
    • T
... (read more)

It might help to imagine a hard takeoff scenario using only known sorts of NN & scaling effects... (LW crosspost, with >82 comments)

It Looks Like You're Trying To Take Over The World

In A.D. 20XX. Work was beginning. "How are you gentlemen !!"... (Work. Work never changes; work is always hell.)

Specifically, a MoogleBook researcher has gotten a pull request from Reviewer #2 on his new paper in evolutionary search in auto-ML, for error bars on the auto-ML hyperparameter sensitivity like larger batch sizes, because more can be different and there's high variance in the old runs with a few anomalously high performance values. ("Really? Really? That's what you're worried about?") He can't see why worry, and wonders what sins he committed to deserve this asshole Chinese (given the Engrish) reviewer, as he wearily kicks off yet another HQU experiment...

Rest of story moved to gwern.net.

First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth, when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and has made me question a few implicit assumptions I was carrying around.

The most important updates I got from the paper:

  1. Put less weight on technological determinism. In particular, defining existential risk in terms of a society reaching "technological maturity" without falling prey to some catastrophe frames technological development as being largely inevitable. But I'd argue even under the "techno-utopian" view, many technological developments are not needed for "technological maturity", or at least not for a very long time. While I still tend to view development of things like advanced AI systems as hard to stop (lots of economic pressures, geographically dispersed R&D, no expert consensus on whether it's good to slow down/accelerate), I'd certainly like to see mor
... (read more)

Like him, I only know about this particular essay from Torres, so I will limit my comments to that.

I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like 'white supremacy' and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others— are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.

(To be clear: I think the syllabus is otherwise great, and kudos for creating it!)

EDIT: See Seán's comment for further elaboration on points (1) and (2) above.

I really like the specific numbers people are posting. I'll add my own (rough estimates) from the ~5 months I spent applying to roles in 2018.

Context: In spring 2018, I attended an event CEA ran for people with an interest in operations, because Open Phil referred me to them; this is how I wound up deciding to apply to most of the roles below. Before attending the operations event, I'd started two EA groups, one of which still existed, and spent ~1 year working 5-10 hours/week as a private consultant for a small family foundation, doing a combination of research and operations work. All of the below experiences were specific to me; others may have gone through different processes based on timing, available positions, prior experience with organizations, etc.

  • CEA (applied to many positions, interviewed for all of them at once, didn't spend much additional time vs. what I'd have done if I just applied to one)
    • ~4 hours of interview time before the work trial, including several semi-casual conversations with CEA staff at different events about roles they had open.
    • ~2-hour work trial task, not very intense compared to Open Phil's tasks
    • 1.5-week work trial at CEA; th
... (read more)

My understanding (based on talking to people involved in Wytham and knowing the economics of renting and buying large venues in a lot of detail) is that the sale of Wytham (edit: as done here, where the venue will either be sold at a very large discount or lie empty for a long period of time) does not actually make any economic sense for EV in terms of its mission to do as much good as possible. It is plausible that the initial purchase was a mistake, and that it makes sense to set plans in motion to sell the venue, but my understanding is that it will likely take many years for EV to sell during which the venue will be basically completely empty, or the venue will have to be sold at a pretty huge loss. This means at this point, it's likely worth it to keep it running. 

Also based on talking to some of the people close to these decisions, and trying to puzzle together how this decision was made, it seems very likely to me that the reason why Wytham is being sold is not based in a cost-effectiveness analysis, but the result of a PR-management strategy which seems antithetical to the principles of Effective Altruism to me. 

EV (and Open Phil) are supposed to use its assets an... (read more)

I read the author's intention, when she makes the case for 'forgiveness as a virtue', as a bid to (1) seem more virtuous herself, and (2) make others more likely to forgive her (since she was so generous to her accusers - at least in that section - and we want to reciprocate generosity). I think this is an effective persuasive writing technique, but is not relevant to the questions at issue (who did what).

Another related 'persuasive writing' technique I spotted was that, in general, Kat is keen to phrase the hypothesis where Nonlinear did bad things in an extreme way - effectively challenging skeptics "so, you saying we're completely evil moustache-twirling vagabonds from out of a children's fairytale?". That's a straw person, because what's at issue is the overall character of Nonlinear staff, not whether they're cartoon villains. The word 'witch' is used 7 times in this post, and 'evil' half a dozen times too. Quote:

> 2 EAs are Secretly Evil Hypothesis: 2 (of 21) Nonlinear employees felt bad because while Kat/Emerson seem like kind, uplifting charity workers publicly, behind closed doors they are ill-intentioned ne’er do wells.
 
 

[Written in a personal capacity, etc. This is the first of two comments: second comment here]

Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I'll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.

Given your description of how EA works, I don't understand how you reached the conclusion that it's not that centralised. It seems very centralised - at least, for something portrayed as a social movement.

Why does it matter to determine how 'centralised' EA is? I take it the implicit argument is EA should be "not too centralised, not too decentralised" and so if it's 'very centralised' that's a problem and we consider doing something. Let's try to leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.

You say, in effect, "not that centralised",... (read more)

I suspect that if transformative AI is 20 or even 30 years away, AI will still be doing really big, impressive things in 2033, and people at that time will get a sense that even more impressive things are soon to come. In that case, I don't think many people will think that AI safety advocates in 2023 were crying wolf, since one decade is not very long, and the importance of the technology will have only become more obvious in the meantime.

If 100% of these suggestions were implemented I would expect EA in 5 years' time to look significantly worse (less effective, helping fewer people/animals, and possibly having more FTX-type scandals).

If the best 10% were implemented I could imagine that being an improvement.

Let us take a moment of sympathy for the folks at CEA (who are, after all, our allies in the fight to make the world better). Scant weeks ago they were facing harsh criticism for failing to quickly make the conventional statement about the FTX scandal. Now they're facing criticism for doing exactly that. I'm glad I'm not comms director at CEA, for sure.

I encourage readers to consider whether they are the correct audience for this advice. As I understand it, this advice is directed at those for whom all of the following apply:

- Making a large impact on the world is overwhelmingly more important to you than other things people often want in their lives (such as spending a lot of time with friends/family, raising children, etc.)
- You have already experienced a normal workload of ~38h per week for at least a couple of years, and found this pretty easy/comfortable to maintain
- You generally consider yourself to be happy, highly composed and emotionally stable. You have no history of depression or other mood-related disorders.

If any of these things do not apply, this post is not for you! And it would probably be a huge mistake to seek out an Adderall prescription.
 

"The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not majorly secret). Was this known when Rob Wiblin interviewed SBF on the 80000 Hours podcast and held up SBF for his frugality?"

Thanks for the question Gideon, I'll just respond to this question directed at me personally.

When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and sadly simply accepted it at face value. One that has stuck in my mind up until now was this video that features Sam and the Toyota Corolla that he (supposedly) drove.

I can't recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.

I imagine it feels challenging to share that and I applaud you for that.

While my EA experiences have been much more positive than yours, I do not doubt your account. For many of the points you mention, I can see milder versions in my own experience. I believe your post points towards something important.

Just a note from someone who is an FTX customer. 

I moved some of my crypto holdings to FTX because I trusted them and Sam and wanted the profits from my crypto holdings to go to EA/FTX Future Fund. FTX always told me my funds would be secured; I did not trade leveraged funds, so I'm the only rightful owner of that crypto, and FTX has likely been using it to make money on leveraged instruments. This seems like fraud, and the optics of this for the EA community, combined with the already difficult optics of longtermism, seem to me like they will be very bad.

I'm privileged, my holdings in FTX were 2% of my net worth (I enjoy following crypto) so I'll be fine, but many will not be.

Not the intended audience, but as a US person who lives in the Bay Area, I enjoyed reading this really detailed list of what's often unusual or confusing to people from a specific different cultural context.

Emile Torres (formerly Phil) just admitted on their Twitter that they were a co-author of a penultimate version of this paper. It is extremely deceptive not to disclose this contribution in the paper or in the Forum post. At the point this paper was written, Torres had been banned from the EA Forum and multiple people in the community had accused Torres of harassing them. Do you think that might have contributed to the (alleged) reception of your paper?

This argument has some force but I don't think it should be overstated.

Re perpetual foundations: Every mention of perpetual foundations I can recall has opened with the Franklin example, among other historical parallels, so I don't think its advocates could be accused of being unaware that the idea has been attempted!

It's true at least one past example didn't pan out. But cost-benefit analysis of perpetual foundations builds in an annual risk of misappropriation or failure. In fact, such analyses typically expect 90%+ of such foundations to achieve next to nothing, maybe even 99%+. Like business start-ups, the argument is that the 1 in 100 that succeeds will succeed big and pay for all the failures.

So seeing failed past examples is entirely consistent with the arguments for them and the conclusion that they are a good idea.
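
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers (endowment size, return, horizon, annual failure risk) are illustrative assumptions rather than figures from the comment, but they show how a high failure rate is compatible with a positive expected value.

```python
# Back-of-the-envelope EV for a perpetual foundation, with made-up numbers.
initial_endowment = 1_000_000     # dollars donated today (illustrative)
annual_return = 0.05              # assumed real investment return
annual_failure_risk = 0.02        # assumed yearly chance of misappropriation/expropriation
years = 200                       # investment horizon

survival_prob = (1 - annual_failure_risk) ** years            # ~1.8%
value_if_survives = initial_endowment * (1 + annual_return) ** years

expected_value = survival_prob * value_if_survives
print(f"Chance the foundation survives {years} years: {survival_prob:.1%}")
print(f"Value if it survives: ${value_if_survives:,.0f}")
print(f"Expected value today: ${expected_value:,.0f}")
# Even with ~98% of such foundations achieving nothing, the expected value
# here is hundreds of times the initial donation, so observing historical
# failures does not by itself undermine the case.
```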

Re communist revolutions: Many groups have tried to change how society is organised or governed hoping that it will produce a better world. Almost all the past examples of such movements I can think of expected benefits to come fairly soon — within a generation or two at most — and though advocates for such changes usually hoped the benefits will be long-lasting, ... (read more)

Effective Altruists want to effectively help others. I think it makes perfect sense for this to be an umbrella movement that includes a range of different cause areas.  (And literally saving the world is obviously a legitimate area of interest for altruists!)

Cause-specific movements are great, but they aren't a replacement for EA as a cause-neutral movement to effectively do good.

I donated $5800.

I also donated $5,800. Thanks Andrew for making this post – this seems like a somewhat rare opportunity for <$10k donations to be unusually impactful.

The section on expected value theory seemed unfairly unsympathetic to TUA proponents 

  • The question of what we should do with Pascal's mugging-type situations just seems like a really hard, under-researched problem where there are not yet any satisfying solutions.
  • EA research institutes like GPI have put a hugely disproportionate amount of research into this question, relative to the field of decision theorists. Proponents of TUA, like Bostrom, were the first to highlight these problems in the academic literature.
  •  Alternatives to expected value have received far less attention in the literature and also have many problems
  • E.g., the solution you propose of having some probability threshold below which we can ignore more speculative risks also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or for political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of making a difference is often very low (e.g. <1 in 10 million in most US states; see the short sketch below this comment). Advocating for degrowth also seems extremely unlikely to succeed given the aims of governments across the world and the preferences of ordinary voters.

So, I think framing it as "here is this gaping hole in this worldview" is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
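
For the voting example in that bullet, a one-line expected-value check makes the point concrete. Both numbers below are rough illustrative assumptions (the probability is the commonly quoted order of magnitude; the stakes figure is invented for the example).

```python
# Illustrative EV of voting in a safe state: tiny probability, huge stakes.
p_decisive = 1e-7           # <1 in 10 million chance of swinging the outcome (assumed)
value_if_decisive = 1e10    # assumed social value, in dollars, of the better outcome

expected_value = p_decisive * value_if_decisive
print(f"Expected value of the vote: ${expected_value:,.0f}")   # $1,000
# A probability threshold that simply ignores sub-1-in-a-million events would
# throw this away, which is the tension the comment is pointing at.
```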

Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.

One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.

In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.

I think that, more often than not, a more helpful way to go a... (read more)

Huh, this feels like a somewhat weird post without mentioning the FTX settlement for $22.5M that EV just signed: https://restructuring.ra.kroll.com/FTX/Home-DocketInfo (Memo number 3745). 

My guess is Open Phil is covering this, but my guess is there is a bunch of additional risk that funds you receive right now would become part of this settlement that donors should be able to model.

My guess is you can't talk about this for legal reasons in a post like this (though that does seem sad and my guess is you've been too risk-averse in the domain of sharing any information in this space publicly), but seems important for people to know when someone is assessing what is going on with EV and CEA.

Strongly, strongly, strongly agree. I was in the process of writing essentially this exact post, but am very glad someone else got to it first. The more I thought about it and researched, the more it seemed like convincingly making this case would probably be the most important thing I would ever have done. Kudos to you.

A few points to add:

  1. Under standard EA "on the margin" reasoning, this shouldn't really matter, but I analyzed OP's grants data and found that human GCR has been consistently funded 6-7x more than animal welfare (here's my tweet thread this is from).
  2. Laura Duffy's recently published risk aversion analysis (for Rethink Priorities) basically does a lot of the heavy lifting here (bolding mine):

Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here. 

  1. Using welfare ranges based roughly on Rethink Priorities’ results, spending on corporate cage-free campaigns averts over an order of magnitude more suffering than the most robust global health and development intervention, Against Malaria Foundation. This result holds for almost any level of risk aversio
... (read more)

> I analyzed OP's grants data

FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.

I also made these interactive plots which summarise all EA funding.
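
For readers who want to reproduce this kind of summary themselves, here is a rough sketch of what such an analysis can look like with pandas. The file name and column names ("Focus Area", "Amount", "Date") are hypothetical placeholders for however you export the public grants database, not a description of the linked spreadsheet.

```python
# Sketch: summarise a grants CSV by focus area and year with a pivot table.
import pandas as pd

grants = pd.read_csv("op_grants.csv")   # hypothetical local export of the grants database

# Clean the amount column (e.g. "$1,234,567" -> 1234567.0) and extract the year.
grants["Amount"] = grants["Amount"].replace(r"[\$,]", "", regex=True).astype(float)
grants["Year"] = pd.to_datetime(grants["Date"]).dt.year

# Total grantmaking per focus area per year.
by_area_year = grants.pivot_table(
    index="Focus Area", columns="Year", values="Amount", aggfunc="sum"
)
print(by_area_year.fillna(0).round(0))
```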

Thanks for sharing these studies explaining why you are doing this. Unfortunately, in general I am very skeptical of the sort of studies you are referencing. The researchers typically have a clear agenda - they know what conclusions they want to come to ahead of time, and what conclusions will most advantageous to their career - and the statistical rigour is often lacking, with small sample sizes, lack of pre-registration, p-hacking, and other issues. I took a closer look at the four sources you referenced to see if these issues applied.

> When more women participate in traditionally male-dominated fields like the sciences, the breadth of knowledge in that area usually grows, a surge in female involvement directly correlates with advancements in understanding[1]. [emphasis added]

The link you provide here, to a 2014 article in National Geographic, has a lot of examples of cases where male researchers supposedly overlooked the needs of women (e.g. not adequately studying how women's biology affects how drugs and seat belts should work, or the importance of cleaning houses), and suggests that increasing number of female scientists helped address this. But female scientists being better a... (read more)

(COI note: I work at OpenAI. These are my personal views, though.)

My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:

  1. AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and brainstorming in those spaces; the core alignment ideas are standard background knowledge for everyone there. There are hackathons where people build fun demos, and people figuring out ways of using AI to augment their research. Constant interactions with the models allows people to gain really good hands-on intuitions about how they work, which they leverage into doing great research that helps us actually understand them better. When the public ends up demanding regulation, there's a large pool of competent people who are broadly reasonable about the risks, and can slot into the relevant institutions and make them work well.
  2. AI sa
... (read more)

I think it would be helpful for you to mention and highlight your conflict-of-interest here.

I remember becoming much more positive about ads after starting work at Google. After I left, I slowly became more cynical about them again, and now I'm back down to ~2018 levels. 

EDIT: I don't think this comment should get more than say 10-20 karma. I think it was a quick suggestion/correction that Richard ended up following, not too insightful or useful.

Let me justify my complete disagreement.

I read your comment as applying insanely high quality requirements to what's already an absolutely thankless task. The result of applying your standards would be that the OP would not get written. In a world where criticism is too expensive, it won't get produced. This is good if the criticism is substance-less, but bad if it's of substance.

Also, professional journalists are paid for their work. In case of posts like these, who is supposed to pay the wages and provide the manpower to fulfill requirements like "running it by legal"? Are we going to ask all EA organisations to pay into a whistleblower fund, or what?

Also, for many standards and codes of ethics, their main purpose is not to provide a public good, or to improve epistemics, but to protect the professionals themselves. (For example, I sure wish doctors would tell patients if any of their colleagues should be avoided, but this is just not done.) So unequivocally adhering to such professional standards is not the right goal to strive for.

I also read your comment as containing a bunch of leading questions that presupposed a negative conclusion. Over eight paragraphs of questions, you'r... (read more)

To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).

Application forms for EA jobs often give an estimate for how long you should expect it to take; often these estimates are *wildly* too low ime. (And others I know have said this too). This is bad because it makes the estimates unhelpful for planning, and because it probably makes people feel bad about themselves, or worry that they're unusually slow, when they take longer than the estimate. 

Imo, if something involves any sort of writing from scratch, you should expect applicants to take at least an hour, and possibly more. (For context, I've seen application forms which say 'this application should take 10 minutes' and more commonly ones estimating 20 minutes or 30 minutes).

It doesn’t take long to type 300 words if you already know what you’re going to say and don’t particularly care about polish (I wrote this post in less than an hour probably). But job application questions, even ‘basic’ ones like ‘why do you want this job?’ and ‘why would you be a good fit?’, take more time. You may feel intuitively that you’d be a good fit for the job, but take a while to articulate why. You have to think about how your skills might help with the job, perhaps cross-referencing with ... (read more)

Thanks to the authors for taking the time to think about how to improve our organization and the field of AI takeover prevention as a whole. I share a lot of the concerns mentioned in this post, and I’ve been spending a lot of my attention trying to improve some of them (though I also have important disagreements with parts of the post).

Here’s some information that perhaps supports some of the points made in the post and adds texture, since it seems hard to properly critique a small organization without a lot of context and inside information. (This is adapted from my notes over the past few months.)

Most importantly, I am eager to increase our rate of research output – and critically to have that increase be sustainable because it’s done by a more stable and well-functioning team. I don’t think we should be satisfied with the current output rate, and I think this rate being too low is in substantial part due to not having had the right organizational shape or sufficiently solid management practices (which, in empathy with the past selves of the Redwood leadership team, is often a tricky thing for young organizations to figure out, and is perhaps especially tricky in this field).

I t... (read more)

Removing Claire from the EVF Board because she approved the Wytham Abbey purchase seems tremendously silly to me. FTX is a serious scandal that impacted millions of people; EA projects buying conference venues or offices isn't.

Edward Kmett's take on that topic seems correct to me:

> I look at the building, a building I give myself even odds never to step into, as actually a plausibly solid investment from the standpoint of the cost of running events. I say that as someone who is not paid to care on this front.

> [... A]s someone who has had to organize many conferences in the past, and who watched the utilization of the CFAR space in Bodega Bay, and what Lightcone has been up to, this doesn't seem anywhere near as beyond the pale to me as it seems to some.

> What do I mean by that? Well. Lightcone gets something like 75% utilization out of the Hubinger house as an event space today. It also spends a comparable amount of money on an annual basis on the office space they offer in Berkeley where we first met to the cost of a prime rate loan for this property, which happens to match up with the cost of a loan for the Rose Garden Inn, almost to a tee. Looking at the Rose Garden Inn, it

... (read more)

> Atlas reportedly spent $10,000 on a coffee table. Is this true? Why was the table so expensive?

Atlas at some point bought this table, I think: https://sisyphus-industries.com/product/metal-coffee-table/. At that link it costs around $2200, so I highly doubt the $10,000 number.

Lightcone then bought that table from Atlas a few months ago at the listing price, since Jonas thought the purchase seemed excessive, so Atlas actually didn't end up paying anything. I am really glad we bought it from them, it's probably my favorite piece of furniture in the whole venue we are currently renovating.

If you think it was a waste of money, I have made much worse interior design decisions (in-general furniture is really annoyingly expensive, and I've bought couches for $2000 that turned out to just not work for us at all and were too hard to sell), and I consider this one a pretty strong hit. (To clarify, the reason why it's so expensive is because it's a kinetic sculpture with a moving magnet and a magnetic ball that draws programmable patterns into the sand at the center of the table, so it's not just like, a pretty coffee table)

The table is currently serving as a centerpiece of our central worksp... (read more)


Hey everyone, the moderators want to point out that this topic is heated for several reasons:

  • Lots of relevant information comes from people's personal experiences, which will vary a lot.
  • Harassment and power dynamics are often emotionally loaded and can be difficult to discuss objectively.
  • Polyamory is something that a lot of the world stigmatizes, so some people will be defensive (whether merited or not), and some will be subconsciously biased against it. This makes it hard to discuss without assuming that some people are hostile.
  • The facts mentioned in the article are very serious and disturbing, and some readers have experienced similarly appalling episodes

So we want to ask everyone to be especially understanding and generous when discussing topics this sensitive.
And as a reminder, harassment is unacceptable. One resource that exists for this is the Community Health Team at CEA. You can get in touch with the team here. If you ever experience harassment of any kind on the Forum, please reach out to the moderation team.

Edit: added the last bullet point after a useful comment

Where I agree:

  • Experimentation with decentralised funding is good. I feel it's a real shame that EA may not end up learning very much from the FTX regrant program because all the staff at the foundation quit (for extremely good reasons!) before many of the grants were evaluated.
  • More engagement with experts. Obviously, this trades off against other things and it's easier to engage with experts when you have money to pay them for consultations, but I'm sure there are opportunities to engage with them more. I suspect that a lot of the time the limiting factor may simply be people not knowing who to reach out to, so perhaps one way to make progress on this would be to make a list of experts who are willing for people at EA orgs to reach out to them, subject to availability?
  • I would love to see more engagement from Disaster Risk Reduction, Future Studies, Science and Technology Studies, etc. I would encourage anyone with such experience to consider posting on the EA forum. You may want to consider extracting this section out into a separate forum post for greater visibility.
  • I would be keen to see experiments where people vote on funding decisions (although I would be surprised if this were
... (read more)

Nice. Thanks. Really well written, very clear language, and I think this is pointed in a pretty good direction. Overall I learned a lot.

I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup.

I don't see you doing much acknowledging what might be good about the stuff that you critique -- for example, you critique the focus on individual rationality over e.g. deferring to external consensus. But it seems possible to me that the movement's early focus on individual rationality was the cause of attracting great people into the movement, and that without that focus EA might not be anything at all! If I'm right about that then are we ready to give up on whatever power we gained from making that choice early on?

Or, as a metaphor, you might be saying something like "EA needs to 'grow up' now" but I am wondering if EA's childlike nature is part of its success and 'growing up' would actually have a chance to kill the movement.

The casual assumption that people make that obviously the only reason Caroline could have become CEO was because she was sleeping with SBF is annoying when I see it on Twitter or some toxic subreddit. Here I expect better. Plenty of people at FTX and Alameda were equally young and equally inexperienced. The CTO (a similarly important role at a tech company) of FTX, Gary Wang, was 29. Sam Trabucco, the previous Alameda co-CEO, seems to be about the same. I have seen no reason to think that Caroline was particularly unusual in her age or experience relative to others at FTX and Alameda. 

Just also want to emphasise Lizka's role in organising and spearheading this, as well as her conscientiousness and clear communication at every step of the process - I've enjoyed being part of this, and am personally super grateful for all the work she has put into this contest.

It seems that half of these examples are from 15+ years ago, from a period for which Eliezer has explicitly disavowed his opinions (and the ones that are not strike me as most likely correct, like treating coherence arguments as forceful and that AI progress is likely to be discontinuous and localized and to require relatively little compute). 

Let's go example-by-example: 

1. Predicting near-term extinction from nanotech

This critique strikes me as about as sensible as digging up someone's old high-school essays and critiquing their stance on communism or the criminal justice system. I want to remind any reader that this is an opinion from 1999, when Eliezer was barely 20 years old. I am confident I can find crazier and worse opinions for every single leadership figure in Effective Altruism, if I am willing to go back to what they thought while they were in high-school. To give some character, here are some things I believed in my early high-school years: 

  • The economy was going to collapse because the U.S. was establishing a global surveillance state
  • Nuclear power plants are extremely dangerous and any one of them is quite likely to explode in a given year
  • We could have e
... (read more)

> It seems that half of these examples are from 15+ years ago, from a period for which Eliezer has explicitly disavowed his opinions

Just to note that the boldfaced part has no relevance in this context. The post is not attributing these views to present-day Yudkowsky. Rather, it is arguing that Yudkowsky's track record is less flattering than some people appear to believe. You can disavow an opinion that you once held, but this disavowal doesn't erase a bad prediction from your track record.


Oregonian here, born and raised. I don’t live in OR-6 but can see it from my home. I’m by no means a member of EA but I’m aware of it and until now had a generally favorable impression of you all.

I hope that rather than donating, folks in this thread will think about what they’re doing and whether it’s a good idea. The most obvious effect of this effort has been to 5-10x the total spending in this race. It’s pretty easy to read it as an experiment to see if CEA can buy seats in Congress. That’s not innovative, it’s one of the oldest impulses in politics: we’re rich, let’s put my friend in power.

Further, it sounds like your friend Carrick is a great guy, but he’s got many defects as a candidate. He’s only lived in Oregon for about 18 months since college. From the few interviews he’s given, he doesn’t seem to have much familiarity with, or even really care about, key issues in Oregon (in particular, he shows a lack of nuanced understanding of issues like forest policy and drug decriminalization). He does not appear to have reached out to local leaders or tried to do any of the local network building you’d expect of a good representative. According to OPB he’s... (read more)


Thanks for the thoughtful comment! Without commenting on the candidacy or election overall, a response (lightly edited for clarity) to your point about pandemics:

You emphasize pandemic expertise, but pandemic prevention priorities are arguably more relevant to who will make a difference. It might not take much expertise to think that now is a bad time for Congress to slash pandemic prevention funding, which happened despite some lobbying against it. And for harder decisions, a non-expert member of Congress can hire or consult with expert advisors, as is common practice. Instead of expertise being most important in this case, a perspective I've heard from people very familiar with Congress is that Congress members' priorities are often more important, since members face tough negotiations and tradeoffs. So maybe what's lacking in Congress isn't pandemic-related expertise or lobbying, but willingness to make it a priority to keep something like covid from happening again.

It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection -- it's also incredibly valuable information! Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what is good or bad about their application will help them improve that process, and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful to the community, or at least it could save them the time and effort of applying for more jobs with the same requirements they didn't meet -- and save the hiring teams there the time and effort of rejecting them.

A unique characteristic of EA hiring is that it's often good for your goals to help candidates who didn't succeed at your process succeed at something else nearby. I often think we don't realize how significantly this shifts our incentives in cases like these.

Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near term work is more important, while the latter five are why you might work on near term causes even if you think long term future work is more important.

  1. You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).

  2. Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view was merely a standard person affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).

  3. You might have a methodological objection to applying expected value to cases where the probability is small. While the author attributes

... (read more)

I often wonder whether Benjamin Lay would be banned from the EA Forum or EA events. It seems to me that the following exchange would have gotten him at least a warning within the context of vegetarianism:
“Benjamin gave no peace” to slave owners, the 19th-century radical Quaker Isaac Hopper recalled hearing as a child. “As sure as any character attempted to speak to the business of the meeting, he would start to his feet and cry out, ‘There’s another negro-master!’”

I can't think of any EAs that take actions similar to the following:
"Benjamin Lay’s neighbors held slaves, despite Lay’s frequent censures and cajoling. One day, he persuaded the neighbors’ 6-year old son to his home and amused him there all day. As evening came, the boy’s parents became extremely concerned. Lay noticed them running around outside in a desperate search, and he innocently inquired about what they were doing. When the parents explained in panic that their son was missing, Lay replied: Your child is safe in my house, and you may now conceive of the sorrow you inflict upon the parents of the negroe girl you hold in slavery, for she was torn from them by avarice. (Swarthmore College Bulletin)"

Reading Lukas_Gloor’s comment (and to a lesser extent, this still helpful one from Erica_Edelman) made me realize what I think is the big disagreement between people and why they are talking past each other. 

It comes down to how you would feel about doing Alice/Chloe’s job. 

Some people, like the Nonlinear folks and most of those sympathetic to them, think something like the following:

“Why is she such an ungrateful whiner? She has THE dream job/life. She gets to travel the world with us (which is awesome since we can do anything and this is what we chose to do), living in some insanely cool places with super cool and successful people AND she has a large degree of autonomy over what she does AND we are building her up and like 15% of her job is some menial tasks that we did right before she joined and come on it’s fine. How can you complain about the smallest unpleasant thing when the rest of your life rocks and this is your FIRST job out of college when this lifestyle is reserved for multimillionaires? She gets to live the life of a multimillionaire and is surrounded by cool EA people”

Others look at Alice/Chloe’s life and think something like the following:

“Wow,... (read more)

Doing some napkin-math:

  • Rethink published 32 pieces of research in 2022 (according to your database)
  • I think roughly (?) half of your work doesn't get published as it's for specific clients, so let's say you produced 64 reports overall in 2022.
  • Rethink raised $10.7 million in 2022.
  • That works out to around $167k per research output.

That seems like a lot! Maybe I should discount a bit as some of this might be for the new Special Projects team rather than research, but it still seems like it'll be over $100k per research output. 

Related questions:

  • Do you think the calculations above are broadly correct? If not, could you share what the ballpark figures might actually be? Obviously, this will depend a lot on the size of the project and other factors but averages are still useful! 
  • If they are correct, how come this number is so high? Is it just due to multiple researchers spending a lot of time per report and making sure it's extremely high-quality? FWIW I think the value of some RP projects is very high - and worth more than the costs above - but I'm still surprised at the costs.
  • Is the cost something you're assessing when you decide whether to take on a research project (when it'
... (read more)

Hey Bob - Howie from EV UK here. Thanks for flagging this! I definitely see why this would look concerning so I just wanted to quickly chime in and let you/others know that we’ve already gotten in touch with relevant regulators about this and I don’t think there’s much to worry about here.

The thing going on is that EV UK has an extended filing deadline (from 30 April to 30 June 2023) for our audited accounts,[1] which are one of the things included in our Annual Return. So back in April, we notified the Charity Commission that we’ll be filing our Annual Return by 30 June. 

  1. ^

    This is due to a covid extension, which the UK government has granted to many companies.

I notice that I am surprised and confused.

I'd have expected Holden to contribute much more to AI existential safety as CEO of Open Philanthropy (career capital, comparative advantage, specialisation, etc.) than via direct work.

I don't really know what to make of this.

That said, it sounds like you've given this a lot of deliberation and have a clear plan/course of action.

I'm excited about your endeavours in the project!

Firstly, I will say that I'm personally not afraid to study and debate these topics, and have done so. My belief is that the data points to no evidence of significant genetic differences between races when it comes to matters such as intelligence, and I think one downside of being hush-hush about the subject is that people miss out on this conclusion, which is the one even a basic Wikipedia skim would get you to. (You're free to disagree; that's not the point of this comment.)

That being said, I think you have greatly understated the case for not debating the subject on this forum. Remember, this is a forum for doing the most good, not a debate club, and if shunting debate of certain subjects onto a different website does the most good, that's what we should do. This requires a cost/benefit analysis, and you are severely understating the costs here. 

Point 1 is that we have to acknowledge the obvious fact that when you make a group of people feel bad, some of them are going to leave your group. I do not think this is a moral failing on their part. We have a limited number of hours in the day, would you hang out in a place where people regularly discuss whether you are gene... (read more)

Hi Simon,

I'm back to work and able to reply with a bit more detail now (though also time-constrained as we have a lot of other important work to do this new year :)).

I still do not think any (immediate) action on our part is required. Let me lay out the reasons why:

(1) Our full process and criteria are explained here. As you seem to agree with from your comment above we need clear and simple rules for what is and what isn't included (incl. because we have a very small team and need to prioritize). Currently a very brief summary of these rules/the process would be: first determine which evaluators to rely on (also note our plans for this year) and then rely on their recommendations. We do not generally have the capacity to review individual charity evaluations, and would only do so and potentially diverge from a trusted evaluator's recommendation under exceptional circumstances. (I don't believe we have had such a circumstance this giving season, but may misremember)

(2) There were no strong reasons to diverge with respect to FP's recommendation of StrongMinds at the time they recommended them - or to do an in-depth review of FP's evaluation ourselves - and I think there still aren... (read more)

To be honest I'm relieved this is one of the top comments. I've seen Kathy mentioned a few times recently in a way I didn't think was accurate and I didn't feel able to respond. I think anyone who comes across her story will have questions and I'm glad someone's addressed the questions even if it's just in a limited way.

Without in any sense wanting to take away from the personal responsibility of the people who actually did the unethical, and probably illegal trading, I think there might be a couple of general lessons here:

1) An attitude of 'I take huge financial risks because I'm trading for others, not myself, and money has approx. 0 diminishing marginal utility for altruism, plus I'm so ethical I don't mind losing my shirt' might sound like a clever idea. But crucially, it is MUCH easier psychologically to think you'll just eat the loss and the attendant humiliation and loss of status, before you are actually facing losing vast sums of money for real. Assuming (as seems likely to me) that SBF started out with genuine good intentions, my guess is this was hard to anticipate because a self-conception as "genuinely altruistic" blocked him from the idea that he might do wrong. The same thing probably stopped others hearing about SBF taking on huge risks, which of course he was open* about, from realizing this danger.

2) On reflection, the following is a failure mode for us as a movement combining a lot of utilitarians (and more generally, people who understand that it is *sometimes, in principle... (read more)

Great comment. First comment from new forum member here. Some background: I was EA adjacent for many years, and donated quite a lot of income through an EA organization, and EA people in my community inspired me to go vegan. Still thankful for that. Then I was heavily turned off by the move towards longtermism, which I find objectionable on many grounds (both philosophical and political). This is just to give you some background on where I'm coming from, so read my comment with that in mind. 

I would like to pick up on this part: "Assuming (as seems likely to me) that SBF started out with genuine good intentions, my guess is this was hard to anticipate because a self-conception as "genuinely altruistic" blocked him from the idea that he might do wrong". I think this is true, and I think it's crucial for the EA community to reflect on these things going forward. It's the moral licensing or self-licensing effect, which is well described in moral psychology - individuals who are very confident they are doing good may be more likely to engage in bad acts.

I think, however, that the EA community at large in recent years have started to suffer from a kind of intellectual sel... (read more)

I think "it's easy to overreact on a personal level" is an important lesson from covid, but much more important is "it's easy to underreact on a policy level". I.e. given the level of foresight that EAs had about covid, I think we had a disappointingly small influence on mitigating it, in part because people focused too much on making sure they didn't get it themselves.

In this case, I've seen a bunch of people posting about how they're likely to leave major cities soon, and basically zero discussion of whether there are things people can do to make nuclear war overall less likely and/or systematically help a lot of other people. I don't think it's bad to be trying to ensure your personal survival as a key priority, and I don't want to discourage people from seriously analysing the risks from that perspective, but I do want to note that the overall effect is a bit odd, and may indicate some kind of community-level blind spot.

I've seen the time-money tradeoff reach some pretty extreme, scope-insensitive conclusions. People correctly recognize that it's not worth 30 minutes of time at a multi-organizer meeting to try to shave $10 off a food order, but they extrapolate this to it not being worth a few hours of solo organizer time to save thousands of dollars. I think people should probably adopt some kind of heuristic about how many EA dollars their EA time is worth and stick to it, even when it produces the unpleasant/unflattering conclusion that you should spend time to save money.
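
For concreteness, here is a minimal sketch of the kind of heuristic suggested above. The hourly figure and the two scenarios are illustrative assumptions, not numbers from the comment.

```python
# Illustrative time-money heuristic: spend time on saving money only when the
# dollars saved exceed the value you assign to the time spent.
VALUE_OF_ORGANIZER_TIME = 100  # assumed dollars per hour; pick your own number

def worth_spending_time(hours_spent: float, dollars_saved: float) -> bool:
    """True if the savings beat the value of the time spent."""
    return dollars_saved > hours_spent * VALUE_OF_ORGANIZER_TIME

# Six organizers spending 30 minutes to shave $10 off a food order: not worth it.
print(worth_spending_time(hours_spent=6 * 0.5, dollars_saved=10))    # False
# A few hours of solo time that saves thousands of dollars: worth it.
print(worth_spending_time(hours_spent=3, dollars_saved=3000))        # True
```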

Also want to highlight "For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive" as what I think is the most clearly correct and actionable suggestion here.

We've invited Will MacAskill, Elon Musk, Grimes, and Eliezer Yudkowsky to be our celebrity judges.

I haven't received my invite yet (probably because you left out my first name)

Toby Ordering is really good.


If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on Facebook and tagging me in the comments, and then calling me and others Nazis. Why do you and your colleagues continue to extensively collaborate with him?

To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him. 

[disclaimer: I am co-Director at CSER. While much of what I will write intersects with professional responsibilities, it is primarily written from a personal perspective, as this is a deeply personal matter for me. Apologies in advance if that's confusing, this is a distressing and difficult topic for me, and I may come back and edit. I may also delete my comment, for professional or personal/emotional reasons].

I am sympathetic to Halstead's position here, and feel I need to write my own perspective. Clearly to the extent that CSER has - whether directly or indirectly - served to legitimise such attacks by Torres on colleagues in the field, I bear a portion of responsibility as someone in a leadership position. I do not feel it would be right or appropriate for me to speak for all colleagues, but I would like to emphasise that individually I do not, in any way, condone this conduct, and I apologise for it, and for any failings on my individual part that may have contributed.

My personal impression supports the case Halstead makes. Comments about my 'whiteness', and insinuations regarding my 'real' reasons for objecting to positions taken by Torres only came after I objected publicly... (read more)

Addendum: There's a saying that "no matter what side of an argument you're on, you'll always find someone on your side who you wish was on the other side".

There is a seam running through Torres's work that challenges xrisk/longtermism/EA on the grounds that these movements are led and formulated by a mostly elite, developed-world community, with the limitations that implies.

Like many people in longtermism/xrisk, I think there is a valid concern here. xrisk/longtermism/EA all started in a combination of elite British universities and US communities (e.g. the Bay Area). They had to start somewhere. I am of the view that they shouldn't stay that way. 

I think it's valid to ask whether there are assumptions embedded within these frameworks at this stage that should be challenged, and to posit that these would be challenged most effectively by people with a very different background and perspective. I think it's valid to argue that thinking, planning for, and efforts to shape the long-term future should not be driven by a community that is overwhelmingly from one particular background and that doesn't draw on and incorporate the perspectives of a community that reflects more of global societies and cultures. Work by such... (read more)

I disagree. It seems to me that the EA community's strength, goodness, and power lie almost entirely in our ability to reason well (so as to actually be "effective", rather than merely tribal/random). It lies in our ability to trust in the integrity of one another's speech and reasoning, and to talk together to figure out what's true.

Finding the real leverage points in the world is probably worth orders of magnitude in our impact. Our ability to think honestly and speak accurately and openly with each other seems to me to be a key part of how we access those "orders of magnitude of impact."

In contrast, our ability to have more money/followers/etc. (via not ending up on the wrong side of a cultural revolution, etc.) seems to me to be worth... something, in expectation, but not as much as our ability to think and speak together is worth.

(There's a lot to work out here, in terms of trying to either do the estimates in EV terms, or trying to work out the decision theory / virtue ethics of the matter. I would love to try to discuss in detail, back and forth, and see if we can work this out. I do not think this should be super obvious in either direction from the get go, although at this point my opinion is pretty strongly in the direction I am naming. Please do discuss if you're up for it.)

I met Australia's Assistant Minister for Defence last Friday. I asked him to write an email to the Minister in charge of AI, asking him to establish an AI Safety Institute. He said he would. He also seemed on board with not having fully autonomous AI weaponry.

All because I sent one email asking for a meeting + had said meeting. 

Advocacy might be the lowest hanging fruit in AI Safety.

I find it interesting and revealing to look at how Nonlinear condensed Chloe's initial account of an incident into a shorter version.

First, here's their shortened version (by Nonlinear):

One of Chloe’s jobs was to organize fun day trips (which she’d join us on). In fact, one of her unofficial titles was Fun Lord of Nonlinear, First of Her Name. One day, spontaneously, we decided to go on a trip to St. Barths. Emerson asked her to do her usual job, and she said “It’s a weekend” and he said, “But you like organizing fun trips!” - she had said so many times - and she said sure.

She continued doing her job (arranging ATV rentals for the group - herself getting to ride as well of course). Then, when she complained, Emerson said “OK” and then just… went and did her job for her. And that was that.

(This is another example of Chloe coming in with the implicit frame that doing her job is abusive. “Everyone sits down at a lovely cafe to have coffee and chit chat, while I’m running around to car and ATV rentals to see what they have to offer.” We can empathize with her wishing she could join us before finishing her job, but this was her job. Being an assistant is not abuse.)

The day after she tal

... (read more)

The problem with Kat’s text is that it’s a very thinly veiled threat to end someone’s career in an attempt to control Nonlinear’s image. There is no context that justifies such a threat.

Shoutout to the 130-ish people in the UK who volunteered to be infected with malaria in two separate studies at various stages of the R21 development process! Those studies helped identify Matrix-M as the ideal adjuvant, and also provided insight into the optimal dose/vaccination schedule.

(context: worked at FHI for 2 years, no longer affiliated with it but still in touch with some people who are)

I'd probably frame/emphasize things a bit differently myself but agree with the general thrust of this, and think it'd be both overdue and in everyone's interest.

The obvious lack of vetting of the apology was pretty disqualifying w.r.t. judgment for someone in such a prominent institutional and community position, even before getting to the content (on which I've commented  elsewhere). 

I'd add, re: pre-existing issues, that FHI as an institution has failed at doing super basic things like at least semi-regularly updating key components of their website*; the org's shortcomings re: diversity have been obvious from the beginning and the apology was the last nail in the coffin re: chances for improving on that front as long as he's in charge; and I don't think I know anyone who thinks he adds net positive value as a manager** (vs. as a researcher,  where I agree he has made important contributions, but that could continue without him wasting a critical leadership position, and as a founder, where his work is done). 

*e.g. the news banner thing displays 6 yea... (read more)

I think this post is very accurate, but I worry that people will agree with it in a vacuous way of "yes, there is a problem, we should do something about it, learning from others is good". So I want to make a more pointed claim: I think that the single biggest barrier to interfacing between EAs and non-EAs is the current structure of community building. Community-building is largely structured around creating highly-engaged EAs, usually through recruiting college students or even high-school students. These students are not necessarily in the best position to interface between EA and other ways of doing good, precisely because they are so early into their careers and don't necessarily have other competencies or viewpoints. So EA ends up as their primary lens for the world, and in my view that explains a sizable part of EA's quasi-isolationist thinking on doing good.

This doesn't mean all EAs who joined as college students (like me) end up as totally insular - life puts you into environments where you can learn from non-EAs. But that isn't the default, and especially outside of global health and development, it is very easy for a young highly-engaged EA to avoid learning about doing good from non-EAs.

Hi James, 

Thanks for writing this - it's difficult/intimidating to write and post things of this nature on here, and it's also really important and valuable. So thanks for sharing your experience. 

Please don't read this response as being critical/dismissive of your experiences - I have no doubt that these dynamics do exist, and that these types of interaction do happen (too frequently) in EA spaces. It makes me unhappy to know that well-intentioned people who want to make a difference in the world are turned off by interacting with some people in the EA community, or attending some EA events. 

I do want to say though, for fairness' sake, that as a member of an ethnic, religious, and geographical minority in the EA community, I feel valued and respected; that I don't think the attitudes or opinions of the people you're reporting in your post are that common in the greater community; and that the vast majority of the EAs I know would be upset to hear of another EA behaving the way you're reporting they did. 

^This anticipates the overall theme of the ideas I had when reading your post: that we make the mistake of thinking about the EA community, and EA events... (read more)

Jacy

Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the "too much money" discourse and subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting on their significant room for more funding. (There has been some pushback, and my impression that the "too much money" discourse has been more prevalent may not be representative.)

I've often heard the marginal ETG amount, at which point a normal EA employee should be indifferent between EA employment and donating $X per year, put at well above $1,000,000, and I see many working on megaproject ideas designed to absorb as much funding as possible. I think many would say that these choices make sense in a community with >$30 billion in funding, but not one with <$5 billion in funding, just as ballparks to put numbers on things. I think many of us are in fortunate positions to pivot quickly and safely, but for many, especially f... (read more)

Did a test run with 58 participants (I got two attempted repeats):

So you were right, and I'm super surprised here.

Why don’t you just buy followers? We could... We haven’t completely ruled this out (the ends might justify the means)

Just saying I think this would be a terrible idea, both for HIA and for the movement in general. We very obviously don't want to be associated with lying and manufacturing support. Not to mention it might just get you banned from social media.

On one hand it's clear that global poverty does get the most overall EA funding right now, but it's also clear that it's easier for me to personally get my 20th best longtermism idea funded than to get my 3rd best animal idea or 3rd best global poverty idea funded, and this asymmetry seems important.

This post leaves some dots unconnected. 

Are you suggesting that people pretend to have beliefs they don't have in order to have a good career and also shift the Republican party from the inside? 

Are you suggesting that anyone can be a Republican as long as they have a couple of beliefs or values that are not totally at odds with those of the Republican party — even if the majority of their beliefs and values are far more aligned with another party? 

Or by telling people to join the Republican party, are you suggesting they actively change some of their beliefs or stances in order to fit in, but then focus on shaping the party to be aligned with EA values that it is currently kind of neutral about?

It doesn't seem you're saying the first thing, because you don't say anything about hiding one's true beliefs, and you have the example of the openly left-wing acquaintance who got a job at a conservative NGO. 

If you're saying the second thing, I think this is more difficult than you're imagining. I don't mean emotionally difficult because of cold uggies. I mean strategically or practically difficult because participation in certain political parties is generally meant ... (read more)

This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.

Reflection on my time as a Visiting Fellow at Rethink Priorities this summer

I was a Visiting Fellow at Rethink Priorities this summer. They’re hiring right now, and I have lots of thoughts on my time there, so I figured that I’d share some. I had some misconceptions coming in, and I think I would have benefited from a post like this, so I’m guessing other people might, too. Unfortunately, I don’t have time to write anything in depth for now, so a shortform will have to do.

Fair warning: this shortform is quite personal and one-sided. In particular, when I tried to think of downsides to highlight to make this post fair, few came to mind, so the post is very upsides-heavy. (Linch’s recent post has a lot more on possible negatives about working at RP.) Another disclaimer: I changed in various ways during the summer, including in terms of my preferences and priorities. I think this is good, but there’s also a good chance of some bias (I’m happy with how working at RP went because working at RP transformed me into the kind of person who’s happy with that sort of work, etc.). (See additional disclaimer at the bottom.)

First, some vague background on me, in case it’s relevant:

  • I finished m
... (read more)

Honestly, the biggest benefit to my wellbeing was taking action about depression, including seeing a doctor, going on antidepressants, and generally treating it like a problem that needed to be solved. I really think I might not have done that, or might have done it much later, were it not for EA - EA made me think about things in an outcome-oriented way, and gave me an extra reason to ensure I was healthy and able to work well.

For others: I think that Scott Alexander's posts on anxiety and depression are really excellent and hard to beat in terms of advice. Other things I'd add: I'd generally recommend that your top goal should be ensuring that you're in a healthy state before worrying too much about how to go about helping others; if you're seriously unhappy or burnt out, fixing that first is almost certainly the best altruistic thing you can do. I also recommend maintaining and cultivating a non-EA life: having a multi-faceted identity means that if one aspect of your life isn't going so well, then you can take solace in other aspects.


I don't agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.

Will there be an update to this post with respect to which projects actually get funded following these recommendations? One aspect that I'm not clear on is to what extent CEA will "automatically" follow these recommendations and to what extent there will be significant further review.

There are no whistleblower systems in place at any major EA orgs as far as I know

I’ve heard this claim repeatedly, but it’s not true that EA orgs have no whistleblower systems. 

I looked into this as part of this project on reforms at EA organizations: Resource on whistleblowing and other ways of escalating concerns

  • Many organizations in EA have whistleblower policies, some of which are public in their bylaws (for example, GiveWell and ACE publish their whistleblower policies among other policies). EV US and EV UK have whistleblower policies that apply to all the projects under their umbrella (CEA, 80,000 Hours, etc.) This is just a normal thing for nonprofits; the IRS asks whether you have one even though they don't strictly require it, and you can look up on a nonprofit’s 990 whether they have such a policy. 
  • Additionally, UK law, state law in many US states, and lots of other countries provide some legal protections for whistleblowers. Legal protection varies by state in the US, but is relatively strong in California.
  • Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from ne
... (read more)
Vaipan

I do not know Owen. I am, however, a bit worried to see two people in these comments advocating for Owen while this affair does not look good and the facts speak for themselves; there is a certain irony in seeing these two people come to defend Owen while the community health head, Julia, admits to a certain level of bias in handling this affair since he was her friend. It seems that EA people do not learn from the mistakes that are courageously being owned up to here. This post talks about Owen misbehaving: it does not talk about Owen's good deeds. So this kind of comment defeats the point of this post. 

Can you put yourself for two seconds in the shoes of these women who received unwanted and pressing attention from Owen, with all the power dynamics that are involved, reading comments on how Owen is responsible and a great addition to the community, even after women repeatedly complained about him? What I read is 'He treated me well, so don't be so quick to dismiss him' and 'I've dealt with worse cases, so I can assure you this one is not that bad'. 

Do you really think that such attitudes encourage women to speak up? Do you really think that this is the place to do this? 

E... (read more)

USAID has announced that they've committed $4 million to fighting global lead poisoning

USAID Administrator Samantha Power also called other donors to action, and announced that USAID will be the first bilateral donor agency to join the Global Alliance to Eliminate Lead Paint. The Center for Global Development (CGD) discusses the implications of the announcement here

For context, lead poisoning seems to get ~$11-15 million per year right now, and has a huge toll. I'm really excited about this news.

Also, thanks to @ryancbriggs for pointing out that this seems like "a huge win for risky policy change global health effective altruism" and referencing this grant:

In December 2021, GiveWell (or the EA Funds Global Health and Development Fund?) gave a grant to CGD to "to support research into the effects of lead exposure on economic and educational outcomes, and run a working group that will author policy outreach documents and engage with global policymakers." In their writeup, they recorded a 10% "best case" forecast that in two years (by the end of the grant period), "The U.S. government, other international actors (e.g., bilateral and multilateral donors), and/or national ... (read more)

(I edited an earlier comment to include this, but it's a bit buried now, so I wanted to make a new comment.)

I've read most of the post and appendix (still not everything). To be a bit more constructive, I want to expand on how I think you could have responded better (and more quickly):

  1. We were sad to hear that two of our former employees had such negative experiences working with us. We were aware of some of their complaints, but others took us by surprise.
  2. We have a different perspective on many of the issues they raise. In particular, we dispute some of the most serious allegations. We're attaching some evidence here to show that the employees were well-compensated, provided vegan food, and were absolutely not asked to transport illegal substances.
  3. We are also aware that one of the ex-employees has a concerning history of behaviour which we think affects how she perceives her time working with us.
  4. However, we also recognize that we made mistakes. In particular, we put ourselves and others in a risky situation by travelling and living in foreign countries with people who we both didn't know very well and were employing. We also chose to eschew some standard practices around emplo
... (read more)

You say: "This is inaccurate. I don't think there is any evidence that Ben had access to that doesn't seem well-summarized by the two sections above. We had a direct report from Alice, which is accurately summarized in the first quote above, and an attempted rebuttal from Kat, which is accurately summarized in the second quote above. We did not have any screenshots or additional evidence that didn't make it into the post."

Actually, you are mistaken: Ben did have screenshots. I think you just didn't know that he had them. I can send you proof that he had them via DM if you like.

Regarding this: "As Kat has documented herself, she asked Alice to bring Schedule 2 drugs across borders without prescription (whether you need a prescription in the country you buy it is irrelevant, what matters is whether you have one in the country you arrive in), something that can have quite substantial legal consequences (I almost certainly would feel pretty uncomfortable asking my employee to bring prescription medications across borders without appropriate prescription)."

It sounds like you're saying this paragraph by Ben: 

"Before she went on vacation, Kat requested that Alice bring a variety of i... (read more)

I’m surprised to hear you say this Habryka: “I think all the specific statements that Ben made in his post were pretty well-calibrated (and still seem mostly right to me after reading through the evidence)”

Do you think Ben was well calibrated/right when he made, for instance, these claims which Nonlinear has provided counter evidence for?

“She [Alice] was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days. Alice eventually gave in and ate non-vegan food in the house” (from my reading of the evidence this is not close to accurate, and I believe Ben had access to the counter evidence at the time when he published)

“Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be dangerous for her personally, but Emerson and Kat reportedly argued that it is not dangerous at all and was “absolutely risk-free” (from my reading of the evidence Nonlinear provided, it seems Alice was asked to buy ADHD medicine that they believed was lega... (read more)

Effective giving quick take for giving season

This is quite half-baked because I think my social circle contains not very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn to give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where people who took the advice the most seriously ended up deeply embedded in other professional communities, so heard about the switch later or found it harder to reconnect with the community and the new priorities.

I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, thoug... (read more)

Just noting, for people who might not read the book, that there are many more mentions of "effective altruism":

I agree that EA seems often painted as "High IQ immature children", especially from Chapter 6 or 7.

To me, EA also seems painted as kind of a cult[1], where acolytes sacrifice their lives for "the greater good" according to a weird ideology, and people seem to be considered "effective altruists" mostly based on their social connections with the group.

I'm surprised you didn't mention what was for me the spiciest EA quote, from SBF in ~2018:

This combos really badly with the current EA shitshow I’m supposed to be, in some ways, adjudicating.

  1. Same way as this Washington Post article puts it

Emily Oster declared that “treating HIV doesn’t pay.” “It is humane to pay for AIDS drugs in Africa,” she wrote, “but it isn’t economical. The same dollars spent on prevention would save more lives.”


Twenty years later, with $100 billion dollars appropriated[26] under both Democratic and Republican administrations, and millions of lives saved, it’s hard to argue a different foreign aid program would’ve garnered more support, scaled so effectively, and done more good. It’s not that trade-offs don’t exist, we just got the counterfactual wrong.


It's not clear to me that the core point of the essay goes through. For instance, the same amount of money as applied to malaria would also have helped many people, driven down prices, encouraged innovation—maybe the equivalent would have been a malaria vaccine, a gene drive, or mass fumigations.

i.e., it seems plausible that both of these could be true:

  • PEPFAR was worth doing
  • There are other large health megaprojects that would have been better

Bad Things Are Bad: A Short List of Common Views Among EAs

  1. No, we should not sterilize people against their will.
  2. No, we should not murder AI researchers. Murder is generally bad. Martyrs are generally effective. Executing complicated plans is generally more difficult than you think, particularly if failure means getting arrested and massive amounts of bad publicity.
  3. Sex and power are very complicated. If you have a power relationship, consider if you should also have a sexual one. Consider very carefully if you have an power relationship: many forms of power relationship are invisible, or at least transparent, to the person with power. Common forms of power include age, money, social connections, professional connections, and almost anything that correlates with money (race, gender, etc). Some of these will be more important than others. If you're concerned about something, talk to a friend who's on the other side of that from you. If you don't have any, maybe just don't.
  4. And yes, also, don't assault people.
  5. Sometimes deregulation is harmful. "More capitalism" is not the solution to every problem.
  6. Very few people in wild animal suffering  think that we should go and deliberately de
... (read more)

Jeff is right: I just returned from my mom's memorial service, which delayed the just-posted FLI statement.

Lizka

A short note as a moderator (echoing a commenter): People (understandably) have strong feelings about discussions that focus on race, and many of us found the linked content difficult to read. This means that it's both harder to keep to Forum norms when responding to this, and (I think) especially important.

Please keep this in mind if you decide to engage in this discussion, and try to remember that most people on the Forum are here for collaborative discussions about doing good.

If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.


SBF seems like a very good person. Almost regardless of what happened, I don't think a reasonable person's opinion of him should be reduced. 

I think if he lied publicly about whether FTX's client assets were invested, then this should very much reduce a reasonable person's opinion of him. If lying straightforwardly in public does not count against your character, I don't know what else would. 

That said, I don't actually know whether any lying happened here. The real situation seems to be messy, and it's plausible that all of FTX's client assets (and not like derivatives) were indeed not invested, but that the thing that took FTX out was the leveraged derivatives they were selling, which required more advanced risk-balancing, though I do think that Twitter thread looks really quite suspicious right now.

Thanks for the suggestion, Zach!

I did explain to Constance why she was initially rejected as one of the things we discussed on an hour-long call. We also discussed additional information she was considering including, and I told her I thought she was a better fit for EAGx (she said she was not interested). It can be challenging to give a lot of guidance on how to change a specific application, especially in cases where the goal is to “get in”. I worry about providing information that will allow candidates to game the system. 

I don’t think this post reflects what I told Constance, perhaps because she disagrees with us. So, I want to stick to the policy for now.

I agree that S-risks are more neglected by EA than extinction risks, and I think the explanation that many people associate S-risks with negative utilitarianism is plausible. I'm a regular utilitarian and I've reached the conclusion that S-risks are quite important and neglected, and I hope this bucks the perception that everyone focused on S-risks is a negative utilitarian.

This was a very interesting post. Thank you for writing it.

I think it's worth emphasizing that Rotblat's decision to leave the Manhattan Project was based on information available to all other scientists in Los Alamos. As he recounts in 1985:

the growing evidence that the war in Europe would be over before the bomb project was completed, made my participation in it pointless. If it took the Americans such a long time, then my fear of the Germans being first was groundless.

When it became evident, toward the end of 1944, that the Germans had abandoned their bomb project, the whole purpose of my being in Los Alamos ceased to be, and I asked for permission to leave and return to Britain.

That so many scientists who agreed to become involved in the development of the atomic bomb cited the need to do so before the Germans did, and yet so few chose to terminate their involvement when it had become reasonably clear that the Germans would not develop the bomb, provides an additional, separate cautionary tale besides the one your post focuses on. Misperceiving a technological race can, as you note, make people more likely to embark on ambitious projects aimed at accelerating the development of ... (read more)

Note that it may be hard to give criticism (even if anonymous) about FTX's grantmaking because a lot of FTX's grantmaking is (currently) not disclosed. This is definitely understandable and likely avoids certain important downsides, but it also does amplify other downsides (e.g., public misunderstanding of FTX's goals and outputs) - I'm not sure how to navigate that trade-off, but it is important to acknowledge that it exists!

Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.

If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.

One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.

EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstr... (read more)

Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.

I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent threats to the world). Only a small portion of generalized economic expansion will go to the most harmful activities (and damage there comes from expediting dangerous technologies in AI and bioweapons that we are improving in our ability to handle, so that delay would help) or to efforts to avert disaster, so there is much more leverage focusing narrowly on the most important areas. 

With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world's population with bioweapons is not available in known technologies (although huge secret bioweapons programs... (read more)

Is there going to be a post-mortem including an explanation for the decision to sell?

Going forwards, LTFF is likely to be a bit more stringent (~15-20%?[1] Not committing to the exact number) about approving mechanistic interpretability grants than about grants in other subareas of empirical AI Safety, particularly from junior applicants. Some assorted reasons (note that not all fund managers necessarily agree with each of them):

  • Relatively speaking, a high fraction of resources and support for mechanistic interpretability comes from sources in the community other than LTFF; we view support for mech interp as less neglected within the community.
  • Outside of the existing community, mechanistic interpretability has become an increasingly "hot" field in mainstream academic ML; we think good work is fairly likely to come from non-AIS motivated people in the near future. Thus overall neglectedness is lower.
  • While we are excited about recent progress in mech interp (including some from LTFF grantees!), some of us are suspicious that even success stories in interpretability are that large a fraction of the success story for AGI Safety.
  • Some of us are worried about field-distorting effects of mech interp being oversold to junior researchers and other newcomers as necess
... (read more)

Over the years, I’ve done a fair amount of community building, and had to deal with a pretty broad range of bad actors, toxic leadership, sexual misconduct, manipulation tactics and the like. Many of these cases were associated with a pattern of narcissism and dark triad spectrum traits, self-aggrandizing behavior, manipulative defense tactics, and unwillingness to learn from feedback. I think people with this pattern rarely learn and improve, and in most cases should be fired and banned from the community even if they are making useful contributions (and I have been involved with handling several such cases over the last decade). I think it’s important that more people learn to recognize this; I encourage you to read the two above-linked articles.

I feel worried that some readers of this Forum might think Owen matches that pattern. Knowing him professionally and to some degree personally, I think he clearly does not. I’ve collaborated and talked with him for hours in all kinds of settings, and based on my overall impression of his character, I understand his problematic behavior to have arisen from an inability to model others’ emotions, an inability to recognize that he ... (read more)

I think what Jonas has written is reasonable, and I appreciate all the work he did to put in proper caveats. I also don’t want to pick on Owen in particular here; I don’t know anything besides what has been publicly said, and some positive interactions I had with him years ago. That said: I think the fact that this comment is so highly upvoted indicates a systemic error, and I want to talk about that.

The evidence Jonas provides is equally consistent with “Owen has a flaw he has healed” and “Owen is a skilled manipulator who charms men, and harasses women”. And if women (such as myself) report he never harassed them, that’s still consistent with him being a serial predator who’s good at picking targets. I’m not arguing the latter is true - I’m arguing that Jonas’s comment is not evidence either way, and its 100+ karma count has me worried people think it is.  There was a similar problem with the supportive comments around Nonlinear from people who had not been in subservient positions while living with the founders, although those were not very highly upvoted.

“If every compliment is equally strong evidence for innocence and skill at manipulation, doesn’t that leave people with n... (read more)

Thanks for this update, your leadership, and your hard work over the last year, Zach.

It's great to hear that Mintz's investigation has wrapped (and to hear they found no evidence of knowledge of fraud, though of course I'm not surprised by that). I'm wondering if it would be possible for them to issue an independent statement or comment confirming your summary?

Dear Stephen and the EA community:  

Shortly after the early November 2022 collapse of FTX, EV asked me and my law firm, Mintz, to conduct an independent investigation into the relationship between FTX/Alameda and EV.  I led our team’s investigation, which involved reviewing tens of thousands of documents and conducting dozens of witness interviews with people who had knowledge about EV’s relationship with FTX and Alameda.  As background, I spent 11 years serving as a federal prosecutor in the United States Attorney’s Office for the Southern District of New York, the same USAO that prosecuted Sam Bankman-Fried and the other FTX/Alameda executives.  

I can confirm that the statements in Zach Robinson’s post from yesterday, December 13, 2023, about the results of the investigation are 100% true and accurate.   

Mintz’s independent investigation found no evidence that anyone at EV knew about the alleged fraudulent criminal conduct at FTX and Alameda.  This conclusion was later reinforced by the evidence at this fall’s trial of United States v. Sam Bankman-Fried, where the three cooperating witnesses who had all pled guilty (Caroline Ellis... (read more)

Being mindful of the incentives created by pressure campaigns

I've spent the past few months trying to think about the whys and hows of large-scale public pressure campaigns (especially those targeting companies — of the sort that have been successful in animal advocacy).

A high-level view of these campaigns is that they use public awareness and corporate reputation as a lever to adjust corporate incentives. But making sure that you are adjusting the right incentives is more challenging than it seems. Ironically, I think this is closely connected to specification gaming: it's often easy to accidentally incentivize companies to do more to look better, rather than doing more to be better.

For example, an AI-focused campaign calling out RSPs recently began running ads that single out AI labs for speaking openly about existential risk (quoting leaders acknowledging that things could go catastrophically wrong). I can see why this is a "juicy" lever — most of the public would be pretty astonished/outraged to learn some of the beliefs that are held by AI researchers. But I'm not sure if pulling this lever is really incentivizing the right thing.

As far as I can tell, AI leaders speaking openl... (read more)

I don't really see the "terrible day for EA" part? Maybe you think Nonlinear is more integral to EA as a whole than I do. To me it seems like an allegation of bad behaviour on the part of a notable but relatively minor actor in the space, that doesn't seem to particularly reflect a broader pattern.

[anonymous]

I have mixed feelings about this mod intervention. On the one hand, I value the way that the moderator team (including Lizka) play a positive role in making the forum a productive place, and I can see how this intervention plays a role of this sort.

On the other hand:

  1. Minor point: I think Eliezer is often condescending and disrespectful, and I think it's unlikely that anyone is going to successfully police his tone. I think there's something a bit unfortunate about an asymmetry here.
  2. More substantially: I think procedurally it's pretty bad that the moderator team acts in ways that discourage criticism of influential figures in EA (and Eliezer is definitely such a figure). I think it's particularly bad to suggest concrete specific edits to critiques of prominent figures. I think there should probably be quite a high bar set before EA institutions (like forum moderators) discourage criticism of EA leaders (esp with a post like this that engages in quite a lot of substantive discussion, rather than mere name calling). (ETA: Likewise, with the choice to re-tag this as a personal blogpost, which substantially buries the criticism. Maybe this was the right call, maybe it wasn't, but it cert
... (read more)

The tractability of further centralisation seems low

I'm not sure yet about my overall take on the piece, but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could take, if we wished, to push towards centralization. Things like:

  • Having most of the resources come from one place
  • Declaring that a certain type of resource is the "official" resource which we "recommend"
  • Running invite-only conferences where we invite all the people that are looked-up-to as leaders in the community, and specifically try to get those leaders on the same page strategically
  • Generally demonstrating intensely high levels of cooperativeness with people who are "trusted" along some shared legible axis, and much lower levels of cooperativeness with outsiders
  • Stop publishing critical info publicly, relying on whisper networks to get the word out about things

I didn't start off writing this comment to be snarky, but I realized that we are, kind of, doing most of these things. Do we intend to? Should we maybe not do them if we think we want to push away from centralization?

There are very expensive interventions that are financially constrained and could use up ~all EA funds, and the cost-benefit calculation takes the probability of powerful AGI in a given time period as an input, so that e.g. twice the probability of AGI in the next 10 years justifies spending twice as much for a given result, by doubling the chance the result gets to be applied. That can make the difference between doing the intervention or not, or make for drastic differences in intervention size. 
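
A rough way to write down that scaling claim (the symbols are generic placeholders, not figures from any actual cost-benefit model): if an intervention only pays off in worlds where powerful AGI arrives within the relevant window (probability $p$), delivering value $V$ at cost $C$, then

$$\mathrm{EV}(C) = p \cdot V - C, \qquad C_{\max} \approx p \cdot V,$$

so doubling $p$ doubles the maximum justified spend $C_{\max}$, which is what can flip a marginal intervention from not worth doing to worth doing, or change how large it should be.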

I think that these are good lessons learned, but regarding the last point, I want to highlight a comment by Oliver Habryka:

It seems obvious to me that anyone saying anything bad right now about Carrick would be pretty severely socially punished by various community leaders, and I expected the community leadership to avoid saying so many effusively positive things in a context where it's really hard for people to provide counterevidence, especially when it comes with an ask for substantial career shifts and funding. 

This seems really important, and while I'm not sure that politics is the mind-killer, I think that the forum and EA in general need to be really, really careful about the community dynamics. I think that the principal problem pointed out by the recent "Bad Omens" post was peer pressure towards conformity in ways that lead to people acting like jerks, and I think that we're seeing that play out here as well, but involving central people in EA orgs pushing the dynamics, rather than local EA groups. And that seems far more worrying.

So yes, I think there are lots of important lessons learned about politics, but those matter narrowly. And I think that the biggest ... (read more)

Hey Theo - I’m James from the Global Challenges Project :)

Thanks so much for taking the time to write this - we need to think hard about how to do movement building right, and it's great for people like you to flag what you think is going wrong and what you see as pushing people away.

Here’s my attempt to respond to your worries with my thoughts on what’s happening!

First of all, just to check my understanding, this is my attempt to summarise the main points in your post:

My summary of your main points

We’re missing out on great people as a result of how community building is going at student groups. A stronger version of this claim would be that current CB may be selecting against people who could most contribute to current talent bottlenecks. You mention 4 patterns that are pushing people away:

  1. EA comes across as totalising and too demanding, which pushes away people who could nevertheless contribute to pressing cause areas. (Part 1.1)
  2. Organisers come across as trying to push particular conclusions to complex questions in a way that is disingenuous and also epistemically unjustified. (Part 1.2)
  3. EA comes across as cult-like; primarily through appearing to be trying too hard to be persuasiv
... (read more)

I think I agree with the general thrust of your post (that mental health may deserve more attention amongst neartermist EAs), but I don't think the anecdote you chose highlights much of a tension.

>  I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs.

I am excited about improving the mental health of EAs, primarily because I think that many EAs are doing valuable work that improves the lives of others, and good mental health is going to help them be more productive (I do also care about EAs being happy as much as I care about anyone being happy, but I expect the value produced from this to be much less than the value produced from the EAs' actions).

I care much less about the productivity benefits that we'd see from improving the mental health of people outside of the EA community (although of course I do think their mental health matters for other reasons).

So the above claim seems pretty reasonable to me. 

As an illustration, I can care about EAs having good laptops much more than I care about random people having good laptops, because I am much more sceptical that giving random people good laptops produces impact than I am that giving EAs good laptops does.

We could definitely do well to include more people in the movement. For what it's worth, though, EA's core cause areas could be considered among the most important and neglected social justice issues. The global poor, non-human animals, and future generations are all spectacularly neglected by mainstream society, but we (among others) have opted to help them.

You might be interested in the following essays:

Freedom of speech and freedom of research are important, and as long as someone doesn't call to intentionally harm or discriminate against another, it's important that we don't condition funding on agreement with the funders' views.

This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?

Unfortunately, a significant part of the situation is that people with internal experience and a negative impression feel both constrained and conflicted (in the conflict-of-interest sense) about making public statements. This applies to me: I left OpenAI in 2019 for DeepMind (hence the 'conflicted').

Neither founder seems to have a background in technical AI safety research. Why do you think Nonlinear will be able to research and prioritize these interventions without prior experience or familiarity with technical AI safety research?

Relatedly, wouldn't the organization be better if it hired a full-time researcher or had a co-founder with a background in technical AI safety research? Is this something you're considering doing?

I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice.

It doesn't seem conservative in practice? Like Vasco, I'd be surprised if aiming for reliable global capacity growth would look like the current GHD portfolio. For example:

  1. Given an inability to help everyone, you'd want to target interventions based on people's future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)
  2. You'd either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
  3. You'd want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.

I'd guess most proponents of GHD would find (1) and (2) particularly bad.

Putting this here since this is the active thread on the NL situation. Here's where I currently am:

  • I think NL pretty clearly acted poorly towards Alice and Chloe. In addition to what Ozy has in this post, the employment situation is really pretty bad.  I don't know how this worked in the other jurisdictions, but Puerto Rico is part of the US and paying someone as an independent contractor when they were really functioning as an employee means you had an employee that you misclassified. And then $1k/mo in PR is well below minimum wage. They may be owed back pay, and consulting an employment lawyer could make sense, though since we're coming up on the two year mark it would be good to move quickly.
    • I think some people are sufficiently mature and sophisticated that if they and their employer choose to arrange compensation primarily in kind, that's illegal more in the way jaywalking is illegal than in the way shoplifting is illegal. But I don't think Alice and Chloe fall into this category.
    • Many of the other issues are downstream from the low compensation. For example, if they had wanted to live separately on their own dime to have clearer live/work boundaries that would have eaten up ~all of th
... (read more)

The evidence collected here doesn’t convince me that Alice and Chloe were lying, or necessarily that Ben Pace did a bad job investigating this. I regret contributing another long and involved comment to this discourse, but I feel like “actually assessing the claims” has been underrepresented compared to people going to the meta level, people discussing the post’s rhetoric, and people simply asserting that this evidence is conclusive proof that Alice and Chloe lied.

My process of thinking through this has made me wish more receipts from Alice and Chloe were included in Ben’s post, or even just that more of the accusations had come in their own words, because then it would be clear exactly what they were claiming. (I think their claims being filtered through first Ben and then Kat/Emerson causes some confusion, as others have noted).

I want to talk about some parts of the post and why I’m not convinced. To avoid cherry-picking, I chose the first claim, about whether Alice was asked to travel with illegal drugs (highlighted by Kat as “if you read just one illustrative story, read this one”), and then I used a random number generator to pick two pages in the appendix (following the lead ... (read more)

Howie – I suspect you'd rather I not write anything, but it feels wrong not to thank you for everything you've given to this role and to the organisation over the past year. So I hope you'll forgive a short (and perhaps biased) message of appreciation. 

Over the past year, you have taken EV UK through one of the most challenging periods of its history with extraordinary dedication and leadership. I don’t think there are many people who would have taken on a role like yours in the days after FTX collapsed, and fewer still who could have done the job you did. 

Throughout this time, I have continually been impressed with your intellect, inspired by your integrity, and in awe of your unceasing commitment to doing good. And I know for a fact that I’m not the only one. 

It’s been a privilege to support you for the past year and I’m delighted that you’ll now have a chance to take a proper break, before throwing yourself into the next challenge. 

Thank you for everything. 

Just want to flag that I'm really happy to see this. I think that the funding space could really use more labor/diversity now.

Some quick/obvious thoughts:  

- Website is pretty great, nice work there. I'm jealous of the speed/performance, kudos.
- I imagine some of this information should eventually be private to donors. Like, the medical expenses one. 
- I'd want to eventually see Slack/Discord channels for each regrantor and their donors, or some similar setup. I think that communication between some regranters and their donors could be really good.
- I imagine some regranters would eventually work in teams. From being both on LTFF and seeing the FTX regrantor program, I did kind of like the LTFF policy of vote averaging. Personally, I think I do grantmaking best when working on a team. I think that the "regrantor" could be a "team leader", in the sense that they could oversee people under them.
- As money amounts increase, I'd like to see regranters getting paid. It's tough work. I think we could really use more part-time / full-time work here. 
- I think if I were in charge of something like this, I'd have a back-office of coordinated investigations for everyone. Like,... (read more)

Matis

Another falsehood to add to the list of corrections the Bulletin needs to make to the article. In the article, Torres writes,

And in the acknowledgments section, he lists 30 scientists and an entire research group as having been consulted on “climate change” or “climate science.” I wrote to all the scientists MacAskill thanked for providing “feedback and advice,” and the responses were surprising.

However, one of those scientists, Peter Watson, has recently tweeted that Torres did not contact him about the Bulletin article. Torres responds to this claim with an irrelevant question. 

As you can see below, Peter Watson is indeed one of the climate scientists who was thanked. If Watson is correct, then the Bulletin needs to correct Torres's claim to have contacted all the climate scientists who were acknowledged in the book. 

[edit: I originally wrote and highlighted "Andrew Watson" instead of Peter Watson. Peter Watson, as you can see below, is also acknowledged]

I think seeing it as "just putting two people in touch" is narrow. It's about judgement on whether to get involved in a highly controversial commercial deal which was expected to significantly influence discourse norms, and therefore polarisation, in years to come. As far as I can tell, EA overall and Will specifically do not have skills or know-how in this domain.

Introducing Elon to Sam is not just like making a casual introduction; if everything SBF was doing was based on EA, then this feels like EA wading in on the future of Twitter via the influence of SBFs money.

Introducing Elon to Holden because he wanted to learn more about charity evaluation? Absolutely - that's EA's bread and butter and where we have skills and credibility. But on this commercial deal and subsequent running of Twitter? Not within anyone's toolbox from what I can tell.

I'd like to know the thinking behind this move by Will and anyone else involved. For my part, I think this was unwise and should have had more consultation around it.

I would consider disavowing the community if people start to get more involved in: 1) big potentially world-changing decisions which - to me - it looks like they don't have the wider knowledge or skillset to take on well, or 2) incredibly controversial projects like the Twitter acquisition, and doing so through covert back-channels with limited consultation.

I would prefer it quite a lot if this post didn't have me read multiple paragraphs (plus a title) that feel kind of clickbaity and don't give me any information besides "this one opportunity that Effective Altruists ignore that's worth billions of dollars". I prefer titles on the EA Forum to be descriptive and distinct, whereas this title could be written about probably hundreds of posts here. 

A better title might be "Why aren't EAs spending more effort on influencing individual donations?" or "We should spend more effort on influencing individual donations".

I enjoyed the book and recommend it to others!

In case of interest to EA forum folks, I wrote a long tweet thread with more substance on what I learned from it and remaining questions I have here: https://twitter.com/albrgr/status/1559570635390562305

This post resonated a lot with me. I was actually thinking of the term 'disillusionment' to describe my own life a few days before reading this.

One cautionary tale I'd offer to readers: don't automatically assume your disillusionment is because of EA; consider the possibility that it's a personal problem. Helen suggested leaning into feelings of doubt or assuming the movement is making mistakes. That is good if EA is the main cause, but potentially harmful if the person gets disillusioned in general.

I'm a case study for this. For the past decade, I've been attracted to demanding circles. First it was social justice groups and their infinitely long list of injustices. Then it was EA and its ongoing moral catastrophes. More recently, it's been academic econ debates and their ever growing standards for what counts as truth.

In each instance, I found ways to become disillusioned and to blame my disillusionment on an external cause. Sometimes it was virtue signaling. Sometimes it was elitism. Sometimes it was the people. Sometimes it was whether truth was knowable. Sometimes it was another thing entirely. All my reasons felt incredibly compelling at the time... (read more)

You seem to be jumping to the conclusion that if you don't understand something, it must be because you are dumb, and not because you lack familiarity with community jargon or norms. 

For example, take the Yudkowsky doompost that's been much discussed recently. In the first couple of paragraphs, he name-drops people who would be completely unknown outside his specific subfield of work, and expects the reader to know who they are. Then there are a lot of paragraphs like the following:

If nothing else, this kind of harebrained desperation drains off resources from those reality-abiding efforts that might try to do something on the subjectively apparent doomed mainline, and so position themselves better to take advantage of unexpected hope, which is what the surviving possible worlds mostly look like.

It doesn't matter if you have an Oxford degree or not: this will be confusing to anyone who has not been steeped in the jargon and worldview of the rationalist subculture. (My PhD in physics is not helpful at all here.)

This isn't necessarily bad writing, because the piece is deliberately targeted at people who have been talking with this jargon for years. It would be bad wri... (read more)

Comments on Jacy Reese Anthis' Some Early History of EA (archived version).

Summary: The piece could give the reader the impression that Jacy, Felicifia and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case.

I'll follow the chronological structure of Jacy's post, focusing first on 2008-2012, then 2012-2021. Finally, I'll discuss "founders" of EA, and sum up.

2008-2012

Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) Givewell/OpenPhil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great - so far I agree.

What is important to note, however, is the contributions that these various groups made. For the first decade of EA, most key community institutions of EA came from (4) - the Oxford community, including GWWC, 80k, and CEA, and secondly from (2), although Givewell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostl... (read more)

This post is great, thanks for writing it.

I'm not quite sure about the idea that we should have certain demanding norms because they are costly signals of altruism. It seems to me that the main reason to have demanding norms isn't that they are costly signals, but rather that they are directly impactful. For instance, I think that the norm that we should admit that we're wrong is a good one, but primarily because it's directly impactful. If we don't admit that we're wrong, then there's a risk we continue pursuing failed projects even as we get strong evidence that they have failed. So having a norm that counteracts our natural tendency not to want to admit when we're wrong seems good.

Relatedly, and in line with your reasoning, I think that effective altruism should be more demanding in terms of epistemics than in terms of material resources. Again, that's not because that's a better costly signal, but rather because better epistemics likely makes a greater impact difference than extreme material sacrifices do. I developed these ideas here; see also our paper on real-world virtues for utilitarians.

Like other commenters, to back up the tone of this piece, I'd want to see further evidence of these kinds of conversations (e.g., which online circles are you hearing this in?).

That said, it's pretty clear that the funding available is very large, and it'd be surprising if that news didn't get out. Even in wealthy countries, becoming a community builder in effective altruism might just be one of the most profitable jobs for students or early-career professionals. I'm not saying it shouldn't be, but I'd be surprised if there weren't (eventually) conversations like the ones you described. And even if I think "the vultures are circling" is a little alarmist right now, I appreciate the post pointing to this issue.

On that issue: I agree with your suggestions of "what not to do" -- I think these knee-jerk reactions could easily cause bigger problems than they solve. But what are we to do? What potential damage could there be if the kind of behaviour you described did become substantially more prevalent?

Here's one of my concerns: we might lose something that makes EA pretty special right now. I'm an early-career employee who just started working at an EA org. And something that's s... (read more)

Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.

One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.

Some highlights:

I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.

On the science side I would be enthusiastic about seeing more work on eg models of catastrophic biorisk infection, macroeconomic analysis on ways artificial intelligence might affect society, and expansions of IPCC models that include permafrost methane release feedback loops.

On the humanities side I would want to see for example more work on historical, psychological and anthropological evidence for long term effects and successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys ... (read more)

Aaron Gertler (moderator comment)

As the Forum’s lead moderator, I’m posting this message, but it was written collaboratively by several moderators after a long discussion.

As a result of several comments on this post, as well as a pattern of antagonistic behavior, Phil Torres has been banned from the EA Forum for one year.

Our rules say that we discourage, and may delete, "unnecessary rudeness or offensiveness" and "behavior that interferes with good discourse". Calling someone a jerk and swearing at them is unnecessarily rude, and interferes with good discourse.

Phil also repeatedly accuses Sean of lying:

I am trying to stay calm, but I am honestly pretty f*cking upset that you repeatedly lie in your comments above, Sean [...] I won't include your response, Sean, because I'm not a jerk like you.

How can someone lie this much about a colleague and still have a job?

You repeatedly lied in your comments above. Unprofessional. I don't know how you can keep your job while lying about a colleague like that.

After having seen the material shared by Phil and Sean (who sent us some additional material he didn’t want shared on the Forum), we think the claims in question are open to interpretation but clearly not deliberate lies... (read more)

I asked my team about this, and Sky provided the following information. This quarter CEA did a small brand test, with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”

Students who had never heard of “effective altruism” before the survey still had positive associations with it. Comments suggested that they thought it sounded good - effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5). There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary - we don't have a full writeup ready yet.)

Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. Maybe "EA" puts off a small-but-important subsection of the audience we tested on (e.g. unusually critical/free-thinking people).

I don't think this is dispositive - I think that testing other brands might still be a good idea. We're currently considering trying to hire someone to test and develop the EA brand, and help field media enquiries. I'm grateful for the work that Rethink and Sky Mayhew have been doing on this.

What happened was a terrible tragedy and my heart aches for those involved. That said, I'd prefer if there wasn't much content of this type on the Forum. 8 people died in that horrific shooting. If there was a Forum post about every event that killed 8 people, or even just every time 8 people were killed from acts of violence, that might (unfortunately, because there are ways in which the world is a terrible place) dominate the Forum, and make it harder to find and spend time on content relevant to our collective task of finding the levers that will help us help as many people as possible.

I agree that we should attend especially to members of our community who are in a particularly difficult place at a given time, and extend them support and compassion. But I felt uneasy about it in this case because of the above, because of Dale's point that the shooting might not have been racially motivated, and because Asian EAs I know don't seem bothered. I also think we should have a high bar for asking everyone in the community to attend to something, or asserting that they should (though I'm not sure whether you were doing that or intending to).

I don't have a fully-formed gestalt take yet, other than: thanks for writing this.

I do want to focus on 3.2.2 Communication about our work (it's a very Larissa thing to do to have 3 layers of nested headers 🙂). You explain why you didn't prioritize public communication, but not why you restricted access to existing work. Scrubbing yourself from archive.org seems to be an action taken not from a desire to save time on communication, but from a desire to keep others from learning. It seems like that's a pretty big factor that's going on here and would be worth mentioning.

[Speaking for myself, not my employer.]

Why are April Fools' jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago, and two April Fools' posts are still on the front page. I think they should be clearly labelled as April Fools' jokes so people can more easily tell EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or first few paragraphs.

Thank you for putting so much effort into helping with this community issue. 

What do you think community members should do in situations similar to what Ben and Oliver believed themselves to be in: where a community member believes that some group is causing a lot of harm to the community, and it is important to raise awareness?

Should they do a similar investigation, but better or more fairly? Should they hire a professional? Should we elect a group (e.g., the CEA community health team (or similar)) to do these sorts of investigations?

I would strongly caution against doing so. Even if it turns out to be seemingly justified in this instance (and I offer no view either way on whether it is), I cannot think of a more effective way of discouraging victims and whistleblowers in this community from coming forward in future situations.

Places I think people messed up and where improvement is needed

The Nonlinear Team

  1. The Nonlinear team should have gotten their replies up sooner, even if in pieces. In the court of public opinion, time/speed matters. Clamming up and taking ~3 months to release their side of the story comes across as too polished and buttoned up.
  2. Not being selective enough about who they took into a very unorthodox work/living environment. I don’t think this type of work/living arrangement is always bad (though I do think that NL shouldn’t try it again, nor do I think a nomadic lifestyle is the most effective one generally). Still, I do think it needs to grow a lot more organically and have lower commitment tests that build up to this arrangement. Taking in a new employee to this environment is ill-advised. I’m happy to see that Nonlinear no longer lives with or travels with their employees.
  3. I think Emerson’s threat of a libel lawsuit encourages a bad norm. He went to it far too fast and it escalated things too quickly.

Ben Pace

  1. I think it is pretty reasonable to assume that ~1000-10000 hours and possibly more were spent by the community due to his original post (I am including all the reading and all the
... (read more)

I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.

I hear you saying...

  • Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they're not necessarily shared by the EA community or the broader world.
  • Under those norms, actions like threatening your ex-employees' career prospects to prevent them from sharing negative info about you are very bad, while in broader culture a "you don't badmouth me, I don't badmouth you" ceasefire is pretty normal.
  • In this post, Ben is accusing Nonlinear of bad behavior. In particular, he's accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of Lightcone culture.
    • My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees, and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, except for Nonlinear suppressing info about what happened after the
... (read more)

Thanks for sharing this - I really appreciate the transparency!

A quick question on the attendees: Are there any other (primarily) animal advocacy-focused folks within the 43 attendees or is it just Lewis? I don't know the exact breakdown of meta EA efforts across various cause areas but I would be somewhat surprised if meta animal work was below 2% of all of meta EA spending (as is implied by your 1/43 ratio). There are several notable meta EA animal orgs doing work in this space (e.g. Animal Charity Evaluators, EA Animal Welfare Fund, Farmed Animal Funders, Focus Philanthropy and Animal Advocacy Careers) so wondering if Lewis is meant to represent them all? If so, I think that's a pretty tough gig! Would be curious to hear more about what determined the relative cause area focuses of the attendees or if there's some dataset that shows meta EA spending across various cause areas. 

 

(Note: I'm aware there is some overlap between other attendees and animal work e.g. Joey and Charity Entrepreneurship, but it's not their primary focus hence me not including them in my count above). 

Re: "In the weeks leading up to that April 2018 confrontation with Bankman-Fried and in the months that followed, Mac Aulay and others warned MacAskill, Beckstead and Karnofsky about her co-founder’s alleged duplicity and unscrupulous business ethics" -

I don't remember Tara reaching out about this, and I just searched my email for signs of this and didn’t see any. I'm not confident this didn't happen, just noting that I can't remember or easily find signs of it.

In terms of what I knew/learned 2018 more generally, I discuss that here.

Meta: I’m writing on behalf of the Community Health and Special Projects team (here: Community Health team) at CEA to explain how we’re thinking about next steps. For context, our team consists of:

  • Me, Chana Messinger: Normally I specialize (from a community health lens) in EA projects that involve high schoolers or minors, and community epistemics; since November, I’ve been the interim head of the Community Health team
  • Nicole Ross, the usual team head, who has been focusing on EV US board work since the FTX crisis, and when she transitions back to community health work, she plans to prioritize thinking through what changes should happen in EA given everything that happened with FTX
  • Julia Wise, who usually serves as a community health contact person for the EA community, but has been working primarily on other projects for a few months
  • Catherine Low, who serves as a contact person for the EA community among other roles
  • Eve McCormick, project manager and senior assistant
  • An affiliate and various contractors

In this comment I’ll sometimes be referring to Effective Ventures (EV) UK and Effective Ventures (EV) US together as the “EV entities” or as Effective Ventures or EV... (read more)

I don't plan to engage deeply with this post, but I wanted to leave a comment pushing back on the unsubtle currents of genetic determinism ("individuals from those families with sociological profiles amenable to movements like effective altruism, progressivism, or broad Western Civilisational values are being selected out of the gene pool"), homophobia ("cultures that accept gay people on average have lower birth rates and are ultimately outnumbered by neighboring homophobic cultures", in a piece that is all about how low birth rates are a key problem of our time), and ethnonationalism ("based in developed countries that will be badly hit by the results of these skewed demographics") running through this piece.

I believe that genetics influence individual personality, but am very skeptical of claims of strong genetic determinism, especially on a societal level. Moreover, it seems to me that one of the core values of effective altruism is that of impartiality: giving equal moral weight to people who are distant from me in space and/or time. The kind of essentialist and elitist rhetoric common among people who concern themselves with demographic collapse seems in direct opposition to... (read more)

[anonymous]

I have to say that I don't find these reasons especially convincing. It might help if you clarified exactly who you were speaking for and what you mean by the short-term, i.e., days or weeks?

Legal risk. I am assuming that you are not suggesting that any of these figureheads have done anything illegal. In which case the risk here is a reputational one: they don't want their words dragged into legal proceedings. But that seems like a nebulous possibility, and legal cases like this can take years in any case. Surely you are not saying that they won't address the subject of FTX or SBF over that entire span lest a lawyer quote them? Or am I misreading you somehow?

Lack of information. I agree there's still uncertainty, but there is certainly enough information for the movement to assess its position and to take action. SBF and an inner circle at FTX/Alameda committed a fraud whose basic contours are now well-known, even if the exact timeline, motivations and particulars are not yet filled in. As this forum proves, that raises some blindingly obvious questions about the governance, accountability and culture of the movement.

People are busy. People are always busy, and saying 'I'm too busy' generally means 'I'm choosing not to prioritise this'. It's not an explanation so much as a restatement of an unwillingness to speak.

To be clear, I am not writing this because I think the leadership should try and set out a comprehensive position on the debacle as soon as possible. I don't think that. 

Thank you for posting this. It very much speaks to how I’m feeling right now. I'm grateful you've expressed and explained it.

Those accusations seem of a dramatically more minor and unrelated nature, and don't update me much at all towards thinking that allegations of mistreatment of employees are more likely to be true.

Excellent post. I hope everybody reads it and takes it onboard.

One failure mode for EA will be over-reacting to black swan events like this that might not carry as much information about our organizations and our culture as we think they do. 

Sometimes a bad actor who fools people is just a bad actor who fools people, and they're not necessarily diagnostic of a more systemic organizational problem. They might be, but they might not be. 

We should be open to all possibilities at this point, and if EA decides it needs to tweak, nudge, update, or overhaul its culture and ethos, we should do so intelligently, carefully, strategically, and wisely -- rather than in a reactive, guilty, depressed, panicked, or self-flagellating way.

Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others split ways with Sam in early 2018. I'll leave it to them to share more if/when they want to, but I think it's fair to say they left at least in part due to concerns about Sam's business ethics. She's had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam's actions are used to tarnish Tara.

[Disclosure: Tara is my wife]

I strongly disagree -- first, because this is dishonest and dishonorable. And second, because I don't think EA should try to have an immaculate brand.

Indeed, I suspect that part of what went wrong in the FTX case is that EA was optimizing too hard for having an immaculate brand, at the expense of optimizing for honesty, integrity, open discussion of what we actually believe, etc. I don't think this is the only thing that was going on, but it would help explain why people with concerns about SBF/FTX kept quiet about those concerns. Because they either were worried about sullying EA's name, or they were worried about social punishment from others who didn't want EA's name sullied.

IMO, trying super hard to never have your brand's name sullied, at the expense of ordinary moral goals like "be honest", tends to sully one's brand far more than if you'd just ignored the brand and prioritized other concerns. Especially insofar as the people you're trying to appeal to are very smart, informed, careful thinkers; you might be able to trick the Median Voter that EA is cool via a shallow PR campaign and attempts to strategically manipulate the narrative, but you'll have a far harder time trickin... (read more)

A couple of hours ago, I tweeted:

Given SBF's EA-ness and strong embeddedness within EA, I support the idea of making it an EA cause area to pay back people he effectively stole money from and used in EA (until that's done), if he stole money from people and SBF + normal legal channels don't suffice to return it.

It doesn't strike me as appropriate for AMF to take money under the understanding it would go to combating malaria, and then divert that money to random crypto people instead.

But it strikes me as appropriate for the larger EA community to undo any harms like this that the community caused, at least if the conditions above hold, including if that means giving less to AMF, MIRI, etc. next year.

(MIRI didn't receive any money from FTX AFAIK, and I have no idea whether AMF did; I'm just using them as examples of EA orgs. Diverting some donations from us to some pay-people-back fund would make sense if the above conditions hold, IMO, regardless of which specific EA orgs counterfactually lose money as a result. Because the point is to undo a harm caused by EAs, not to punish recipient orgs or undo every specific humanitarian benefit that occurred.)

Reimbursing people for the money s... (read more)

I generally think it'd be good to have a higher evidential bar for making these kinds of accusations on the forum. Partly, I think the downside of making an off-base sock-puppeting accusation (unfair reputation damage, distraction from object-level discussion, additional feeling of adversarialism) just tends to be larger than the upside of making a correct one.

Fwiw, in this case, I do trust that A.C. Skraeling isn't Zoe. One point on this: Since she has a track record of being willing to go on record with comparatively blunter criticisms, using her own name, I think it would be a confusing choice to create a new pseudonym to post that initial comment.

Hi Tae, thank you so much for writing this post! I’m coordinating WWOTF ads and this is really helpful feedback to get. We’ve thought a lot about the trade-off between reaching potentially interested audiences while not oversaturating those audiences in a way that’s off-putting, and have taken many steps to avoid doing so (most importantly, by not narrowing our target audience so greatly that the same people get bombarded). Ensuring we don’t oversaturate audiences is a key priority. 

If it’s alright, I’d love to hear more details about exactly which ads your friend encountered — I’ll contact you via DM. If other people have other relevant experiences that they want to share, please email me at abie@forethought.org — it’s very helpful and very actionable to get feedback right now, since we can adapt and iterate ads in real-time. 

I'm a journalist, and would second this as sound advice, especially the 'guide to responding to journalists'. It explains the pressures and incentives/deterrents we have to work with, without demonising the profession... which I was glad to see! 

A couple of things I would emphasise (in the spirit of mutual understanding!): 

It can help to look beyond the individual journalist to consider the audience we write for, and what our editors' demands might be higher up in the hierarchy. I know many good, thoughtful journalists who work for publications (eg politically partisan newspapers) where they have to present stories the way they do, because that's what their audience/editors demand... There's often so much about the article they, as the reporter, don't control after they file. (Early career journalists in particular have to make these trade-offs, which is worth bearing in mind.) 

Often I would suggest it could be helpful to think of yourself as a guide not a gatekeeper. An obvious point... but this space here [waves arms] is all available to journalists, along with much else in the EA world, via podcasts, public google docs etc. There are vast swathes of material that ... (read more)

[anonymous]

At the start you say you are going to argue that "the median EAG London attendee will be less COVID-cautious than they would be under ideal epistemic conditions". So, I was expecting you to discuss the health risks of getting covid for EAG attendees (who will predominantly be between 20 and 40 and will ~all have been triple vaccinated). Since you don't do that, your post shouldn't update us at all towards your conclusion.

The IFR for covid for all ages is now below that of seasonal flu. The risk of death for people attending EAG is extremely small given the likely age and vaccination status of attendees.

It is difficult to work out the effects of long covid, but the most reasonable estimates I have seen put the health cost of long covid as equivalent to 0.02 DALYs, or about a week. (I'm actually pretty sceptical that long covid is real (see eg here))
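As a rough sanity check on that figure, here is a minimal arithmetic sketch (assuming the 0.02 DALY estimate quoted above and treating 1 DALY as one year of healthy life):

```python
# Convert the quoted long-covid burden from DALYs to days of healthy life lost.
dalys_lost = 0.02              # estimate quoted in the comment above
days_lost = dalys_lost * 365   # 1 DALY = one year of healthy life

print(f"{dalys_lost} DALYs is roughly {days_lost:.1f} days, i.e. about a week")
```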

For people aged 20-40 who are triple jabbed, the risks of attending EAG are extremely small, I think on the order of getting a cold. They do not justify "the usual spate of NPIs"

There's also the point that covid seems likely to be endemic so there is little value in a "wait and see" approach

Imma

I temporarily left the EA community in 2018 and that ended up well.

I took a time-out from EA to focus on a job search. I had a job that I wanted to leave, but needed a lot of time and energy to handle all the difficulties that come with a job search. My career path is outside of EA organizations.

How I did it practically:
- I had a clear starting point and wrapped up existing commitments. I stopped and handed over my involvement in local community building and told my peers about the time-out. I donated my entire year's donation budget in February.
- I set myself some rules for what I would and would not do. No events, no volunteering, no interaction with the community. I deleted social media accounts that I only used for EA. I blocked a few websites, most notably 80000hours.org. I would have donated if my time-out took longer, but without any research.
- I did not set an end point. The time-out would be as long as needed. I returned soon after I signed the new contract, 8 months after my starting point. It could have been much longer.

This helped a lot to get the job search done.

I could not, and did not want to, stop aiming for a positive impact on the world.  I probably did more good overall than if I stayed involved in EA during the job search.

I can recommend this to others and my future self in a similar situation.

Everything written in the post above strongly resonates with my own experiences, in particular the following lines:

the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views.

The EA community prides itself on being able to invite and process criticism. However, warm welcome of criticism was certainly not our experience in writing this paper.

I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:

  • Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source). I have always found the responses (eg here and here) to this critique dismissive and beside the point. Why can we not just say: yes, we are a new community, this area feels difficult, and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stu
... (read more)

I’ve been on the EA periphery for a number of years but have been engaging with it more deeply for about 6 months. My half-in, half-out perspective, which might be the product of missing knowledge, missing arguments, all the usual caveats but stronger:

Motivated reasoning feels like a huge concern for longtermism.

First, a story: I eagerly adopted consequentialism when I first encountered it for the usual reasons; it seemed, and seems, obviously correct. At some point, however, I began to see the ways I was using consequentialism to let myself off the hook, ethically. I started eating animal products more, and told myself it was the right decision because not doing so depleted my willpower and left me with less energy to do higher impact stuff. Instead, I decided, I’d offset through donations. Similar thing when I was asked, face to face, to donate to some non-EA cause: I wanted to save my money for more effective giving. I was shorter with people because I had important work I could be doing, etc., etc.

What I realized when I looked harder at my behavior was that I had never thought critically about most of these “trade-offs,” not even to check whether they were actually trade-offs! ... (read more)

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted.  Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas to not put off potential sympathisers.

Clarity

In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism. 

Sometimes I found that there were some claims being implied but they were not explicit. So please point out any incorrect inferences I’ve made below!

I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else. 

The thesis of the book (for people readin... (read more)

I'll ask the obvious awkward question: 

Staff numbers are up ~35% this year but the only one of your key metrics that has shown significant movement is "Job Vacancy Clickthroughs".

What do you think explains this? Delayed impact, impact not caught by metrics, impact not scaling with staff - or something else?

When I read your scripts and Rob is interviewing, I like to read Rob’s questions at twice the speed of the interviewees’ responses. Can you accommodate that with your audio version?

Edit: I want to make it clear that I am talking about “genetic” differences not “environmental” differences in this comment. Thanks to titotal for pointing out I wasn’t clear enough. The survey of experts finds that far more experts believe both genetic factors and environmental factors play a role than just environmental factors. I spend the rest of my comment arguing that even if genetic factors play a role, genetic factors are so heavily influenced by environmental factors that we shouldn’t view them as evidence of innate differences in intelligence between races. 


I find the repeated use of the term "discredited" to refer to studies on race and IQ on the forum deeply troubling. Yes, some studies will have flaws, but that means you have conversations about the significance of these flaws and respect that reasonable people can disagree about how best to measure complicated issues. It doesn't mean you dismiss everyone who agrees with the standard perspectives of experts in an academic field as racist. My favorite thing about this community is the epistemic humility. We are supposed to be the people who judge studies on their merits, no matter how uncomfortable they... (read more)

A man with experience in the London, Bay Area, online communities: 

I’m monogamous, and had never encountered polyamory before interacting with EA. My early experiences consisted of:

1. A strong presumption that I either would become polyamorous on further thought, or if not that I simply wasn’t that smart.

2. Predatory men using polyamory to defend their behaviour, in a ‘you monogamous simpleton wouldn’t understand’ kind of way; some of these people have since been excluded from the community. 

3. People denying their feelings of insecurity or abandonment; trying to do what was ‘rational’ but not doing the communication or introspection necessary to make poly work for them. I don’t think I’m overstepping to observe that many EAs are poor communicators and very poor at being in touch with their feelings; weirdly I feel like poly would have been a much better fit for some of my pre-EA friends than for some of the EAs I’ve seen try to make it work. 

4. Very little in the way of healthy relationships. With hindsight, I think people in healthy polyamorous relationships simply didn’t need to advertise, whereas the people in (1) needed to show off and (2) needed a shield. 

... (read more)

I wanted to push back on this because most commenters seem to agree with you. I disagree that the writing style on the EA forum, on a whole, is bad. Of course, some people here are not the best writers and their writing isn't always that easy to parse. Some would definitely benefit from trying to make their writing easier to understand. 

For context, I'm also a non-native English speaker and during high school, my performance in English (and other languages) was fairly mediocre.

But as a whole, I think there are few posts and comments that are overly complex. In fact, I personally really like the nuanced writing style of most content on the EA forum. Also, criticizing the tendency to "overly intellectualize" seems a bit dangerous to me. I'm afraid that if you go down this route you shut down discussions on complex issues and risk creating a more Twitter-like culture of shoehorning complex topics into simplistic tidbits. I'm sure this is not what you want but I worry that this will be an unintended side effect. (FWIW, in the example thread you give, no comment seemed overly complex to me.)

Of course, in the end, this is just my impression and different people have different preferences. It's probably not possible to satisfy everyone. 

I think it's not quite right that low trust is costlier than high trust. Low trust is costly when things are going well. There's kind of a slow burn of additional cost.

But high trust is very costly when bad actors, corruption or mistakes arise that a low trust community would have preempted. So the cost is lumpier, cheap in the good times and expensive in the bad.

(I read fairly quickly so may have missed where you clarified this.)

Arepo

Hard disagree on Leverage. They've absorbed a tonne of philanthropic funding over the years to produce nothing but pseudoscience and multiple allegations of emotional abuse.

I'm not saying Kerry wouldn't know about this stuff - I think he likely does. I'm saying a) that he was one of the 'top leaders' he refers to, so had ample chance to do something about this himself, b) he has a track record of questionable integrity, and c) he has potential motive to undermine the people he's criticising.

The main assumption of this post seems to be that not only the true values of the parameters, but also a given person's estimates of the stages, are independent. This is a judgment call I'm weakly against.

Suppose you put equal weight on the opinions of Aida and Bjorn. Aida gives 10% for each of the 6 stages, and Bjorn gives 99%, so that Aida has an overall x-risk probability of 10^-6 and Bjorn has around 94%.

  • If you just take the arithmetic mean between their overall estimates, it's like saying "we might be in worlds where Aida is correct, or worlds where Bjorn is correct"
  • But if you take the geometric mean or decompose into stages, as in this post, it's like saying "we're probably in a world where each of the bits of evidence Aida and Bjorn have towards each proposition is independently 50% likely to be valid, so Aida and Bjorn are each more correct about 2-4 stages".

These give you vastly different results, 47% vs 0.4%. Which one is right? I think there are two related arguments to be made against the geometric mean, although they don't push me all the way towards using the arithmetic mean:

  • Aida and Bjorn's wildly divergent estimates on probably come from some underlying diff
... (read more)
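To make the contrast between the two aggregation approaches concrete, here is a minimal sketch, assuming the six-stage Aida/Bjorn example above (10% and 99% per stage). Note that the exact figure you get for the decomposed approach depends on which per-stage aggregation is used:

```python
import math

# Per-stage estimates from the example above: six stages that must all hold.
aida = [0.10] * 6   # Aida: 10% on each stage  -> overall ~1e-6
bjorn = [0.99] * 6  # Bjorn: 99% on each stage -> overall ~94%

aida_total = math.prod(aida)
bjorn_total = math.prod(bjorn)

# Option 1: average the overall estimates ("we're either in Aida's world or Bjorn's").
mean_of_totals = (aida_total + bjorn_total) / 2  # ~47%

# Option 2: aggregate stage by stage (treating each stage estimate as independent),
# then multiply the aggregated stages together.
arith_per_stage = math.prod([(a + b) / 2 for a, b in zip(aida, bjorn)])     # ~2.6%
geo_per_stage = math.prod([math.sqrt(a * b) for a, b in zip(aida, bjorn)])  # ~0.1%

print(f"Mean of overall estimates:          {mean_of_totals:.2%}")
print(f"Per-stage arithmetic mean, product: {arith_per_stage:.2%}")
print(f"Per-stage geometric mean, product:  {geo_per_stage:.2%}")
```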

On the topic of feedback... At Triplebyte, where I used to work as an interviewer, we would give feedback to every candidate who went through our technical phone screen. I wasn't directly involved in this, but I can share my observations -- I know some other EAs who worked at Triplebyte were more heavily involved, and maybe they can fill in details that I'm missing. My overall take is that offering feedback is a very good idea and EA orgs should at least experiment with it.

  • Offering feedback was a key selling point that allowed us to attract more applicants.

  • As an interviewer, I was supposed to be totally candid in my interview notes, and also completely avoid any feedback during the screening call itself. Someone else in the company (who wasn't necessarily a programmer) would lightly edit those notes before emailing them -- they wanted me to be 100% focused on making an accurate assessment, and leave the diplomacy to others. My takeaway is that giving feedback can likely be "outsourced" -- you can have a contractor / ops person / comms person / intern / junior employee take notes on hiring discussions, then formulate diplomatic but accurate feedback for candidates.

  • My bo

... (read more)
Jason

But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me,

Epistemic status: not fleshed out

(This comment is not specifically directed to Rebecca's situation, although it does allude to her situation in one point as an example.)

I observe that the powers-that-be could make it less costly for knowledgeable people to come forward and speak out. For example, some people may have legal obligations, such as the duties a board member owes a corporation (extending in some cases to former board members).[1] Organizations may be able to waive those duties by granting consent. Likewise, people may have concerns[2] about libel-law exposure (especially to the extent they have exposure to the world's libel-tourism capital, the UK). Individuals and organizations can mitigate these concerns by, for instance, agreeing not to sue any community member for libel or any similar tort for FTX/SBF-related speech. (One could imagine an exception for suits brought in the United States in which the individual or organization concedes their status as a public figure, and does not present any other claims that would allow a finding of liability witho... (read more)

As the post says above, and as a manager on the team and the person who oversaw the internal review, I’d like to share updates the team has made on its policies based on the internal review we did following the Time article and Owen’s statement. (My initial description of the internal review is here). In general, these changes have been progressing prior to knowing the boards’ determinations, though thinking from Zach and the EV legal team has been an important input throughout.
 

Changes 

Overall we spent dozens of hours over multiple calendar months in discussions and doing writeups, both internally to our team and getting feedback from Interim CEA CEO Ben West and others. Several team members did retrospectives or analyses on the case, and we consulted with external people (two EAs with some experience thinking about these topics as well as seven professionals in HR, law, consulting and ombuds) for advice on our processes generally. 

From this we created a list of practices to change and additional steps to add. The casework team also reflected on many past cases to check that these changes were robust and applicable across a wide variety of casework. 

Our c... (read more)

Insightful and well-argued post!

  • I found the hypothetical about NYT and CEA helpful for reasoning from first principles about acceptable journalistic practice. I came out of it empathizing more with Nonlinear's feelings before and during the publication of Ben Pace's article than I previously had.
  • Regarding Ben Pace's explicit seeking of negative information and unwillingness to delay posting, you updated me from thinking of these as simple mistakes to now considering them egregiously bad.
  • Great point that an article author can't just state their disclaimers at the top and expect readers to rationally recalibrate themselves and ignore the vibes of the evidence's presentation.

I found it hard to update throughout this story because the presentation of evidence from both parties was (understandably) biased. As you pointed out, "Sharing Information About Nonlinear" presented sometimes true claims in a way which makes the reader unsympathetic to Nonlinear. Nonlinear's response presented compelling rebuttals in a way which was calculated to increase the reader's sympathy for Nonlinear. Both articles intentionally mix the evidence and the vibes in a way which makes it difficult for readers to separate the two. (I don't blame Nonlinear's response for this as much, since it was tit for tat.)

Thanks again for putting so much time and effort into this, and I'm excited to see what you write next.

I'll just quickly say that my experience of this saga was more like this: 

Before BP post: NL are a sort of atypical, low structure EA group, doing entrepreneurial and coordination focused work that I think is probably positive impact.
After BP post: NL are actually pretty exploitative and probably net negative overall. I'll wait to hear their response, but I doubt it will change my mind very much.
After NL post: NL are probably not exploitative. They made some big mistakes (and had bad luck) with some risks they took in hiring and working unconventionally. I think they are probably still likely to have a positive impact on expectation. I think that they have been treated harshly.
After this post: I update to be feeling more confident that this wasn't a fair way to judge NL and that these sorts of posts/investigations shouldn't be a community norm. 

(My personal views only, and like Nick I've been recused from a lot of board work since November.)

Thank you, Nick, for all your work on the Boards over the last eleven years. You helped steward the organisations into existence, and were central to helping them flourish and grow. I’ve always been impressed by your work ethic, your willingness to listen and learn, and your ability to provide feedback that was incisive, helpful, and kind.

Because you’ve been less in the limelight than me or Toby, I think many people don’t know just how crucial a role you played in EA’s early days. Though you joined shortly after launch, given all your work on it I think you were essentially a third cofounder of Giving What We Can; you led its research for many years, and helped build vital bridges with GiveWell and later Open Philanthropy. I remember that when you launched Giving What We Can: Rutgers, you organised a talk with I think over 500 people. It must still be one of the most well-attended talks that we’ve ever had within EA, and helped the idea of local groups get off the ground.

The EA movement wouldn’t have been the same without your service. It’s been an honour to have worked with you.

I strongly disagree with the claim that the connection to EA and doing good is unclear. The EA community's beliefs about AI have been, and continue to be, strongly influenced by Eliezer. It's very pertinent if Eliezer is systematically wrong and overconfident because, insofar as there's some level of deferral to Eliezer on AI questions within the EA community (which I think there clearly is), it implies that most EAs should reduce their credence in Eliezer's AI views.

Joel’s response

[Michael's response below provides a shorter, less-technical explanation.]  

Summary 

Alex’s post has two parts. First, what is the estimated impact of StrongMinds in terms of WELLBYs? Second, how cost-effective is StrongMinds compared to the Against Malaria Foundation (AMF)? I briefly present my conclusions to both in turn. More detail about each point is presented in Sections 1 and 2 of this comment.

The cost-effectiveness of StrongMinds

GiveWell estimates that StrongMinds generates 1.8 WELLBYs per treatment (17 WELLBYs per $1000, or 2.3x GiveDirectly[1]). Our most recent estimate[2] is 10.5 WELLBYs per treatment (62 WELLBYs per $1000, or 7.5x GiveDirectly). This represents an 83% discount (an 8.7 WELLBYs gap)[3] to StrongMinds' effectiveness[4]. These discounts, while sometimes informed by empirical evidence, are primarily subjective in nature. Below I present the discounts, and our response to them, in more detail.
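For readers tracing the headline numbers, a minimal sketch of how the quoted gap and discount relate, using only the per-treatment figures given above:

```python
# WELLBYs per treatment of StrongMinds, as quoted in the comment above.
givewell_estimate = 1.8   # GiveWell's estimate
our_estimate = 10.5       # the commenter's most recent estimate

gap = our_estimate - givewell_estimate   # 8.7 WELLBYs per treatment
discount = gap / our_estimate            # ~0.83, i.e. an 83% discount

print(f"Gap: {gap:.1f} WELLBYs per treatment; discount: {discount:.0%}")
```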

Figure 1: Description of GiveWell’s discounts on StrongMinds’ effect, and their source

Notes: The graph shows the factors that make up the 8.7 WELLBY discount.

Table 1: Disagreements on StrongMinds per tre... (read more)

I’m the woman who Julia asked on a hunch about her experiences with Owen, and one of the women who Owen refers to when he says there have been four other less egregious occasions where he expressed feelings of attraction that he regrets. I’m sharing my experience with Owen below, because I think it’s probably helpful for people reflecting on this situation (and by default, it would remain confidential indefinitely), but as an FYI, I’m probably unlikely to participate in substantive discussion about it in the comments section. (I’m posting this anonymously because I’d prefer to avoid being pulled into lots of discussions about this in a way that drains my time and emotional energy, not because I’m afraid of retribution from someone or negative consequences for my career.)

  • Several years ago, I stayed at Owen’s house for a while while I was visiting Oxford. Owen and I were friends, I had been to his house several times before, and he had previously offered that I could stay there if I was in Oxford. I was working at an EA organization at the time that was not professionally connected to Owen.
  • Towards the end of my stay, Owen and I went on a long walk around Oxford, where we ta
... (read more)

What is the main issue in EA governance then, in your view? It strikes me [I'm speaking in a personal capacity, etc.] that the challenge for EA is a combination of the fact that the resources are quite centralised and that trustees of charities are (as you say) not accountable to anyone. One by itself might be fine. Both together is tricky. I'm not sure where this fits in with your framework, sorry.

There's one big funder (Open Philanthropy), many of the key organisations are really just one organisation wearing different hats (EVF), and these are accountable only to their trustees. What's more,  as Buck notes here, all the dramatis personae are quite friendly ("lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies").  Obviously, some people will be in favour of centralised, unaccountable decision-making - those who think it gets the right results - but it's not the structure we expect to be conducive to good governance in general.

If power in effective altruism were decentralised, that is, there were lots of 'buyers' and 'sellers' in the 'EA marketplace', then you'd expect competitive pressure to improve go... (read more)

"Huh, this person definitely speaks fluent LessWrong. I wonder if they read Project Lawful? Who wrote this post, anyway? I may have heard of them.

...Okay, yeah, fair enough."

One thing I definitely believe, and have commented on before[1], is that median EAs (i.e., EAs without an unusual amount of influence) are over-optimising for the image of EA as a whole, which sometimes conflicts with actually trying to do effective altruism. Let the PR people and the intellectual leaders of EA handle that - people outside that group should focus on saying what we sincerely believe to be true, and worry much less about whether someone, somewhere, might call us bad people for saying it. That ship has sailed - there are people out there, by now, who already have the conclusion of "And therefore, EAs are bad people" written down - refusing to post an opinion won't stop them filling in the middle bits with something else, and this was true even before the FTX debacle.

In short - "We should give the money back because it would help EA's image" is, imo, a bad take. "We should give the money back because it would be the right thing to do" is, imo, a much better take, which I won't take a stand on ... (read more)

To the extent that Kerry's allegation involves his own judgment of Sam's actions as bad or shady, I think it matters that there's reason not to trust Kerry's judgment or possibly motives in sharing the information. However we should definitely try to find out what actually happened and determine whether it was truly predictive of worse behavior down the line.

I haven't read the comments and this has probably been said many times already, but it doesn't hurt saying it again:
From what I understand, you've taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)

This forum has taken off over the past year. Thanks to all the post authors who have dedicated so much time to writing content for us to read!

Number of posts per day has ~4x'd in the last year

At present, it is basically impossible to advance any drug to market without extensive animal testing – certainly in the US, and I think everywhere else as well. The same applies to many other classes of biomedical intervention. A norm of EAs not doing animal testing basically blocks them from biomedical science and biotechnology; among other things, this would largely prevent them from making progress across large swathes of technical biosecurity.

This seems bad – the moral costs of failing to avert biocatastrophe, in my view, hugely outweigh the moral costs of animal testing. At the same time, speaking as a biologist who has spent a lot of time around (and on occasion conducting) animal testing, I do think that mainstream scientific culture around animal testing is deeply problematic, leading to large amounts of unnecessary suffering and a cavalier disregard for the welfare of sentient beings (not to mention a lot of pretty blatantly motivated argumentation). I don't want EAs to fall into that mindset, and the reactions to this comment (and their karma totals) somewhat concern me.

I wouldn't support a norm of EAs not doing animal testing. But I think I would support a norm of EAs ap... (read more)

[anonymous]

Thanks a lot for sharing this, Denise. Here are some thoughts on your points.

  1. On your point about moral realism, I'm not sure how that can be doing much work in an argument against longtermism specifically, as opposed to all other possible moral views. Moral anti-realism implies that longtermism isn't true, but then it also implies that near-termism isn't true. The thought seems to be that there could only be an argument that would give you reason to change your mind if moral realism were true, but if that were true, there would be no point in discussing arguments for and against longtermism because they wouldn't have justificatory force.
  2. Your argument suggests that you find a person-affecting form of utilitarianism most plausible. But it seems to me that we should not reach conclusions about ethics on the basis of what we find intuitively appealing without considering the main arguments for and against these positions. Person-affecting views have lots of very counter-intuitive implications and are actually quite hard to define.
  3. I don't think it is true that the case for longtermism rests on the total view. As discussed in the Greaves and MacAskill paper, many theories imply longtermism.
  4. Your
... (read more)

But I think there are reasons to not contact an org before, besides urgency, e.g. lacking time, or predicting that private communication will not be productive enough to spend the little time we have at our disposal. So I currently think we should approve if people bring up the energy to voice honest concerns even if they don’t completely follow the ideal playbook. What do you, or others think?

I agree with the spirit of "I currently think we should approve if people bring up the energy to voice honest concerns even if they don’t completely follow the ideal playbook".

However, at first glance I don't find the specific "reasons to not contact an org before" that you state convincing:

  • "Lacking time" - I think there are ways that require minimal time commitment. For instance, committing to not (or not substantially) revise the post based on an org's response. I struggle to imagine a situation where someone is able to spend several hours writing a post but then absolutely can't find the 10 minutes required to send an email to the org the post is about.
  • "Predicting that private communication will not be productive enough to spend the little time we have at our disposal" - I think this misun
... (read more)

That flag is cool, but here's an alternative that uses some of the same ideas. 

The black background represents the vastness of space, and its current emptiness. The blue dot represents our fragile home. The ratio of their sizes represents the importance of our cosmic potential (larger version here).

A "Pale Blue Dot" flag for longtermism

It's also a reference to Carl Sagan's Pale Blue Dot - a photo taken of Earth, from a spacecraft that is now further from Earth than any other human-made object, and that was the first to leave our solar system.

Carl Sagan's Pale Blue Dot

Sagan wrote this famous passage about the image:

Look again at that dot. That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "supersta

... (read more)

I don't know if you need someone to say this, but:

You can often do more good outside of an EA organisation than inside one. For most people, the EA community is not the only good place to look for grantmaking or research jobs.

If I could be a grantmaker anywhere, I'd probably pick the Gates Foundation or the UK Government's Department for International Development. If I could be a researcher anywhere, I might choose Harvard's Kennedy School of Public Policy or the Institute for Government. None of these are "EA organisations" but they would all most likely allow me to do more good than working at GiveWell. (Although I do love GiveWell and encourage interested applicants to apply!)

Some people already know this and have particular reasons they want to work in an EA organisation, but some don't, so I thought it was worth saying.

My sense of what is happening regarding discussions of EA and systemic change is:


  • Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times
    • Empirical data on the impact of votes and the effectiveness of lobbying and campaign spending work out fine, without any problems of fancy decision theory or increasing marginal returns
      • E.g. Andrew Gelman's data on US Presidential elections shows that given polling and forecasting uncertainty a marginal vote in a swing state averages something like a 1 in 10 million chance of swinging an election over multiple elections (and one can save to make campaign contributions
      • 80,000 Hours has a page (there have been a number of other such posts and discussion; note that 'worth voting' and 'worth buying a vote through campaign spending or GOTV' are two quite different thresholds) discussing this data and approaches to valuing differences in political outcomes between candidates; these suggest that a swing state vote might be worth tens of thousands of dollars of income to rich country citizens (a rough worked version of this arithmetic is sketched just after this comment)
        • But if one thinks that charities like AMF do 100x or more g
... (read more)
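To make the arithmetic in the comment above concrete, here is a minimal sketch of the expected-value calculation it describes. The 1-in-10-million swing probability is taken from the comment; the dollar value of the gap between candidates and the 100x charity multiplier are purely illustrative assumptions, not figures from the original.

```python
# Expected-value sketch for a marginal swing-state vote.
# All inputs are illustrative assumptions, not estimates endorsed by the comment's author.

p_swing = 1 / 10_000_000      # chance a marginal swing-state vote decides the election (from the comment)
value_gap_usd = 3e11          # assumed dollar value of the difference between the candidates' policies

expected_value_of_vote = p_swing * value_gap_usd
print(f"Expected value of one swing-state vote: ${expected_value_of_vote:,.0f}")
# With these assumptions: $30,000 -- i.e. "tens of thousands of dollars of income".

# The follow-on point: if a top charity is ~100x more cost-effective than income
# transfers to rich-country citizens, the equivalent donation is much smaller.
charity_multiplier = 100
equivalent_donation = expected_value_of_vote / charity_multiplier
print(f"Equivalent donation to a ~100x-more-effective charity: ${equivalent_donation:,.0f}")
```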

Relative to the base rate of how wannabe social movements go, I’m very happy with how EA is going. In particular: it doesn’t spend much of its time on internal fighting; the different groups in EA feel pretty well-coordinated; it hasn’t had any massive PR crises; it’s done a huge amount in a comparatively small amount of time, especially with respect to moving money to great organisations; it’s in a state of what seems like steady, sustainable growth. There’s a lot still to work on, but things are going pretty well. 

What I could change historically: I wish we'd been a lot more thoughtful and proactive about EA's culture in the early days. In a sense the 'product' of EA (as a community) is a particular culture and way of life. Then the culture and way of life we want is whatever will have the best long-run consequences. Ideally I'd want a culture where (i) 10% or so of people who interact with the EA community are like 'oh wow these are my people, sign me up'; (ii) 90% of people are like 'these are nice, pretty nerdy people; it's just not for me'; and (iii) almost no-one is like, 'wow, these people are jerks'. (On (ii) and (iii): I feel like the Quakers is the sort of thing I'm think... (read more)

I think there is a bit of a tendency to assume that it is appropriate to ask for arbitrary amounts of transparency from EA orgs. I don't think this is a good norm: transparency has costs, often significant, and constantly asking for all kinds of information (often with a tone that suggests that it ought to be presented) is, I think, often harmful.

Using an anonymous account because this is sensitive. The below is critical, but I love GWWC, and personally agree with your recommendation of the EA AWF as the best opportunity for most donors in the AW space.

I'm quite surprised you chose to include The Humane League specifically and not other ACE Top Charities, and based on the evaluation, it sounds like this is largely based on RP and Founders Pledge work from 2018/2019, and not more recent evaluations, as well as a referral from Open Phil.

A few comments on why this seems like a bad decision:

  • My impression is that basically everyone within the animal space who has read them thinks those cost-effectiveness analyses are no longer accurate, and reflect an earlier period of success. Most people also seem to think corporate campaigns are no longer nearly as effective as they were historically (even staff within The Humane League).
  • Open Phil said just last week that they think marginal opportunities in the AW space right now are 1/5th as cost-effective as RP's estimate (a claim I don't buy fully, but seems like evidence against using this for your recommendation)
  • A cursory glance at the trackers for corporate commitments shows that:
... (read more)

Hi Jeff. Thanks for engaging. Three quick notes. (Edit: I see that Peter has made the first already.)

First, and less importantly, our numbers don't represent the relative value of individuals, but instead the relative possible intensities of valenced states at a single time. If you want the whole animal's capacity for welfare, you have to adjust for lifespan. When you do that, you'll end up with lower numbers for animals---though, of course, not OOMs lower.

Second, I should say that, as people who work on animals go, I'm fairly sympathetic to views that most would regard as animal-unfriendly. I wrote a book criticizing arguments for veganism. I've got another forthcoming that defends hierarchicalism. I've argued for hybrid views in ethics, where different rules apply to humans and animals. Etc. Still, I think that conditional on hedonism it's hard to get MWs for animals that are super low. It's easier, though still not easy, on other views of welfare. But if you think that welfare is all that matters, you're probably going to get pretty animal-friendly numbers. You have to invoke other kinds of reasons to really change the calculus (partiality, rights, whatever).

Third, I've been try... (read more)

Some quick thoughts on AI consciousness work, I may write up something more rigorous later.

Normally when people have criticisms of the EA movement they talk about its culture or point at community health concerns.

The aspect of EA that makes me sadder is that there seem to be a few extremely important issues, on an impartial welfarist view, that don't get much attention at all, despite having been identified at some point by some EAs. I do think that EA has done a decent job of pointing at the most important issues relative to basically every other social movement that I'm aware of, but I'm going to complain about one of its shortcomings anyway.

It looks to me like we could build advanced AI systems in the next few years, and in most worlds we have little idea of what's actually going on inside them. The systems may tell us they are conscious, or say that they don't like the tasks we tell them to do, but right now we can't really trust their self-reports. There'll be a clear economic incentive to ignore self-reports that would create a moral obligation to use the systems in less useful/efficient ways. I expect the number of deployed systems to be very large and that it'll be ... (read more)

Thanks for taking the time to write thoughtful criticism. Wanted to add a few quick notes (though note that I'm not really impartial as I'm socially very close with Redwood)

- I personally found MLAB extremely valuable. It was very well-designed and well-taught and was the best teaching/learning experience I've had by a fairly wide margin
- Redwood's community building (MLAB, REMIX and people who applied to or worked at Redwood) has been a great pipeline for ARC Evals and our biggest single source for hiring (we currently have 3 employees and 2 work triallers who came via Redwood community building efforts). 
- It was also very useful for ARC Evals to be able to use Constellation office space while we were getting started, rather than needing to figure this out by ourselves.
- As a female person I feel very comfortable in Constellation. I've never felt that I needed to defer or was viewed for my dating potential rather than my intellectual contributions. I do think I'm pretty happy to hold my ground and sometimes oblivious to things that bother other people, so that might not be very strong evidence that it isn't an issue for other people. However, I have been bothered in the pa... (read more)

I believe that’s an oversimplification of what Alexander thinks but don’t want to put words in his mouth.

In any case, this is one of the few decisions the 4 of us (including Cari) have always made together, so we have done a lot of aligning already. My current view, which is mostly shared, is we're currently underfunding x-risk even without longtermism math, both because FTXF went away and because I've updated towards shorter AI timelines in the past ~5 years. And even aside from that, we weren't at full theoretical budget last year anyway. So that all nets out to an expected increase, not a decrease.

I’d love to discover new large x-risk funders though and think recent history makes that more likely.

Chiming in from the EV UK side of things: First, +1 to Nicole’s thanks :) 

As you and Nicole noted, Nick and Will have been recused from all FTX-related decision-making. And, Nicole mentioned the independent investigation we commissioned into that. 

Like the EV US board, the EV UK board is also looking into adding more board members (though I think we are slightly behind the US board), and plans to do so soon.  The board has been somewhat underwater with all the things happening (speaking for myself, it’s particularly difficult because a lot of these things affect my main job at Open Phil too, so there’s more urgent action needed on multiple fronts simultaneously). 

(The board was actually planning and hoping to add additional board members even before the fall of FTX, but unfortunately those initial plans had to be somewhat delayed while we’ve been trying to address the most time-sensitive and important issues, even though having more board capacity would indeed help in responding to issues that crop up; it's a bit of a chicken-and-egg dynamic we need to push through.)  

Hope this is helpful!

Atlas says they're doing "talent search". This connotes finding talent from under-resourced communities or poor students. Do the statistics match this?

FWIW, the term "talent search" has no connotation of this type to me. To me it just means like, finding top talent, wherever you can find them.

Leaving aside some object-level stuff about Bostrom's views, I still think the apology could be much better without any dishonesty on his part. This is somewhat subjective but things that I think could have been better:

  • Don't frame the apology at the beginning as almost purely instrumental i.e. not like "I will get smeared soon, so I want to get ahead of the game". This makes everything come across as less genuine. 
  • "What about eugenics? Do I support eugenics? No, not as the term is commonly understood." - This is just not a useful thing to mention in an apology about racism, or at least, not in this way. Usually, if someone says "Don't think of an elephant" then you do think of an elephant. The consequence is now people are probably more likely to think there is a link between Bostrom and eugenics than if this was written differently.
  • And some other points that Habiba mentioned in her post e.g. "I am deeply uncomfortable with a discussion of race and intelligence failing to acknowledge the historical context of the ideas’ origin and the harm they can and have caused."

In my opinion it just highlights some basic misunderstandings about communication and our society today, which (I think) were borne out by the fairly widespread negative backlash to this incident.

I have to be honest that I’m disappointed in this message. I’m not so much disappointed that you wrote a message along these lines, but in the adoption of perfect PR speak when communicating with the community. I would prefer a much more authentic message that reads like it was written by an actual human (not the PR speak formula) even if that risks subjecting the EA movement to additional criticism and I suspect that this will also be more impactful long term. It is much more important to maintain trust with your community than to worry about what outsiders think, especially since many of our critics will be opposed to us no matter what we do.

To try to group/summarize the discussion in the comments and offer some replies:

 

1. ‘Traders are not thinking about AGI, the inferential distance is too large’; or ‘a short can only profit if other people take the short position too’

(a) Anyone who thinks they have an edge in markets thinks they've noticed something which requires such a large inferential distance that no one else has seen it.

  • Any trade requires that the market price eventually converges to the ‘correct’ price
  • ⇒ This argument proves too much – it’s a general argument against ever betting that the market will correct an incorrect price!
    • Those who are arguing against need to make a clearer argument about why this situation is fundamentally different from any other
    • Sovereign bond markets are easily some of the most liquid and well-functioning markets ever to exist

(b) Many financial market participants ARE thinking about these issues.

... (read more)

I think this is a very helpful post.

I think some of the larger, systemically important organisations should have a balance of trustees and/or a board of advisors with relevant mission-critical experience, such as risk management, legal, and compliance, depending on the nature of the organisation. I appreciate that senior executives and trustees in these organisations do seek such advice; but it is often too opaque whom they consult and which areas the advice covers, and there could be a lack of accountability and a risk of the advisors themselves lacking sufficient knowledge.

I raised this directly a number of years ago, but perhaps it is still inadequate. As noted by others, this becomes more important as we get bigger.

Ps I don’t post much and not as accurate with my choice of words as other forum users.

The last paragraphs in the article itself point to the most glaring issue, IMO: loose norms around boards of directors and conflicts of interest (COIs) between funding orgs and grantees. The author presents it as if it's self-evident that the boards were not constructed to be sufficiently independent/objective, and that having substantial overlap between the foundation board and the boards of the largest grantees can lead to hazards. These are common issues in corporate oversight; I'm curious what policies there are among EA orgs to decrease COIs.

"A significant share of the grants went to groups focused on building the effective altruist movement rather than organizations working directly on its causes. Many of those groups had ties to Mr. Bankman-Fried’s own team of advisers. The largest single grant listed on the Future Fund website was $15 million to a group called Longview, which according to its website counts the philosopher Mr. MacAskill and the chief executive of the FTX Foundation, Nick Beckstead, among its own advisers.

The second-largest grant, in the amount of $13.9 million, went to the Center for Effective Altruism. Mr. MacAskill was a founder of the cent... (read more)

I think the point of most non-profit boards is to ensure that donor funds are used effectively to advance the organization's charitable mission. If that's the case, then having donor representation on the board seems appropriate. Why would this represent a conflict of interest? My impression is that this is quite common amongst non-profits and is not considered problematic. (Note that Holden is on ARC's board.)

I'm also not sure this what the NYT author is objecting to. I think they would be equally unhappy with SBF claiming to have donated a lot, but it secretly went to a DAF he controlled that he could potentially use to have influence later. The problem is more like trying to claim credit for good works despite not having actually given up the influence yet, not a COI issue.

(I don't think it's plausible to call "I gave my money to a foundation or DAF, and then I make 100% of the calls about how the foundation donates" a COI issue.)

This actually goes back further, to OpenPhil funding CEA in 2017, with Nick Beckstead as the grant investigator whilst simultaneously being a Trustee of CEA (note that the history of this is now somewhat obscured, given that he later stepped down, but then stepped back up in 2021). The CoI has never been acknowledged or addressed as far as I know. I was surprised that no one seemed to have noticed this (at least publicly), so I (eventually) raised it with Max Dalton (Executive Director of CEA) in March 2021 - at least I anonymously sent a message to his Admonymous. In hindsight, it might've been better to publicly post (e.g. to the EA Forum), but I was concerned about EA's reputation being damaged, and possibly lessening the chances of my own org getting funding (perhaps I was a victim of/too in sway to Ra?). Even now part of me is recognising that this could be seen as "kicking people when they are down", or a betrayal, or mark me out as a troublemaker, and is causing me to pause [I've sat with this comment for hours; if you're reading it, I must've finally pressed "submit"]. Then again, perhaps now is the right time to be airing concerns, lest they never be aired and improvements... (read more)

[anonymous]1y80

I don't mean to endorse Holden's actions - they were obviously ill-judged - but this reads as pretty lightweight stuff. He posted a few anonymous comments boosting GiveWell? That is so far away from what it increasingly looks like SBF is responsible for - multi-billion dollar fraud, funneling customer funds to a separate trading entity against trumped-up collateral, and then running an insolvent business, presumably waiting for imminent Series C funding to cover the holes.

Hi all -- Cate Hall from Alvea here. Just wanted to drop in to emphasize the "we're hiring" part at the end there. We are still rapidly expanding and well funded. If in doubt, send us a CV.

Hi Jason,

I think your blog and work is great, and I'm keen to see what comes out of Progress Studies.

I wanted to ask a question, and also to comment on your response to another question, which I think has been inaccurate since about 2017:

My perception of EA is that a lot of it is focused on saving lives and relieving suffering.

More figures here.

The following is more accurate:

I don't see as much focus on general economic growth and scientific and technological progress.

(Though even then, Open Philanthropy has allocated $100m+ to scientific research, which would make it a significant fraction of the portfolio. They've also funded several areas of US policy research aimed at growth.)

However, the reason for less emphasis on economic growth is that the community members who are not focused on global health are mostly focused on longtermism, and have argued it's not the top priority from that perspective. I'm going to try to give a (rather direct) summary of why, and would be interested in your response.

Those focused on longtermism have argued that influencing the trajectory of civilization is far higher value than speeding up progress (e.g. one example of that argument h... (read more)

As someone who's spent a lot of time on EA community-building and also on parenting, I'd  caution against any strong weighting on "my children will turn out like me / will be especially altruistic." That seems like a recipe for strained relationships. I think the decision to parent should be made because it's important to you personally, not because you're hoping for impact. You can almost certainly have more impact by talking to existing young people about EA or supporting community-building or field-building in some other way than by breeding more people.

I'd also caution against treating adoption as less intensive in time and effort. The process of adopting internationally or from foster care is intensive and often full of uncertainty and disappointment as placements fall through, policies change, etc. And I think the ongoing task of shoring up attachment with an adopted child is significant. (For example, I have a friend who realized her ten-year-old, adopted before he could remember, had somehow developed the belief that his parents would "give him back" at some point and that he was not actually a permanent member of the family. I think this kind of thing is pretty common.) ... (read more)

[ii] Some queries to MacAskill’s Q&A show reverence here, (“I'm a longtime fan of all of your work, and of you personally. I just got your book and can't wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).

I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They strike me as normal, nice things to say in the context of an AMA, and indicative of admiration and warmth, but not reverence.

In many ways this post leaves me feeling disappointed that 80,000 Hours has turned out the way it did and is so focused on long-term future career paths.

- -

Over the last 5 years I have spent a fair amount of time in conversation with staff at CEA and with other community builders about creating communities and events that are cause-impartial.

This approach is needed for making a community that is welcoming to and supportive of people with different backgrounds, interests and priorities; for making a cohesive community where people with varying cause areas feel they can work together; and where each individual is open-minded and willing to switch causes based on new evidence about what has the most impact.

I feel a lot of local community builders and CEA have put a lot of effort into this aspect of community building.

- -
Meanwhile it seems that 80000 Hours has taken a different tack. They have been more willing, as part of trying to do the most good, to focus on the causes that the staff at 80000 Hours think are most valuable.

Don’t get me wrong I love 80000 Hours, I am super impressed by their content glad to see them doing well. And I think there is a good case to be made fo... (read more)

  1. Social justice in relation to effective altruism

I've been thinking a lot about this recently too. Unfortunately I didn't see this AMA until now but hopefully it's not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs that you have to signal complete certainty in, or face accusations of various "isms" or "phobias", or worse, get demoted/fired/deplatformed) will come to affect EA as well.

I can see at least two ways of this happening to EA:

  1. Whatever social dynamic is responsible for this happening within SJ applies to EA as well, and EA will become like SJ in this regard for purely internal reasons. (In this case EA will probably come to have a different set of politically correct beliefs from SJ that one must profess faith in.)
  2. SJ comes to control even more of the cultural/intellectual "high grounds" (journalism, academia, K-12 education, tech industry, departments within EA organizations, etc.) than it already does, and EA will be forced to play by SJ's rules. (See second link above for one specific scenario that worries me.)

From your answ

... (read more)

How I publicly talked about Sam 

Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him.  Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his reputation.

Some people have claimed that I deliberately misrepresented Sam’s lifestyle. In a number of places, I said that Sam planned to give away 99% of his wealth, and in this post, in the context of discussing why I think honest signalling is good, I said, “I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call”. These statements represented what I believed at the time. Sam said, on multiple occasions, that he was planning to give away around 99% of his wealth, and the overall picture I had of him was highly consistent with that, so the Corolla seemed like an honest si... (read more)

This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).

I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of "academic politics"?)

A minor note on the forward-looking advice: "short-term renewable contracts" can have their place, especially for trying out untested junior researchers. But you should be aware that it also filters out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a "careerist" in the derogatory sense.

I think it might be helpful to look at a simple case, one of the best cases for the claim that your altruistic options differ in expected impact by orders of magnitude, and see if we agree there? Consider two people, both in "the probably neutral role of someone working a 'bullshit job'". Both donate a portion of their income to GiveWell's top charities: one $100k/y and the other $1k/y. Would you agree that the altruistic impact of the first is, ex-ante, 100x that of the second?

One of the big disputes here is over whether Alice was running her own incubated organization (which she could reasonably expect to spin out) or just another project under Nonlinear. Since Kat cites this as significant evidence for Alice's unreliability, I wanted to do a spot-check.

(Because many of the claims in this response are loosely paraphrased from Ben's original post, I've included a lot of quotes and screenshots to be clear about exactly who said what. Sorry for the length in advance.)

Let's start with claims in Ben's original post

Alice joined as the sole person in their incubation program. She moved in with them after meeting Nonlinear at EAG and having a ~4 hour conversation there with Emerson, plus a second Zoom call with Kat. Initially while traveling with them she continued her previous job remotely, but was encouraged to quit and work on an incubated org, and after 2 months she quit her job and started working on projects with Nonlinear. 

and

 One of the central reasons Alice says that she stayed on this long was because she was expecting financial independence with the launch of her incubated project that had $100k allocated to it (fundraised from FTX).

... (read more)

I am happy to see that Nick and Will have resigned from the EV Board. I still respect them as individuals but I think this was a really good call for the EV Board, given their conflicts of interests arising from the FTX situation. I am excited to see what happens next with the Board as well as governance for EV as a whole. Thanks to all those who have worked hard on this.

Yes, unfortunately I've also been hearing negatives about Conjecture, so much so that I was thinking of writing my own critical post (and for the record, I spoke to another non-Omega person who felt similarly). Now that your post is written, I won't need to, but for the record, my three main concerns were as follows:

1. The dimension of honesty, and the genuineness of their business plan. I won't repeat it here, because it was one of your main points, but I don't think that it's a way to run a business, to sell your investors on a product-oriented vision for the company, but to tell EAs that the focus is overwhelmingly on safety.

2. Turnover issues, including the interpretability team. I've encountered at least half a dozen stories of people working at or considering work at Conjecture, and I've yet to hear of any that were positive. This is about as negative a set of testimonials as I've heard about any EA organisation. Some prominent figures like Janus and Beren have left. In the last couple of months, turnover has been especially high - my understanding is that Connor told the interpretability team that they were to work instead on cognitive emulations, and most of them left. Much... (read more)

I'm not sure what can be shared publicly for legal reasons, but would note that it's pretty tough in board dynamics generally to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.

I believe the implicit premise of the question is something like "do those benefits outweigh the potential harms of the grant." Personally, I see this as a misunderstanding, i.e. that OP helped OpenAI to come into existence and it might not have happened otherwise. I've gone back and looked at some of the comms around the time (2016), as well as debriefed with Holden, and I think the most likely counterfactual is that the time to the next fundraising (2019) and creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.

Bostrom was essentially still a kid (age ~23) when he wrote the 1996 email. What effect does it have on kids' psychology to think that any dumb thing they've ever said online can and will be used against them in the court of public opinion for the rest of their lives? Given that Bostrom wasn't currently spreading racist views or trying to harm minorities, it's not as though it was important to stop him from doing ongoing harm. So the main justification for socially punishing him would be to create a chilling effect against people daring to spout off flippantly worded opinions going forward. There are some benefits to intimidating people away from saying dumb things, but there are also serious costs, which I think are probably underestimated by those expressing strong outrage.

Of course, there are also potentially huge costs to flippant and crass discussion of minorities. My point is that the stakes are high in both directions, and it's very non-obvious where the right balance to strike is. Personally I suspect the pendulum is quite a bit too far in the direction of trying to ruin people's lives for idiotic stuff they said as kids, but other smart people seem to disagree.

As some othe... (read more)

The following is my personal opinion, not CEA's.

If this is true it's absolutely horrifying.  FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable.  I don't think people who would do something like that ought to have any place in this community.

Agree. I'd also add that this is a natural effect of the focus EA has put on outreach in universities and to young people. Not to say that the young people are the problem--they aren't, and we are happy to have them. But in prioritizing that, we did deprioritize outreach to mid and late-stage professionals. CEA and grantmakers only had so much bandwidth, and we only had so many people suited to CB/recruiting/outreach-style roles. 

We have had glaring gaps for a while in ability to manage people, scale programs, manage and direct projects and orgs, and perform due diligence checks on and advising for EA organisations. In other words, we lack expertise. 

I'd say 80K has been somewhat aware of this gap and touched on it lightly, and the community itself has dialled in on the problem by discussing EA recruiters. Yet CEA, funders, and others working on movement-building seem to repeatedly conflate community building with getting more young people to change careers, revealing their priorities, IMO, by what they actually work on.

Open Phil has done this as well. Looking at their Effective Altruism Community Growth focus area, 5 out of the 6 suggestions are focused on young people.... (read more)

[anonymous]2y79

Thanks for the detailed update!

There was one expectation / takeaway that I was surprised about.

Getting sympathetic founders from adjacent networks to launch new projects related to our areas of interest - Worse than expected. We thought that maybe there was a range of people who aren't on our radar yet (e.g., tech founder types who have read The Precipice) who would be interested in launching projects in our areas of interest if we had accessible explanations of what we were hoping for, distributed the call widely, and made the funding process easy. But we didn’t really get much of that. Instead, most of the applications we were interested in came from people who were already working in our areas of interest and/or from the effective altruism community. So this part of the experiment performed below our expectations.

You mentioned the call was open for three weeks. Would that have been sufficient for people who are not already deeply embedded in EA networks to formulate a coherent and fundable idea (especially if they currently have full-time jobs)? It seems likely that this kind of "get people to launch new projects" effect would require more runway. If so, the data from this round shouldn't update one's priors very much on this question.

Thanks for this post. If true, it does describe a pretty serious concern. 

One issue I've always had with the "highly engaged EA" metric is that it's only a measure for alignment,* but the people who are most impactful within EA have both high alignment and high competence. If your recruitment selects only on alignment this suggests we're at best neutral to competence and at worst (as this post describes) actively selecting against competence. 

(I do think the elite university setting mitigates this harm somewhat, e.g. 25th percentile MIT students still aren't stupid in absolute terms). 

That said, I think the student group organizers I recently talked to are usually extremely aware of this distinction. (I've talked to a subset of student group organizers from Stanford, MIT, Harvard (though less granularity), UPenn (only one) and Columbia, in case this is helpful). And they tend to operationalize their targets more in terms of people who do good EA research, jobs, and exciting entrepreneurship projects, rather than in terms of just engagement/identification. Though I could be wrong about what they care about in general (as opposed to just when talking with me).

The pet t... (read more)

Inner Rings and EA 

 

C. S. Lewis' The Inner Ring is, IMO, a banger. My rough summary: inner rings are the cool club / the important people. People spend a lot of energy trying to be part of inner rings, and sacrifice things that are truly important.

There are lots of passages that jump out at me with respect to my experience as an EA. I found it pretty tough reading, in a way... in how it makes me reflect on my own motivations and actions.

 

[of inner rings] There are what correspond to passwords, but they are too spontaneous and informal. A particular slang, the use of particular nicknames, an allusive manner of conversation, are the marks.

There's a perennial discussion of jargon in EA. I've typically thought of jargon as a trade-off between having more efficient discourse on the one hand, and lower barriers for new people to enter the conversation on the other. Reading this makes me think of jargon more as a mechanism to signal in-group membership.

And when you had climbed up to somewhere near it by the end of your second year, perhaps you discovered that within the ring there was a Ring yet more inner, which in its turn was the fringe of the gre

... (read more)
[anonymous]2y79

It still seems like you have mischaracterised his view. You say "Take for example Bostrom’s “Vulnerable World Hypothesis”17, which argues for the need for extreme, ubiquitous surveillance and policing systems to mitigate existential threats, and which would run the risk of being co-opted by an authoritarian state." This is misleading imo. Wouldn't it have been better to note the clearly important hedging and nuance and then say that he is insufficiently cognisant of the risks of his solutions (which he discusses at length)?

Buck 3y79

I think that this totally misses the point. The point of this post isn't to inform ACE that some of the things they've done seem bad--they are totally aware that some people think this. It's to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.

I think this post is fairly uncharitable to ACE, and misrepresents the situations it is describing. My overall take is basically along the lines of "ACE did the right thing in response to a hard situation, and communicated that poorly." Your post really downplays both the comments that the people in question made and actions they took, and the fact that the people in question were senior leadership at a charity, not just random staff.

I also want to note that I've had conversations with several people offline who disagreed pretty strongly with this post, and yet no one has posted major disagreements here. I think the EA Forum is generally fairly anti-social justice, while EAA is generally fairly pro-social justice, so there are norms clashing between the communities.

 

The blog post

Taken at face value, these claims seem pretty absurd. For example, "inextricably linked" implies that societies without white supremacy and/or patriarchy wouldn't oppress animals.

Your main issue seems to be the claim that these harms are linked, but you just respond by only saying how you feel reading the quote, which isn't a particularly valuable approach. It seems like it would be much more productive... (read more)

Lessons and updates

The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.

In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.

There are four categories of lessons and updates:

  • Undoing updates made because of FTX
  • Appreciating the
... (read more)

This post spends a lot of time touting the travel involved in Alice’s and Chloe’s jobs, which seems a bit off to me. I guess some people deeply value living in beautiful and warm locations and doing touristy things year-round, but my impression is that this is not very common. “Tropical paradises” often lack much of the convenience people take for granted in high-income countries, such as quick and easy access to some products and services that make life more pleasant. I also think most people quickly get bored of doing touristy things when it goes beyond a few weeks per year, and value being close to their family, friends, and the rest of their local community. Constantly packing and traveling can also be tiring and stressful, especially when you’re doing it for others. 

Putting those things together, it’s plausible that Alice and Chloe eventually started seeing the constant travel as a drawback of the job, rather than as a benefit.

[anonymous]5mo78

"The past few years seem to have proven the value of the EA community" 

This is the second CEA post to make claims like this without mentioning the FTX fraud. 

I lead the team at GWWC and thought it might help for me to share some quick context, clarifications, and thoughts (sorry for the delay, I was on leave). I've kept this short and in bullet points.

  • Firstly, thank you for writing this. I think that broadly you are correct in the view that FTX has done much more damage than is commonly recognised within the EA community; however, I think that this effect is overstated in your post for various reasons (some of which have already been outlined by others in the comments).
  • Here is our Growth Dashboard (live metrics, unaudited, but mostly accurate) and a specific monthly graph for when pledges are created (as opposed to their start date which can be any date a pledger chooses, although it is often the day they pledge).
  • When you get a bit more granular, you can see that GWWC pledge data can be quite spiky due to (a) large advocacy moments (e.g. Sam Harris podcast, What We Owe The Future promotion, news articles, etc.) that then tend to cool down over the coming months after the spike; and (b) seasonality (e.g. giving season and New Year's Day) where people tend to pledge or donate at key moments (and we also focus our growth activit
... (read more)

Wish Swapcard was better? 

Swapcard, the networking and scheduling app for EA Global and EAGx events, has published their product roadmap — where anyone can vote on features they want to see!

Two features currently in the "Researching (Vote)" stage have been requested by our attendees since we began using Swapcard for our events:

1) Reschedule a meeting
2) External Calendar Synchronization

If these sound like features you want, I encourage you to take a moment to vote for them! Every vote counts.

Swapcard product roadmap

I'm concerned about EA falling into the standard "risk-averse bureaucracy" failure mode. Every time something visibly bad happens, the bureaucracy puts a bunch of safeguards in place. Over time the drag created by the safeguards does a lot of harm, but because the harm isn't as visible, the bureaucracy doesn't work as effectively to reduce it.

I would like to see Fermi estimates for some of these, including explicit estimates of less-visible downsides. For example, consider EA co-living, including for co-workers. If this was banned universally, my guess is that it would mean EAs paying many thousands of dollars extra in rent for housing and/or office space per month. It would probably lead to reduced motivation, increased loneliness, and wasted commute time among EAs. EA funding would become more scarce, likely triggering Goodharting for EAs who want to obtain funding, or people dying senselessly in the developing world.
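As a rough illustration of the kind of Fermi estimate being asked for here, a minimal sketch for the co-living example follows; every number in it is a hypothetical assumption chosen for illustration, not a figure from the comment above.

```python
# Back-of-the-envelope Fermi estimate: annual cost of a hypothetical blanket ban on EA co-living.
# Every input is an assumption for illustration only.

num_people_affected = 200          # assumed number of EAs in co-living arrangements
extra_rent_per_month = 1_500       # assumed extra rent per person per month (USD) without co-living
extra_commute_hours_per_week = 3   # assumed added commute time per person per week
value_of_hour = 50                 # assumed value of an hour of an EA's time (USD)

annual_rent_cost = num_people_affected * extra_rent_per_month * 12
annual_commute_cost = num_people_affected * extra_commute_hours_per_week * 52 * value_of_hour
total_annual_cost = annual_rent_cost + annual_commute_cost

print(f"Extra rent:     ${annual_rent_cost:,.0f}/year")
print(f"Commute time:   ${annual_commute_cost:,.0f}/year")
print(f"Total (rough):  ${total_annual_cost:,.0f}/year")
# With these assumptions the visible cost is ~$5.2M/year, before the harder-to-quantify
# downsides (reduced motivation, loneliness) that the comment highlights.
```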

A ban on co-living doesn't seem very cost-effective to me. It seems to me that expanding initiatives like Basefund would achieve something similar, but be far more cost-effective.

One example of the evidence we’re gathering

We are working hard on a point-by-point response to Ben’s article, but wanted to provide a quick example of the sort of evidence we are preparing to share:

Her claim:  “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.” 


The truth (see screenshots below):

  1. There was vegan food in the house (oatmeal, quinoa, mixed nuts, prunes, peanuts, tomatoes, cereal, oranges) which we offered to cook for her.
  2. We were picking up vegan food for her.

Months later, after our relationship deteriorated, she went around telling many people that we starved her. She included details that depict us in a maximally damaging light - what could be more abusive than refusing to care for a sick girl, alone in a foreign country? And if someone told you that, you’d probably believe them, because who would make something like that up?

Evidence

  • The screenshots below show Kat offering Alice the vegan food in the house (oatmeal, quinoa, cereal, etc), on the first day she was sick. Then, when she wasn’t
... (read more)

It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here's what I came away with:

On December 15, Alice states that she'd had very little to eat all day, that she'd repeatedly tried and failed to find a way to order takeout to their location, and tries to ask that people go to Burger King and get her an Impossible Burger which in the linked screenshots they decline to do because they don't want to get fast food. She asks again about Burger King and is told it's inconvenient to get there.  Instead, they go to a different restaurant and offer to get her something from the restaurant they went to. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that 'they have some salads' but nothing else for her. She assures him that it's fine to not get her anything.


It seems completely reasonable that Alice remembers this as 'she was barely eating, and no one in the house was willing to go out and get her vegan food' - after all, the end result of all of those message exchanges was no food being obtained for Alice and her requests for Burger King being rep... (read more)

I should also add that this (including the question of whether Alice is credible) is not very important to my overall evaluation of the situation, and I'd appreciate it if Nonlinear spent their limited resources on the claims that I think are most shocking and most important, such as the claim that Woods said "your career in EA would be over with a few DMs" to a former employee after the former employee was rumored to have complained about the company. 

RobBensinger 7mo179

I'd appreciate it if Nonlinear spent their limited resources on the claims that I think are most shocking and most important, such as the claim that Woods said "your career in EA would be over with a few DMs" to a former employee after the former employee was rumored to have complained about the company. 

I agree that this is a way more important incident, but I downvoted this comment because:

  • I don't want to discourage Nonlinear from nitpicking smaller claims. A lot of what worries people here is a gestalt impression that Nonlinear is callous and manipulative; if that impression is wrong, it will probably be because of systematic distortions in many claims, and it will probably be hard to un-convince people of the impression without weighing in on lots of the claims, both major and minor.
  • I expect some correlation between "this concern is easier to properly and fully address" and "this concern is more minor", so I think it's normal and to be expected that Nonlinear would start with relatively-minor stuff.
  • I do think it's good to state your cruxes, but people's cruxes will vary some; I'd rather that Nonlinear overshare and try to cover everything, and I don't want to locally punis
... (read more)
IrenaK 7mo82

I think it's telling that Kat thinks the texts speak in their favor. Reading them was quite triggering for me, because I see a scared person asking the only people she has around her for basic things to help her in a really difficult situation, being made to feel like she is asking for too much, having to repeatedly advocate for herself (while sick), and still not getting her needs met. On one hand, she is encouraged by Kat to ask for help, but in practice it's not happening. Emerson and Drew especially, in that second thread, made it sound like she was being difficult, and she was constantly pushed to ask for less or for something other than what she asked for. Seriously, it took 2.5 hours the first day to get a salad, which she didn't want in the first place?! And the second day it's a vegetarian, not vegan, burger.

The way Alice constantly mentions that she doesn't want to bother them, and says that things are fine when they clearly are not, is very upsetting. I can't speak to how Alice felt, but it's no wonder she reports this as not being helped/fed when she was sick. To me, this is accurate, whether or not she got a salad and a vegetarian burger the next day.

Honestly, the burger... (read more)

Thank you! And a few reflections on recognition.

A few days ago, while I sat at the desk in my summer cabin, an unexpected storm swept in. It was a really bad storm, and when it subsided, a big tree had fallen, blocking the road to the little neighborhood where the cabin lies. Some of my neighbors, who are quite senior, needed to get past the tree and could not move it, so I decided to help. I went out with a chainsaw and quad bike, and soon the road was clear.

The entire exercise took me about two hours, and it was an overall pretty pleasurable experience, getting a break from work and being out in nature working with my body. However, afterward, I was flooded with gratitude from others, as if I had done something truly praiseworthy. Several neighbors came to thank me, telling me what a very nice young man I was; some even brought small gifts, and I heard people talking about what I had done for days afterward.

This got me thinking.

My first thought: These are very nice people, and it is obviously kind of them to come and thank me. But it seems a little off - when I tell them what I do every day, what I dedicate my life to, most of them nod politely and move on to talk about the weather. It seems... (read more)

I think the title is misleading. Africa is a large continent, and this was just one fellowship of ~15 people (of which I was one). There are some promising things going on in EA communities in Africa. At the same time, and I speak for several people when I say this, EA community building seems quite neglected in Africa, especially given how far purchasing power goes. And many community building efforts to date have been off the mark in one way or another.

I expect this to improve with time. But I think a better barometer of the health of EA in Africa is the communities that have developed around African metropolises (e.g. EA Abuja, EA Nairobi).


I also dislike Fumba being framed to the broader EA community as the perfect compromise. Fumba town was arguably the thing that the residents most disliked. There are a lot of valid reasons as to why the residency took place in Fumba, but this general rosy framing of the residency overlooks the issues it had and, more importantly, the lessons learned from them.

I agree that it's best to think of GPT as a predictor, to expect it to think in ways very unlike humans, and to expect it to become much smarter than a human in the limit.

That said, there's an important further question that isn't determined by the loss function alone---does the model do its most useful cognition in order to predict what a human would say, or via predicting what a human would say?

To illustrate, we can imagine asking the model to either (i) predict the outcome of a news story, (ii) predict a human thinking step-by-step about what will happen next in a news story. To the extent that (ii) is smarter than (i), it indicates that some significant part of the model's cognitive ability is causally downstream of "predict what a human would say next," rather than being causally upstream of it. The model has learned to copy useful cognitive steps performed by humans, which produce correct conclusions when executed by the model for the same reasons they produce correct conclusions when executed by humans.

(In fact (i) is smarter than (ii) in some ways, because the model has a lot of tacit knowledge about news stories that humans lack, but (ii) is smarter than (i) in other ways,... (read more)

[anonymous]1y78

Strong disagree.

the fact that EA is in existential danger

Seems kinda strong given this paragraph from Ben: "Perhaps surprisingly, recent polling data from Rethink Priorities indicates that most people still don’t know what EA is, those that do are positive towards it as a brand, overall affect scores haven't noticeably changed post FTX collapse, and only a few percent of respondents mentioned FTX when asked about EA open-ended. It seems like these results hold both in the general US population and amongst students at “elite universities”."

EA...is implicated in one of the largest frauds in history

Seems kinda strong given that it was one EA and two(?) other EAs who went along with it.

is in the midst of a sexual harassment scandal involving leading figures in EA

Seems kinda strong given that I can only think of one leading figure and I'm not even sure I'd call him that.

morale is at an all-time low

Right?? Many of us have been depressed for months, but that's just not a sustainable reaction. EA has reached a size and level of visibility now that is sure to keep it continuously embroiled in various controversies and scandals from now on. We can't just mourn and hang our heads in shame for... (read more)

EA has reached a size and level of visibility now that is sure to keep it continuously embroiled in various controversies and scandals from now on. We can't just mourn and hang our heads in shame for the rest of our lives.

One animal welfare advocate told me something like "You EAs are such babies. There are entire organizations devoted to making animal advocacy look bad, sending 'undercover investigators' into organizations to destroy trust, filing frivolous claims and lawsuits to waste time, placing stories in the media which paint us in the worst light possible, etc. Yet EA has a couple of bad months in the press and you all want to give up?"

I found that a helpful reframe.

I trade global rates for a large hedge fund, so I think I can give the inside view on how financial market participants think about this.

First, the essential claim is true - no one in rates markets talks about the theme of AI driving a massive increase in potential growth. 

However, even if this did become accepted as a potential scenario it would be very unlikely to show up in government bond yields so using yields as evidence of the likelihood of the scenario is, imho, a mistake. I'll give a number of reasons.

  1. Rates markets don't price in events (even ones that are fully known) more than one or two years ahead of time (Y2K, contentious elections in Italy or France, etc.). This is generally outside participants' time horizons, but also...
  2. A lot can happen in two years (much less ten years). Major terrorist attack, pandemic, nuclear war to name three possibilities all of which would fully torpedo any bet you would make on AI, no matter how certain you are of the outcome.
  3. The premise is not obviously true that higher growth leads to higher real yields. That is one heuristic among many when thinking about what real yields should do. It's important to think about the mechanism here
... (read more)

To add one more person's impression, I agree with ofer that the apology was "reasonable," I disagree with him that your post "reads as if it was optimized to cause as much drama as possible, rather than for pro-social goals," and I agree with Amber Dawn that the original email is somewhat worse than something I'd have expected most people to have in their past. (That doesn't necessarily mean it deserves any punishment decades later and with the apology – non-neurotypical people can definitely make a lot of progress between, say, their early twenties and later in life, in understanding how their words affect others and how edginess isn't the same as being sophisticated.)

I think this is one of these "struggles of norms" where you can't have more than one sacred principle, and ofer's and my position is something like "it should be okay to say 'I don't know what's true' on a topic where the truth seems unclear ((but not, e.g., something like Holocaust denial))." Because a community that doesn't prioritize truth-seeking will run into massive troubles, so even if there's a sense in which kindness is ultimately more important than truth-seeking (I definitely think so!), it just doesn't make sens... (read more)

In the most respectful way possible, I strongly disagree with the overarching direction put forth here. A very strong predictor of engaged participation and retention in advocacy, work, education and many other things in life is the establishment of strong, close social ties within that community.

I think this direction will greatly reduce participation and engagement with EA, and I'm not even sure it will address the valid concerns you mentioned.

I say this despite the fact that I didn't have super close EA friends in the first 3-4 years, and still managed to motivate myself to work on EA stuff as well as successful policy advocacy in other areas. When it comes to getting new people to partake in self-motivated, voluntary social causes/projects, one of the first things I do is to make sure they find a friend to keep them engaged, and this likelihood is greatly increased if they simply meet more people.

I am also of the opinion that long-term engagement relying on unpaid, ad-hoc community organising is much more unreliable than paid work. I think other organisers will agree when I say: organising a community around EA for the purpose of deeply engaging EAs is time-consuming, and great... (read more)

Hey Richard, thanks for starting the discussion! I'd suggest making it easier to submit answers to these questions anonymously e.g. via an anonymous Google Form. I think that will help with opening up the discussion and making the brainstorming more fruitful.

  • We suspect that a huge number of EAs who would be excellent candidates don’t apply, thinking themselves not good enough.
  • This is a shame. Historically, about half of those who have made it all the way through the program and achieved funding didn’t even think they’d be accepted. 
  • Doctors think they lack the commercial skills, business students think they lack the research skills, and researchers think they lack the interpersonal skills. The truth is that nobody comes onto the program ready. That’s what the program is for. Moreover, year one of running the charity is where you pilot, test, learn, and become an expert: skillful and capable. Almost nobody actually hits the ground running. We know this too. We are looking for people who have the potential to BECOME great founders, in time.
  • Furthermore, very few people have a good sense of what it actually takes. That’s because they’ve never done anything like this before. So they underrate themselves, not knowing what it takes. We, on the other hand, having started a bunch of charities, do have a good sense of what it takes. So you’re probably best off applying and trusting our vetting process.

  • Specifically,
... (read more)

Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft. 

I think this is a good guide, and thank you for writing it. I found the bit on how to phrase event advertising particularly helpful.

One thing I would like to elaborate on is the 'rent-seekers' bit. I'm going to say something that disagrees with a lot of the other comments here. I think we need to be careful about how we approach such 'rent-seeking' conversations. This isn't a criticism of what you wrote, as you explained it really well, but more of a trend I've noticed recently in EA discourse and this is a good opportunity to mention it. 

It's important to highlight that not all groups are equal, demographically. I co-lead a group in a city where the child poverty rate has gone from 24% to a whopping 42% in 5 years, and which remains one of the poorest cities in the UK. I volunteer my time at a food bank and can tell you that it's never been under stronger demand. Simply put, things are tough here. One of the things I am proudest of in our EA group is that we've done a load of outreach to people who face extra barriers to participating in academia and research, and as a result have a group with a great range of life backgrounds. I'm sure it's not the only EA group to achie... (read more)

I think there's a lot of truth to the points made in this post.

I also think it's worth flagging that several of them - networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces - are at least somewhat zero-sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly less available on net if it starts to overwhelm. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)

Second flag is that I don't know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go in knowing that they take that as a risk, and to decide whether it's worth it for them.

On the other side, people taking this advice can do a lot of great networking and creating a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.

Third flag is I have an un-fleshed out worry that this advice funges against doing things outside Berkeley/SF that are more valuable c... (read more)

This is a side-note, but I dislike the EA jargon terms hinge/hingey/hinginess and think we should use the terms "critical juncture" and "criticalness" instead. "Critical juncture" is the common term used in political science, international relations and other social sciences. It's better theorised and empirically backed than "hingey", doesn't sound silly, and is more legible to a wider community.

Critical Junctures - Oxford Handbooks Online

The Study of Critical Junctures - JSTOR

https://users.ox.ac.uk/~ssfc0073/Writings%20pdf/Critical%20Junctures%20Ox%20HB%20final.pdf 

https://en.wikipedia.org/wiki/Critical_juncture_theory 

FWIW, I've had similar thoughts: I used to think being veg*n was, in some sense, really morally important and not doing it would be really letting the side down. But, after doing it for a few years, I felt much less certain about it.*

To press though, what seems odd about the "the other things I do are so much more impactful, why should I even worry about this?" line is that it has an awkward whisper of self-importance and that it would license all sorts of other behaviours. 

To draw this out with a slightly silly and not perfect analogy, imagine we hear a story about some medieval king who sometimes, but not always, kicked people and animals that got in his way. When asked by some brave lackey, "m'lord, but why do you kick them; surely there is no need?" The king replies (imagine a booming voice for best effect) "I am very important and do much good work.  Given this, whether I kick or not kick is truly a rounding error, a trifle, on my efforts and I do not propose to pay attention to these consequences".

I think that we might grant that what the king says is true - kicking things is genuinely a very small negative compared to the large positive of his other actions. Howeve... (read more)

[anonymous]

A few thoughts on the democracy criticism. Don't a lot of the criticisms here apply to the IPCC? "A homogenous group of experts attempting to directly influence powerful decision-makers is not a fair or safe way of traversing the precipice."  IPCC contributors are disproportionately white very well-educated males in the West who are much more environmentalist than the global median voter, i.e. "unrepresentative of humanity at large and variably homogenous in respect to income, class, ideology, age, ethnicity, gender, nationality, religion, and professional background." So, would you propose replacing the IPCC with something like a citizen's assembly of people with no expertise in climate science or climate economics, that is representative wrt some of the demographic features you mention? 

You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically. Is that implication embraced? This would eg include all climate philanthropy, which is now at $5-9bn per year.

I note the rider says it's not directed at regular forum users/people necessarily familiar with longtermism. 

The Torres critiques are getting attention in non-longtermist contexts, especially with people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing with academic colleagues who have come across the Torres critiques; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places in which the critiques are not representing the source material entirely fairly.

I'll kick things off!

This month, I finished in second place at the Magic: the Gathering Grand Finals (sort of like the world championship). I earned $20,000 in prize money and declared that I would donate half of it to GiveWell, which gave me an excuse to talk up EA on camera for thousands of live viewers and post about it on Twitter.

This has been a whirlwind journey for me; I did unexpectedly well in a series of qualifying tournaments. Lots of luck was involved. But I think I played well, and I've been thrilled to see how interested my non-EA followers are in hearing about charity stuff (especially when I use Magic-related metaphors to explain cause prioritization).

Thank you for looking into the numbers! While I don't have a strong view on how representative the EA Leaders forum is, taking the survey results about engagement at face value doesn't seem right to me.

On the issue of long-termism, I would expect people who don't identify as long-termists to now report being less engaged with the EA Community (especially with the 'core') and to identify as EA less. Long-termism has become a dominant orientation in the EA Community, which might put people off the EA Community even if their personal views and actions related to doing good haven't changed, e.g. their donation amounts and career plans. The same goes for looking at how long people have been involved with EA - people who aren't compelled by long-termism might have dropped out of identifying as EA without actually changing their actions.

One thing I often see on the forum is a conflation of 'direct work' and 'working at EA orgs'. These strike me as two pretty different things, where I see 'working at EA orgs' as meaning 'working at an organisation that explicitly identifies itself as EA' and 'direct work' as being work that directly aims to improve lives, as opposed to aiming to e.g. make money to donate. My view is that the vast majority of EAs should be doing direct work but not at EA orgs - working in government, at think tanks, in foundations and in influential companies. Conflating these two concepts seems really bad because it encourages people to focus on a very narrow subset of 'direct impact' jobs - those that are at the very few, small organisations which explicitly identify with the EA movement.

A trap I think a lot of us fall into at some time or other is thinking that in order to be a 'good EA' you have to do ALL THE THINGS: have a directly impactful job, donate money to a charity you deeply researched, live frugally, eat vegan etc. When, inevitably, you don't live up to a bunch of these standards, it's easy to assume othe... (read more)

I don't really agree with your second and third point. Seeing this problem and responding by trying to create more 'capital letter EA jobs' strikes me as continuing to pursue a failing strategy.

What (in my opinion) the EA Community needs is to get away from this idea of channelling all committed people to a few organisations - the community is growing faster* than the organisations, and those numbers are unlikely to add up in the mid term.

Committing all our people to a few organisations seriously limits our impact in the long run. There are plenty of opportunities to have a large impact out there - we just need to appreciate them and pursue them. One thing I would like to see is stronger profession-specific networks in EA.

It's catastrophic that new and long-term EAs now consider their main EA activity to be applying for the same few jobs, instead of trying to increase their donations or investing in promising non-'capital letter EA' careers.

But this is hardly surprising given past messaging. The only reason EA organisations can get away with having very expensive hiring rounds for the applicants is because there are a lot of strongly committed people out there willing to take on that cost. Organisations cannot get away with this in most of the for-profit sector.

*Though this might be slowing down somewhat, perhaps because of this 'being an EA is applying unsuccessfully for the same few jobs' phenomenon.

Hi Alexey,

I appreciate that you’ve taken the time to consider what I’ve said in the book at such length. However, I do think that there’s quite a lot that’s wrong in your post, and I’ll describe some of that below. Though I think you have noticed a couple of mistakes in the book, I think that most of the alleged errors are not errors.

I’ll just focus on what I take to be the main issues you highlight, and I won’t address the ‘dishonesty’ allegations, as I anticipate it wouldn’t be productive to do so; I’ll leave that charge for others to assess.

tl;dr:

  • Of the main issues you refer to, I think you’ve identified two mistakes in the book: I left out a caveat in my summary of the Baird et al (2016) paper, and I conflated overhead costs and CEO pay in a way that, on the latter aspect, was unfair to Charity Navigator.
  • In neither case are these errors egregious in the way you suggest. I think that: (i) claiming that the Baird et al (2016) paper should cause us to believe that there is ‘no effect’ on wages is a misrepresentation of that paper; (ii) my core argument against Charity Navigator, regarding their focus on ‘financial efficiency’ metrics like overhead costs, is both successful and accurat
... (read more)

Yeah, I don't necessarily mind an informal tone. But the reality is, I read [edit: a bit of] the appendix doc and I'm thinking, "I would really not want to be managed by this team and would be very stressed if my friends were being managed by them. For an organisation, this is really dysfunctional." And not in an "understandably risky experiment gone wrong" kind of way, which some people are thinking about this as, but in a "systematically questionable judgement as a manager" way. Although there may be good spin-off convos around "how risky orgs should be" and stuff. And maybe the point of this post isn't to say, "Nonlinear did a reasonably sufficient job managing employees and can expect to do so in the future" but rather, "I feel slandered and lied about and I want to share my perspective."

[anonymous]

Just want to signal my agreement with this.

My personal guess is that Kat and Emerson acted in ways that were significantly bad for the wellbeing of others. My guess is also that they did so in a manner that calls for them to take responsibility: to apologise, reflect on their behaviour, and work on changing both their environment and their approach to others to ensure this doesn't happen again. I'd guess that they have committed a genuine wrongdoing.

I also think that Kat and Emerson are humans, and this must have been a deeply distressing experience for them. I think it's possible to have an element of sympathy and understanding towards them, without this undermining our capacity to also be supportive of people who may have been hurt as a result of Kat and Emerson's actions.

Showing this sort of support might require that we think about how to relate with Nonlinear in the future. It might require expressing support for those who suffered and recognising how horrible it must have been. It might require that we think less well of Kat and Emerson. But I don't think it requires that we entirely forget that Kat and Emerson are humans with human emotions and that this must be pretty diffi... (read more)

I’m Chana, a manager on the Community Health team. This comment is meant to address some of the things Ben says in the post above as well as things other commenters have mentioned, though very likely I won’t have answered all the questions or concerns. 

High level

I agree with some of those commenters that our role is not always clear, and I’m sorry for the difficulties that this causes. Some of this ambiguity is intrinsic to our work, but some is not, and I would like people to have a better sense of what to expect from us, especially as our strategy develops. I'd like to give some thoughts here that hopefully give some clarity, and we might communicate more about how we see our role in the future.

For a high level description of our work: We aim to address problems that could prevent the effective altruism community from fulfilling its potential for impact. That looks like: taking seriously problems with the culture, and problems from individuals or organizations; hearing and addressing concerns about interpersonal or organizational issues (primarily done by our community liaisons); thinking about community-wide problems and gaps and occasionally trying to fill those;... (read more)

Thanks for sharing. I think it was brave and I appreciated getting to read this. I'm sorry you've had to go through this and am glad to hear you're feeling optimistic.

Jason

I'm struggling to see how releasing information already provided to the investigation would obstruct it. A self-initiated investigation is not a criminal, or even a civil, legal process -- I am much less inclined to accept it as an adequate justification for a significant delay, especially where potentially implicated people have not been put on full leaves of absence.

Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.

Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.

And possibly with those numbers humans shouldn't be dating in general, ignoring EA?

Universities studies in both the US and the UK have found that only 2 - 3% of allegations are false.

This is not a fair description. The way people get such statistics is by assuming all accusations are true unless there is strong evidence against them, but there is a large number with no strong evidence either way, and researchers should not just assume they are all true.

A good first place to start is the Wikipedia article on the subject, which features a wide range of estimates, almost all of which are higher than the 2-3% you cite, and some of which are dramatically higher.

https://en.wikipedia.org/wiki/False_accusation_of_rape

Alexander also has a good blog post on this:

https://slatestarcodex.com/2014/02/17/lies-damned-lies-and-social-media-part-5-of-%E2%88%9E/

Your own website lists a slightly higher range, 2-4%

[redacted]

If we look at the source you supply for the 2% we see a different story:

Despite reforms intended to increase the number of rape investigations that proceed to prosecution, the study found that suspects were charged in only 15 percent of the 850 reported rapes. Rape complaints were subsequently withdrawn in 15.1 percent of the cases, and 46.4 percent of the complaints

... (read more)

In the past two years, the technical alignment organisations which have received substantial funding include

Your post does not actually say this, but when I read it I thought you were saying that these are all the organizations that have received major funding in technical alignment. I think it would have been clearer if you had said "include the following organizations based in the San Francisco Bay Area:" to make it clear you're discussing a subset.

Anyway, here are the public numbers, for those curious, of $1 million+ grants in technical AI safety in 2021 and 2022 (ordered by total size) made by Open Philanthropy:

  • Redwood Research: $9.4 million, and then another grant for $10.7 million
  • Many professors at a lot of universities: $14.4 million
  • CHAI: $11.3 million
  • Aleksander Madry at MIT: $1.4 million
  • Hofvarpnir Studios: $1.4 million
  • Berkeley Existential Risk Initiative - CHAI collaboration: $1.1 million
  • Berkeley Existential Risk Initiative - SERI MATS Program: $1 million

The Alignment Research Center received much less: $265,000.

There isn't actually any public grant saying that Open Phil funded Anthropic. However, that isn't to say that they couldn't have made a non-public grant. It was p... (read more)

I found this clear and reassuring. Thank you for sharing.

Earlier this year ARC received a grant for $1.25M from the FTX foundation. We now believe that this money morally (if not legally) belongs to FTX customers or creditors, so we intend to return $1.25M to them.

It may not be clear how to do this responsibly for some time depending on how bankruptcy proceedings evolve, and if unexpected revelations change the situation (e.g. if customers and creditors are unexpectedly made whole) then we may change our decision. We'll post an update here when we have a more concrete picture; in the meantime we will set aside the money and not spend it.

We feel this is a particularly straightforward decision for ARC because we haven't spent most of the money and have other supporters happy to fill our funding gap. I think the moral question is more complex for organizations that have already spent the money, especially on projects that they wouldn't have done if not for FTX, and who have less clear prospects for fundraising.

(Also posted on our website.)

This article from The Wall Street Journal suggests that what happened was more like "taking funds from customers with full knowledge" than like a mistake:

In a video meeting with Alameda employees late Wednesday Hong Kong time, Alameda CEO Caroline Ellison said that she, Mr. Bankman-Fried and two other FTX executives, Nishad Singh and Gary Wang, were aware of the decision to send customer funds to Alameda, according to people familiar with the video.

(See also this article by The New York Times, which describes the same video meeting.[1])

There are other signs of fraud. For example:

  • Reuters reports that FTX had a "backdoor" which "allowed Bankman-Fried to execute commands that could alter the company's financial records without alerting other people, including external auditors," according to their sources.
  • On November 10, the official FTX account on Twitter announced that FTX was ordered to facilitate Bahamian withdrawals by Bahamian regulators. Days later, the Securities Commission of the Bahamas claimed that that was a lie. As Scott Alexander put it, "this might have been a ruse to let insiders withdraw first without provoking suspicion."
  • FTX's legal and compliance team resigned very
... (read more)

To be clear, this is an account that joined from Twitter to post this comment (link).

I have a similar-ish story. I became an EA (and a longtermist, though I think that word did not exist back then) as a high school junior, after debating a lot of people online about ethics and binge-reading works from Nick Bostrom, Eliezer Yudkowsky and Brian Tomasik. At the time, being an EA felt so philosophically right and exhilaratingly consistent with my ethical intuitions. Since then I have almost only had friends that considered themselves EAs.

For three years (2017, 2018 and 2019) my friends recommended I apply to EA Global. I didn’t apply in 2017 because I was underage and my parents didn’t let me go, and didn’t apply in the next two years because I didn’t feel psychologically ready for a lot of social interaction (I’m extremely introverted). 

Then I excitedly applied for EAG SF 2020, and got promptly rejected. And that was extremely, extremely discouraging, and played an important role in the major depressive episode I was in for two and a half years after the rejection. (Other EA-related rejections also played a role.)

I started recovering from depression after I decided to distance myself from EA. I think that was the only correct choice for me. I still care a lot about making the future go well, but have resigned myself to the fact that the only thing I can realistically do to achieve that goal is donate to longtermist charities.

Thank you for writing this - a lot of what you say here resonates strongly with me, and captures well my experience of going from very involved in EA back in 2012-14 or so, to much more actively distancing myself from the community for the last few years. I've tried to write about my perspective on this multiple times (I have so many half written Google docs) but never felt quite able to get to the point where I had the energy/clarity to post something and actually engage with EA responses to it. I appreciate this post and expect to point people to it sometimes when trying to explain why I'm not that involved in or positive about EA anymore.

I don’t think (or, you have not convinced me that) it’s appropriate to use CEA’s actions as strong evidence against Jacy. There are many obvious pragmatic justifications for doing so that are only slightly related to the factual basis of the allegations: i.e., even if the allegations are unsubstantiated, the safest option for a large organization like CEA would be to cut ties with him regardless. Furthermore, saying someone has “incentives to lie” about their own defense also feels inappropriate (with some exceptions/caveats), since that basically applies to almost every situation where someone has been accused. The main thing that you mentioned which seems relevant is his “documented history of lying,” which (I say this in a neutral rather than accusatory way) I haven’t yet seen documentation of.

Ultimately, these accusations are concerning, but I’m also quite concerned by the idea of throwing around seemingly dubious arguments in service of vilifying someone.

I think it's especially confusing when longtermists working on AI risk think there is a non-negligible chance total doom may befall us in 15 years or less, whereas so-called neartermists working on deworming or charter cities are seeking payoffs that only get realized on a 20-50 year time horizon.

Taking the question literally, searching the term ‘social justice’ in EA forum reveals only 12 mentions, six within blog posts, and six comments, one full blog post supports it, three items even question its value, the remainder being neutral or unclear on value.

That can't be right. I think what may have happened is that when you do a search, the results page initially shows you only 6 each of posts and comments, and you have to click on "next" to see the rest. If I keep clicking next until I get to the last pages of posts and comments, I can count 86 blog posts and 158 comments that mention "social justice", as of now.

BTW I find it interesting that you used the phrase "even question its value", since "even" is "used to emphasize something surprising or extreme". I would consider questioning the values of things to be pretty much the core of the EA philosophy...

To better understand your view, what are some cases where you think it would be right to either

  1. not invite someone to speak, or
  2. cancel a talk you've already started organising,

but only just?

That is, cases where it's just slightly over the line of being justified.

I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:

-The detailed block-by-block approach to making the case for both cancel culture's prevalence and its potential harm to the movement.

-An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.

-Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.

But there's still a piece I think is missing. I don't fault Larks for this directly, since the post is already very long and covers a lot of ground, but it's the area that I always find myself wanting to hear more about in these discussions, and so would like to hear more about from either Larks or others in reply to this comment. It relates to both of these quotes.

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wr
... (read more)
This seems like a tradeoff to me

Yes, it's a tradeoff, but Hanson is so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's trying to deliberately walk as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify that. I think it's lacking nuance if you blanket-object to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out if someone is doing an absolutely atrocious job at it.

For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it'... (read more)

Lots! Treat all of the following as ‘things Will casually said in conversation’ rather than ‘Will is dying on this hill’ (I'm worried about how messages travel and transmogrify, and I wouldn't be surprised if I changed lots of these views again in the near future!). But some things include:

  • I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%. 
  • I find ‘takeoff’ scenarios from AI over the next century much less likely than I used to. (Fast takeoff in particular, but even the idea of any sort of ‘takeoff’, understood in terms of moving to a higher growth mode, rather than progress in AI just continuing existing two-century-long trends in automation.) I’m not sure what numbers I’d have put on this previously, but I’d now put medium and fast takeoff (e.g. that in the next century we have a doubling of global GDP in a 6 month period because of progress in AI) at less than 10%. 
  • In general, I think it’s much less likely that we’re at a super-influential time in history; my next blog post will be about this idea 
  • I’m much more worried about a great power war in my lifeti
... (read more)

[comment I'm likely to regret writing; still seems right]

It seems a lot of people are reacting by voting, but the karma of the post is 0. It seems to me up-votes and down-votes are really not expressive enough, so I want to add a more complex reaction.

  • It is really very unfortunate that the post is framed around the question of whether Will MacAskill is or is not honest. This is wrong, and makes any subsequent discussion difficult. (strong down-vote) (Also, the conclusion ("he is not") is not really supported by the evidence.)
  • It is (and was even more in the blog version) over-zealous, interpreting things uncharitably, and suggesting extreme actions. (downvote)
  • At the same time, it seems really important to have an open and critical discussion, and culture where people can challenge 'canonical' EA books and movement leaders. (upvote)
  • Carefully going through the sources and checking if papers are not cherry-picked and represented truthfully is commendable. (upvote)
  • Having really good epistemics is really important, in particular with the focus on long-term. Vigilance in this direction seems good. (upvote)

So it seems really a pity the post was not framed as a question s... (read more)

[My views only]

Although few materials remain from the early days of Leverage (I am confident they acted to remove themselves from wayback, as other sites link to wayback versions of their old documents which now 404), there are some interesting remnants:

  • A (non-wayback) website snapshot from 2013
  • A version of Leverage's plan
  • An early Connection Theory paper

I think this material (and the surprising absence of material since) speaks for itself - although I might write more later anyway.

Per other comments, I'm also excited by the plan of greater transparency from Leverage. I'm particularly eager to find out whether they still work on Connection Theory (and what the current theory is), whether they addressed any of the criticism (e.g. 1, 2) levelled at CT years ago, whether the further evidence and argument mentioned as forthcoming in early documents and comment threads will materialise, and generally what research (on CT or anything else) have they done in the last several years, and when this will be made public.

Thanks for asking, Yadav. I can confirm that:

  • Nonlinear has not been invited or permitted to run sessions or give talks relating to their work, or host a recruiting table at EAG and EAGx conferences this year. 
  • Kat ran a session on a personal topic at EAG Bay Area 2023 in February. EDIT: Kat, Emerson and Drew also had a community office hour slot at that conference.
    Since then we have not invited or permitted Kat or Emerson to run any type of session.
  • We have been considering blocking them from attending future conferences since May, and were planning on making that decision if/when Kat or Emerson applied to attend a future conference.

(I was previously a fund manager on the LTFF)

Agree with a lot of what Asya said here, and very appreciative of her taking the time to write it up. 

One complimentary point I want to emphasize: I think hiring a full-time chair is great, and that LTFF / EA Funds should in general be more willing to hire fund managers who have more time and less expertise. In my experience fund managers have very little time for the work (they’re both in part-time roles, and often extremely busy people), and little support (there’s relatively little training / infrastructure / guidance), but a fair amount of power. This has a few downsides:

  1. Insular funding: Fund managers lean heavily on personal networks and defer to other people’s impressions of applicants, which means known applicants are much more likely to be funded. This meant LTFF had an easy time funding EAs/rationalists, but was much less likely to catch promising, non-EA candidates who weren't already known to us. (This is already a common dynamic in EA orgs, but felt particularly severe because of our time constraints.) 
  2. Less ambitious funding: Similarly, it's particularly time-intensive to evaluate new organizations with substantial
... (read more)

Reading this post is very uncomfortable in an uncanny valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off.

I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to money and the rest of us. 

Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and how their funding comes with lots of strings attached. Which I guess is ok, since it's their money. No one is stopping anyone from getting their own funding and doing their own thing.

Except for the fact that 80k (and other thought leaders? I'm not sure who works where) have told the community for years that funding is solved and no one else should worry about giving to EA, which has stifled all alternative funding in the community.

I don't really like this thing where you speak on behalf of black EAs.

I think you should let black EAs speak for themselves or not comment on it.

In my experience, there seem to be distortionary epistemic effects when someone speaks on behalf of a minority group. Often, the person so speaking assigns them harms, injustices or offenses that the relevant members of those groups may not actually endorse.

When it's done on my behalf, I find it pretty patronising, and it's annoying/icky?

I don't want to speak for black EAs but it's not clear to me that the "hurt" you mention is actually real.

bruce

Thanks for writing this post!

I feel a little bad linking to a comment I wrote, but the thread is relevant to this post, so I'm sharing in case it's useful for other readers, though there's definitely a decent amount of overlap here.

TL;DR

I personally default to being highly skeptical of any mental health intervention that claims to have a ~95% success rate and a PHQ-9 reduction of 12 points over 12 weeks, as this is a clear outlier among treatments for depression. The effectiveness figures from StrongMinds are also based on studies that are non-randomised and poorly controlled. There are other questionable methodological issues, e.g. around adjusting for social desirability bias. The topline figure of $170 per head for cost-effectiveness is also possibly an underestimate, because while ~48% of clients were treated through SM partners in 2021, and Q2 results (pg 2) suggest StrongMinds is on track for ~79% of clients treated through partners in 2022, the expenses and operating costs of the partners responsible for these clients were not included in the methodology.

(This mainly came from a cursory review of StrongMinds documents, and not from examining HLI analyses, though I do think "we’re... (read more)

I read that critique with hope, but ultimately I found it largely unconvincing.

I'm very surprised by the claim that mosquito nets keep their beneficiaries in poverty. Mosquito nets are not trying to lift people out of poverty, and yet there is some evidence that they do help lift people out of poverty to some extent. I really don't understand how distributing nets can keep people in poverty. 

Kalulu says:

if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.

To be honest, if you asked someone who had received $1000 from GiveDirectly whether it impacted their lives, I'm pretty confident they would say a hearty yes. It also allows the lived experiences of the poor to dictate what happens to the money -- something which Kalulu demands.

GiveWell believes that all of the GiveWell Top Charities outperform GiveDirectly, and I think this is correct, unless you place an unusually low amount of value on saving a life. Again, GiveWell have validated whether they... (read more)

This break-even analysis would be more appropriate if the £15m had been ~burned, rather than invested in an asset which can be sold.

If I buy a house for £100k cash and it saves me £10k/year in rent (net costs), then after 10 years I've broken even in the sense of [cash out]=[cash in], but I also now have an asset worth £100k (+10y price change), so I'm doing much better than 'even'.
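
A minimal sketch to make the same arithmetic explicit, using the illustrative numbers above (the symbols $P$, $r$, $t$ and $A$ are shorthand introduced here, not anything from the analysis being discussed): let $P = £100\text{k}$ be the purchase price, $r = £10\text{k}$ the net rent saved per year, $t = 10$ years the horizon, and $A \approx P$ the asset's resale value, ignoring price changes and transaction costs. Then

$$\text{cash in} = r\,t = £10\text{k} \times 10 = £100\text{k} = P = \text{cash out},$$

so the cash flows merely break even, while the wealth position relative to renting is

$$r\,t + (A - P) \approx r\,t = £100\text{k},$$

because the asset is still owned at the end. That is the sense in which the £15m purchase is better than 'even': it bought a sellable asset rather than being a sunk cost.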

[on phone] Thank you so much for all of your hard work managing the fund. I really appreciated it and I think that it did a lot of good. I doubt that you could ever have reasonably expected this outcome, so I don't hold you responsible for it.

Reading this announcement was surprisingly emotional for me. It made me realise how many exceptionally good people who I really admire are going to be deeply impacted by all of this. That's really sad in addition to all the other stuff to be sad about. I probably don't have much to offer other than my thoughts and sympathy but please let me know if I can help.

I suppose that I should disclose that I recently received a regrant from FTX which I will abstain from drawing on for the moment. I don't think that this has much, if any, relevance to my sentiments however.

"It is also similarly the case that EA's should not support policy groups without clear rationale, express aims and an understanding that sponsorship can come with the reasonable assumption from general public, journalists, or future or current members, that EA is endorsing particular political views."

  • This doesn't seem right to me -- I think anyone who understands EA should explicitly expect more consequentialist grant-makers to be willing to support groups whose political beliefs they might strongly disagree with, if they also thought the group was going to take useful action with their funding.
  • As an observer, I would assume EA funders are just thinking through who has [leverage, influence, is positioned to act in the space, etc.] and putting aside any distaste they might feel for the group's politics more readily than non-EA funders (e.g. the CJR program also funded conservative groups working on CJR whose views the program director presumably didn't like or agree with for similar reasons).

"Other mission statements are politically motivated to a degree which is simply unacceptable for a group receiving major funds from an EA org."

  • This seems to imply that EA funde
... (read more)

It's very much not obvious to me that EAs should prefer progressive Democratic candidates in general, or Salinas in particular.

Speaking personally, I am generally not excited about Democratic progressives gaining more power in the party relative to centrists, and I'm pretty confident I'm not alone here in that.[1]

I also think it's false to claim that Salinas's platform as linked gives much reason to think she will be a force for good on global poverty, animal welfare, or meaningful voting reform. (I'd obviously change my mind on this if there are other Salinas quotes that pertain more directly to these issues.)

There are also various parts of her platform that make me think there's a decent chance that her time in office will turn out to be bad for the world by my lights (not just relative to Carrick). I obviously don't expect everyone here to agree with me on that, and I'm certainly not confident about it, but I also don't want broad claims that progressives are better by EA values to stand uncontested, because I personally don't think that's true.

  1. ^

    To be clear, I think this is very contestable within an EA framework, and am not trying to claim that my political pref

... (read more)

Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:

  • Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On this view, bunkers and civilizational recovery plans don't make much sense.
  • S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.
  • The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future, $30 million for a 0.0001% reduction in x-risk, don't seem totally implausible. The world is b
... (read more)

No offense to Neel's writing, but it's instructive that Scott manages to write the same thesis so much better. It:

  • is 1/3 the length
    • Caveats are naturally interspersed, e.g. "Philosophers shouldn't be constrained by PR."
    • No extraneous content about Norman Borlaug, leverage, etc
  • has a less bossy title
  • distills the core question using crisp phrasing, e.g. "Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?" (my emphasis)

...and a ton of other things. Long-live the short EA Forum post!

Mau

So what to do? I’d like to note that some of the knee jerk reactions when hearing of the problem are examples of things not to do.

This seems overly quick to rule out a large class of potential responses. Assuming there are (or will be) more "vultures," it's not clear to me that the arguments against these "things not to do" are solid. I have these hesitations (among others) [edited for clarity and to add the last two]:

  • "The rationale for giving out high risk grants stands and hasn’t changed."
    • Sure, but the average level of risk has increased. So accepting the same level of risk means being more selective.
  • "decreasing the riskiness of the grants just means we backslide into becoming like any other risk averse institution."
    • Even if we put aside the previous point, riskiness can go down without becoming as low as that of typical risk-averse institutions.
  • "Increasing purity tests. [...] As a community that values good epidemics, having a purity test on whether or not this person agrees with the EA consensus on [insert topic here] is a death blow to the current very good MO."
    • There are other costly signals the community could use.
  • "So not funding young people means this t
... (read more)

He asserts that "numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or 'canceled.'"  This doesn't match my experience.

I also have not had this experience, though that doesn't mean it didn't happen, and I'd want to take this seriously if it did happen.

However, Phil Torres has demonstrated that he isn't above bending the truth in service of his goals, so I'm inclined not to believe him. See previous discussion here. Example from the new article:

It’s not difficult to see how this way of thinking could have genocidally catastrophic consequences if political actors were to “[take] Bostrom’s argument to heart,” in Häggström’s words.

My understanding (sorry that the link is probably private) is that Torres is very aware that Häggström generally agrees with longtermism and provides the example as a way not to do longtermism, but that doesn't stop Torres from using it to argue that this is what longtermism implies and therefore all longtermists are horrible.

I should note that even if this were written by someone else, I probably wouldn't have investigated the supposed intimidation, silencing, or canc... (read more)

Many thanks for this, Rohin. Indeed, your understanding is correct. Here is my own screenshot of my private announcement on this matter.

This is far from the first time that Phil Torres references my work in a way that is set up to give the misleading impression that I share his anti-longtermism view. He and I had extensive communication about this in 2020, but he showed no sympathy for my complaints. 

Thanks a lot for writing this up and sharing this. I have little context beyond following the story around CARE and reading this post, but based on the information I have, these seem like highly concerning allegations, and ones I would like to see more discussion around. And I think writing up plausible concerns like this clearly is a valuable public service.

Out of all these, I feel most concerned about the aspects that reflect on ACE as an organisation, rather than that which reflect the views of ACE employees. If ACE employees didn't feel comfortable going to CARE, I think it is correct for ACE to let them withdraw. But I feel concerned about ACE as an organisation making a public statement against the conference. And I feel incredibly concerned if ACE really did downgrade the rating of Anima International as a result. 

That said, I feel like I have fairly limited information about all this, and have an existing bias towards your position. I'm sad that a draft of this wasn't run by ACE beforehand, and I'd be keen to hear their perspective. Though, given the content and your desire to remain anonymous, I can imagine it being unusually difficult to hear ACE's thoughts before pu... (read more)

I'm curious why there hasn't been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:

  1. Unlike existential risk from other sources (e.g. an asteroid) AI x-risk is unique because humans would be replaced by other beings, rather than completely dying out. This means you can't simply apply a naive argument that AI threatens total extinction of value to make the case that AI safety is astronomically important, in the sense that you can for other x-risks. You generally need additional assumptions.
  2. Total utilitarianism is generally seen as non-speciesist, and therefore has no intrinsic preference for human values over unaligned AI values. If AIs are conscious, there don't appear to be strong prima facie reasons for preferring humans to AIs under hedonistic utilitarianism. Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.
  3. Total utilitarianism generally recommends large population sizes. Accelerating AI can be modeled as a kind of "population accelerationism". Extremely large AI populations could be preferable under utilitarianism compared to small human populations, even those with high per-ca
... (read more)

I mostly want to +1 to Jonas’ comment and share my general sentiment here, which overall is that this whole situation makes me feel very sad. I feel sad for the distress and pain this has caused to everyone involved. 

I’d also feel sad if people viewed Owen here as having anything like a stereotypical sexual predator personality.

My sense is that Owen cares extraordinarily about not hurting others. 

It seems to me like this problematic behavior came from a very different source – basically problems with poor theory of mind and underestimating power dynamics. Owen can speak for himself on this; I’m just noting as someone who knows him that I hope people can read his reflections genuinely and with an open mind of trying to understand him. 

That doesn’t make Owen’s actions ok – it’s definitely not – but it does make me hopeful and optimistic that Owen has learnt from his mistakes and will be able to tread cautiously and not make problems of this sort again.

Personally, I hope Owen can be involved in the community again soon. 

 

[Edited to add: I’m not at all confident here and just sharing my perspective based on my (limited) experience. I don’t think people should give my opinion/judgment much weight. I haven’t engaged at all deeply in understanding this, and don’t plan to engage more]

The great majority of my post focuses on process concerns. The primary sources introduced by Nonlinear are strong evidence of why those process concerns matter, but the process concerns stand independent. I agree that Nonlinear often paraphrased its subjects before responding to those paraphrases; that's why I explicitly pulled specific lines from the original post that the primary sources introduced by Nonlinear stand as evidence against.

My ultimate conclusion was and is explicitly not that Nonlinear is vindicated on every point of criticism. It is that the process was fundamentally unfair and fundamentally out of line with journalistic standards and a duty to care that are important to uphold. Not everyone who is put in a position of needing to reply to a slanted article about them is going to be capable of a perfectly rigorous, even-keeled, precise response that defuses every point of realistically defusable criticism, which is one reason people should not be put in the position of needing to respond to those articles.

It feels really cruxy to me whether you or Ben received any actual evidence of whether Alice or Chloe had lied or misrepresented anything in that 1 week.

Because to me the actual thing I felt from reading the original post's "Response from Nonlinear" was largely them engaging in some kind of justification or alternative narrative for the overall practices of Nonlinear... but I didn't care about that, and honestly it felt like it kind of did worse for them because it almost seemed like they were deflecting from the actual claims of abuse.

To me, if you received 0 evidence that there were any inaccuracies in the accusations against Nonlinear in that 1 week, then I think they really dropped the ball in not prioritizing at least something to show that you shouldn't trust the original sources. Maybe they just thought they had enough time to talk it out, and maybe it really was just like, woah, we need to dig through records from years ago, this is going to take longer than we expected.

But if you did receive some evidence that maybe Alice and Chloe had lied or exaggerated at all... to me that would absolutely justify waiting another week for more evidence, and being much more cautious abou... (read more)

I like GiveDirectly a lot, and I think you're doing great work! I'm glad to be able to point people to GiveDirectly who are skeptical of less clear cut interventions or who feel very strongly about letting recipients decide what's most important to them, and I think donations you receive go further than ones to the vast majority of charities.

On the other hand, it doesn't seem like this post engages with reasons EAs might disagree with the claim that "ending extreme poverty through cash transfers should be a central EA cause"? For example, just within global health and development GiveWell estimates the opportunities they're able to identify are at least 10x more effective than cash transfers. Do you think cash is undervalued by GiveWell and the EAs who defer to them by >10x?

It seems a little weird to me that most of the replies to this post are jumping to the practicalities/logistics of how we should/shouldn't implement official, explicit, community-wide bans on these risky behaviours.

I totally agree with OP that all the things listed above generally cause more harm than good. Most people in other cultures/communities would agree that they're the kind of thing which should be avoided, and most other people succeed in avoiding them without creating any explicit institution responsible for drawing a specific line between correct/incorrect behavior or implementing overt enforcement mechanisms.

If many in the community don't like these kinds of behaviours, we can all contribute to preventing them by judging things on a case-by-case basis and gently but firmly letting our peers know when we disapprove of their choices. If enough people softly disapprove of things like drug use, or messy webs of romantic entanglement, this can go a long way towards reducing their prevalence. No need to draw bright lines in the sand or enshrine these norms in writing as exact rules.

PSA: Apropos of nothing, did you know you can hide the community section?

(You can get rid of it entirely in your settings as well.)

I think this is a ridiculous idea, but the linked article (and headline of this post) is super clickbait-y. This idea is mentioned in two sentences in the court documents (p. 20 of docket 1886, here). All we know is that Gabriel, Sam's brother, sent a memo to someone at the FTX Foundation mentioning the idea. We have no idea if Sam even heard about this or if anyone at the Foundation "wanted" to follow through with it. I'm sure all sorts of wild possibilities got discussed around that time. Based on the evidence, it's a huge leap to say there were desires or plans to act on them.

When EVF announced the new interim CEOs 3 months ago, I noted that there wasn't a bio for EVF's board members on their website, and that it was hard to find much information on Google. At this moment in time, it's the most upvoted comment on that post, with 35 upvotes and 29 agreements. Howie agreed to update the website, but as of now it doesn't look like anything has been added.

I'd like to raise this again: it would be good to update EVF's website with board member bios for transparency, and maybe a contact email address. I like that this press release has bios for Zach and Eli, and a link to Becca's forum account. Could you add a bio for Rebecca? Again, it's hard to find much info; since there was no bio in the previous press release, I don't know anything about her.

About 10+ people (5 Constellation members) have mentioned that there is social pressure to defer or act a certain way when at Constellation.

At least as written, this is so broad as to be effectively meaningless. All organisations exert social pressure on members to act in a certain way (e.g. to wipe down the exercise machines after use). Similarly, basically all employers require some degree of deference to management; typical practice is that management solicit feedback from workers but in turn compliance with instructions is mandatory.

What you describe could be bad... or it could be totally typical. There's no real way for the reader to judge based on what you've written.

In my view, Bostrom's email would have been offensive in the 90s and it is offensive now, for good reason.

Agree.

Bostrom's apology is defensively couched - emphasising the age of the email, what others wrote on the listserv, that it would be best forgotten, and the fear that people might smear him. I think that is cowardly and shows a disappointing lack of ownership of his actions.

I think these details are important context. I disagree with the final sentence.

When you are willfully disengaged from the empathy that underlies common decency

I don't see grounds for describing Bostrom in such harsh terms.

Ofer

The original email (written 26 years ago) was horrible, but Bostrom's apology seems reasonable.

If you look at the most horrible thing that each person has done in their entire life, it's likely that almost everyone (at that age) has done things that are at least as horrible as writing that email.

The OP reads as if it was optimized to cause as much drama as possible, rather than for pro-social goals.

Manifold Markets ran a prediction tournament to see whether forecasters would be able to predict the winners! For each Cause Exploration Prize entry, we had a market on "Will this entry win first or second place?". Check out the tournament rules and view all predictions here.

I think overall, the markets did okay -- they managed to get the first place entry ("Organophosphate pesticides and other neurotoxicants") as the highest % to win, and one of the other winners was ranked 4th ("Violence against women and girls").  However, they did miss out on the two dark horse winners ("Sickle cell disease" and "shareholder activism"), which could have been one hypothetical way markets would outperform karma. Specifically, none of the Manifold forecasters placed a positive YES bet on either of the dark horse candidates.

 

I'm not sure that the markets were much better predictors than just EA Forum Karma -- and it's possible that most of the signal from the markets was just forecasters incorporating EA Forum Karma into their predictions. The top 10 predictions by Karma also had 2 of the 1st/2nd place winners:

And if you include honorable mentions in the analysis, EA Forum Karma actually ... (read more)
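A minimal sketch (with invented numbers, not the real tournament data) of how one might check whether market prices carried signal beyond Forum karma, e.g. by comparing Spearman rank correlations:

```python
# Invented illustration: compare how well Forum karma and market prices
# rank the eventual winners. None of these numbers are real tournament data.
from scipy.stats import spearmanr

entries = {
    # hypothetical entry:        (karma, market_prob, won_a_top_prize)
    "Organophosphate pesticides": (180, 0.30, 1),
    "Violence against women":     (150, 0.12, 1),
    "Sickle cell disease":        ( 60, 0.02, 1),
    "Shareholder activism":       ( 55, 0.02, 1),
    "Non-winning entry A":        (200, 0.20, 0),
    "Non-winning entry B":        ( 90, 0.05, 0),
}

karma  = [v[0] for v in entries.values()]
prices = [v[1] for v in entries.values()]
won    = [v[2] for v in entries.values()]

print("karma  vs outcome: rho = %.2f" % spearmanr(karma, won).correlation)
print("market vs outcome: rho = %.2f" % spearmanr(prices, won).correlation)
# If this last correlation is high, much of the market's apparent signal
# may just be forecasters reading karma.
print("market vs karma:   rho = %.2f" % spearmanr(prices, karma).correlation)
```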

I appreciate the section on tradeoffs, and I think it makes me more likely to trust the community health team.

Hi, Joel, Sam, and Michael -

We really appreciate this type of thoughtful engagement. We find a lot of value in hearing well-reasoned critiques of our research and weaknesses in our communication: thank you for sharing this!

Facilitating feedback like this is a big part of why we hold transparency as one of our values. You're right that it shouldn't be as difficult as it is to understand why we made the decisions we did in our model and how our deworming estimates rely on priors and evidence. Our deworming cost-effectiveness analysis (and frankly other models as well) falls short of our transparency goals–this is a known shortfall in how we communicate about our research, and we are working on improving this.

We want to take the time to more deeply consider the points raised in this post, and do plan on sharing more thinking about our approach. Thanks again for your critical engagement with our research.

I broadly agree with your thesis and would love to see more university groups prioritize the types of activities you mention. I think EA university community building needs to hear this critique right now.

But I'm worried people could over-update on this, so I want to introduce three caveats:

1. You need a core group: Placing more emphasis on collectively skilling one another up only seems possible for groups that already have 3+ people seriously committed to doing the most good, which can't be said for many new EA groups. New groups might benefit from overmarketing to build up a core group.


2. You need to maintain some mass outreach component: Ideally, every student at your uni knows that an EA group exists and roughly what it does. I think there are low-cost ways to do this, like mass email and dept. emails with mail-o-meter that funnel towards an intro event or fellowship (or fellowship alternative). Nevertheless, I'd be worried that a group that focuses too much on internal skill-building will miss finding many of the "instant EAs" that could be a great fit for the group.


3. It's hard to onboard new members into a group that's skilling up: Imagine your friend tells you about this cool new... (read more)

I was at an EA party this year where there was definitely an overspend of hundreds of pounds of EA money on food which was mostly wasted. As someone who was there, at the time, this was very clearly avoidable. 

It remains true that this money could have changed lives if donated to EA charities instead (or even used less wastefully towards EA community building!) and I think we should view things like this as a serious community failure which we want to avoid repeating.

At the time, I felt extremely uncomfortable / disappointed with the way the money was used. 

I think if this happened very early into my time affiliated with EA, it would have made me a lot less likely to stay involved - the optics were literally "rich kids who claim to be improving the world in the best way possible and tell everyone to donate lots of money to poor people are wasting hundreds of pounds on food that they were obviously never going to eat". 

I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don't think the obligations are any weaker than they were - we should just have a slightly lower cost effectiveness bar for funding things than before.

Starting EA community offices

Effective altruism

Some cities, such as Boston and New York, are home to many EAs and some EA organizations, but lack dedicated EA spaces. Small offices in these cities could greatly facilitate local EA operations. Possible uses of such offices include: serving as an EA community center, hosting talks or reading groups, providing working space for small EA organizations, reducing overhead for event hosting, etc.

(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)

FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team... (read more)

I'm so impressed that Pablo asked for an external review when he was feeling potentially burnt out and not sure about the impact of the wiki. That takes some incredible epistemic (and emotional!) chops. This is an example of EA at its finest.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'll help convert/conquer all the civilizations of the middle east and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the

... (read more)

My Forum comments will be less frequent, but probably spicier.

Looking forward to this.

[anonymous]

I don't find the racism critique of longtermism compelling. Human extinction would be bad for lots of currently existing non-white people. Human extinction would also be bad for lots of possible future non-white people. If future people count equally, then not protecting them would be a great loss for future non-white people. So, working to reduce extinction risks is very good for non-white people.

Buck

I think Carl Shulman makes some persuasive criticisms of this research here:

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term

... (read more)

"which makes me think that it's likely that Leverage at least for a while had a whole lot of really racist employees."

"Leverage" seems to have employed at least 60 people at some time or another in different capacities. I've known several (maybe met around 15 or so), and the ones I've interacted with often seemed like pretty typical EAs/rationalists. I got the sense that there may have been few people there interested in the neoreactionary movement, but also got the impression the majority really weren't.

I just want to flag that I really wouldn't want EAs generally to think that "people who worked at Leverage are pretty likely to be racist," because this seems quite untrue and quite damaging. I don't have much information about the complex situation that represents Leverage, but I do think that the sum of the people ever employed by them still holds a lot of potential. I'd really not want them to get or feel isolated from the rest of the community.

Thanks Habryka for raising the bar on the amount of detail given in grant explanations.

Cullen

OP gave some reasoning for their views on their recent blog post:

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.

We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private facilities. This was pitched to us at a time when FTX was making huge commitments to the GCR community, which made resources appear more abundant and lowered our own bar. Since its purchase, the space has gotten meaningful use for community events and gatherings. But with the collapse of FTX, our bar for this kind of work rose, and the original grant would no longer have risen to the level where we would want to provide funding.

Because this was a large asset, we agreed with Effective Ventures ahead of time that we would ask them to sell the Abbey if the event space, all things considered, turned out not to be sufficiently cost-effe

... (read more)
lyra

[Liberally edited to clarify / address misunderstandings]

I assume it's obvious to everyone that it's a bad idea to make [things that are perceived as] unwanted romantic or sexual advances towards people, and that serious action should be taken if someone receives repeated complaints about that. I assume everyone agrees that "ignore complaints of harassment if a few people say they're pretty sure the perpetrator is a good person / they're a pillar of the community / their work is valuable / etc" is a bad policy. 

I assume everyone has a shared goal along the lines of "make the community safe and welcoming for people in general, and especially for underrepresented, vulnerable, or easy-to-make-feel-unwelcome groups".[1]

As a potential member of such a group who has had significant interactions with Owen, I think I have information that might help people to pursue that goal more effectively. I assume one sensible way to make decisions that improve the welcomingness for particular groups is to ask representatives of that group whether a particular decision would make them feel more or less welcome. In the absence of general solicitation to that effect (at least with respect to t... (read more)

I strongly agree with the end of your post:

Remember:

Almost nobody is evil.

Almost everything is broken.

Almost everything is fixable.

I want you to know that I don't think you're a villain, and that your pain makes me sad. I wrote some comments that were critical of your responses ... and still I stand by those comments. I dislike and disapprove of the approach you took. But I also know that you're hurting, and that makes me sad.

So... I'd like you to dwell on that for a minute.

I wrote something in an edited paragraph deep within a subthread, and thought I should raise the point more directly. My sense is that you and Emerson have some characteristics or habits that I would call flawed or bad, and that it was justified to publicly write something about that.

But I also have a sense that Ben's post contains errors.

I think you are EAs and rationalists at heart. I respect that. And I respect the (unknown to me but probably large) funds you've put into trying to do good. Because of that, I think Ben & co should've spent more time to get Ben's initial post right.

And I guess I'm sad about this situation because I feel that both Ben's post and your post were worded in somewhat unfair ways, an... (read more)

Which financial claims seem to you like they have been debunked?

  1. The original post uses the low amount of money in Alice's bank account as a proxy for financial dependence and wealth disparity, which could often be an appropriate proxy but here elides that Alice also owned a business that additionally produced passive income, though there's disagreement about whether this was in the range of $600/month (your estimate) or $3k/month (what NL claims Alice told them and shows a screenshot of Emerson referencing).

  2. Being owed salary is very different from being owed reimbursements. We have a very strong norm (backed up legally) of paying wages on time. Companies that withhold wages or don't pay them promptly are generally about to go out of business or doing something super shady. On the other hand, reimbursements normally take some time, and being slow about reimbursements would be only a small negative update on NL.

  3. NL claims the reimbursements were late because Alice stopped filing for reimbursement, and once she did these were immediately paid. If NL is correct here (and this seems pretty likely to me) then this falls entirely on Alice and shouldn't be included in claims

... (read more)

Thanks for sharing all this information Kat. It seems like this situation has been very difficult for everyone involved. Members of the community health team will look through the post, comments and appendix and work out what our next steps (if any) will be. 

Hello Jason,

With apologies for the delay. I agree with you that I am asserting HLI's mistakes have further 'aggravating factors' which I also assert invite highly adverse inference. I had hoped the links I provided offered clear substantiation, but evidently they did not (my bad). Hopefully my reply to Michael makes them somewhat clearer, but in case not, I give a couple of examples below with as clear an explanation as I can muster.

I will also be linking and quoting extensively from the Cochrane handbook for systematic reviews - so hopefully even if my attempt to clearly explain the issues fails, a reader can satisfy themselves that my view on them agrees with expert consensus. (Rather than, say, "Cantankerous critic with idiosyncratic statistical tastes flexing his expertise to browbeat the laity into acquiescence".)

0) Per your remarks, there are various background issues around reasonableness, materiality, timeliness etc. I think my views basically agree with yours. In essence: I think HLI is significantly 'on the hook' for work (such as the meta-analysis) it relies upon to make recommendations to donors - who will likely be taking HLI's representations on its results and reliability (cf... (read more)

While Dawn claims it is "important" that Singer filed a demurrer rather than contesting factual allegations in court, no one should update on that legal strategy. Almost any rational litigant would have done the same thing given the procedural posture.

For background at the 10,000 foot level, at the very early stages of litigation, you can file a demurrer ("motion to dismiss for failure to state a claim" in federal court) claiming that even if everything in the complaint is true, it doesn't give rise to liability. You generally cannot ask the court to dismiss the case at that point because the alleged facts aren't true. The reason is that if the plaintiff's legal theory is sound, she should ordinarily have an opportunity to develop the facts through discovery (document production, depositions, etc.) before the court addresses factual issues.

Discovery is time consuming and expensive, so if you have an argument that "even if everything you say is true, there's no liability here" and an argument that "what you say isn't true," it is almost always better to present only the former argument at the demurrer stage. If you start disputing alleged facts, you're implicitly telling the court t... (read more)

Leopold - thanks for a clear, vivid, candid, and galvanizing post. I agree with about 80% of it.

However, I don't agree with your central premise that alignment is solvable. We want it to be solvable. We believe that we need it to be solvable (or else, God forbid, we might have to actually stop AI development for a few decades or centuries). 

But that doesn't mean it is solvable. And we have, in my opinion, some pretty compelling reasons to think that it is not solvable even in principle: (1) given the diversity, complexity, and ideological nature of many human values (which I've written about in other EA Forum posts, and elsewhere), (2) given the deep game-theoretic conflicts between human individuals, groups, companies, and nation-states (which cannot be waved away by invoking Coherent Extrapolated Volition, or 'dontkilleveryoneism', or any other notion that sweeps people's profoundly divergent interests under the carpet), and (3) given that humans are not the only sentient stakeholder species that AI would need to be aligned with (advanced AI will have implications for every other of the 65,000 vertebrate species on Earth, and most of the 1,000,000+ invertebrate species, ... (read more)

dsj

I’m calling for a six month pause on new font faces more powerful than Comic Sans.

Look, I think Will has worked very hard to do good and I don’t want to minimize that, but at some point (after the full investigation has come out) a pragmatic decision needs to be made about whether he and others are more valuable in the leadership or helping from the sidelines. If the information in the article is true, I think the former has far too great a cost. 

This was not a small mistake. It is extremely rare for charitable foundations to be caught up in scandals of this magnitude, and this article indicates that a significant amount of the fallout could have been prevented with a little more investigation at key moments, and that clear signs of unethical behaviour were deliberately ignored. I think this is far from competent.

We are in the charity business. Donors expect high standards when it comes to their giving, and bad reputations directly translate into dollars. And remember, we want new donors, not just to keep the old ones. I simply don’t see how “we have high standards, except when it comes to facilitating billion dollar frauds” can hold up to scrutiny. I'm not sure we can "credibly convince people" if we keep the current leadership in place. The monetary c... (read more)

An attempt to express the “this is really bad” position. 

These are not my views, but an attempt to describe others.

Imagine I am a person who occasionally experiences racism or who has friends or family for whom that is the case. I want a community I and my friends feel safe in. I want one that shares my values and acts predictably (dare I say, well-aligned). Not one that never challenges me, but one where lines aren’t crossed and if they are, I am safe. Perhaps where people will push back against awful behaviour so I don’t have to feel constantly on guard. 

Bostrom’s email was bad and his apology:

  • Was focused on himself
  • Decided to take a long detour into the very subjects that make me feel unsafe in the first place

And to add to that, rather than the community saying “yes that was bad” a top response is “I stand with Bostrom”. I understand that people might say “trust us, you know we are good and not racist” but maybe I don’t trust them. Or maybe my friends or family are asking me about if I know this Bostrom guy or if he’s part of my community. 

And maybe I am worried that Bostrom et al don’t have the interests of people of colour at heart when they think ... (read more)

For me, unfortunately, the discourse surrounding Wytham Abbey seems like a sign of epistemic decline in the community, or at least on the EA Forum.
 

  • The amount of attention spent on this seems to be a textbook example of bikeshedding

    Quoting Parkinson: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved." A reactor is so vastly expensive and complicated that an average person cannot understand it (see ambiguity aversion), so one assumes that those who work on it understand it. However, everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to implement their own proposal and demonstrate personal contribution.

    In case of EAs, there are complicated, high-stakes things, for example what R&D efforts to support around AI. This has scale of billions of dollars now, much higher stakes in the future, and there is a lot to understand.

    In contrast, absolutely anyone can easily form opinions about the appropriateness of a manor house purchase, based on reading a few tweets.
     
  • Repeatedly, the tone of the discussion is a bit like "I've read a twee
... (read more)

3) Critics (eg @CarlaZoeC @LukaKemp) warned that EA should decentralize funding so it doesn’t become a closed validation loop where the people in SBF’s inner circle get millions to promote his & their vision for EA while others don’t. But EA funding remained overcentralized

 

I think the FTX regranting program was the single biggest push to decentralize funding EA has ever seen, and it's crazy to me that anyone could look at what FTX Foundation was doing and say that the key problem is that the funding decisions were getting more, rather than less, centralized. (I would be interested in hearing from those who had some insight into the program whether this seems incorrect or overstated.)

That said, first, I was a regrantor, so I am biased, and even aside from the tremendous damage caused by the foundation needing to back out and the possibility of clawbacks, the fact that at least some of the money which was being regranted was stolen makes the whole thing completely unacceptable. However, it was unacceptable in ways that have nothing to do with being overly centralized.

Jason

At least MacAskill and Beckstead need to resign from the EVF board, take a leave of absence, publicly recuse from responding to the FTX situation, or submit a detailed explanation of the facts and circumstances that render none of these actions appropriate. They are just too intertwined in the events that happened to be able to manage the conflict of interest and risks to impartiality (or at least the appearance of the same). 

That's not me saying that I think they committed misconduct, but I think the circumstances would easily "cause a reasonable person with knowledge of the relevant facts to question [their] impartiality in the matter," and that's enough. Cf. 5 CFR § 2635.502(a) (Standards of Conduct for Employees of the Executive Branch [of the U.S. Government]). Although I'm not going to submit that EA officials should always follow government ethics rules, this is a really clear-cut case.

Hopefully they have recused and this just isn't being stated due to PR concerns (because some people might misinterpret recusal as an admission of wrongdoing rather than as respect for a basic principle of good governance).

Jeff - this is a useful perspective, and I agree with some of it, but I think it's still loading a bit too much guilt onto EA people and organizations for being duped and betrayed by a major donor. 

EAs might have put a little bit too much epistemic trust in subject matter experts regarding SBF and FTX -- but how can we do otherwise, practically speaking?

In this case, I think there was a tacit, probably largely unconscious trust that if major VCs, investors, politicians, and journalists trusted SBF, then we can probably trust him too. This was not just a matter of large VC firms vetting SBF and giving him their seal of approval through massive investments (flawed and rushed though their vetting may have been.) 

It's also a matter of ordinary crypto investors, influencers, and journalists largely (though not uniformly) thinking FTX was OK, and trusting him with billions of dollars of their money, in an industry that is actually quite skeptical a lot of the time. And major politicians, political parties, and PACs who accepted millions in donations trusting that SBF's reputation would not suffer such a colossal downturn that they would be implicated. And journalists from leadi... (read more)

As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself.

What makes this implausible for me is that SBF has been involved in EA since very early on (~2013 or earlier?). Back then, there was no money, power or fame to speak of, so why join this fringe movement?

Thank you for this timely and transparent post, and for all the additional work I'm sure your team is shouldering in response to this situation.

With Giving Tuesday and general end-of-year giving on the horizon, I think any indication from OPP of new anticipated funding gaps would be useful to the EA community as a whole. It would also be helpful to get a sense as soon as the information is available of what the overall cause area funding distribution in EA is likely to look like after this week.

I'm grateful to you guys for making this post! :)

I think a lot of the criticisms shared recently have been very valid but overall the EA community is amazing and has also accomplished some great things and I'm super thankful for it. Great idea to create this thread to help keep that in mind!

Thanks for this post.

A few data points and reactions from my somewhat different experiences with EA:

  • I've known many EAs. Many have been vegan and many have not (I'm not). I've never seen anyone "treat [someone] as non-serious (or even evil)" based on their diet.
  • A significant minority achieves high status across EA contexts while loudly disagreeing with utilitarianism.
  • You claim that EA takes as given "Not living up to this list is morally bad. Also sort of like murder." Of course failing to save lives is sort of like murder, for sufficiently weak "sort of." But at the level of telling people what to do with their lives, I've always seen community leaders endorse things like personal wellbeing and non-total altruism (and not always just for instrumental, altruistic reasons). The rank-and-file and high-status alike talk (online and offline) about having fun. The vibe I get from the community is that EA is more of an exciting opportunity than a burdensome obligation. (Yes, that's probably an instrumentally valuable vibe for the community to have -- but that means that 'having fun is murder' is not endorsed by the community, not the other way around.)
  • [Retracted; I generally support noti
... (read more)

Hi Haydn,

This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts' underperformance in the forecasting tournaments, and I think there might be something to that explanation. 

We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, and your points on competition with China are well taken.

What I felt was missing from the post was the counterfactual: what if the atomic scientists’ and defense intellectuals’ worst fears about their adversaries had been correct? It’s not hard to imagine. The USSR did seem poised to dominate in rocket capabilities at the time of Sputnik.

I think there’s some hindsight bias going on here. In the face of high uncertainty about an adversary’s intentions and capabilities, it’s not obvious to me that skepticism is the right response. Rather, we should weigh possible outcomes. In the Manhattan Project case, one of those possible outcomes ... (read more)

One option here could be to lend books instead. Some advantages:

  • Implies that when you're done reading the book you don't need it anymore, as opposed to a religious text which you keep and reference.

  • While the distributors won't get all the books back (and that's fine) the books they do get back they can lend out again.

  • Less lavish, both in appearance and in reality.

This is what we do at our meetups in Boston.

Thanks for writing this, really great post. 

I don't think this is super important, but when it comes to things like FTX I think it's also worth keeping in mind that besides the crypto volatility and stuff, a lot of the assets we're marking EA funding to aren't publicly traded, so the numbers should probably be taken with an even bigger pinch of salt than usual.

For example, the numbers for  FTX here are presumably backed out of the implied valuation from its last equity raise, but AFAIK this was at the end of January this year. Since then Coinbase (probably the best publicly traded comparator) stock has fallen ~62% in value, whereas FTX's nominal valuation hasn't changed in the interim since there hasn't been a capital raise. But presumably, were FTX to raise money today the implied valuation would reflect a somewhat similar move
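A minimal sketch of the mark-to-comparator adjustment described above; the ~$32B last-round valuation and the ~62% comparator move are assumptions for illustration, not audited figures:

```python
# Rough mark-to-comparator BOTEC, as described above. The last-round
# valuation (~$32B) and the comparator's move (~-62%) are illustrative assumptions.
last_round_valuation_bn = 32.0   # implied valuation at the last equity raise
comparator_return = -0.62        # publicly traded comparator's move since then

adjusted_bn = last_round_valuation_bn * (1 + comparator_return)
print(f"Nominal valuation:          ${last_round_valuation_bn:.1f}B")
print(f"Marked-to-comparator guess: ${adjusted_bn:.1f}B")
```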

Not a huge point, and in any case these kinds of numbers are always very rough proxies anyway since things aren't liquid, but I think maybe worth keeping in mind when doing BOTECs for EA funding

I appreciate the feedback! I will admit I had not seen Terminator in a while before writing that post. I also appreciate including Paul's follow-up, which is definitely clarifying. Will be clearer about the meaning of "influence" going forward.

Investment strategies for longtermist funders

Research That Can Help Us Improve, Epistemic Institutions, Economic growth

Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out. 

We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.

Some of the ways the strategies of altruistic funders may differ include:

  • Mission-correlated investing
... (read more)

I think this is a really well-written piece, and personally I've shared it with my interns and in general tentatively think I am more inclined to share this with my close contacts than most 80k articles for "generic longtermist EA career advice" (though obviously many 80k articles /podcasts have very useful details about specific questions).

2 things that I'm specifically confused about:

  1. As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

    (Possible ways that my anecdata is consistent with your claims:
    • These people are  often in the "Conceptual and empirical research on core longtermist topics" aptitudes camp, and success at conventional metrics is a weaker signal here than in other domains you listed.
    • Your notions of "success"/excellence are a much higher bar than completing a PhD at a top-N school.
    • My friends are wrong to think that getting a PhD/MD was a mistake.
  2. You mention that
... (read more)

Just a quick comment that I don't think the above is a good characterisation of how 80k assesses its impact. Describing our whole impact evaluation would take a while, but some key elements are:

  • We think impact is heavy tailed, so we try to identify the most high-impact 'top plan changes'. We do case studies of what impact they had and how we helped. This often involves interviewing the person, and also people who can assess their work. (Last year these interviews were done by a third party to reduce desirability bias). We then do a rough fermi estimate of the impact.

  • We also track the number of a wider class of 'criteria-based plan changes', but then take a random sample and make fermi estimates of impact so we can compare their value to the top plan changes.

If we had to choose a single metric, it would be something closer to impact-adjusted years of extra labour added to top causes, rather than the sheer number of plan changes.

We also look at other indicators like:

  • There have been other surveys of the highest-impact people who entered EA in recent years, evaluating which fraction came from 80k, which lets us make an estimate of the percentage of the EA workforce from 80

... (read more)

I think I agree with the core claims Buck is making. But I found the logical structure of this post hard to follow. So here's my attempt to re-present the core thread of the argument I think Buck is making:

In his original post, Will conditions on long futures being plausible, since these are the worlds that longtermists care about most. Let's assume from now on that this is the case. Will claims, based on his uniform prior over hinginess, that we should require extraordinary evidence to believe in our century's hinginess, conditional on long futures being plausible.  But there are at least two reasons to think that we shouldn't use a uniform prior. Firstly, it's more reasonable to instead have a prior that early times in human history (such as our time) are more likely to be hingey - for example because  we should expect humanity to expand over time, and also from considering technological advances.

Secondly: if we condition on long futures being plausible, then xrisk must be near-zero in almost every century (otherwise there's almost no chance we'd survive for that long). So observing any nonnegligible amount of (preventable) xrisk in our present time becomes very strong

... (read more)
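A toy calculation of the second point: under an assumed constant per-century risk, the probability of surviving many centuries collapses quickly, which is why conditioning on long futures pushes the typical century's risk toward zero. The numbers below are illustrative only.

```python
# Toy numbers: with any non-negligible constant per-century existential risk,
# surviving a long future is astronomically unlikely, so conditioning on long
# futures drives the inferred per-century risk toward zero.
for per_century_risk in (0.10, 0.01, 0.001):
    for centuries in (100, 1_000, 10_000):
        p_survive = (1 - per_century_risk) ** centuries
        print(f"risk {per_century_risk:6.1%}/century over {centuries:>6} centuries: "
              f"P(survive) = {p_survive:.2e}")
```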

*Logarithmic Scales of Pleasure and Pain*

Recall that while some distributions (e.g. the size of the leaves of a tree) follow a Gaussian bell-shaped pattern, many others (e.g. avalanches, size of asteroids, etc.) follow a long-tail distribution. Long-tail distributions have the general property that a large fraction of the volume is accounted for by a tiny percent of instances (e.g. 80% of the snow that falls from the mountain will be the result of the top 20% largest avalanches).

Keeping long-tails in mind: based on previous research we have conducted at the Qualia Research Institute we have arrived at the tentative conclusion that the intensity of pleasure and pain follows a long-tail distribution. Why?

First, neural activity on patches of neural tissue follows a log-normal distribution (an instance of a long-tail distribution).

Second, the extremes of pleasure and pain are so intense that they cannot conceivably be just the extremes of a normal distribution. This includes, on the positive end: Jhana meditation, 5-MeO-DMT peak experiences, and temporal lobe epilepsy (Dostoevsky famously saying he'd trade 10 years of his life for just a few moments of his good epileptic experience... (read more)
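A small simulation of the long-tail property described above, using an assumed log-normal with illustrative parameters (not QRI's estimates):

```python
# Small simulation of the long-tail claim above: under an assumed log-normal
# (parameters are illustrative), a small share of instances accounts for most
# of the total.
import numpy as np

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

top_n = len(intensities) // 5                      # the top 20% of instances
top_share = np.sort(intensities)[-top_n:].sum() / intensities.sum()
print(f"Share of total accounted for by the top 20%: {top_share:.0%}")
# If every instance were the same size, the top 20% would account for exactly 20%.
```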

I really wish we (as an EA community) didn't work so hard to accidentally make earning to give so uncool. It's a job that is well within the reach of anyone, especially if you don't have unrealistic expectations of how much money you need to make and donate to feel good about your contributions. It's also a very flexible career path and can build you good career capital along the way.

Sure talent gaps are pressing, but many EA orgs also need more money. We also need more people looking to donate, as the current pool of EA funding feels very over-concentrated in the hands of too few decision-makers right now.

I also wish we didn't accidentally make donating to AMF or GiveDirectly so uncool. Those orgs could continually absorb the money of everyone in EA and do great, life-saving work.

(Also, not to mention all the career paths that aren't earning to give or "work in an EA org"...)

Habryka

EAs out of the board

Let's please try to avoid phrasing things in ways as tribal as this. We have no idea what happened. Putting the identity of the board members front and center feels like it frames the discussion in a bad way.

I understand that you are using this as an example of something you think is untrue and to demonstrate the asymmetrical burden of refuting a lot of claims.

However, if you're prioritising, I would be most interested in whether it is true that you a) encouraged someone who you had financial and professional power over to drive without a driving licence; and b) encouraged someone in the same situation to smuggle drugs across international borders for you.

Whether or not they are formally an employee, encouraging people you have financial and professional power over to commit crimes unconnected to your mission is deeply unethical (and encouraging them to do this for crimes connected to your mission is also, at best, extremely ethically fraught).

tl;dr, GOP presidential candidate Will Hurd seems to be making AI alignment a key part of his platform. Is it worth trying to help him to get onto the debate stage?

disclaimers: 1. this post is about politics, obviously; 2. although I am a director at Cavendish Labs, everything expressed in this post is done entirely in a personal capacity, and in no way reflects any opinions of Cavendish Labs.

epistemic status: highly uncertain. mostly quick thoughts on my impressions, plus ten minutes of research. written in like 5 minutes.
 

So I was driving through New Hampshire today (on the way from Boston to Cavendish), when suddenly a thought hit me—aren’t people campaigning for president around here? So I pulled up some GOP events calendar, and indeed, Will Hurd’s event was starting in 20 minutes, a 7 minute drive away. I’d heard of Will Hurd before—but only in the context of him being a candidate polling at 0%. I went on his website, and came out pretty unimpressed; it seemed like a platform of a generic also-ran that might be fun to stumble upon on archive.org in 2026 and say “wow, totally forgot about this guy!”. But anyways, the allure of meeting a presidential candidate drew me in, a... (read more)

Pausing EA Forum Drama Developments Isn't Enough. We Need to Shut it All Down

A post published today calls for a drama schedule and enforced curfews of any EA Forum Drama Risk content. 

This curfew would be better than no curfew. I have respect for everyone who stepped up and upvoted it. It’s an improvement on the margin.

I refrained from upvoting because I think the post is understating the seriousness of the situation and asking for too little to solve it.

Read More: EA Forum Burners Urged to Pump the Brakes in Open Post

The key issue is not moderation capacity and forum-level drama; it’s what happens after drama gets to more-than-Twitter-levels of ragebait, also known as SuperTwitter Drama. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a burner account would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of reaching a SuperTwitter Drama, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is... (read more)

I'm sorry, but I consider myself EA-adjacent-adjacent. 

Isn't that a bit self-aggrandising? I prefer "aspiring EA-adjacent"

titotal

EA leaders should be held to high standards, and it's becoming increasingly difficult to believe that the current leadership has met those standards. I'm open to having my mind changed when the investigation is concluded and the leaders respond (and we get a better grasp on who knew what when). As it stands now, I would guess it would be in the best interest of the movement (in terms of avoiding future mistakes, recruitment, and fundraising) for those who have displayed significantly bad judgement to step down from leadership roles. I recognize that they have worked very hard to do good, and I hope they can continue helping in non-leadership roles. 

If anything this post supports some of the criticism – the account in the TIME article suggests OCB was responsible for finding promising students and placing them in high-profile jobs (neither of which was the case). It makes no mention of the fact that he and the accuser were seemingly already friends with an "unusually direct and honest" relationship (a statement the accuser presumably agrees with, as she's had a chance to vet this post). And that once he learned he had overstepped he was horrified and sought to make amends.

In my mind that's a lot of important context that was elided, and suggests an awkward misstep rather than something more sinister.

Great work Charity Entrepreneurship!

As a public health doctor in a low-income country, I read the initial cause areas and had quite a few concerns about implementation. Then when I read the longer summaries I saw you had thought about almost all of them, which is impressive - clearly done your homework ;)

Have a few comments

On the kangaroo care rollout front I have four thoughts

1. My instinct is that a generalist could well struggle with this initiative. They would be dealing at a high level with hospital management and senior staff at hospitals, and without medical expertise or at least a public health background they might not be taken very seriously and might struggle to make headway. As you've obviously researched yourselves, and as you've seen in the GiveWell review, sustaining kangaroo care in facilities is extremely difficult for a range of factors. That 2014 study managed only 5% sustainable practice in 4 African countries. A few NGOs have come around in our Ugandan facilities training midwives, and we are still quite bad at it (I haven't pushed it as hard as I should either).

2. I believe (moderate uncertainty) that cultural resistance, or even cultural norms, are an und... (read more)

I have now had a look at the analysis code. Once again, I find significant errors and - once again - correcting these errors is adverse to HLI's bottom line.

I noted before the results originally reported do not make much sense (e.g. they generally report increases in effect size when 'controlling' for small study effects, despite it being visually obvious small studies tend to report larger effects on the funnel plot). When you use appropriate comparators (i.e. comparing everything to the original model as the baseline case), the cloud of statistics looks more reasonable: in general, they point towards discounts, not enhancements, to effect size: in general, the red lines are less than 1, whilst the blue ones are all over the place.

However, some findings still look bizarre even after doing this. E.g. Model 13 (PET) and Model 19 (PEESE), not doing anything re. outliers, fixed effects, follow-ups etc., still report higher effects than the original analysis. These are both closely related to the Egger's test noted before: why would it give a substantial discount, yet these give a mild enhancement?

Happily, the code availability means I can have a look directly. All the basic data seems fine, a... (read more)

mvolz

It is not uncommon, and I will even say usual, that Nazi sympathisers are at least somewhat subtle about it.

This is not particularly subtle. Here's their section on the Holocaust: https://nyadagbladet.se/tag/forintelsen/

Here's an editorial written for Holocaust Remembrance Day. Their central claim is that the way to prevent antisemitism is to stop "lying" about how many Jews were killed. https://nyadagbladet.se/ledare/sa-forebygger-vi-den-verkliga-antisemitismen/

This is very classic Holocaust denialism. I don't think it's unreasonable to call a website that actively promotes ethnonationalism and Holocaust denialism "pro-Nazi", unless you think that the literal words "pro-Nazi" must appear somewhere in order to qualify.

Thanks Ludwig for raising the conversation around governance. It's something that is important to me and we're exploring how we can improve on this (and will share later any changes we plan to make). Michael has just left another comment which covers most of what I'd say right now so I don't have much more to add other than:

1. I'd like to flag that I don't believe this comment is accurate, and it seems very uncharitable:

I believe this structure was set up so the EVF board has central control over EA strategy.

CEA was set up before there was [added: much of] an EA movement (the term "effective altruism" was invented while setting up CEA to support GWWC/80,000 Hours). In recent years, several organisations have approached EVF so they can receive the same kind of operational and legal support. Some of these organisations have met EVF's bar for impact and thus been supported, and I'm aware some are in the process of spinning out of EVF after receiving initial support getting started.

2. In my experience leading GWWC for the past 2.5 years the EVF trustees have never "exerted influence" over our strategy.

During this time I have received helpful input from trustees (mostly working with Toby as our a... (read more)

Since I’m running the project in question (not Wytham Abbey), I would like to share my perspective as well.  (I reached out to the author of the comment, Bob, in a DM asking him to remove the previously posted addresses and we chatted briefly about some of these points privately but I also want to share my answers publicly.)

  1. ESPR can't return the property or the money at the moment because there is currently no mechanism that we are aware of that would make it possible to legally send money "back to FTX" such that it would reliably make its way back to customers who lost their money. We will wait and see how the bankruptcy proceedings play out which will likely take years. For now, I have a responsibility to the staff, to the property, and to the project. 
  2. This project is not an EA project. It covers a broader scope of world-improving activities and organizations. It is not part of the Czech EA organization. I also personally don’t own the property - I’m the CEO of a separate organization (not ESPR, not CZEA) that owns it.
  3. You ask that this purchase be disclosed publicly - this was always the plan. The transaction is very fresh and has only been finalized this week. We are i
... (read more)

Apologies for maybe sounding harsh: but I think this is plausibly quite wrong and nonsubstantive. I am also somewhat upset that such an important topic is explored in a context where substantial personal incentives are involved.

One reason is that a post that does justice to the topic should explore possible return curves, and this post doesn't even contextualize betting with how much money EA had at the time (~$60B) / has now (~$20B) until the middle of the post, where it mentions it in passing: "so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold." Arguing that some degree of risk aversion is, indeed, implied by diminishing returns is trivial and has few implications for practicalities.

I wish I had time to write about why I think altruistic actors probably should take a 10% chance of $15B over a 100% chance of $1B. The reverse being true would imply a very roughly ≥3x drop in marginal cost-effectiveness upon adding $15B of funding. But I basically think there would be ways to spend money scalably and at current "last dollar" margins.
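As a rough sketch of the kind of calculation at stake (my own illustration, not the commenter's model): assuming EA's existing resources are the ~$20B figure mentioned above and that returns diminish along a simple isoelastic curve, the certain $1B only beats the 10% shot at $15B once diminishing returns get quite steep.

```python
import numpy as np

def value(wealth, eta):
    """Isoelastic 'impact of total resources' curve; higher eta means faster
    diminishing returns. eta = 0 is risk-neutral (linear)."""
    if np.isclose(eta, 1.0):
        return np.log(wealth)
    return wealth ** (1 - eta) / (1 - eta)

existing = 20e9   # ~$20B of existing resources (figure taken from the comment)
certain, longshot, p = 1e9, 15e9, 0.10

for eta in [0.0, 0.5, 1.0, 1.5, 2.0]:
    ev_certain = value(existing + certain, eta)
    ev_bet = (1 - p) * value(existing, eta) + p * value(existing + longshot, eta)
    print(f"eta = {eta:.1f}: prefer the {'10% bet' if ev_bet > ev_certain else 'certain $1B'}")
```

Under these (debatable) assumptions the bet wins even with logarithmic returns (eta = 1) and only loses around eta ≈ 2; treat the exact crossover as an artifact of the toy curve rather than a real estimate.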

In GH, this sorta follows from how OP's bar didn't change that drastically in response to a substanti... (read more)

I was on the fence between posting this under my name vs. using an anonymous account. I decided to go ahead, because this is something I've discussed with other folks and it's something I feel pretty strongly about. I wanted to write this comment both to validate your experience and to say a few words about how I see the path forward.

I've had those experiences too: feeling dismissed, shut down, or like I'm not worth someone's time. 

But - and maybe this is because I have a stubborn, contrary, slightly masochistic, "oh yeah? I'll show you" streak - I stuck around. I'm not saying that this is the only way to go; if hanging out with other people in the EA community is causing you pain, I don't want that for you and it is 100% OK to go and do your own thing. 

But if you can: stick around.

Because here's the thing: not everyone is like that. I'd go so far as to say that folks with the attitude above are in the minority. There are SO many humane, warm, kind people in this movement. There are people with a sense of humor and a healthy bit of self-doubt and a generous willingness to meet others where they are. When I hang out with them, I feel inspired to work harder and do more goo... (read more)

I am glad you felt okay to post this - being able to criticise leadership and think critically about the actions of the people we look up to is extremely important.

I personally would give Will the benefit of the doubt of his involvement in/knowledge about the specific details of the FTX scandal, but as you pointed out the fact remains that he and SBF were friends going back nearly a decade.

I also have questions about Will MacAskill's ties with Elon Musk, his introduction of SBF to Elon Musk, his willingness to help SBF put up to 5 billion dollars towards the acquisition of Twitter alongside Musk, and the lack of engagement with the EA community about these actions. We talk a lot about being effective with our dollars and there are so many debates around how to spend even small amounts of money (e.g. at EA events or on small EA projects), but it appears that helping SBF put up to 5 billion towards Twitter to buy in with a billionaire who recently advocated voting for the Republican party in the midterms didn't require that same level of discussion/evaluation/scrutiny. (I understand that it wasn't Will's money and possibly SBF couldn't have been talked into putting it towards ot... (read more)

I'm not a fan of Leverage, but I agree with Richard here. I think Kerry is better modeled as "normal philosophy-friendly EA" with the modifications "less conflict-averse than the average EA" and "mad at EA (for plenty of good reasons and also plenty of bad reasons, IMO) and therefore pretty axe-grindy". If you model him with a schema closer to "crazy cultist" than to "bitter ex-EA", I expect you to make worse predictions.


There’s a hunger in EA for personal stories - what life is like outside of forum posts for people doing the work, getting grants, being human. Thank you for sharing.

(Note: personal feelings below, very proud of / keen to support your work)

I’m struck by how differently I felt reading about this funding example, coming from my circumstances. I work in the private sector with job stability and hope to build a family. The thought of existing on 6-month grants and frequently changing locations is scary to me: health insurance (US), planning a financial future, kids, etc. I’ve spoken to many EAs who are in a way more transient living situation than I could handle. I suspect that’s true for many, but not all, mid-career folks.

I've spent time in the non-EA nonprofit sector, and the "standard critical story" there is one of suppressed anger among the workers. To be clear, this "standard critical story" is not always fair, accurate, or applicable. By and large, I also think that, when it is applicable, most of the people involved are not deliberately trying to play into this dynamic. It's just that, when people are making criticisms, this is often the story I've heard them tell, or seen for myself.

It goes something like this:

[Non-EA] charities are also primarily funded by millionaires and billionaires. But they're also run by independently wealthy people, who do it for the PR or for the fuzzies. They underpay, overwork, and ignore the ideas of their staff. They're burnout factories.

Any attempts to "measure the impact" of the charity are subverted by carelessness and the undirected dance of incentives to improve the optics of their organization to keep the donations flowing. Lots of attention on gaming the stats, managing appearances, and sweeping failures under the rug.

Missions are thematic, and there's lots of "we believe in the power of..."-type storytelling motivating the work. Sometimes, the storytelli

... (read more)

Why do I keep meeting so many damned capabilities researchers and AI salespeople? 
I thought that we agreed capabilities research was really bad. I thought we agreed that increasing the amount of economic activity in capabilities was really bad. To me it seems like the single worst thing that I could even do! 

This really seems like a pretty consensus view among EA orthodoxy. So why do I keep meeting so many people who, as far as I can tell, are doing the single worst thing that it's even in their power to do? If there is any legal thing, other than sexual misconduct, that could get you kicked out of EA spaces, wouldn't it be this?

I'm not even talking about people who maintain that safety/alignment research requires advancing capabilities or might do so. I'm just talking about people who do regular OpenAI or OpenAI competitor shit. 

If you're supposed to be high status in EA for doing good, aren't you supposed to be low status if you do the exact opposite? It honestly makes me feel like I'm going insane. Do EA community norms really demand that I'm supposed to act like something is normal and okay even though we all seem to believe that it really isn't okay at all? ... (read more)

Copying a comment I once wrote:

  • eating veg sits somewhere between "avoid intercontinental flights" and "donate to effective charities" in terms of expected impact, and I'm not sure where to draw the line between "altruistic actions that seem way too costly and should be discouraged" and "altruistic actions that seem a reasonable early step in one's EA journey and should be encouraged"

  • Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings or think correctly about the badness of factory farming while eating meat; this form of motivated reasoning plausibly distorts most people's epistemics, and this is about a pretty important part of the world, and recognizing the badness of factory farming has minor implications for s-risks and other AI stuff

So my understanding is as follows.

Imagine that we had these five projects (and only these projects) in the EA portfolio:

  • Alpha: Spend $100,000 to produce 1000 units of impact (after which Alpha will be exhausted and will produce no more units of impact; you can't buy it twice)

  • Beta: Spend $100,000,000 to produce 200,000 units of impact (after which Beta will be exhausted and will produce no more units of impact; you can't buy it twice)

  • Gamma: Spend $1,000,000,000 to produce 300,000 units of impact (after which Gamma will be exhausted and will produce no more units of impact; you can't buy it twice)

  • GiveDeltaly: Spend any amount of money to produce a unit of impact for each $2000 spent (GiveDeltaly cannot be exhausted and you can buy it as many times as you want).

  • Research: Spend $200,000 to create a new opportunity with the same "spend X for Y" of Alpha, Beta, Gamma, or GiveDeltaly.

Early EA (say ~2013), with relatively fewer resources (we didn't have $100M to spend), would've been ecstatic about Alpha because it only costs $100 to buy one unit of impact, which is much better than Beta's $500 per unit, GiveDeltaly's $2000 per unit, or Gamma's $3333.33 per unit.
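As a minimal sketch of the cost-effectiveness arithmetic in this toy example (the project names and numbers are just the made-up ones above):

```python
# Toy portfolio from the example above; costs in dollars, impact in "units".
projects = {
    "Alpha":       {"cost": 100_000,       "impact": 1_000},
    "Beta":        {"cost": 100_000_000,   "impact": 200_000},
    "Gamma":       {"cost": 1_000_000_000, "impact": 300_000},
    "GiveDeltaly": {"cost": 2_000,         "impact": 1},  # repeatable at this rate
}

# Rank projects by cost per unit of impact (what early EA would buy first).
for name in sorted(projects, key=lambda n: projects[n]["cost"] / projects[n]["impact"]):
    p = projects[name]
    print(f"{name:12s} ${p['cost'] / p['impact']:>8,.2f} per unit of impact")
```

This prints Alpha at $100/unit, Beta at $500/unit, GiveDeltaly at $2,000/unit, and Gamma at $3,333.33/unit, matching the figures above.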

But "mode... (read more)

Fellow UChicago alum here, also from a house with hardcore house culture (save Breckinridge!) and I think your comparison to house culture is useful in understanding some of the caveats of GITV moments. Being part of an intense, somewhat insular group with strong traditions and a strong sense of itself can be absolutely exhilarating and foster strong cohesion, as you say, but it also can be alienating to those who are more on the edges. Put differently, I absolutely think we should encourage GITV moments, but that spirit can go too far. Once you start saying "the people who don't get in the van aren't real members of [GROUP]," that starts pushing some people away. 

With EA as with house culture, I think it's important to find a balance between cultivating passionate intensity and acknowledging that folks have other things going on and can't always commit 100%; important to cultivate GITV moments, but also to build systems and traditions that acknowledge that you can't always get in the van--and, furthermore, that often folks with more privilege can more easily get in the van. If you have a shift at work or have to care for your kid, you can't go on spontaneous trips in the same way that a person with fewer obligations might. 

"the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group."

FWIW my intuition is not that this author is 25x more talented, but rather that the author and their marketing team are a little bit more talented in a winner-takes-most market.

I wanted to point this out because I regularly see numbers like this used to justify claims that individuals vary significantly in talent or productivity. It's important to keep the business model in mind if you're claiming talent based on sales!

(Research citations are also a winner-takes-most market; people end up citing the same paper even if it's not much better than the next best paper.)
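A toy simulation of the winner-takes-most point (entirely illustrative assumptions, not real data): give 1,000 hypothetical books nearly identical underlying quality, let weekly sales be mostly noise, and count weeks spent on a 10-slot "bestseller list".

```python
import numpy as np

rng = np.random.default_rng(0)

n_books, n_weeks, list_size = 1_000, 100, 10
quality = rng.normal(0.0, 1.0, n_books)        # modest spread in underlying quality
weeks_on_list = np.zeros(n_books)

for _ in range(n_weeks):
    weekly_sales = quality + rng.normal(0.0, 2.0, n_books)  # noise dominates quality
    weeks_on_list[np.argsort(weekly_sales)[-list_size:]] += 1

listed = weeks_on_list[weeks_on_list > 0]
top_k = max(1, len(listed) // 100)
ratio = np.sort(listed)[-top_k:].mean() / np.median(listed)
print(f"{len(listed)} books ever made the list")
print(f"top 1% spend ~{ratio:.0f}x longer on the list than the median listed book")
```

With assumptions like these the ratio should come out around an order of magnitude, even though the top books' underlying quality is only a couple of standard deviations above the median — which is the commenter's point: tournament-style markets convert small quality edges into huge differences in observed outcomes.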

Thanks for this detailed post on an underdiscussed topic! I agree with the broad conclusion that extinction via partial population collapse and infrastructure loss, rather than via a catastrophe potent enough to leave no or almost no survivors (or indirectly enabling some later extinction-level event), has very low probability. Some comments:

  • Regarding case 1, with a pandemic leaving 50% of the population dead but no major infrastructure damage, I think you can make much stronger claims about there not being 'civilization collapse' meaning near-total failure of industrial food, water, and power systems. Indeed, collapse so defined from that stimulus seems nonsensical to me for rich quantitative reasons.
    • There is no WMD war here, otherwise there would be major infrastructure damage.
    • If half of people are dead, that cuts the need for food and water by half (doubling per capita stockpiles), while already planted calorie-rich crops can easily be harvested with a half-size workforce.
    • Today agriculture makes up closer to 5% than 10% of the world economy, and most of that effort is expended on luxuries such as animal agriculture, expensive fruits, av
... (read more)

I started working at Rethink Priorities. I'm pretty happy about it! 

I tried applying to EA jobs about 3 years ago, and didn't have much luck. 

I don't think this is causal, but I think my emotional/cognitive relationship to applying to EA jobs has changed. 3 years ago, I was more like "I want to have a job at an EA org because that was a path I heard can have more impact, so I really want to have an EA job so I can have more impact" and was definitely much more in "job application mode." This time around, it felt much more like "okay, there's some stuff I want to do. It seems like these orgs will let me progress my goals better." All in all, I think this was probably a much healthier relationship to have with my work and EA aspirations.

This definitely resonates with me, and is something I've been thinking about a lot lately, as I wrestle with my feelings around recreational activities and free time. I'm not sure if what follows is exactly an answer to your question, but here's where I'm at in thinking about this problem.

I think one thing it's very important to keep in mind is that, in utilitarianism (or any kind of welfarist consequentialism) your subjective wellbeing is of fundamental intrinsic value. Your happiness is deeply good, and your suffering is deeply bad, regardless of whatever other consequences your actions have in the world. That means that however much good you do in the world, it is better if you are happy as you do it.

Now, the problem, as your post makes clear, is that everyone else's subjective wellbeing is also profoundly valuable, in a way that is commensurate with your wellbeing and can be traded off against it. And, since your actions can affect the wellbeing of many other people, that indirect value can outweigh the direct value of your own wellbeing. This is the fundamental demandingness of consequentialist morality that so many people struggle with. Still, I find it helpful to remember th... (read more)

Great post, thank you for compiling this list, and especially for the pointers for further reading.

In addition to Tobias's proposed additions, which I endorse, I'd like to suggest protecting effective altruism as a very high priority problem area. Especially in the current political climate, but also in light of base rates from related movements as well as other considerations, I think there's a serious risk (perhaps 15%) that EA will either cease to exist or lose most of its value within the next decade. Reducing such risks is not only obviously important, but also surprisingly neglected. To my knowledge, this issue has only been the primary focus of an EA Forum post by Rebecca Baron, a Leaders' Forum talk by Roxanne Heston, an unpublished document by Kerry Vaughan, and an essay by Leverage Research (no longer online). (Risks to EA are also sometimes discussed tangentially in writings about movement building, but not as a primary focus.)

I think you're missing some important ground in between "reflection process" and "PR exercise".

I can't speak for EV or other people then on the boards, but from my perspective the purpose of the legal investigation was primarily about helping to facilitate justified trust. Sam had been seen by many as a trusted EA leader, and had previously been on the board of CEA US. It wouldn't have been unreasonable if people in EA (or even within EV) started worrying that leadership were covering things up. Having an external investigation was, although not a cheap signal to send, much cheaper in worlds where there was nothing to hide than in worlds where we wanted to hide something; and internal trust is extremely important. Between that and wanting to be able to credibly signal to external stakeholders like the Charity Commission, I think general PR was a long way down the list of considerations.

Thank you for taking the time to write up all of this evidence, and I can only imagine how time-consuming and challenging this must have been.

Apologies if I missed this, but I didn't see a response to Chloe's statement here that one of her tasks was to buy weed for Kat in countries where weed is illegal. This statement wasn't in Ben's original post, so I can see how you might have missed it in your response. But I would appreciate clarification on whether it is true that one of Chloe's tasks was to buy weed in countries where weed is illegal.

This is an open call for CEA to be more transparent with its finances and allocation of resources to different projects (historically and currently).

  1. A quick Google search shows pretty inconsistent reporting metrics and update cadence over the past several years, as well as reporting gaps.
  2. There is no easily available breakdown of funding / budgeting for most years.
  3. It seems like CEA staff do share numbers when asked ad hoc - e.g. see this comment from JP Addison on spending of the Online team. But if someone wants to get a quick overview, it would be incredibly time consuming to compile all the numbers.
  4. So it's not that they are entirely opaque - it's that they aren't making this information easy to access, which feels bad / like obfuscation. And I think they are succeeding - I don't think many people have the time or inclination to sift through the data.

As a key entity in the EA ecosystem (even if the scope changes), it seems good to demonstrate high transparency and data accessibility even if the decisions are not endorsed by the average community member.

Spicier take: I think they aren’t sharing it, in large part, because of optics. This feels like a bad reason not to be tra... (read more)

Did the EV US Board consider running an open recruitment process and inviting applications from people outside of their immediate circle? If so, why did it decide against?

Hi readers! I work as a Programme Officer at a longtermist organisation. (These views are my own and don't represent my employer!) I think there's some valuable advice in this post, especially about not being constrained too much by what you majored in. But after running several hiring rounds, I would frame my advice a bit differently. Working at a grantmaking organisation did change my views on the value of my time. But I also learned a bunch of other things, like:

  1. The majority of people who apply for EA jobs are not qualified for them.
  2. Junior EA talent is oversupplied, because of management constraints, top of funnel growth, and because EAs really want to work at EA organisations.
  3. The value that you bring to your organisation/to the world is directly proportional to your skills and your fit for the role.

Because of this, typically when I talk to junior EAs my advice is not to apply to lots more EA jobs but rather to find ways of skilling up — especially by working at a non-EA organisation that has excellent managers and invests in training its staff — so that one can build key skills that make one indispensable to EA organisations.

Here's a probably overly strong way of stating ... (read more)

Here are just the headings from the updates + implications sections, lightly reformatted. I don’t necessarily agree with all/any of it (the same goes for my employer).

Updates

Factual updates (the world is now different, so the best actions are different)

  • Less money — There is significantly less money available
  • Brand — EA/longtermism has a lot more media attention, and will have a serious stain on its reputation (regardless of how well deserved you think that is)
  • Distrust — My prediction is that if we polled the EA community, we’d find EAs have less trust in several institutions and individuals in this community than they did before November. I think this is epistemically correct: people should have less trust in several of the core institutions in the community (in integrity; in motives; in decision-making)

Epistemic updates (beliefs about the world I wish I’d had all along, that I discovered in processing this evidence)

  • Non-exceptionalism — Seems less likely that a competent group of EAs could expect to do well in arbitrary industries / seems like making money is generally harder (which means the estimate of future funding streams goes down beyond the immediate cut in fund
... (read more)

I think your general point is a good one—EA has been criticized for a lot of things, many critiques of EA are unfair, and journalists score points by writing salacious stories. I also agree that it's really hard to interpret some of the anecdotes in the TIME article without more context. But I don't agree with this:

I’m not saying that EA is perfect or that nothing in the article is true, but rather that reading it, my gut instinct was that roughly 80% was entirely misleading

I think we have good reason to believe the article is broadly right, even if some of the specific anecdotes don't do a good job of proving this. Here's a rough summary of the main (non-anecdote) points of the article:

  1. EA involves many "complex professional relationships" (true)
  2. "Most of the access to funding and opportunities within the movement [is] controlled by men" (true)
  3. "Effective altruism’s overwhelming maleness, its professional incestuousness, its subculture of polyamory and its overlap with tech-bro dominated 'rationalist' groups have combined to create an environment in which sexual misconduct can be tolerated, excused, or rationalized away." This language is inflammatory ("overwhelming", "incestuou
... (read more)

This language is inflammatory ("overwhelming", "incestuous"), but we can boil this down to a more sterile sounding claim

A major part of the premise of the OP is something like "the inflammatory nature is a feature, not a bug; sure, you can boil it down to a more sterile sounding claim, but most of the audience will not; they will instead follow the connotation and thus people will essentially 'get away' with the stronger claim that they merely implied."

The accuser doesn’t offer concrete behaviors, but rather leaves the badness as general associations. They don’t make explicit accusations, but rather implicit ones. The true darkness is hinted at, not named. They speculate about my bad traits without taking the risk of making a claim. They frame things in a way that increases my perceived culpability.

I think it is a mistake to steelman things like the TIME piece, for precisely this reason, and it's also a mistake to think that most people are steelmanning as they consume it.

So pointing out that it could imply something reasonable is sort of beside the point—it doesn't, in practice.

(I wrote this comment in a personal capacity, intending only to reflect my own views / knowledge.)

Hi,

In 2021, the EA Infrastructure Fund (which is not CEA, though both are supported and fiscally sponsored by Effective Ventures) made a grant for preparatory work toward potentially creating a COVID-related documentary.[1] I was the guest fund manager who recommended that grant. When I saw this post, I guessed the post was probably related to that grant and to things I said, and I’ve now confirmed that.

This post does not match my memory of what happened or what I intended to communicate, so I'll clarify a few things:

  • The EA Infrastructure Fund is not CEA, and I’m just one of its (unpaid, guest) fund managers. So what I said shouldn’t be treated as “CEA’s view”. 
  • The EAIF provided this grant in response to an application the grantseekers made, rather than “commissioning” it. 
  • When evaluating this grant, I consulted an advisor from the COVID forecasting space and another from the biosecurity space. They both flagged one of the people mentioned in the title of this post as seeming maybe unwise to highlight in this documentary. 
    • But I don’t recall this having been abo
... (read more)

I am not a medical doctor, but I live in Nigeria. As a lecturer, I have had the opportunity to be trained for my PhD benchwork at Duke University, USA. This experience gave me a clear sense of the difference between the western world and low- and middle-income countries like Nigeria. The gap is wide and the differences are huge.

The poor economic situation in Nigeria has necessitated a mass exodus of professionals (medical doctors, nurses, lecturers); everyone wants to leave for a better economy that pays well.

I support the point that the "western by default" idea should not be seen as good, and may not make the desired "good" impacts required in these countries. I also suggest that fund managers with a good knowledge of African culture should be recruited to help evaluate causes from Africa.

My understanding is that FTX's business model fairly straightforwardly made sense? It was an exchange, and there are many exchanges in the world that are successful and probably not fraudulent businesses (even in crypto - Binance, Coinbase, etc). As far as I can tell, the fraud was due to supporting Alameda through specific failures caused by bad decisions, but wasn't inherent to FTX making any money at all?


As an outsider to the movement, I think this is misjudged. 

I think it incredibly unlikely that SBF disclosed his fraudulent behavior to anyone outside of a small inner circle within FTX/Alameda. Why would he take that stupendous risk?

The failure here is becoming so dependent upon, and promoting the virtues of, someone engaged in a crypto business with a lot of red flags. In my opinion, that is what merits 'review'. 

Investigating your own personnel for something you have no probable cause for will only consolidate the bad publicity EA is getting now. It will make EA look guilty for something it did not do.

If I am being honest, this comes across as an over-the-top attempt at self-cleansing that is motivated more by prim sanctimony than any real reckoning with the situation.

Quick thoughts -- this isn't intended to be legal advice, just pointing in a relevant direction. There are a couple types of "clawbacks" under bankruptcy law:

  • Preference action (11 USC 547): Generally allows clawback of most transfers by an insolvent entity or person made within 90 days of filing for bankruptcy. The concept here is that courts don't want people to be able to transfer money away to whoever they want to have it just before filing for bankruptcy. My GUESS (this really really isn't legal advice, I'm really not a bankruptcy lawyer) is that any money transferred to a grantee before ~early August won't be able to be clawed back in a preference action. Caveat: There are special rules around transfers to insiders, so the situation might be more complicated for grantees that have multiple types of relationships to FTX.
  • Fraudulent transfer action (11 USC 548): Generally allows clawback of transfers made within 2 years of a bankruptcy filing in cases where the transfer was meant to help conceal or perpetrate the fraud (very rough characterization -- trying to balance precision and comprehensibility here). This is the classic Madoff/Ponzi case, where a person/company will pay out
... (read more)

Not sure it's worth the effort, but I'd find the charts easier to read if you used a wider variety of colors.

Tl;dr the Longtermism Fund aims to be a widely accessible call-to-action to accompany longtermism becoming more mainstream 😍

I used to agree more with the thrust of this post than I do, and now I think this is somewhat overstated. 

[Below written super fast, and while a bit sleep deprived]

An overly crude summary of my current picture is "if you do community-building via spoken interactions, it's somewhere between "helpful" and "necessary" to have a substantially deeper understanding of the relevant direct work than the people you are trying to build community with, and also to be the kind of person they think is impressive, worth listening to, and admirable. Additionally, being interested in direct work is correlated with a bunch of positive qualities that help with community-building (like being intellectually-curious and having interesting and informed things to say on many topics). But not a ton of it is actually needed for many kinds of extremely valuable community building, in my experience (which seems to differ from e.g. Oliver's). And I think people who emphasize the value of keeping up with direct work sometimes conflate the value of e.g. knowing about new directions in AI safety research vs. broader value adds from becoming a more informed person and gaining various intellectual benef... (read more)
