AnonymousEAForumAccount

Comments

EA is more than longtermism

But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).

However, I may be biased here by my teaching experience. In the two introductory fellowships I've facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-term causes and examples.

That’s helpful anecdata about your teaching experience. I’d love to see a more rigorous and thorough study of how participants respond to the fellowships to see how representative your experience is.

I don't think I've seen Pascal's Mugging discussed in any non-longtermist context, unless you count actual religion. Do you have an example on hand for where people have applied the idea to a neartermist cause?

I’m pretty sure I’ve heard it used in the context of a scenario questioning whether torture is justified to stop the threat of a dirty bomb that’s about to go off in a city.

I wrote to four people, two of whom (including Michael) sent useful feedback. The other two also responded; one said they were busy, and the other seemed excited/interested but never wound up sending anything.

A 50% useful-response rate isn't bad, and makes me wish I'd sent more of those emails. My excuse is the dumb-but-true "I was busy, and this was one project among many".

That’s a good excuse :) I misinterpreted Michael’s previous comment as saying his feedback didn’t get incorporated at all. This process seems better than I’d realized (though still short of what I’d have liked to see after the negative reaction to the 2nd edition).

 if you specifically have ideas for material you'd like to see included, I'd be happy to pass them along to CEA — or you could contact someone like Max or Lizka.

GiveWell’s Giving 101 would be a great fit for global poverty. For animal welfare content, I’d suggest making the first chapter of Animal Liberation part of the essential content (or at least further reading), rather than part of the “more to explore” content. But my meta-suggestion would be to ask people who specialize in doing poverty/animal outreach for suggestions.

EA is more than longtermism

Thanks for sharing this history and your perspective, Aaron.

I agree that 1) the problems with the 3rd edition were less severe than those with the 2nd edition (though I’d say that’s a very low bar to clear) and 2) the 3rd edition looks more representative if you weigh the “more to explore” sections equally with “the essentials” (though IMO it’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than on content linked to at the bottom of the “further reading” section).

I disagree with your characterization of “The Effectiveness Mindset”, “Differences in Impact”, and “Expanding Our Compassion” as neartermist content in a way that’s comparable to how subsequent sections are longtermist content. The early sections include some content that is clearly neartermist (e.g. “The case against speciesism” and “The moral imperative toward cost-effectiveness in global health”). But much, maybe most, of the “essential” reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage. I’d also put “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.

By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” sections is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.

I also disagree that the “What we may be missing?” section places much emphasis on longtermist critiques (outside of the “more to explore” section, which, as mentioned earlier, I don’t think carries much weight). “Pascal’s mugging” is relevant to, but not specific to, longtermism, and “The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se; it argues instead that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research. I find it telling that “Objections to EA” (framed as a bit of a laundry list) doesn’t include anything about longtermism, and that as far as I can tell no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard: that it’s really, really hard to influence the far future, so we should be skeptical of our ability to do so.

Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole. While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated). I’d feel better about the process if, for example, you’d posted in poverty- and animal-focused Facebook groups and offered to pay people (as the test readers were paid) to weigh in on whether the handbook represented their cause appropriately.

EA is more than longtermism

Thanks for sharing that post! Very well thought out and prescient, just unfortunate (through no fault of yours) that it's still quite timely.

EA is more than longtermism

Agree! This decision has huge implications for the entire community, and should be made explicitly and transparently.

EA is more than longtermism

I agree with your takes on CEA as an organization and as individuals (including Max).

Personally, I’d have a more positive view of CEA the organization if it were more transparent about its strategy around cause prioritization and representativeness (even if I disagree with the strategy) vs. trying to make it look like they are more representative than they are. E.g. Max has made it pretty clear in these comments that poverty and animal welfare aren’t high priorities, but you wouldn’t know that from reading CEA’s strategy page where the very first sentence states: “CEA's overall aim is to do the most we can to solve pressing global problems — like global poverty, factory farming, and existential risk — and prepare to face the challenges of tomorrow.”

EA is more than longtermism

Thanks for following up regarding who was consulted on the Fellowship content. 

And it’s nice to know you’re planning to run the upcoming update by some critics. Proactively seeking out critical opinions seems quite important, as I suspect many critics won’t respond to general requests for feedback due to a concern that they’ll be ignored. Michael noted that concern; I’ve personally been discouraged from offering feedback because of it (I’ve engaged with this thread to help people understand the context and history of the current state of EA cause prioritization, not because I really expect CEA to meaningfully change its content/behavior); and I can’t imagine we’re alone in this.

EA is more than longtermism

I can see how the work of several EA projects, especially CEA, contributed to this. I think that some of these were mistakes (and we think some of them were significant enough to list on our website)… Often my take on these cases is more like "it's bad that we called this thing "EA"", rather than "it's bad that we did this thing"… I think that calling things "EA" means that there's a higher standard of representativeness, which we sometimes failed to meet.

I do want to note that all of the things you list took place around 2017-2018, and our work and plans have changed since then. For instance…  the EA Handbook is different.

The EA Handbook is different, but as far as I can tell the mistakes made with the Handbook 2.0 were repeated for the 3rd edition.

Here’s CEA describing those “mistakes” around the Handbook 2.0:

“it emphasized our longtermist view of cause prioritization, contained little information about why many EAs prioritize global health and animal advocacy, and focused on risks from AI to a much greater extent than any other cause. This caused some community members to feel that CEA was dismissive of the causes they valued. We think that this project ostensibly represented EA thinking as a whole, but actually represented the views of some of CEA’s staff, which was a mistake. We think we should either have changed the content, or have presented the content in a way that made clear what it was meant to represent.”

CEA acknowledges it was a mistake for the 2nd edition to exclude the views of large portions of the community while framing the content as representative of EA. But the 3rd edition does the exact same thing!

As Michael relates, he observed to CEA staff that the EA Introductory Fellowship curriculum was heavily skewed toward longtermist content, and was told that it had been created without much/any input from non-longtermists. Since the Intro Fellowship curriculum is identical to the EA Handbook 3.0 material, that means non-longtermists had minimal input on the Handbook.

Despite that, the Handbook 3.0 and the Intro Fellowship curriculum (and, for that matter, the In-Depth EA Program, which includes topics on biorisk and AI but nothing on animals or poverty) are clearly framed as EA materials, which you say should be held to “a higher standard of representativeness” rather than treated as expressions of CEA’s own views. So I struggle to see how the Handbook 3.0 (and other content) isn’t simply repeating the mistakes of the second edition; it feels like we’re right back where we were four years ago. Arguably a worse place, since at least the Handbook 2.0 was updated to clarify that CEA selected the content and other community members might disagree.

I realize CEA posted on the Forum soliciting suggestions on what should be included in the 3rd edition and asking for feedback on an initial sequence on motivation (which doesn’t seem to have made it into the final handbook). But from Michael’s anecdote, it doesn’t sound like CEA reached out to critics of the 2nd edition or the animal or poverty communities. I would have expected those steps to be taken given the criticism surrounding the 2nd edition, CEA’s response to that criticism and its characterization of how it addressed its mistakes (“we took this [community] feedback into account when we developed the latest version of the handbook”), and how the 3rd edition is still framed as “EA” vs. “CEA’s take on EA”.

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

EOIs are substantially different from the Core roles (in having a higher bar for progression, etc.), which would make an overall figure less useful. 

If EOIs are hard to get, that seems relevant to the question of whether EA jobs are hard to get, since EOIs are quite sought after (they attract as many applicants as core roles despite offering a lower chance of being hired). But since AFAIK CEA is the only EA org that has EOIs, I can certainly see the case for excluding them from the sample.

we're taking the average across applicants, and not across roles.

100% agree this is the right methodology. But I still think 1.85% is the relevant number (number of hires/number of applicants). From your answer to Khorton, it sounds like your 2.4% figure excludes the Core job you didn’t hire for (which seems to have gotten more applicants than the average core job). I don’t understand that decision, and think it makes it harder to answer the question of whether EA jobs are hard to get.
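To illustrate why this matters, here is a toy calculation with purely hypothetical counts (the actual per-role applicant numbers aren't published in this thread), showing how dropping a role that attracted many applicants but produced no hire inflates the apparent hire rate:

```python
# Purely hypothetical counts, chosen only to show the direction of the effect;
# these are NOT CEA's real numbers.
hires = 10                      # hypothetical hires across core roles
applicants_filled_roles = 400   # hypothetical applicants to roles that were filled
applicants_unfilled_role = 100  # hypothetical applicants to the core role with no hire

rate_excluding_unfilled = hires / applicants_filled_roles
rate_all_applicants = hires / (applicants_filled_roles + applicants_unfilled_role)

print(f"Hires / applicants to filled roles only: {rate_excluding_unfilled:.2%}")  # 2.50%
print(f"Hires / all applicants:                  {rate_all_applicants:.2%}")      # 2.00%
```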

Regarding the industry comparison, as you mention there are ways in which CEA might be more selective than industry and other ways in which CEA might be less selective. As Ben mentions in an earlier comment, we probably don't have solid enough evidence to call it in one direction or another.

Can you provide CEA’s offer rate for the PM role and for core jobs overall? Hire rate really isn’t the best measure of whether jobs are hard to get.

FWIW, I’m not sure why Ben thinks hires as a “percent of applicants who get to the people ops interview stage” (the only stage where CEA is more likely to hire, and not an apples-to-apples comparison since CEA has a work trial before it and Ashby doesn’t) is the right metric. He suggests he likes that metric as a way to exclude low-quality applicants, but the better way to do that is to look at hires (or ideally offers) as a percent of people who make it past the initial screen (which is more restrictive for Ashby than for CEA). CEA hires 1 in 28 people who make it past the first screen; the Ashby sample hires 1 in 12 (and makes offers to 2 in 12).
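For easier comparison, here is a minimal sketch converting the ratios quoted above into percentages; it uses only the figures cited in this comment, not any underlying headcounts:

```python
# Converts the quoted ratios into percentages; no headcount data is assumed.
cea_hires_past_screen = 1 / 28     # CEA: hires per candidate past the initial screen
ashby_hires_past_screen = 1 / 12   # Ashby sample: hires per candidate past the initial screen
ashby_offers_past_screen = 2 / 12  # Ashby sample: offers per candidate past the initial screen

print(f"CEA hire rate past screen:    {cea_hires_past_screen:.1%}")    # ~3.6%
print(f"Ashby hire rate past screen:  {ashby_hires_past_screen:.1%}")  # ~8.3%
print(f"Ashby offer rate past screen: {ashby_offers_past_screen:.1%}") # ~16.7%
```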

EA is more than longtermism

Thanks for sharing the job counts; that's interesting data. But I also think it's important to note how those jobs are framed on the job board. The AI and pandemic jobs are listed under “top recommended problems”, while the global health jobs are listed under “other pressing problems” (along with jobs related to factory farming).

EA is more than longtermism

IMO the share of grants going to community infrastructure isn’t particularly relevant to the relative shares received by longterm and nearterm projects. But I’ll edit my post to note that the stat I cite is only from the first round of EA Grants since that’s the only round for which data was ever published.

one of the five concrete examples listed seems to be a relatively big global poverty grant. 

Could you please clarify what you mean by this? I linked to an analysis listing written descriptions of 6 EA Grants made after the initial round, for which grant sizes were never provided. One of those (to Charity Entrepreneurship) could arguably be construed as a global poverty grant, though I think it’d be more natural to categorize it as meta/community (much as I think it was reasonable that the grant you received to work on LessWrong 2.0, the largest grant of the first round, was classified as a community building grant rather than a longtermist grant).

In any case, the question of what causes the EA Grant program supported is an empirical one that should be easy to answer. I’ve already asked for data on this, and hope that CEA publishes it so we don't have to speculate.
