All of AnonymousEAForumAccount's Comments + Replies

Remuneration In Effective Altruism

I liked this series, and agree with a lot of it. But (unless I missed this) I think you omitted an important problem of using low salaries as a proxy for value alignment: it is a much more meaningful proxy for some people than others. Low salaries might filter out people who aren’t value aligned, but they will also filter out people who are very value aligned but can’t accept low salaries because of e.g. high medical bills, having dependents to support, student loans, etc. This interferes with the goal of finding the best candidates, and exacerbates EA’s tendency toward elitism.
 

6 · Stefan_Schubert · 9d
Thanks, good point. I agree that this is an additional problem for that strategy. My discussion about it wasn't very systematic.
Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Yeah, job experience seems like a major difference between CEA and Ashby. I’d guess that salary could be quite different too (which might be why the CEA role doesn’t seem interesting to experienced PMs).

It sounds like one of the reasons EA jobs are hard to get (at least for EA candidates) is that these candidates (typically young people with great academic credentials and a strong understanding of EA, but relatively little job experience) lack the experience some roles require. To me this suggests that advising (explicitly or implicitly) young EAs that the most impactful thing they can do is direct work could be counterproductive, and that it might be better to emphasize building career capital.

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Re: offer rate vs hire rate, CEA’s applicants are likely applying to other EA jobs they’d also be dedicated to. CEA may well be more attractive than other EA employers, but I don’t think that’s a given and I’m not sure of the magnitude of any difference there might be. Bigger picture, as I mentioned earlier I think any individual metric is problematic and that we should look at a variety of metrics and see what story they collectively tell.

Re: your meta point, the thing I find confusing is that you “didn't have particularly strong opinions about whether EA... (read more)

4 · Ben_West · 20d
I think I'm largely like "bruh, literally zero of our product manager finalist candidates had ever had the title 'product manager' before, how could we possibly be more selective than Ashby?"[1]

Some other data points:

1. When I reach out to people who seem like good fits, they often decline to apply, meaning that they don't even get into the data set evaluated here.
2. When I asked some people who are well-connected to PMs to pass on the job to others they know, they declined to do so because they thought the PMs they knew would be so unlikely to want it that it wasn't worth even asking.

I acknowledge that, if you rely 100% on the data set presented here, maybe you will come to a different conclusion, but I really just don't think the data set presented here is that compelling.

1. ^ As mentioned, our candidates are impressive in other ways, and maybe they are more impressive than the average Ashby candidate overall, but I just don't think we have the evidence to confidently say that.
Is it still hard to get a job in EA? Insights from CEA’s recruitment data

The conclusion of this post was "Overall, CEA might be slightly more selective than Ashby’s customers, but it does not seem like the difference is large" and that still seems basically right to me: 1/7 vs. 1/5 is more selective, but well within the margin of error given how much uncertainty I have.

 

The 1/7 vs. 1/5 comparison is based on hire rate, but offer rate is more relevant to selectivity (if you disagree, could you explain why?) The difference in offer rate is 14% for CEA vs. 40% for Ashby; I’d be quite surprised if this large difference is stil... (read more)

2 · Ben_West · 23d
I think it's pretty uncontroversial that our applicants are more dedicated (i.e. more likely to accept an offer). My understanding of Ashby is that it's used by a bunch of random tech recruiting agencies, and I would guess that their applicants have ~0 pre-existing excitement about the companies they get sent to.

The statement in the post is "CEA might be slightly more selective than Ashby's customers, but it does not seem like the difference is large". This seems consistent with the view that CEA is selective? (It also just implies that Ashby is selective, which is a reasonable thing to believe.[1])

As a meta point: I kind of get the sense that you feel that this post is intended to be polemical, like we are trying to convince people that CEA isn't selective or something. But as you originally said: "the authors don't seem to take an explicit stance on the issue" – we just wanted to share some statistics about our hiring and, at least as evidenced by that first comment of yours, we were somewhat successful in conveying that we didn't have particularly strong opinions about whether EA jobs are still hard to get.

This post was intended to provide some statistics about our hiring, because we were collecting them for internal purposes anyway so I figured we might as well publish. We threw in the Ashby thing at the end because it was an easily accessible data point, but to be honest I kind of regret doing that – I'm not sure the comparison was useful for many people, and it caused confusion.

1. ^ It sounds to me like you think Ashby is selective: "the Ashby benchmark (which itself likely captures selective jobs)."
Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Thanks for updating the post and providing the offer rate data! As I mentioned in my response to Ben, I think CEA's much lower offer rates relative to those in the Ashby survey and CEA’s 100% offer acceptance rate are strong evidence that EA jobs are hard to get.

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Thanks for clarifying how the EOIs work; I had a different impression from the OP.

I still strongly disagree with the following statement:

in some ways CEA is more selective, and in other ways we are less; I think the methodology we used isn't precise enough to make a stronger statement than 'we are about the same.'

Which are the ways in which CEA is less selective? You mentioned in a previous comment that "we hire a substantially greater percent of applicants who get to the people ops interview stage" and I cited that interpretation in my own comment,... (read more)

4 · Ben_West · 23d
Thanks, yeah, sorry: there is a greater change in the percentage of drop-off for Ashby on-site -> hired, but because we start with a smaller pool we are still more selective. 1 in 7 versus 1 in 5 is the correct comparison.

I guess I'm flattered that you trust the research we did here so much, but I think it's very much not clear:

1. The number of applicants we get is very heavily influenced by how widely we promote the position, if the job happens to get posted to a job aggregator site, etc. To take a concrete example: six months ago we hired for a PM and got 52 applicants; last month we opened another PM position which got on to some non-EA job boards and got 113 applicants. If we hire one person from each round, I think you will say that we have gotten more than twice as selective, which is I guess kind of true, but our hiring bar hasn't really changed (the person who we hired last time would be a top candidate this time).
2. I don't really know what Ashby's candidate pool is like, but I would guess their average applicant has more experience than ours – for example: none of our final candidates last round ever even had the job title "product manager" before, though they had had related roles, and in the current round neither of the two people at the furthest round in the process have ever had a PM role. I would be pretty surprised if Ashby's final rounds were consistently made up of people who had never been PMs before.[1]

The conclusion of this post was "Overall, CEA might be slightly more selective than Ashby's customers, but it does not seem like the difference is large" and that still seems basically right to me: 1/7 vs. 1/5 is more selective, but well within the margin of error given how much uncertainty I have.

Thanks – I just cut that sentence since my inability to communicate my view even with our substantial back-and-forth makes me pessimistic about making a summary.

1. ^ ...
EA is more than longtermism

But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).

However, I may be biased here by my teaching experience. In the two introductory fellowships I've facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-

... (read more)
EA is more than longtermism

Thanks for sharing this history and your perspective Aaron.

I agree that 1) the problems with the 3rd edition were less severe than those with the 2nd edition (though I'd say that's a very low bar to clear) and 2) the 3rd edition looks more representative if you weigh the "more to explore" sections equally with "the essentials" (though IMO it's pretty clear that the curriculum places way more weight on the content it frames as "essential" than on content linked to at the bottom of the "further reading" section).

I disagree with your characterization of "The E... (read more)

4 · Aaron Gertler · 2mo
I'll read any reply to this and make sure CEA sees it, but I don't plan to respond further myself, as I'm no longer working on this project.

Thanks for the response. I agree with some of your points and disagree with others. To preface this, I wouldn't make a claim like "the 3rd edition was representative for X definition of the word" or "I was satisfied with the Handbook when we published it" (I left CEA with 19 pages of notes on changes I was considering). There's plenty of good criticism that one could make of it, from almost any perspective.

I agree. Many of these have ideas that can be applied to either perspective. But the actual things they discuss are mostly near-term causes.

* "On Fringe Ideas" focuses on wild animal welfare.
* "We are in triage" ends with a discussion of global development (an area where the triage metaphor makes far more intuitive sense than it does for longtermist areas).
* "Radical Empathy" is almost entirely focused on specific neartermist causes.
* "Can one person make a difference" features three people who made a big difference: two doctors and Petrov. Long-term impact gets a brief shout-out at the end, but the impact of each person is measured by how many lives they saved in their own time (or through to the present day).

This is different from e.g. detailed pieces describing causes like malaria prevention or vitamin supplementation. I think that's a real gap in the Handbook, and worth addressing.

But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).

However, I may be biased here by my teaching experience. In the two introductory fellowships I've facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-...
EA is more than longtermism

Thanks for sharing that post! Very well thought out and prescient, just unfortunate (through no fault of yours) that it's still quite timely.

EA is more than longtermism

Agree! This decision has huge implications for the entire community, and should be made explicitly and transparently.

EA is more than longtermism

I agree with your takes on CEA as an organization and on its staff as individuals (including Max).

Personally, I’d have a more positive view of CEA the organization if it were more transparent about its strategy around cause prioritization and representativeness (even if I disagree with the strategy) vs. trying to make it look like they are more representative than they are. E.g. Max has made it pretty clear in these comments that poverty and animal welfare aren’t high priorities, but you wouldn’t know that from reading CEA’s strategy page where the very first sentence ... (read more)

It's possibly worth flagging that these are (sadly) quite long-running issues. I wrote an EA Forum post now 5 years ago on the 'marketing gap', the tension between what EA organisations present EA as being about and what those organisations believe it should be about, arguing that they should be more 'morally inclusive'. By 'morally inclusive', I mean welcoming and representing the various different ways of doing the most good that thoughtful, dedicated individuals have proposed.

This gap has since closed a bit, although not always in the way I hoped for, ... (read more)

2 · Chris Leong · 3mo
Well, now that GiveWell has already put in the years of vetting work, we can reliably have a pretty large impact on global poverty just by channeling however many million to AMF + similar. And I guess, it's not exactly that we need to do too much more than that.
EA is more than longtermism

Thanks for following up regarding who was consulted on the Fellowship content. 

And nice to know you’re planning to run the upcoming update by some critics. Proactively seeking out critical opinions seems quite important, as I suspect many critics won’t respond to general requests for feedback due to a concern that they’ll be ignored. Michael noted that concern, I’ve personally been discouraged from offering feedback because of it (I’ve engaged with this thread to help people understand the context and history of the current state of EA cause prioritization, not because I really expect CEA to meaningfully change its content/behavior), and I can’t imagine we’re alone in this.

I’ve engaged with this thread to help people understand the context and history of the current state of EA cause prioritization, not because I really expect CEA to meaningfully change its content/behavior

Fwiw, my model of CEA is approximately that it doesn't want to look like it's ignoring differing opinions but that, nevertheless, it isn't super fussed about integrating them or changing what it does. 

This is my view of CEA as an organisation. Basically, every CEA staff member I've ever met (including Max D) has been a really lovely, thoughtful individual.  

EA is more than longtermism

I  can see how the work of several EA projects, especially CEA, contributed to this. I think that some of these were mistakes (and we think some of them were significant enough to list on our website)… Often my take on these cases is more like "it's bad that we called this thing "EA"", rather than "it's bad that we did this thing"… I think that calling things "EA" means that there's a higher standard of representativeness, which we sometimes failed to meet.

I do want to note that all of the things you list took place around 2017-2018, and our work

... (read more)
7 · Aaron Gertler · 3mo
While at CEA, I was asked to take the curriculum for the Intro Fellowship and turn it into the Handbook, and I made a variety of changes (though there have been other changes to the Fellowship and the Handbook since then, making it hard to track exactly what I changed). The Intro Fellowship curriculum and the Handbook were never identical.

I exchanged emails with Michael Plant and Sella Nevo, and reached out to several other people in the global development/animal welfare communities who didn't reply. I also had my version reviewed by a dozen test readers [https://forum.effectivealtruism.org/posts/NerMQ2QXASgqGW82k/closed-seeking-paid-volunteers-to-test-introductory-ea] (at least three readers for each section), who provided additional feedback on all of the material. I incorporated many of the suggestions I received, though at this point I don't remember which came from Michael, Sella, or other readers. I also made many changes on my own.

It's reasonable to argue that I should have reached out to even more people, or incorporated more of the feedback I received. But I (and the other people who worked on this at CEA) were very aware of representativeness concerns. And I think the 3rd edition was a lot more balanced than the 2nd edition [https://assets.ctfassets.net/ohf186sfn6di/glbXAUtnb2QagqY88qy4s/f8da9e4617efb89c0f79bf592b3f7ecd/Effective_Altruism_Handbook.pdf]. I'd break down the sections as follows:

* "The Effectiveness Mindset", "Differences in Impact", and "Expanding Our Compassion" are about EA philosophy with a near-term focus (most of the pieces use examples from near-term causes, and the "More to Explore" sections share a bunch of material specifically focused on animal welfare and global development).
* "Longtermism" and "Existential Risk" are about longtermism and X-risk in general.
* "Emerging Technologies" covers AI and biorisk specifically.
* These topics get more specific detail than animal welfare and global development...

Hey, I've just messaged the people directly involved to double check, but my memory is that we did check in with some non-longtermists, including previous critics (as well as asking more broadly for input, as you note). (I'm not sure exactly what causes the disconnect between this and what Aaron is saying, but Aaron was not the person leading this project.) In any case, we're working on another update, and I'll make sure to run that version by some critics/non-longtermists.

Also, per other bits of my reply, we're aiming to be ~70-80% longtermist, and I thin... (read more)

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

EOIs are substantially different from the Core roles (in having a higher bar for progression, etc.), which would make an overall figure less useful. 

If EOIs are hard to get, that seems relevant to the question of whether EA jobs are hard to get since EOIs are quite sought after (as many applicants as core jobs despite less chance of getting hired). But since AFAIK CEA is the only EA org that has EOIs, I can certainly see the case for excluding them from the sample.

we're taking the average across applicants, and not across roles.

100% agree this is the ... (read more)

2 · Ben_West · 1mo
Sorry for my slow response here; I missed the notification about your comment. I think maybe we just didn't explain what EOIs are well. As an example: we had a product manager EOI; once we opened a full hiring round for PMs we contacted all the people who filled out the EOI and said "hey are you still looking for a PM position" and then moved the ones who said "yes" into the PM hiring round.[1]

My conclusion was [https://forum.effectivealtruism.org/posts/vGRFZMriNYdLtD2AJ/is-it-still-hard-to-get-a-job-in-ea-insights-from-cea-s?commentId=yAdE7Zu24GZYbdGdx#comments]: "in some ways CEA is more selective, and in other ways we are less; I think the methodology we used isn't precise enough to make a stronger statement than 'we are about the same.'"

I don't think one of these comparison points is the "right metric" – they all have varying degrees of usefulness, and you and I might disagree a bit about their relative value, but, given their contradictory conclusions, I don't think you can draw strong conclusions other than "we are about the same".

1. ^ Sometimes exceptional candidates are hired straight from an EOI; the example I give is specific to that role. I think in retrospect we should have just left EOIs off, as the data was more confusing than helpful.
3 · Akara · 1mo
Hey, apologies that it has taken us so long to get back to you on this. Thanks for pointing this out! You've shed light on an important point. The 2.4% figure can be thought of as "the probability of being hired, conditional on clearing a hiring bar" and the 1.85% figure is the "probability of being hired at all"; on reflection I agree that the latter would be more useful in this case. I've updated the post to reflect this.

For the PM role there was only one offer made (to the one hire), so a rate of 1/52 = 1.9%. For Core jobs overall, on average there was just one offer made for each.[1] The average number of applications was 53.7, so the average offer rate for Core roles is 1/53.7 = 1.9%.

1. ^ Of the 7 Core roles, one role made two offers, and one other role made zero offers, so this averages out at one offer per role.
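To make the arithmetic above easy to reproduce, here is a minimal sketch in Python (numbers taken from the comment above; the variable names are my own, not CEA's):

```python
# Offer rates from the comment above (illustrative reproduction, not CEA's raw data).

pm_offer_rate = 1 / 52  # PM role: one offer out of 52 applicants, ~1.9%

avg_offers_per_core_role = 1        # one role made two offers, another made zero
avg_applicants_per_core_role = 53.7
core_offer_rate = avg_offers_per_core_role / avg_applicants_per_core_role  # ~1.9%

print(f"PM offer rate:         {pm_offer_rate:.1%}")    # 1.9%
print(f"Core roles offer rate: {core_offer_rate:.1%}")  # 1.9%
```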
EA is more than longtermism

Thanks for sharing the job counts, that's interesting data. But I also think it's important to note how those jobs are framed on the job board. The AI and pandemic jobs are listed as “top recommended problems”, while the global health jobs are listed as “other pressing problems” (along with jobs related to factory farming). 

5 · ryancbriggs · 3mo
I completely agree.
EA is more than longtermism

IMO the share of grants going to community infrastructure isn’t particularly relevant to the relative shares received by longterm and nearterm projects. But I’ll edit my post to note that the stat I cite is only from the first round of EA Grants since that’s the only round for which data was ever published.

 

one of the five concrete examples listed seems to be a relatively big global poverty grant. 

Could you please clarify what you mean by this? I linked to an analysis listing written descriptions of 6 EA Grants made after the initial round, for w... (read more)

9 · Habryka · 3mo
Yeah, the Charity Entrepreneurship grant is what I was talking about. But yeah, classifying that one as meta isn't crazy to me, though I think I would classify it more as Global Poverty (since I don't think it involved any general EA community infrastructure).
EA is more than longtermism

"Why I am probably not a longtermist" seems like the best of these options, by a very wide margin. The other two posts are much too technical/jargony for introductory audiences.

Also, A longtermist critique of “The expected value of extinction risk reduction is positive” isn’t even a critique of longtermism, it’s a longtermist arguing against one longtermist cause (x-risk reduction) in favor of other longtermist causes (such as s-risk reduction and trajectory change). So it doesn’t seem like a good fit for even a more advanced curriculum unless it was accompa... (read more)

The longtermist critique is a critique of arguments for a particular (perhaps the main) priority in the longtermism community, extinction risk reduction. I don't think it's necessary to endorse longtermism to be sympathetic to the critique. That extinction risk reduction might not be robustly positive is a separate point from the claim that s-risk reduction and trajectory changes are more promising.

Someone could think extinction risk reduction, s-risk reduction and trajectory changes are all not robustly positive, or that no intervention aimed at any of t... (read more)

EA is more than longtermism

EA Grants changed a lot after the first round and was closed down around 2019 

Did subsequent rounds of EA Grants give non-trivial amounts to animal welfare and/or global poverty? What percentage of funding did these cause areas receive, and how much went to longtermist causes? Only the first round of grants was made public.

EA is more than longtermism

I do want to note that all of the things you list took place around 2017-2018, and our work and plans have changed since then. 

 

My observations about 80k, GPI, and CFAR are all ongoing (though they originated earlier). I also think there are plenty of post-2018 examples related to CEA’s work, such as the Introductory Fellowship content Michael noted (not to mention the unexplained downvoting he got for doing so), Domassoglia’s observations about the most recent EAG and EAGx (James hits on similar themes), and the late 2019 event that was fra... (read more)

I also strongly share this worry about selection effects. There are additional challenges to those mentioned already: the more EA looks like an answer, rather than a question, the more inclined anyone who doesn't share that answer is simply to 'exit', rather than 'voice', leading to an increasing skew over time of what putative experts believe. A related issue is that, if you want to work on animal welfare or global development you can do that without participating in EA, which is much harder if you want to work on longtermism.

Further, it's a sort of doubl... (read more)

EA is more than longtermism

Thank you! I really appreciate this comment, and I’m glad you find my writing helpful.

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

I’m glad CEA is sharing this data, but I wish the post focused more on the important question posed in the title: Is it still hard to get a job in EA? I think the data suggests quite strongly that the answer is “yes”, but the authors don’t seem to take an explicit stance on the issue and the Summary of the post arguably suggests the opposite conclusion. 

Here’s why I think the data implies EA jobs are still hard to get:

  • Only a (very?) small percentage of applications led to a job. The report describes a 2.4% hiring rate for Core jobs (itself a low figur
... (read more)
4 · Akara · 3mo
Hey, thanks for your comment.

* There are a few different ways to look at the probability of being hired. As you suggest, one would be to take the total number of hires and divide it by the total number of applicants, across all recruitment. We chose not to do this here because the EOIs are substantially different from the Core roles (in having a higher bar for progression, etc.), which would make an overall figure less useful. (The CEA website does emphasise the difference between main roles and EOIs, so it is something prospective applicants are made aware of when applying.)
* When we "weight by the numbers of applicants in each stage", this just means that we're taking the average across applicants, and not across roles. (Worked example: two Roles A and B each hired one person. Role A has 100 people in stage 1, with probability of success 1/100 = 1%; Role B has 10 people in stage 2, with probability of success 1/10 = 10%. The probability of success when weighting across applicants is (1%*100 + 10%*10)/110 = 2/110 = 1.8%, but when averaging across roles it is (1% + 10%)/2 = 5.5%.)
* Regarding the industry comparison, as you mention there are ways in which CEA might be more selective than industry and other ways in which CEA might be less selective. As Ben mentions in an earlier comment [https://forum.effectivealtruism.org/posts/vGRFZMriNYdLtD2AJ/is-it-still-hard-to-get-a-job-in-ea-insights-from-cea-s?commentId=yAdE7Zu24GZYbdGdx], we probably don't have solid enough evidence to call it in one direction or another.
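To make the difference between the two averaging methods concrete, here is a minimal sketch in Python using the hypothetical Roles A and B from the worked example above (illustrative numbers only, not CEA's actual data):

```python
# Hypothetical data from the worked example: each role hired one person.
roles = [
    {"name": "A", "applicants": 100, "hires": 1},  # success rate 1/100 = 1%
    {"name": "B", "applicants": 10, "hires": 1},   # success rate 1/10 = 10%
]

# Averaging across applicants: total hires divided by total applicants.
total_hires = sum(r["hires"] for r in roles)
total_applicants = sum(r["applicants"] for r in roles)
per_applicant = total_hires / total_applicants  # 2/110 ≈ 1.8%

# Averaging across roles: mean of each role's own success rate.
per_role = sum(r["hires"] / r["applicants"] for r in roles) / len(roles)  # 5.5%

print(f"Across applicants: {per_applicant:.1%}")  # 1.8%
print(f"Across roles:      {per_role:.1%}")       # 5.5%
```

The gap between the two figures is exactly why the choice of weighting matters when summarizing selectivity across roles with very different applicant pools.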
EA is more than longtermism

what exactly is contributing to the view that EA essentially is longtermism/AI Safety?

 

For me, it’s been stuff like:

  • People (generally those who prioritize AI) describing global poverty as “rounding error”.
  • From late 2017 to early 2021, effectivealtruism.org (the de facto landing page for EA) had at least 3 articles on longtermist/AI causes (all listed above the single animal welfare article), but none on global poverty.
  • The EA Grants program granted ~16x as much money to longtermist projects as to global poverty and animal welfare projects combined.
... (read more)
8 · frances_lorenz · 3mo
Thank you, I really appreciate the breadth of this list, it gives me a much stronger picture of the various ways a longtermist worldview is being promoted.

Some things from EA Global London 2022 that stood out for me (I think someone else might have mentioned one of them):

  • An email to everyone promoting Will's new book (on longtermism)
  • Giving out free bookmarks about Will's book when picking up your pass.

These things might feel small, but considering this is one of the main EA conferences, having the actual conference organisers associate so strongly with the promotion of a longtermist book (albeit, yes, one written by one of the main founders of EA) made me think "Wow, CEA is really trying to push longtermism to attendees... (read more)

Thank you for sharing these thoughts. 

I can see how the work of several EA projects, especially CEA, contributed to this. I think that some of these were mistakes (and we think some of them were significant enough to list on our website). I personally caused several of the mistakes that you list, and I'm sorry for that.

Often my take on these cases is more like "it's bad that we called this thing "EA"", rather than "it's bad that we did this thing". E.g. I think that the first round of EA Grants made some good grants (e.g. to LessWrong 2.0), but that i... (read more)

Yeah, this is an excellent list. To me, the OP seems to miss the obvious point, which is that if you look at what the central EA individuals, organisations, and materials are promoting, you very quickly get the impression that, to misquote Henry Ford, "you can have any view you want, so long as it's longtermism". One's mileage may vary, of course, as to whether one thinks this is a good result.

To add to the list, the 8-week EA Introductory Fellowship  curriculum, the main entry point for students, i.e. the EAs of the future,  has 5 sections o... (read more)

1 · Habryka · 3mo
This seems wrong to me. The LTFF and the EAIF don't get 16x the money that the Animal Welfare and Global Health and Development funds get. Maybe you meant to say that the EAIF has granted 16x more money to longtermist projects?

This account has some of the densest and most informative writing on the Forum; here's another comment

(The comment describes CEA in a previous era. It seems the current CEA has different leadership and should be empowered and supported).

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Thanks for clarifying Jonas. Glad to hear the funds have been making regular grants (which to me is much more important than whether they follow a specific schedule). But FYI the fund pages still refer to the Feb/Jul/Nov grant schedule, so probably worth updating that when you have a chance.

Re: the balances on the fund web pages, it looks like the “fund payout” numbers only reflect grants that have been reported but not the interim grants since the last report, is that correct? Do the fund balances being displayed also exclude these unreported grants (whic... (read more)

3 · Jonas Vollmer · 8mo
Thanks, fixed! Correct. No, they don't. I can see that this is confusing. We will likely de-emphasize the fund balance (probably move it to the Stats page) because it's hard to clarify what it means and people frequently read too much into it. Thanks for the feedback!
You can now apply to EA Funds anytime! (LTFF & EAIF only)

Jonas, just to clarify, could you confirm that the non-global health funds have been making grants on the planned Feb/July/November schedule even if some of the reports haven’t been published yet? I ask because the Infrastructure Fund shows a zero balance as of the end of Nov (suggesting a Nov grant round took place) but the Animal Fund and LTFF show non-zero balances that suggest no grants have been made since the last published grant reports (Jul and Apr respectively). 

For example, LTFF shows a balance of ~$2.5m as of the end of Nov, which is the sa... (read more)

3 · Jonas Vollmer · 8mo
No, the EAIF and LTFF now have rolling applications: https://forum.effectivealtruism.org/posts/oz4ZWh6xpgFheJror/you-can-now-apply-to-ea-funds-anytime-ltff-and-eaif-only

There have been dozens of grants made since the last published reports, much more than over the same period last year, both in numbers and dollar amounts. Both LTFF and EAIF have received large amounts of funding recently, some of which has already been processed, and some of which hasn't.
How well did EA-funded biorisk organisations do on Covid?

Great question, and I look forward to following this discussion!

A tangential (but important in my opinion) comment… You write that “EA funders have funded various organisations working on biosecurity and pandemic preparedness”, but I haven’t seen any evidence that EA funders aside from Open Phil have funded biosecurity in any meaningful way. While Open Phil has funded all the organizations you listed, none of them have been funded by the LTFF, Survival and Flourishing Fund, the Centre on Long-Term Risk Fund, or BERI, and nobody in the EA Survey reported gi... (read more)

Launching a new resource: 'Effective Altruism: An Introduction'

I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.

In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one's position is the only "rational" one, ignoring counterevidence that's been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That's not a theory, that's a fact (and as I said originally, perhaps a meaningless one).

But you're right: it was a mistake to mention that fact, and I’m sorry for doing so. 

Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn't find their approach useful, and quickly switched to working autonomously, on starting the  EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I've discouraged people from working there! So what is the theory exactly?

Launching a new resource: 'Effective Altruism: An Introduction'

This is a really insightful comment.

The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn't work on cause prioritization full time. I assume (perhaps incorrectly) that he's read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn't find their arguments as compelling as Ryan does.

Launching a new resource: 'Effective Altruism: An Introduction'

A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short and longtermism; I'd just want to see the literature.

 

You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longt... (read more)

You cited ... prioritization

OK, so essentially you don't own up to strawmanning my views?

You... ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”

This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders' forum. And the leaders' forum is quite representative of highly engaged EAs, who also favour AI & longtermis... (read more)

Launching a new resource: 'Effective Altruism: An Introduction'

I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.

There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.

A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful peop... (read more)

2 · Aaron Gertler · 1y
Most of what you've written about the longtermist shift seems true to me, but I'd like to raise a couple of minor points:

Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the "Learn More" link was more prominent; it got over 10x the number of clicks as every article on that list combined). The "Learn More" link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site's "Resources" page was also much more popular than the homepage reading list, and always linked to the global poverty article. So while that article was mistakenly left out of one of EA.org's three lists of articles, it was by far the least important of those lists, based on traffic numbers.

Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn't be surprised if this change happened, but I don't remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have a very balanced mix of content.

I like this comment! But I think I would actually go a step further:

I don’t dispute the expertise of the people you listed.

I haven't thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!

When I think of the term 'expert' I usually have people in mind who are building on decades of knowledge of a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise a... (read more)

A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short and longtermism; I'd just want to see the literature.

I agree that incentives within EA lean (a bit) longtermist. The incentives don't come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, O... (read more)

Launching a new resource: 'Effective Altruism: An Introduction'

It's frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…

People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping o... (read more)

3 · RyanCarey · 1y
I was just saying that if you have three interventions, whose relative popularity is A<B<C but whose expected impact, per a panel of EA experts, was C<B<A, then you probably want EA orgs to allocate their resources C<B<A. Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:

1. We're presenting introductory material, and the resources are readers' attention
2. B is popular with people who identify with the EA community
3. B is popular with people who are using logical arguments?

I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation - better to either (A) present the arguments (e.g. arguments against Nick Beckstead's thesis), (B) analyse who are the most accomplished experts in this field, and/or (C) consider how thoughtful people have changed their mind.

The EA leaders forum is very long-termist. The most accomplished experts are even more so: Christiano, Macaskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people's views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics - as a relative non-expert, I certainly didn't. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around.

If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.
AMA: JP Addison and Sam Deere on software engineering at CEA

I certainly wouldn't subject our random Googlers to eight weeks' worth of material! To clarify, by "this content" I mean "some of this content, probably a similar amount to the amount of content we now feature on EA.org", rather than "all ~80 articles".

 

Ah, thanks for clarifying :) The devil is always in the details, but "brief and approachable content" following the same rough structure as the fellowship sounds very promising. I look forward to seeing the new site!

AMA: JP Addison and Sam Deere on software engineering at CEA

Thank you for making these changes Aaron, and for your openness to this discussion and feedback!

You’re correct, I was referring to the reading list on the homepage. The changes you made there, to the key ideas series, and to the resources page (especially when you complete the planned reordering) all seem like substantial improvements. I really appreciate that you've updated the site!

I took a quick look at the Fellowship content, and it generally looks like you’ve chosen good content and done a reasonable job of providing a balanced overview of EA (thanks ... (read more)

2 · Aaron Gertler · 1y
Credit goes to James Aung, Will Payne, and others (I don't know the full list) who created the curriculum! I was one of many people asked to provide feedback, but I'm responsible for maybe 2% of the final content, if that.

I think this is a very reasonable quibble. In the context of "this person already signed up for a fellowship", the additional credibility may be less important, but this is definitely a consideration that could apply to "random people finding the content online".

I wholly agree, and I certainly wouldn't subject our random Googlers to eight weeks' worth of material! To clarify, by "this content" I mean "some of this content, probably a similar amount to the amount of content we now feature on EA.org", rather than "all ~80 articles". The current introduction to EA [https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/], which links people to the newsletter and some other basic resources, will continue to be the first piece of content we show people. Some of the other articles are likely to be replaced by articles or sequences from the Fellowship, but with an emphasis on relatively brief and approachable content.
AMA: JP Addison and Sam Deere on software engineering at CEA

Thanks for this response Max!

1.  I’m torn. On one hand (as I mentioned to Aaron) I appreciate that CEA is making efforts to offer realistic estimates instead of overpromising or telling people what they want to hear. If CEA is going to prioritize the EA Wiki and would rather not outsource management of EA.org, I’m legitimately grateful that you’re just coming out and saying that. I may not agree with these prioritization decisions (I see it as continuing a problematic pattern of taking on new responsibilities before fulfilling existing ones), but at t... (read more)

1 · Aaron Gertler · 1y
Aha! I now believe you were referring to this list. That's a very good thing to have noticed: we did not, in fact, have the Global Health and Development article in that list, only at the "Read More" link (which goes to the Resources page). I've added it. Thank you for pointing this out.

For a bit of context that doesn't excuse the oversight: of ~2500 visitors to EA.org in the last week, more than 1000 clicked through to the "Key Ideas" series (which has always included the article) or the "Resources" page (ditto). Fewer than 100 clicked any of the articles in that list, which is why it didn't come to mind, but I'll be happy to see the occasional click for "Crucial Considerations" go to global dev instead.

Part of my plan for EA.org has been some refactoring on the back end. Looks like this should include "make sure the same reading materials appear in each place, rather than having multiple distinct lists".
3 · Aaron Gertler · 1y
Edit: The screenshots below no longer reflect the exact look of the site, since I went ahead and did some of the reshuffling of the "Key Ideas" series that I mentioned. But the only change to the content of that series was the removal of "Crucial Considerations and Wise Philanthropy", which I'd been meaning to get to for a while. Thanks for the prompt!

*****

Though I'm a bit confused by this comment (see below), I'm really glad you've been keeping up the conversation! At any given time, there are many things I could be working on, and it's quite plausible that I've invested too little time in EA.org relative to other things with less readership. I'm glad to be poked and prodded into rethinking that approach.

Regarding my confusion: which reading list are you referring to? (Edit: see here [https://forum.effectivealtruism.org/posts/ChXZ2SZGaAzRqLM6D/ama-jp-addison-and-sam-deere-on-software-engineering-at-cea?commentId=HKbzHtTyudZxNDTTJ])

The "Key Ideas" list of introductory articles (see the bottom of this page [https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/]) has always included the GHD article (at least since I started working at CEA in late 2018). So has the Resources page [https://www.effectivealtruism.org/resources/].

I think it would be perfectly reasonable to have more than one article on this topic (as we will once the Fellowship content becomes our main set of intro resources). And I do plan to reshuffle the article list a bit this week to move the Global Health and Animal Welfare articles towards the top (I agree they should be more prominent). But I wanted to make sure we didn't have some other part of the site where this article isn't showing up.

As for future variants on our intro content: you can see the EA Fellowship curriculum here [https://docs.google.com/document/d/1lkWP2eMNxJY_N7QekI0sfmB616iWI5HKrEvtk2pLWRM/edit?usp=sharing]. That set of articles is almost identical to what will show up on the Forum soon (...
AMA: JP Addison and Sam Deere on software engineering at CEA

FYI, I'm still seeing an error message, albeit a different one than earlier. Here's what I get now:

Your connection is not private

Attackers might be trying to steal your information from effectivealtruism.org (for example, passwords, messages, or credit cards). Learn more

NET::ERR_CERT_COMMON_NAME_INVALID

That said, I didn't mean to imply the site has historically had abnormal downtime, sorry for not making that clear.

3 · MaxDalton · 1y
This problem should be fixed now too.
AMA: JP Addison and Sam Deere on software engineering at CEA
  1. Change up the introductory material a lot.

I’m glad there are some changes planned to the introductory materials and resources page. As you update this material, what reference class will you be using? Do you want effectivealtruism.org to reflect the views of the EA community? Engaged EAs? CEA? EA “leaders”?

I’m also curious if/how that reference class will be communicated on the site, as I think that’s been a problem in the past. For the past few years (until the modest changes you made recently) the resources page has been virtually identical to the EA Han... (read more)

3 · MaxDalton · 1y
I touched on this in an earlier comment [https://forum.effectivealtruism.org/posts/TpoeJ9A2G5Sipxfit/ea-leaders-forum-survey-on-ea-priorities-data-and-analysis?commentId=7oF9Q3jJH5TdsToNK]. Although we haven't yet commissioned that research, that's still the spirit I want us to have as we create content. We are consulting with non-longtermists as we develop the content.

I agree that it's a shame that the EA.org resources are still quite similar to the handbook content. We're working on a replacement which should be more up to date, but I'm not sure when we'll make the relevant changes. We'd consider offers (contact us [https://www.centreforeffectivealtruism.org/contact/]), but I think we're more likely to aim to develop the capacity to do this in-house rather than finding someone external to take this on (though I don't want to make specific commitments).
2 · JP Addison · 1y
On the last point: our hosting provider Netlify had an outage [https://www.netlifystatus.com/incidents/r2nqshvznvyj] affecting a subset of their customers that happened to include us. We were down for about 2 hours, which is the longest outage I can remember in the last 3 years.
Some quick notes on "effective altruism"

This is great! Can you summarize your findings across these tests?

Responses and Testimonies on EA Growth

For this to be the explanation, presumably intra-EA conflict would not merely need to be driving people away, but driving people away at higher rates than it used to. It's not clear to me why this would be the case.

My mental model is that in the early years, a disproportionately large portion of the EA community consisted of the community's founders and their friends (and friends of friends, etc.). This cohort is likely to be very tolerant of the early members' idiosyncrasies; it's even possible some of those friendships were built around those idiosyncrasie... (read more)

2 · Dale · 1y
That's true, and those friendships probably reduced conflict as well - much harder to take a very negative view of someone you know well socially.
Responses and Testimonies on EA Growth

Another factor that has slowed EA’s growth over the years: people are leaving EA because of bad experiences with other EAs. 

That's not some pet theory of mine; that's simply what EAs reported in the 2019 EA Survey. There were 178 respondents who reported knowing a highly engaged EA who left the community, and by far the most cited factor (37% of respondents) was "bad experiences with other EAs." I think it's safe to say these bad experiences are also likely driving away less engaged EAs who could have become more engaged.

One could argue that this fac... (read more)

For this to be the explanation, presumably intra-EA conflict would not merely need to be driving people away, but driving people away at higher rates than it used to. It's not clear to me why this would be the case.

It's also worth noting that highly engaged EAs are quite close socially. It's possible that many of those 178 people might be thinking of the same people!

Responses and Testimonies on EA Growth

I largely agree with your categorizations, and how you classify the mistakes. But I agree with Max that I’d expect 1 and especially 2 to impact growth directly.

FWIW, I don’t think it was a mistake to make longtermism a greater priority than it had been (#3), but I do think mistakes were made in pushing this way too far (e.g. having AI/longtermist content dominate the EA Handbook 2.0 at the expense of other cause areas) and I’m concerned this is still going on (see for example the recent announcement that the EA Infrastructure Fund’s new managers are all lo... (read more)

To be fair, people pivoted hard toward longtermism because they're convinced that it's a much higher priority, which seems correct to me.

Responses and Testimonies on EA Growth

Thanks AGB!

But it's true that in neither case would I expect the typical reader to come away with the impression that a mistake was made, which I think is your main point and a good one. This is tricky because I think there's significant disagreement about whether this was a mistake or a correct strategic call, and in some cases I think what is going on is that the writer thinks the call was correct (in spite of CEA now thinking otherwise), rather than simply refusing to acknowledge past errors.

I do think it was a mistake to deprioritize GWWC, though I agr... (read more)

Responses and Testimonies on EA Growth

Thanks Max! It’s incredibly valuable for leaders like yourself to acknowledge the importance of identifying and learning from mistakes that have been made over the years.

Responses and Testimonies on EA Growth

Thanks for raising this question about EA's growth, though I fully agree it would have been better to frame that question more like: “Given that we're pouring a substantial amount of money into EA community growth, why doesn't it show up in some of these metrics?" To that end, while I may refer to “growing” or “not growing” below for brevity I mean those terms relative to expectations rather than in an absolute sense. With that caveat out of the way… 

There’s a very telling commonality about almost all the possible explanations that have been offered s... (read more)

Just wanted to say that I really liked this comment, thanks for writing it.

I agree that it's worth asking for an explanation why growth has - if anything - slowed, while funds have vastly increased. One interesting exercise is to categorise the controversies. Some major categories:

  1. Leverage-people violating social norms (which was a mistake)
  2. CEA under-delivering operationally (mistake)
  3. Re-prioritising toward longtermism (not a mistake imo)
  4. Re-prioritising away from community growth (unclear whether a mistake)

The mistakes:

  • GWWC deprioritised (3,4)
  • EA Ventures (1,2)
  • EA Global 2015 PR (1,3)
  • Pareto Fellowship cultishness (1)
  • EA Funds depriori
... (read more)
4 · AGB · 1y
I agree with a lot of this, and I appreciated both the message and the effort put into this comment. Well-substantiated criticism is very valuable.

I do want to note that GWWC being scaled back was flagged elsewhere, most explicitly in Ben Todd's comment [https://forum.effectivealtruism.org/posts/dRkGXHxKGWwWY6AqP/why-hasn-t-effective-altruism-grown-since-2015-1?commentId=vXxhLjA5ZaEuAepB9] (currently 2nd highest upvoted on that thread). But for example, Scott's linked reddit comment also alludes to this [https://www.reddit.com/r/slatestarcodex/comments/m0vob7/why_hasnt_effective_altruism_grown_since_2015/gqavs32/], via talking about the decreased interest in seeking financial contributions.

But it's true that in neither case would I expect the typical reader to come away with the impression that a mistake was made, which I think is your main point and a good one. This is tricky because I think there's significant disagreement about whether this was a mistake or a correct strategic call, and in some cases I think what is going on is that the writer thinks the call was correct (in spite of CEA now thinking otherwise), rather than simply refusing to acknowledge past errors.

I agree that this is a (significant) part of the explanation. For instance, I think there are a variety of things I could have done last year that would have helped our groups support improve more quickly.

Plug: if you have feedback about mistakes CEA is making or has made, I'd love to hear it. You can share thoughts (including anonymously) here.

AMA: Holden Karnofsky @ EA Global: Reconnect

In addition to funding AI work, Open Phil’s longtermist grantmaking includes sizeable grants toward areas like biosecurity and climate change/engineering, while other major longtermist funders (such as the Long Term Future Fund, BERI, and the Survival and Flourishing Fund) have overwhelmingly supported AI with their grantmaking. As an example, I estimate that “for every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI… but for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI.” 

Do you agree this distinction exists, an... (read more)

Our plans for hosting an EA wiki on the Forum

I think CEA would have to tread carefully to support this work without violating Wikipedia's rules about paid editing. I may think about this more in future months (right now, I'm juggling a lot of projects). If you have suggestions for what CEA could do in this area, I'd be happy to hear them.

The paid editing restrictions are a bigger issue than I’d originally realized. But I do think it would be helpful for an experienced Wikipedia editor like Pablo to write up some brief advice on how volunteers can add EA content to Wikipedia while adhering to all thei... (read more)

But I do think it would be helpful for an experienced Wikipedia editor like Pablo to write up some brief advice on how volunteers can add EA content to Wikipedia while adhering to all their rules.

Darius Meissner and I are in the process of writing exactly such a document.

Our plans for hosting an EA wiki on the Forum

Thank you Aaron for this detailed engagement! 

Sounds like we’re agreed that Wikipedia editing would be beneficial, and that working on Wikipedia vs. a dedicated wiki isn’t necessarily in direct conflict.

I mostly set my own priorities at CEA; even if I came to believe that doing a lot of dedicated wiki work wasn't a good use of my time, and we decided to stop paying for work from Pablo or others like him, I can't imagine not wanting to spend some of my time coordinating other people to do this work…

The reason I haven't spent much time thinkin

... (read more)
2 · Aaron Gertler · 1y
These are all reasonable concerns, and I agree that there are cases where CEA hasn't done this well in past years. As soon as the wiki is up and running, and we have a sense for what "maintenance" looks like for Pablo and me (plus the level of volunteer activity we end up with after the festival), I think we'll be in a much better place to make contingency plans, and I picture us doing much of the research/planning you called for in April. (I work in a series of monthly sprints; this month's sprint is launching the wiki, and future months will involve more thinking on sustainability.)
Our plans for hosting an EA wiki on the Forum

Thanks Pablo… I appreciate your thoughtful engagement with my comments, and all the hard work you’ve put into this project.

How sensitive are your worries to scenarios in which the main paid content-writer fails to stay motivated, relative to scenarios in which the project fails because of insufficient volunteer effort? I'm inclined to believe that as long as there is someone whose full-time job is to write content for the Wiki (whether it's me or someone else), in combination with all the additional work that Aaron and the technical team are devoting to it

... (read more)