AnonymousEAForumAccount

Comments

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Thanks for clarifying, Jonas. Glad to hear the funds have been making regular grants (which to me is much more important than whether they follow a specific schedule). But FYI, the fund pages still refer to the Feb/Jul/Nov grant schedule, so it's probably worth updating that when you have a chance.

Re: the balances on the fund web pages, it looks like the “fund payout” numbers only reflect grants that have been reported, not the interim grants made since the last report; is that correct? Do the displayed fund balances also exclude these unreported grants (which would lead to higher cash balances being displayed than the funds currently have available)? Just trying to make sure I understand what the numbers on the funds’ pages are meant to represent.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Jonas, just to clarify: could you confirm that the non-Global Health funds have been making grants on the planned Feb/July/November schedule even if some of the grant reports haven’t been published yet? I ask because the Infrastructure Fund shows a zero balance as of the end of November (suggesting a November grant round took place), but the Animal Welfare Fund and the LTFF show non-zero balances that suggest no grants have been made since their last published grant reports (July and April, respectively).

For example, the LTFF shows a balance of ~$2.5m as of the end of November, which matches the difference between the cumulative $3.6m the fund had raised in 2021 through the end of November and the cumulative $1.1m it had raised through the end of April (the date of the last grant report). In other words, the balance looks consistent with everything raised through April having been granted in the April round and nothing granted since. If the LTFF had held a July (or November) grant round, I’d expect a lower current balance.

How well did EA-funded biorisk organisations do on Covid?

Great question, and I look forward to following this discussion!

A tangential (but, in my opinion, important) comment… You write that “EA funders have funded various organisations working on biosecurity and pandemic preparedness”, but I haven’t seen any evidence that EA funders other than Open Phil have funded biosecurity in any meaningful way. While Open Phil has funded all the organizations you listed, none of them have been funded by the LTFF, the Survival and Flourishing Fund, the Centre on Long-Term Risk Fund, or BERI, and nobody in the EA Survey reported giving to any of those organizations.

The LTFF has admittedly made some small biosecurity grants (though, for reference, it has granted ~19x more to AI), and FHI (which has relatively broad support from EA and/or longtermist funders) does some biosecurity work. But broadly speaking, I think it’s a (widely held) misconception that EA donors besides Open Phil were materially prioritizing biosecurity grantmaking prior to the pandemic.

Launching a new resource: 'Effective Altruism: An Introduction'

I’m glad you’ve been discouraging people from working at Leverage, and that you haven’t been involved with them for a long time.

In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).

But you're right: it was a mistake to mention that fact, and I’m sorry for doing so. 

Launching a new resource: 'Effective Altruism: An Introduction'

This is a really insightful comment.

The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.

Launching a new resource: 'Effective Altruism: An Introduction'

> A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should [get] some weight, to the extent that they think about cause prio and issues with short and longtermism, I'd just want to see the literature.

 

You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, Macaskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization. 

I’m pretty sure that in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future; he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:

> [EAs] like working on AI. Working on AI is fun. If they think what they’re doing is reducing the risk of AI, I haven’t seen that proof of that. They have a model. Some people want to go to Mars. Some people want to live forever. Philanthropy has got a lot of heterogeneity in it. If people bring their intelligence, some passion, overall, it tends to work out. There’s some dead ends, but every once in a while, we get the Green Revolution or new vaccines or models for how education can be done better. It’s not something where the philanthropists all homogenize what they’re doing.

Sounds to me like he's thought about this stuff.

> I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they're experts in content selection, then great! But I think authenticity is a strong default.

You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history, or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty”). I don’t see how that’s “authentic.”

> In my ideal universe, the podcast would be called an "Introduction to prioritization", but also, online conversation would happen on a "priorities forum", and so on.

I agree that name would be preferable. But you didn’t propose it in this thread; you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. three things with “EA” in the name) should have a narrow longtermist focus. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of co-opting and changing the existing EA brand?

Launching a new resource: 'Effective Altruism: An Introduction'

I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.

There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.

A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2/3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)

I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”

I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors? 

 

B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.

Also: if you’re introducing people to “X” with a 10-part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?

Launching a new resource: 'Effective Altruism: An Introduction'

It's frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…

People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.). And the people making these arguments are not connected to the EA community (they’d probably find it off-putting).

In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN; we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard the longtermist arguments for why their assumptions are right. Many of us find those arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived long-term benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.

Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point, go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.
