AnonymousEAForumAccount

Comments

Launching a new resource: 'Effective Altruism: An Introduction'

I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.

In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and, as I said originally, perhaps a meaningless one).

But you're right: it was a mistake to mention that fact, and I’m sorry for doing so. 

Launching a new resource: 'Effective Altruism: An Introduction'

This is a really insightful comment.

The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.

Launching a new resource: 'Effective Altruism: An Introduction'

A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short- and longtermism; I'd just want to see the literature.

 

You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, Macaskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization. 

I’m pretty sure that in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future; he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:

[EAs] like working on AI. Working on AI is fun. If they think what they’re doing is reducing the risk of AI, I haven’t seen that proof of that. They have a model. Some people want to go to Mars. Some people want to live forever. Philanthropy has got a lot of heterogeneity in it. If people bring their intelligence, some passion, overall, it tends to work out. There’s some dead ends, but every once in a while, we get the Green Revolution or new vaccines or models for how education can be done better. It’s not something where the philanthropists all homogenize what they’re doing.

Sounds to me like he's thought about this stuff.

I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they're experts in content selection, then great! But I think authenticity is a strong default.

You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”

In my ideal universe, the podcast would be called an "Introduction to prioritization", but also, online conversation would happen on a "priorities forum", and so on. 

I agree that naming would be preferable. But you didn’t propose that name in this thread; you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. three things with “EA” in the name) should have a narrow longtermist focus. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of co-opting and changing the existing EA brand?

Launching a new resource: 'Effective Altruism: An Introduction'

I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.

There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.

A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like two-thirds of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)

I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”

I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was won over by “the force of the arguments” like you were. At the same time that people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because people found the arguments compelling, because the incentive structure shifted, or (most likely, in my opinion) because of some combination of these factors?

 

B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.

Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?

Launching a new resource: 'Effective Altruism: An Introduction'

It's frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…

People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.). And the people making these arguments are not connected to the EA community (they'd probably find it off-putting).

In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything. 
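(To make the shared-framework point concrete, here’s a minimal sketch, with entirely made-up scores, of how the same ITN calculation can rank causes differently depending on the assumptions you plug in. The causes and numbers below are purely illustrative.)

```python
# Hypothetical ITN inputs (importance, tractability, neglectedness), each on a 0-10 scale.
# Two people apply the identical framework but disagree about the inputs.
longtermist_inputs = {
    "AI safety":     (10, 4, 9),
    "Global health": (6, 8, 3),
}
neartermist_inputs = {
    "AI safety":     (7, 2, 9),
    "Global health": (8, 9, 4),
}

def rank(inputs):
    # Score each cause as the product of its three factors, highest first.
    scores = {cause: i * t * n for cause, (i, t, n) in inputs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print("Longtermist assumptions:", rank(longtermist_inputs))
print("Neartermist assumptions:", rank(neartermist_inputs))
```

Same formula, different assumptions, different top cause.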

Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.

AMA: JP Addison and Sam Deere on software engineering at CEA

I certainly wouldn't subject our random Googlers to eight weeks' worth of material! To clarify, by "this content" I mean "some of this content, probably a similar amount to the amount of content we now feature on EA.org", rather than "all ~80 articles".

 

Ah, thanks for clarifying :) The devil is always in the details, but "brief and approachable content" following the same rough structure as the fellowship sounds very promising. I look forward to seeing the new site!

AMA: JP Addison and Sam Deere on software engineering at CEA

Thank you for making these changes Aaron, and for your openness to this discussion and feedback!

You’re correct, I was referring to the reading list on the homepage. The changes you made there, to the key ideas series, and to the resources page (especially when you complete the planned reordering) all seem like substantial improvements. I really appreciate that you've updated the site!

I took a quick look at the Fellowship content, and it generally looks like you’ve chosen good content and done a reasonable job of providing a balanced overview of EA (thanks for getting input from the perspectives you mentioned). Ironically, my main quibble with the content (and it’s not a huge one) is that it’s too EA-centric. For example, if I were trying to convince someone that pandemics are important, I’d show them Bill Gates’ TED Talk on pandemics rather than an EA podcast, as the former approach leverages Gates’ and TED’s credibility.

While I generally think the Fellowship content appears good (at least after a brief review), I still think it’d be a very big mistake to “adapt EA.org to refer to this content as our default introduction.” The Fellowship is for people who opt into participating in an 8-week program with an estimated 2-3 hours of preparation for each weekly session. EA.org is for people who google “effective altruism”. There’s an enormous difference between those two audiences, and the content they see should reflect that difference.

As an example, the first piece of core content in the Fellowship is a 30-minute intro to EA video, whereas I’d imagine EA.org should communicate key ideas in just a few minutes and then quickly move people toward, e.g., signing up for the EA Newsletter. That said, we shouldn’t have to guess what content works best on the EA.org homepage; we should be able to figure it out experimentally through A/B testing.
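(For concreteness, here’s a minimal sketch of the kind of A/B comparison I have in mind, e.g. measuring newsletter signups under two homepage variants. All counts below are hypothetical.)

```python
import math

# Hypothetical traffic split between two homepage variants.
visitors_a, signups_a = 5000, 150   # variant A: current reading list
visitors_b, signups_b = 5000, 190   # variant B: reading list with Global Health added

p_a = signups_a / visitors_a
p_b = signups_b / visitors_b

# Two-proportion z-test for the difference in signup rates.
p_pool = (signups_a + signups_b) / (visitors_a + visitors_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

print(f"Variant A signup rate: {p_a:.2%}")
print(f"Variant B signup rate: {p_b:.2%}")
print(f"z = {z:.2f}  (|z| > 1.96 is roughly significant at the 5% level)")
```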

AMA: JP Addison and Sam Deere on software engineering at CEA

Thanks for this response Max!

1.  I’m torn. On one hand (as I mentioned to Aaron) I appreciate that CEA is making efforts to offer realistic estimates instead of overpromising or telling people what they want to hear. If CEA is going to prioritize the EA Wiki and would rather not outsource management of EA.org, I’m legitimately grateful that you’re just coming out and saying that. I may not agree with these prioritization decisions (I see it as continuing a problematic pattern of taking on new responsibilities before fulfilling existing ones), but at the end of the day those decisions are yours to make and not mine. 

On the other hand, I feel like substantial improvements could be made with negligible effort. For instance, I think you’d make enormous progress if you simply added the introductory article on Global Health and Development to the reading list on the EA.org homepage, replacing “Crucial Considerations and Wise Philanthropy”. 

Global Health is currently a glaring omission, since it is the most popular cause in the EA community and is highly accessible to an introductory audience. And I think nearly everyone (near- or longtermist) would agree that “Crucial Considerations” (currently second on the reading list, after a brief introduction to EA) is quite obviously not meant for an introductory audience. It assumes a working understanding of x-risk (in general and of specific x-risks), has numerous slides with complex equations, and uses highly technical language that will be inscrutable to most people who have only read a brief intro to EA (e.g. “we should oppose extra funding for nanotechnology even though superintelligence and ubiquitous surveillance might be very dangerous on their own, including posing existential risk, given certain background assumptions about the technological completion conjecture.”).

You’ve written (in the same comment you quoted): “I think that CEA has a history of pushing longtermism in somewhat underhand ways… given this background of pushing longtermism, I think it’s reasonable to be skeptical of CEA’s approach on this sort of thing.” You don’t need to hire a contractor or prioritize an overhaul of the ea.org site to address my skepticism. But it would go a long way if Aaron were to spend a day looking for low hanging fruit like my suggested change, or even if you just took the tiny step of adding Global Health to the list of (mostly longtermist) causes on the homepage. I assume the omission of Global Health was an oversight. But now that it’s been called to your attention, if you still don’t think Global Health should be added to the homepage I doubt there’s anything you can say or do to resolve my skepticism. 

 

2.  Running EffectiveAltruism.org is just one example of work that CEA undertakes on behalf of the broader community (EAG, groups work, and community health are other examples). Generally speaking, how (if at all) do you think CEA should be accountable to the broader community when conducting this work? To use an absurd example, if CEA announced that the theme for EAG 2022 is going to be “Factory farmed beef… it’s what’s for dinner”, what would you see as the ideal process for resolving the inevitable objections?

Now may not be the right time for you to explain how you think about this, and this comment thread almost certainly isn’t the right place. But I think it’s important for you to address these issues at some point in the not too distant future. And before you make up your mind, I hope you’ll gather input from as broad a cross section of the community as possible.

AMA: JP Addison and Sam Deere on software engineering at CEA

FYI, I'm still seeing an error message, albeit a different one than earlier. Here's what I get now:

Your connection is not private

Attackers might be trying to steal your information from effectivealtruism.org (for example, passwords, messages, or credit cards). Learn more

NET::ERR_CERT_COMMON_NAME_INVALID

That said, I didn't mean to imply the site has historically had abnormal downtime, sorry for not making that clear.
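(In case it helps with debugging, here’s a minimal sketch of how to inspect what the certificate actually covers; effectivealtruism.org is just the domain from the error message above, so swap in whichever host is actually failing.)

```python
import socket
import ssl

HOST = "effectivealtruism.org"  # the domain from the error above; adjust as needed
PORT = 443

ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            # Handshake succeeded: show which names the certificate covers.
            cert = tls.getpeercert()
            print("subject:", cert.get("subject"))
            print("subjectAltName:", cert.get("subjectAltName"))
except ssl.SSLCertVerificationError as err:
    # A common-name / SAN mismatch (NET::ERR_CERT_COMMON_NAME_INVALID) surfaces here.
    print("Certificate verification failed:", err)
```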
