
elifland

2538 karma · Joined Aug 2018
www.elilifland.com/

Bio

You can give me anonymous feedback here.

Comments: 135

Topic Contributions: 6

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I mostly agree with this; I wasn't suggesting he include that specific type of language, just that the arguments in the post don't go through from the perspective of most leader/highly-engaged EAs. Scott has discussed similar topics on ACX here, but I agree the target audience was likely different.

I do think part of his target audience was probably EAs who he thinks are too critical of themselves, as I think he's written before, but that's likely a smallish fraction of his readers.

I do think it would have been clearer if he had included a caveat like "if you think that small changes in the chance of existential risk outweigh ~everything else then this post isn't for you, read something else instead" but oh well.

Agree with that. I also think that if this is the intention, the title should maybe be different: instead of "In continued defense of effective altruism", it could be something like "In defense of effective altruism from X perspective". The current title seems to me to imply that effective altruism has been positive on its own terms.

Furthermore, people who identify as ~longtermists seemed to be sharing it widely on Twitter without the type of caveat you mentioned.

And it seems fine to me to argue from the basis of someone else's premises, even if you don't think those premises are accurate yourself.

I feel like there's a spectrum of cases here. Let's say I, as a member of movement X in which most people aren't libertarians, write a post titled "libertarian case for X", where I argue that X is good from a libertarian perspective.

  1. Even if those in X usually don't agree with the libertarian premises, the arguments in the post still check out from X's perspective. Perhaps the arguments are reframed to show libertarians that X will lead to positive effects by their belief system's lights as well as X's. None of the claims in the post contradict what the most influential people advocating for X think.
  2. The case for X is distorted and statements in the piece are highly optimized for convincing libertarians. Arguments aren't just reframed; new arguments are created that the most influential people advocating for X would disendorse.

I think pieces or informal arguments close to both (1) and (2) are common in the discourse, but I generally feel uncomfortable with ones closer to (2). Scott's piece is somewhere in the middle, perhaps even closer to (1) than (2), but I think it's too far toward (2) for my taste, given that one of the most important claims in the piece, one that makes his whole argument go through, may be disendorsed by the majority of the most influential people in EA.

EDIT: Scott has admitted a mistake, which addresses some of my criticism.

(this comment has overlapping points with titotal's)

I've seen a lot of people strongly praising this article on Twitter and in the comments here but I find some of the arguments weak. Insofar as the goal of the post is to say that EA has done some really good things, I think the post is right. But I don't think it convincingly argues that EA has been net positive for the world.[1]

First: based on surveys, it seems likely that most (not all!) highly-engaged/leader EAs believe GCR/longtermist causes are the most important, with a plurality thinking AI x-risk / x-risk more generally is the most important.[2] For the rest of this comment I will analyze the post from a ~GCR/longtermist-oriented worldview that treats AI as the most important cause area; again, I don't mean to suggest that everyone holds it, but if something like it is held by a plurality of highly-engaged/leader EAs, it seems highly relevant for the post to be convincing from that perspective.

My overall gripe is exemplified by this paragraph (emphasis mine):

And I think the screwups are comparatively minor. Allying with a crypto billionaire who turned out to be a scammer. Being part of a board who fired a CEO, then backpedaled after he threatened to destroy the company. These are bad, but I’m not sure they cancel out the effect of saving one life, let alone 200,000.

(Somebody’s going to accuse me of downplaying the FTX disaster here. I agree FTX was genuinely bad, and I feel awful for the people who lost money. But I think this proves my point: in a year of nonstop commentary about how effective altruism sucked and never accomplished anything and should be judged entirely on the FTX scandal, nobody ever accused those people of downplaying the 200,000 lives saved. The discourse sure does have its priorities.)

I'm concerned about the bolded part; I'm including the caveat for context. I don't want to imply that saving 200,000 lives isn't a really big deal, but I will discuss it from the perspective of "cold hard math".

  1. 200,000 lives equals roughly a ~.0025% reduction in extinction risk, or a ~.25% reduction in the risk of a GCR killing 80M people, if we place literally zero weight on future people (see the quick sketch after this list). To the extent we weight future people, the numbers obviously get much lower.
  2. The magnitude of the effect of the board firing Sam, whose sign is currently unclear IMO, seems arguably larger than a .0025% change in extinction risk, and likely larger than 200,000 lives if you weight the expected value of all future people at >~100x that of current people.
  3. The FTX disaster is a bit more ambiguous because some of the effects are more indirect; a quick search for economic costs didn't turn up good numbers, but I think a potentially more important point is that it is likely, to some extent, an indicator of systemic issues in EA that might be quite hard to fix.
  4. The claim that "I’m not sure they cancel out the effect of saving one life" seems silly to me, even if we just compare generally large "value of a life" estimates to the economic costs of the FTX scandal.
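For concreteness, here's a minimal sketch of the arithmetic behind point 1. The world-population and GCR figures are round illustrative assumptions of mine, not numbers from Scott's post:

```python
# Minimal sketch of the "cold hard math" equivalence in point 1, using
# hypothetical round numbers (current world population ~8 billion; a GCR
# defined as killing 80M people). These are illustrative assumptions.

WORLD_POPULATION = 8e9   # assumed current population
GCR_DEATHS = 80e6        # assumed death toll of the hypothetical GCR
LIVES_SAVED = 200_000    # lives attributed to EA global health work

# Valuing only current people, equate the lives saved with a fractional
# reduction in each catastrophe's expected death toll.
extinction_equivalent = LIVES_SAVED / WORLD_POPULATION  # 2.5e-5
gcr_equivalent = LIVES_SAVED / GCR_DEATHS               # 2.5e-3

print(f"Extinction-risk equivalent: {extinction_equivalent:.4%}")  # ~0.0025%
print(f"80M-death GCR equivalent:   {gcr_equivalent:.2%}")         # ~0.25%
```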

Now I'll discuss the AI section in particular. There is little attempt to compare the effect sizes of the "accomplishments" (with each other, or with potential negatives, beyond a brief allusion to EAs accelerating AGI) or to argue that they are net positive. The effect sizes seem quite hard to rank to me, but I'll focus on a few that seem important but potentially net negative (not claiming that they definitely are!), in order of their listing:

  1. "Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT."
    1. This is, needless to say, controversial in the AI safety community.
  2. Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?
    1. As I said above, the sign of this still seems unclear, and I'm confused why it's included when Scott later seems to count it as a negative.
  3. Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.
    1. Again, controversial in the AI safety community.
  1. ^

    My take is that EA has more likely than not been positive, but I don't think it's that clear, and either way I don't think this post makes a solid argument for it.

  2. ^

    As of 2019, EA Leaders thought that over 2x (54% vs. 24%) more resources should go to long-term causes than short-term ones, with AI getting the most (31% of resources), and the most highly-engaged EAs felt somewhat similarly. I'd guess that the AI figure has increased substantially given rapid progress since 2019/2020 (2020 was the year GPT-3 was released!). We have a 2023 survey of only CEA staff, in which 23/30 believe AI x-risk should be a top priority (though only 13/30 call it the "biggest issue facing humanity right now", vs. 6 for animal welfare and 7 for GHW). CEA staff could be selected for thinking AI is less important than those directly working on it, but would likely think it's more important than those at explicitly non-longtermist orgs.

These were the 3 snippets I was most interested in:

Under pure risk-neutrality, whether an existential risk intervention can reduce more than 1.5 basis points per billion dollars spent determines whether the existential risk intervention is an order of magnitude better than the Against Malaria Foundation (AMF). 

If you use welfare ranges that are close to Rethink Priorities’ estimates, then only the most implausible existential risk intervention is estimated to be an order of magnitude more cost-effective than cage-free campaigns and the hypothetical shrimp welfare intervention that treats ammonia concentrations. All other existential risk interventions are competitive with or an order of magnitude less cost-effective than these high-impact animal interventions. 

Even if you think that Rethink Priorities’ welfare ranges are far too high, many of the plausible existential risk interventions are not an order of magnitude more cost-effective than the hypothetical ammonia-treating shrimp welfare intervention or cage-free campaigns. 
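To make the comparison in these snippets concrete, here is a rough sketch of how one might convert an x-risk intervention quoted in "basis points per billion dollars" into expected current lives saved and compare it against AMF. The population and AMF cost-per-life figures are illustrative assumptions of mine, not inputs from the quoted analysis (which also models future people and animal welfare ranges), so the resulting ratio will not reproduce the 1.5-basis-point threshold exactly:

```python
# Rough sketch: convert "basis points of extinction risk reduced per $1B"
# into expected current lives saved, and compare to AMF. All inputs are
# illustrative assumptions, not figures from the quoted analysis.

WORLD_POPULATION = 8e9        # assumed current population
AMF_COST_PER_LIFE = 5_000     # assumed dollars per life saved (hypothetical)
BUDGET = 1e9                  # one billion dollars

def expected_lives_saved(basis_points_reduced: float) -> float:
    """Expected current lives saved from an absolute reduction in
    extinction probability, where 1 basis point = 0.01%."""
    return (basis_points_reduced / 10_000) * WORLD_POPULATION

xrisk_lives = expected_lives_saved(1.5)   # ~1.2M expected lives per $1B
amf_lives = BUDGET / AMF_COST_PER_LIFE    # ~200k lives per $1B

print(f"x-risk intervention (1.5 bp / $1B): ~{xrisk_lives:,.0f} expected lives")
print(f"AMF at ${AMF_COST_PER_LIFE:,}/life:   ~{amf_lives:,.0f} lives")
print(f"ratio: ~{xrisk_lives / amf_lives:.0f}x")
```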

In an update on Sage introducing quantifiedintuitions.org, we described a pivot we made after a few months:

As stated in the grant summary, our initial plan was to “create a pilot version of a forecasting platform, and a paid forecasting team, to make predictions about questions relevant to high-impact research”. While we built a decent beta forecasting platform (that we plan to open source at some point), the pilot for forecasting on questions relevant to high-impact research didn't go that well, due to (a) difficulties in creating resolvable questions relevant to cruxes in AI governance and (b) time constraints of talented forecasters. Nonetheless, we are still growing Samotsvety's capacity and taking occasional high-impact forecasting gigs.

[...]

Meanwhile, we pivoted to building the apps contained in Quantified Intuitions to improve and maintain epistemics in EA.

Personally, the FTX regrantor system felt like a nice middle ground between EA Funds and donor lotteries in terms of (de)centralization. I'd be excited to donate to something less centralized than EA Funds but more centralized than a donor lottery.

Which part of my comment did you find as underestimating how grievous SBF/Alameda/FTX's actions were? (I'm genuinely unsure)

Nitpick, but I found the sentence:

Based on things I've heard from various people around Nonlinear, Kat and Emerson have a recent track record of conducting Nonlinear in a way inconsistent with EA values [emphasis mine].

a bit strange in the context of the rest of the comment. If your characterization of Nonlinear is accurate, its conduct would seem inconsistent with ~every plausible set of values, not just "EA values".

Appreciate the quick, cooperative response.

I want you to write a better post arguing for the same overall point if you agreed with the title, hopefully with more context than mine.

Not feeling up to it right now and not sure it needs a whole top-level post. My current take is something like (very roughly/quickly written):

  1. New information is currently coming in very rapidly.
  2. We should at least wait until the information comes in a bit slower before thinking seriously in-depth about proposed mitigations so we have a better picture of what went wrong. But "babbling" about possible mitigations seems mostly fine.
  3. An investigation similar to the one proposed here should be started fairly quickly, with the goal of producing an initial version of a report within ~2 months so we can start thinking pretty seriously about what mitigations/changes are needed, even if a finalized report would take longer.

My main thought is that I don't know why he committed fraud. Was it actually to maximize utility, or because he was just seeking status, or because he got too prideful, or what?

I think either way most of the articles you point to do more good than harm. Being more silent on the matter  would be worse.

I'd agree with this if I thought EA right now had a cool head. Maybe I should have said we should wait until EA has a cooler head before launching investigations.

I'd hope that the investigation would be conducted mostly by an independent, reputable entity even if commissioned by EA organizations. Also, "EA" isn't a fully homogeneous entity and I'd hope that the people commissioning the investigation might be more cool-headed than the average Forum poster.
