
Tamay

Associate Director @ Epoch
437 karma · Joined Feb 2020

Comments (15)

I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:

  • advocacy around donating a fraction of one's income to effective charities generally focuses on providing accounts of key facts and statistics, and often acknowledges the demandingness of giving and its potential personal and social downsides
  • wild animal suffering advocacy usually acknowledges the second-order effects of interventions on ecosystems, highlights the uncertainty around the extent of suffering, and often calls for more research rather than immediate intervention


By contrast, EA veganism advocacy has done a much poorer job of remaining truth-seeking, as Elizabeth has pointed out.

>I'm mostly just noting that "altruistic people don't commit crimes" doesn't seem like a likely hypothesis.

I think your data is evidence in favor of a far more interesting conclusion.

I previously estimated that 1-2% of YCombinator-backed companies with valuations over $100M had serious allegations of fraud.

While not all Giving Pledge signatories are entrepreneurs, a large fraction are, which makes this a reasonable reference class. (An even better reference class would be “non-signatory billionaires”, of course.)

My guess is that YCombinator-backed founders tend to be young, with shorter careers than pledgees, and partly because of this they will likely have had fewer run-ins with the law. I think a better reference class would be something like founders of Fortune 500/Fortune 100 companies.

You asked for examples of advocacy done well with respect to truth-seekingness/providing well-considered takes, and I provided examples.

I think EA should be more critical of its advocacy contingents, and that those involved in such efforts should set a higher bar for offering thoughtful, well-considered takes.

Short slogans and emojis-in-profiles, such as those often used in ‘Pause AI’ advocacy, are IMO inadequate for the level of nuance required by complex topics like these. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.

One issue I have with this is that when someone calls this the 'default', I interpret them as implicitly making some prediction about the likelihood of such countermeasures not being taken. The issue, then, is that this is a very vague way to communicate one's beliefs. How likely does some outcome need to be for it to become the default? 90%? 70%? 50%? Something else?

The second concern is that it's improbable that minimal or no safety measures will be implemented, which makes it odd to treat this as a key baseline scenario. This belief is supported by substantial evidence indicating that safety precautions are likely to be taken. For instance:

  • Most of the major AGI labs are investing substantially in safety (e.g. OpenAI has committed a substantial fraction of its compute budget, and a large fraction of Anthropic's research staff seems dedicated to safety)
  • We have a substantial amount of concrete empirical evidence that safety-enhancing innovations (e.g. RLHF, constitutional AI) are important for unlocking the economic value of AI systems
  • It seems a priori very likely that alignment is important for unlocking the economic value from AI, because this effectively increases the range of tasks that AI systems can do without substantial human oversight, which is necessary for deriving value from automation
  • Major governments are interested in AI safety (e.g. the UK's AI Safety Summit, the White House's securing commitments around AI safety from AGI labs)

Maybe they think that safety measures taken in a world in which we observe this type of evidence will fall far short of what is needed. However, it's somewhat puzzling to be confident enough in this to label it the 'default' scenario at this point.

The term 'default' in discussions about AI risk (like 'doom is the default') strikes me as an unhelpful rhetorical move. It suggests an unlikely scenario where little-to-no measures are taken to address AI safety. Given the active research and the fact that alignment is likely to be crucial to unlocking the economic value from AI, this seems like a very unnatural baseline to frame discussions around.

I've updated the numbers based on today's predictions (relative changes are computed in the short sketch after this list). Key updates:

  • AI-related risks have seen a significant increase, roughly doubling in terms of both catastrophic risk (from 3.06% in June 2022 to 6.16% in September 2023) and extinction risk (from 1.56% to 3.39%).
  • Biotechnology risks have actually decreased in terms of catastrophe likelihood (from 2.21% to 1.52%), while staying constant for extinction risk (0.07% in both periods).
  • Nuclear war has shown an uptick in catastrophic risk (from 1.87% to 2.86%) but remains constant in extinction risk (0.06% in both periods).
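
For what it's worth, here is a minimal Python sketch that computes the relative change in each forecast, using only the figures quoted above; the labels are just my own shorthand for the categories in this list:

```python
# Forecast probabilities quoted above, as percentages (June 2022 -> September 2023).
forecasts = {
    "AI catastrophe":      (3.06, 6.16),
    "AI extinction":       (1.56, 3.39),
    "Bio catastrophe":     (2.21, 1.52),
    "Bio extinction":      (0.07, 0.07),
    "Nuclear catastrophe": (1.87, 2.86),
    "Nuclear extinction":  (0.06, 0.06),
}

for risk, (old, new) in forecasts.items():
    # Ratio > 1 means the forecast increased; ~2.0 means it roughly doubled.
    print(f"{risk}: {old}% -> {new}% ({new / old:.2f}x)")
```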
Answer by Tamay · Sep 14, 2023
  • Owain Evans on AI alignment (situational awareness in LLMs, benchmarking truthfulness)
  • Ben Garfinkel on AI policy (best practices in AI governance, open source, the UK's AI efforts)
  • Anthony Aguirre on AI governance, forecasting, cosmology
  • Beth Barnes on dangerous capability evals (GPT-4 and Claude evals)
Tamay · 1y

I agree that the victim-perpetrator framing is an important lens through which to view this saga. But I also think that an investor-investee framing is another important one, with different prescriptions for what lessons to take away and what to do next. The EA community staked easily a billion dollars' worth of its assets (in focus, time, reputation, etc.) and ended up losing it all. I think it's crucial to reflect on whether the extent of our due diligence and risk management was commensurate with the size of EA's bet.
