Tamay

Associate Director @ Epoch
I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:

  • advocacy around donating a fraction of one's income to effective charities is generally focused on providing accounts of key facts and statistics, and often acknowledges its demandingness and its potential for personal and social downsides
  • wild animal suffering advocacy usually does things like acknowledging the second-order effects of interventions on ecosystems, highlighting the amount of uncertainty around the extent of suffering, and calling for more research rather than immediate intervention


By contrast, EA veganism advocacy has done a much poorer job of remaining truth-seeking, as Elizabeth has pointed out.

He gives explanations for each of his labels, e.g. for the airplane he considers the relevant inventors the Wright brothers who weren't government-affiliated at the time. It's probably best to refer to his research if you want to verify how much to trust the labels.

By that token, AI won't be government controlled either because neural networks were invented by McCulloch/Pitts/Rosenblatt with minimal government involvement. Clearly this is not the right way to think about government control of technologies.

I like the idea, but the data seems sketchy. For example, the notion of "government control" seems poorly applied:

  • you assign 0 USG control to "airplane", but historically government "control" has been very high (Boeing, Lockheed, and Grumman were all operating for the USG during WW2)
  • you assign 0 to the "decoding of the human genome", but the Human Genome Project was initiated, largely funded, and directed by the U.S. government

Some entries are broad categories (e.g., "Nanotechnology"), while others are highly specific (e.g., "Extending the host range of a virus via rational protein design"), which makes the list feel arbitrary. Why is the "Standard model of physics" on the list but not other major theories of physics (e.g. QM or relativity)? Why aren't neural nets on here?

>I'm mostly just noting that "altruistic people don't commit crimes" doesn't seem like a likely hypothesis.

I think your data is evidence in favor of a far more interesting conclusion.

I previously estimated that 1-2% of YCombinator-backed companies with valuations over $100M had serious allegations of fraud.

While not all Giving Pledge signatories are entrepreneurs, a large fraction are, which makes this a reasonable reference class. (An even better reference class would be “non-signatory billionaires”, of course.)

My guess is that YCombinator-backed founders tend to be young, with shorter careers than pledgees, and in part because of this will likely have had fewer run-ins with the law. I think a better reference class would be something like founders of Fortune 500/Fortune 100 companies.

You asked for examples of advocacy done well with respect to truth-seekingness/providing well-considered takes, and I provided examples.

I think EAs should be more critical of their advocacy contingents, and that those involved in such efforts should set a higher bar for offering more thoughtful and considered takes.

Short slogans and emojis-in-profiles, such as those often used in 'Pause AI' advocacy, are IMO inadequate for the level of nuance required by complex topics like these. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.

One issue I have with this is that when someone calls this the 'default', I interpret them as implicitly making some prediction about the likelihood of such countermeasures not being taken. The issue then is that this is a very vague way to communicate one's beliefs. How likely does some outcome need to be for it to become the default? 90%? 70%? 50%? Something else?

The second concern is that it's improbable for minimal or no safety measures to be implemented, making it odd to set this as a key baseline scenario. This belief is supported by substantial evidence indicating that safety precautions are likely to be taken. For instance:

  • Most of the major AGI labs are investing quite substantially in safety (e.g. OpenAI committing some substantial fraction of its compute budget, a large fraction of Anthropic's research staff seems dedicated to safety, etc.)
  • We have received quite a substantial amount of concrete empirical evidence that safety-enhancing innovations are important for unlocking the economic value from AI systems (e.g. RLHF, constitutional AI, etc.)
  • It seems a priori very likely that alignment is important for unlocking the economic value from AI, because this effectively increases the range of tasks that AI systems can do without substantial human oversight, which is necessary for deriving value from automation
  • Major governments are interested in AI safety (e.g. the UK's AI Safety Summit, the White House's securing commitments around AI safety from AGI labs)

Maybe they think that safety measures taken in a world in which we observe this type of evidence will fall far short of what is needed. However, it's somewhat puzzling to be confident enough in this to label it the 'default' scenario at this point.

The term 'default' in discussions about AI risk (like 'doom is the default') strikes me as an unhelpful rhetorical move. It suggests an unlikely scenario where little-to-no measures are taken to address AI safety. Given the active research and the fact that alignment is likely to be crucial to unlocking the economic value from AI, this seems like a very unnatural baseline to frame discussions around.

I've updated the numbers based on today's predictions. Key updates:

  • AI-related risks have seen a significant increase, roughly doubling both in terms of catastrophic risk (from 3.06% in Jun 2022 to 6.16% in September 2023) and extinction risk (from 1.56% to 3.39%).
  • Biotechnology risks have actually decreased in terms of catastrophe likelihood (from 2.21% to 1.52%), while staying constant for extinction risk (0.07% in both periods).
  • Nuclear War has shown an uptick in catastrophic risk (from 1.87% to 2.86%) but remains consistent in extinction risk (0.06% in both periods).
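
As a quick sanity check on the doubling claim above, the ratios can be computed directly from the figures quoted in the bullets (a minimal sketch; the variable names are mine):

```python
# AI risk estimates quoted above (percent), June 2022 vs September 2023.
ai_catastrophic = (3.06, 6.16)
ai_extinction = (1.56, 3.39)

for label, (jun_2022, sep_2023) in [
    ("catastrophic", ai_catastrophic),
    ("extinction", ai_extinction),
]:
    ratio = sep_2023 / jun_2022
    print(f"AI {label} risk: {jun_2022}% -> {sep_2023}% ({ratio:.2f}x)")
```

Both ratios come out at or slightly above 2x, consistent with "roughly doubling".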