Zach Stein-Perlman

Research @ AI Impacts
Working (0–5 years experience)
2059 · Berkeley, CA, USA · Joined Nov 2020



AI forecasting & strategy at AI Impacts. Blog: Not Optional.


I don't buy that CDT vs. FDT matters here. It seems like you'll do better by always trying to do what's best (appropriately taking into account how actors may try to influence you) than by focusing on compensating for harm. And perfect altruists (at least) are able to cooperate without compensating for one another's harms. And it's not like there's potential cooperation with Salinas here: donating to her won't affect her actions. And there are some cases where you should act differently if you thought you were being simulated, but those seem to be the exception for general harm-offsetting decisions.

(I probably can't continue a discussion on this now, sorry, but if there's something explaining this argument in more detail I'd try to read it.)

P.S. Thinking in terms of contractualism, I think rational agents would prefer good-maximizing policies over harm-compensating ones, e.g. from behind a veil of ignorance.

(It's by no means safely Democratic, but it's substantially more Democratic than the median.)

Downvoted because

  • There's little attempt to justify why Salinas is a great donation-target, much less one of the best available.
  • I believe supporting Salinas is quite clearly and robustly less valuable than alternatives.
    • There are better donation-targets for causing Democrats to keep Congress.
      • Salinas's district is substantially more Democratic than the median, so it's quite unlikely to be the tipping point.
    • There are more important, tractable, and neglected causes than causing Democrats to keep Congress.
  • Miscellaneous phrases (including "As one of the few new congressional districts in the nation" and "say that they value" and "Having tipped the balance the wrong way") are unnecessarily misleading, rude, and anti-truth-seeking.
    • E.g. it's almost totally irrelevant that this is a new congressional district, and the author presumably knows that.
    • Relatedly this is a one-sided pitch; I prefer posts like this to seek to inform rather than convince.
  • I believe we should think in terms of marginal effectiveness rather than offsetting particular harms we (individually or as a community) cause (see the author's "you will have contributed in a small way to this failure" argument). If you want to offset harm you have done, or if you feel guilty, there's little reason to do good in that particular domain (in this case, by donating to Salinas) rather than in a more effective manner.

(Good luck to Salinas.)

More minor suggestions:

  • OpenAI non-technical: there are more than 5.
  • AI Impacts non-technical: there are exactly 5.
  • I would have said Epoch is at least 5 FTEs (disagreeing with Mauricio).
  • Better estimating the number of independent technical researchers seems pretty important and tractable.

There are plenty of scenarios that I think make the world go a lot better

What are they?

(I don't think anyone has written a "scenarios that make the world go a lot better" post/doc; it might be useful.)

Sure (with a ton of work), though it would almost entirely consist of pointing to others' evidence and arguments (which I assume Nick would be broadly familiar with but would find less persuasive than I do; so maybe this project also requires imagining all the reasons we might disagree and responding to each of them...).

I think considerations like those presented in Daniel Kokotajlo's Fun with +12 OOMs of Compute suggest that you should have ≥50% credence on AGI by 2043.

Agree with Habryka: I believe there exist decisive reasons to believe in shorter timelines and higher P(doom) than you accept, but I don't know what your cruxes are.

Are timelines-probabilities in this post conditional on no major endogenous slowdowns (due to major policy interventions on AI, major conflict due to AI, pivotal acts, safety-based disinclination, etc.)?

I didn't say that CEA's admissions process was mistaken or bad. (In fact, I don't believe that it's bad!) I'm just sharing what may be relevant context for others' thought and discussion on EA conference admissions.
