Zach Stein-Perlman

Research @ AI Impacts
Working (0-5 years experience)
2181 · Berkeley, CA, USA · Joined Nov 2020

Bio

AI forecasting & strategy at AI Impacts. Blog: Not Optional.

Participation
1

Comments
272

And:

  • For $10,000,000,000,000, you can buy most of the major tech companies and semiconductor manufacturers.

That would really, really help us make AI go well. Until we can do that, more funding is astronomically valuable. (And $10T is more than 100 times what EA has.)
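
As a rough sanity check of the "more than 100 times" claim (assuming EA-aligned resources on the order of \$50B, an assumed figure rather than one stated in the comment):

$$\frac{\$10\,\text{T}}{\$50\,\text{B}} = \frac{10^{13}}{5 \times 10^{10}} = 200 > 100$$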

Peter Singer is originally a character in Scott Alexander's "Unsong," mentioned here (mild spoilers), so it's a pseudonym that a certain ingroup will recognize as a reference.

I weak-downvoted (when this post had positive karma), mostly because I want less content like this on the Forum, and slightly because I do not think Bostrom should step down (and I'm kind of annoyed by the assertion without justification, but I'd also be annoyed by more arguments about the Bostrom email thing).

I would prefer that markups not be justified on the basis of GiveWell donations. Markups increase deadweight loss, and they likely result in a worse allocation of donations than the counterfactual.

In addition to the positive externalities from merch, another important reason not to be for-profit is that being for-profit leads to higher prices. Raising prices increases deadweight loss and transfers money from EAs to the company.
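
For concreteness, here is the standard textbook picture (a general illustration, not taken from the comment): if a markup raises the price by $\Delta p$ and thereby reduces the quantity sold from $q_0$ to $q_1$, then with roughly linear demand

$$\text{transfer to the company} \approx \Delta p \cdot q_1, \qquad \text{deadweight loss} \approx \tfrac{1}{2}\,\Delta p\,(q_0 - q_1).$$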

  1. I think the first paragraph is incorrect as a description of the EA Forum's "Forecasting & Estimation" tag, which is pretty different from LessWrong's "Forecasting & Prediction" tag. The Forum tag contains many subtags, including "AI forecasting"; the Forum doesn't have an analogue of LessWrong's "Forecasts" tag to separate out object-level forecasts from discussion of forecasting.
  2. [More controversially] I'm not sure I like that this text strongly suggests that forecasting is about "making statements about what will happen in the future (and in some cases, the past) and then scoring the predictions," and that forecasting is the kind of thing that you get better at by "Keep[ing] track of [and assessing] your predictions." This excludes important forecasting topics/mindsets that lead you to ask questions like "what will the world be like in 100 years?"
  3. In general I wouldn't worry about adding text to the wiki as long as the resulting page is under 2000 words. I agree-voted even though I'm not thrilled about this text because I think it's better than nothing. (Reversal test: would I want to delete this text if it was already part of the page? No.)

I think this type of misuse is an emerging AI alignment problem.

Misuse can be important or interesting, but the word “alignment” should be reserved for problems like making systems try to do what their operators want, especially making very capable systems not kill everyone.

+1 to sharing lists of questions.

 What signs do I need to look for to tell whether a model's cognition has started to emerge?

I don't know what 'cognition emerging' means. I suspect the concept is vague/confused.

What is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or eschatology?

Why would you want to explain the difference?
