AI forecasting & strategy at AI Impacts. Blog: Not Optional.
Peter Singer is originally a character in Scott Alexander's "Unsong," mentioned here (mild spoilers), so it's a pseudonym chosen as a reference for a certain ingroup.
I weak-downvoted (when this post had positive karma), mostly because I want less content like this on the Forum, and slightly because I do not think Bostrom should step down (and I'm kind of annoyed by the assertion without justification, but I'd also be annoyed by more arguments about the Bostrom email thing).
Here's the canonical introductory curriculum!
See also Study Guide.
I would prefer that markups not be justified on the basis of GiveWell donations. That increases deadweight loss, and it likely results in a worse allocation of donations than the counterfactual.
In addition to positive externalities from merch, another important reason not to be for-profit is that being for-profit leads to higher prices. Raising prices increases deadweight loss and transfers money from EAs to the company.
I think this type of misuse is an emerging AI alignment problem.
Misuse can be important or interesting, but the word “alignment” should be reserved for problems like making systems try to do what their operators want, and especially making very capable systems not kill everyone.
+1 to sharing lists of questions.
What signs do I need to look for to tell whether a model's cognition has started to emerge?
I don't know what 'cognition emerging' means. I suspect the concept is vague/confused.
What is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or eschatology?
Why would you want to explain the difference?
And:
That would really, really help us make AI go well. Until we can do that, more funding is astronomically valuable. (And $10T is more than 100 times what EA has.)