AI forecasting & strategy at AI Impacts. Blog: Not Optional.
I don't buy that CDT vs FDT matters here? It seems like you'll do better to always try to do what's best (and appropriately take into account how actors may try to influence you) than to focus on compensating for harm. And perfect altruists (at least) are able to cooperate without compensating for one another's harms. And it's not like there's potential cooperation with Salinas here-- donating to her won't affect her actions. And there are some cases where you should act differently if you thought you were being simulated, but those seem to be the exception for general harm-offsetting decisions.
(I probably can't continue a discussion on this now, sorry, but if there's something explaining this argument in more detail I'd try to read it.)
P.S. In terms of contractualism, I think rational agents would prefer good-maximizing policies over harm-compensation policies, e.g. from behind a veil of ignorance.
(It's by no means safely Democratic, but it's substantially more Democratic than the median.)
(Good luck to Salinas.)
More minor suggestions:
"There are plenty of scenarios that I think make the world go a lot better"

What are they?
(I don't think anyone has written a "scenarios that make the world go a lot better" post/doc; it might be useful.)
Sure (with a ton of work), though it would almost entirely consist of pointing to others' evidence and arguments (which I assume Nick would be broadly familiar with but would find less persuasive than I do, so maybe this project also requires imagining all the reasons we might disagree and responding to each of them...).
I think considerations like those presented in Daniel Kokotajlo's "Fun with +12 OOMs of Compute" suggest that you should have ≥50% credence in AGI by 2043.
Agree with Habryka: I think there are decisive reasons to expect shorter timelines and higher P(doom) than you accept, but I don't know what your cruxes are.
Are the timelines probabilities in this post conditional on no major endogenous slowdowns (due to major policy interventions on AI, major conflict caused by AI, pivotal acts, safety-based disinclination, etc.)?
I didn't say that CEA's admissions process was mistaken or bad. (In fact, I don't believe that it's bad!) I'm just sharing what may be relevant context for others' thought and discussion on EA conference admissions.