Thanks - this is a helpful term, and closely related to privileging the hypothesis: https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis The general solution, of course, is expensive but necessary: https://secondenumerations.blogspot.com/2017/03/episode-6-method-of-multiple-working.html
Worth noting that a number of 1DaySooner research projects I worked on or ran paid undergraduates, graduate students, and medical students for supervised research work - effectively very similar to a paid internship. But as you mentioned, it's very hard to do so outside a well-scoped project.
I've written about this here, where I said, among other things:
Obviously, charity is a deeply personal decision - but it’s also a key way to impact the world, and an expression of religious belief, and both are important to me. Partly due to my experience, I think it’s important to dedicate money to giving thoughtfully and in advance, rather than doing so on an ad-hoc basis - and I have done this since before hearing about Effective Altruism. But inspired by Effective Altruism and organizations like GiveWell, I now dedicate 10% of my income to charities that have been evaluated for effectiveness, and which are aligned with my beliefs about charitable giving.
In contrast to the norm in effective altruism, I only partially embrace cause neutrality; I think it’s an incomplete expression of how my charity should impact the world. For that reason, I split my charitable giving between effective charities which I personally view as valuable, and deference to cause-neutral experts on the most impactful opportunities. Everyone needs to find their own balance, and I have tremendous respect for people who donate more, but I’ve been happy with my decision to cap my effective charitable giving at 10%; beyond that, I still feel free to donate to other causes, including those that can’t be classified as effective at all.
As suggested above, community is an important part of my budget. A conclusion I came to after reflecting on the question, and grappling with effective altruism, is that separate from charitable giving, it’s important to pay for public goods you benefit from, both narrow ones like community organizations, and broader ones. That’s why I think it’s worth helping to fund community centers, why I paid for an NPR membership when I lived in the US, and why I pay to offset carbon emissions to reduce the harms of climate change.
This is the wrong thing to try to figure out; most of the probability mass for existential risk likely comes from scenarios that don't make a clear or intelligible story. Quoting Nick Bostrom:
Suppose our intuitions about which future scenarios are “plausible and realistic” are shaped by what we see on TV and in movies and what we read in novels. (After all, a large part of the discourse about the future that people encounter is in the form of fiction and other recreational contexts.) We should then, when thinking critically, suspect our intuitions of being biased in the direction of overestimating the probability of those scenarios that make for a good story, since such scenarios will seem much more familiar and more “real”. This Good-story bias could be quite powerful. When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch. So we don’t see many stories of that kind. If we are not careful, we can be misled into believing that the boring scenario is too far-fetched to be worth taking seriously. In general, if we think there is a Good-story bias, we may upon reflection want to increase our credence in boring hypotheses and decrease our credence in interesting, dramatic hypotheses. The net effect would be to redistribute probability among existential risks in favor of those that seem harder to fit into a selling narrative, and possibly to increase the probability of the existential risks as a group.
As an aside, the reports end up being critical in worlds where rapid response is needed, since they show ongoing attention and will be looked at in retrospect. But they can also be used more directly, to galvanize news coverage on key topics and as evidence for policy organizations. Promoting that avenue for impact seems valuable.
Responding to only one minor point you made: the 6-month pause letter seems like exactly the type of thing you oppose. It can't actually help with the risk; it just does deceptive PR that aligns with the goal of pushing against AI progress, while drawing support from people who disagree with that actual goal.
I think it's useful to distinguish between industrial policy, regulation, and nationalization, and your new term seems to fall somewhere in between. Your model is generally useful, but introducing a new term without being very clear about how it relates to existing terms is probably more confusing than clarifying.
I've referred to this latter point as "candy bar extinction": under a fixed exponential discount rate, a candy bar today is worth more than preventing certain human extinction some number of years in the future. (And with moderately high discount rates, that number of years isn't even absurdly high!)
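To make the arithmetic concrete, here's a minimal sketch of the crossover calculation (the $2 candy bar, the placeholder valuation of humanity's future, and the discount rates are all illustrative assumptions, not claims about the right values):

```python
import math

def years_until_candy_bar_wins(future_value, candy_bar_value=2.0, discount_rate=0.05):
    """Smallest horizon N (in years) at which future_value / (1 + r)^N
    falls below candy_bar_value, i.e. where a fixed discount rate says
    a candy bar today beats preventing certain extinction in year N."""
    return math.log(future_value / candy_bar_value) / math.log(1 + discount_rate)

# Deliberately lowball the entire future at $10^15 (one quadrillion dollars).
print(years_until_candy_bar_wins(1e15, discount_rate=0.05))  # ~694 years
print(years_until_candy_bar_wins(1e15, discount_rate=0.10))  # ~355 years
```

Even with an implausibly low valuation of the future, a 10% discount rate puts the crossover within a few centuries, and because the horizon grows only logarithmically in the future's value, making the future astronomically more valuable barely moves it.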