If this is worth doing, wouldn't it be better for a single large donor to buy, say, $1M+ of tickets, rather than lots of individuals each buying small numbers of tickets?
"Pfizer currently intends to sell the vaccine in the US for around $110-130 per dose" – Just to check: this is the sticker price, where the cost is (mostly) covered by insurers (and possibly bargained down from this), right? Not the out-of-pocket cost to most US consumers? This would be another reason to expect lower costs for the UK than this.
Noting that I like that the prizes you guys are offering are large enough that they might elicit serious work from those outside the community. My sense is that the potential to convert EA capital into productive labor from non-EAs is one of the main draws of prizes. Previous attempts at testing prizes here have been somewhat ambiguous: they haven't led to much work from outside the community, but the prize amounts were also generally small enough that they probably shouldn't have been expected to.
"it seems to me that all AIs (and other technologies) already don't give us exactly what we want but we don't call that outer misaligned because they are not "agentic" (enough?)" – Just responding to this part: my sense is that most of the reason current systems don't do what we want comes down to capabilities failures, not alignment failures. That is, it's less that the system has been given the wrong goal, or is misgeneralizing its goal, etc., and more that it simply isn't competent enough.
I think this would, in general, be a really bad idea. Kowtowing to nuclear threats would create a huge incentive for various countries to acquire nukes (both to make nuclear threats and to defend against them), and thus would increase proliferation (and nuclear risk) considerably. Of course, if you can figure out a way to get Russia to de-escalate, that would be great, though I doubt anyone here has any ability to influence that. Barring that, my sense is that the best strategy for the US right now is to continue providing Ukraine with substantial assistance without engaging Russia directly.
It's also much more pessimistic than prediction markets – for instance, Metaculus puts the odds of a nuclear detonation in Ukraine by 2023 at 7%, and of a Russian nuclear detonation in the US this year at ≤ 1%.
Meta-point – I think it would be better if this was called something other than "baby longtermism", as I found this confusing. Specifically, I initially thought you were going to be writing a post about a baby (i.e., "dumbed-down") version of longtermism.
"That said, when I started the 10% thing, I did so under the impression that it was what the sacrifice I needed to make to gain acceptance in EA" – If this sentiment is at all widespread among people on the periphery of EA, or among people who might become EA at some point, then I find that VERY concerning. We'd lose a lot of great people if everyone assumed they couldn't join without making that kind of sacrifice.
Hmm, I don't read it that way. My read of this passage is: the risk of WWIII by 2070 might be as high as somewhat over 20% (though that estimate is probably picked from the higher end of serious estimates); WWIII may or may not lead to all-out nuclear war; all-out nuclear war has some unknown chance of leading to the collapse of civilization; and if that happened, there would be some further unknown chance of never recovering. So, all in all, I'd read this as Will thinking that x-risk from nuclear war in the next 50 years is well below 20%.
I also don't think NYT readers have particularly clear prejudices about nuclear war (they probably have larger prejudices about things like overpopulation), so this would be a weird place to make a concession, in my mind.
My personal view is that targeted small-dollar political donations (which large donors cannot simply fill, due to campaign finance laws) are likely to be vastly higher value on the margin than correspondingly sized (equivalent amount plus tax savings) non-political donations to organizations that large donors can fill, insofar as such targeted political opportunities arise. So if I were in the situation you're describing, I'd accept the higher salary with the intention of donating to such political opportunities when they arose. Of course, this logic is specific to a particular kind of donation opportunity, and it won't generalize to most areas that EAs currently donate to.