epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of the points below, but if you think you could be useful toward my reflection process (or if you think i could be useful toward yours!), i'd prefer that you book a call to chat rather than reply in the comments. i do not give you consent to quote my writing in this short-form without also including the entirety of this epistemic status.
Thanks for the clarification — I've sent a similar comment on the Open Phil post, to get confirmation from them that your reading is accurate :)
How will this change affect university groups currently supported by Open Philanthropy that are neither under the banner of AI safety nor EA? The category on my mind is university forecasting clubs, but I'd also be keen to get a better sense of this for e.g. biosecurity clubs, rationality clubs, etc.
(I originally posted this comment under the Uni Groups Team's/Joris's post (link), but Joris didn't seem to have a super conclusive answer, and directed me to this post.)
[epistemic status: i've spent about 5-20 hours thinking by myself and talking with rai about my thoughts below. however, i spent fairly little time actually writing this, so the literal text below might not map to my views as well as my other comments do.]
IMO, Sentinel is one of the most impactful uses of marginal forecasting money.
some specific things i like about the team & the org thus far:
i have the highest crux-uncertainty and -elasticity around the following, in (extremely rough) order of impact on my thought process:
i’ll add $250, with exactly the same commentary as austin :)
to the extent that others are also interested in contributing to the prize pool, you might consider making a manifund page. if you’re not sure how to do this or just want help getting started, let me (or austin/rachel) know!
also, you might adjust the “prize pool” amount at the top of the metaculus page — it currently reads “$0.”
fwiw i instinctively read it as the 2nd, which i think is caleb's intended reading