Just a bundle of subroutines sans free will, aka flawed post-monkey #~100,000,000,000. Peace and love xo
(All my posts, except those where I state otherwise, reflect my views and not those of my employer(s).)
The FTX Future Fund launched in Feb 2022, the same month Open Phil was hiring a program officer for its new Global Health and Wellbeing program (see here).
For context, all FTX funds went to longtermist causes; Open Phil currently has two grantmaking programs (see here): one in Longtermism and the other in Global Health and Wellbeing, which launched - I assume - around Feb '22.
So my guess, though I'm not certain, is that the launches of the FTX Future Fund and Open Phil's Global Health and Wellbeing program were linked, and that Open Phil did increase its neartermist:longtermist funding ratio when FTX funds became available.
(It's interesting to note that, at present, my above comment sits at -1 agreement karma after 50 votes. This suggests that the question of rebalancing the neartermist:longtermist funding ratio is genuinely controversial, rather than there being a community consensus either way.)
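To spell out the arithmetic behind that reading, here's a minimal Python sketch. It assumes every vote counts exactly ±1, which isn't quite how the Forum works (votes are weighted by voter karma), so the split is only indicative:

```python
# Minimal sketch: what a net agreement score of -1 after 50 votes implies,
# under the simplifying assumption that every vote counts exactly +1 or -1.
# (In reality the Forum weights votes by voter karma, so this is illustrative.)

def vote_split(net_score: int, total_votes: int) -> tuple[float, float]:
    """Return (agree, disagree) counts consistent with the net score."""
    agree = (total_votes + net_score) / 2
    disagree = (total_votes - net_score) / 2
    return agree, disagree

agree, disagree = vote_split(net_score=-1, total_votes=50)
print(f"{agree:.1f} agree vs {disagree:.1f} disagree")  # 24.5 agree vs 25.5 disagree
```

In other words, a near-zero net score over many votes is consistent with an almost even split, which is why I read it as genuine controversy rather than consensus.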
Thanks for the post; I appreciate the clarity it brings.
Given FTX Foundation’s focus on existential risk and longtermism, the most direct impacts are on our longtermist work. We don’t anticipate any immediate changes to our Global Health and Wellbeing work as a result of the recent news.
Would it not make sense for Open Phil to shift some of its neartermist/global health funds to longtermist causes?
Although any neartermist:longtermist funding ratio is, in my opinion, fairly arbitrary, this ratio has increased significantly following the FTX collapse. Thus, it seems to me that Open Phil should consider acting to rebalance it; a toy calculation below illustrates what that could look like.
(I'd be curious to hear a solid counterargument.)
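To make the rebalancing point concrete, here's a toy calculation in Python. All dollar figures are hypothetical placeholders, not actual Open Phil or FTX budgets:

```python
# Toy illustration with hypothetical dollar figures (not real budgets):
# FTX funds went to longtermist causes, so losing them raises the
# community-wide neartermist:longtermist funding ratio unless someone rebalances.

op_neartermist = 300   # hypothetical Open Phil Global Health and Wellbeing ($M/yr)
op_longtermist = 150   # hypothetical Open Phil Longtermism ($M/yr)
ftx_longtermist = 150  # hypothetical FTX Future Fund, all longtermist ($M/yr)

before = op_neartermist / (op_longtermist + ftx_longtermist)  # 1.0
after = op_neartermist / op_longtermist                       # 2.0

print(f"neartermist:longtermist ratio before FTX collapse: {before:.2f}")
print(f"neartermist:longtermist ratio after FTX collapse:  {after:.2f}")

# To restore the old ratio, Open Phil would need to shift x from
# neartermist to longtermist such that
#   (op_neartermist - x) / (op_longtermist + x) == before.
x = (op_neartermist - before * op_longtermist) / (1 + before)
print(f"hypothetical shift needed: ${x:.0f}M/yr")  # $75M/yr in this toy example
```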
See also Michael Aird's comments on this Tarsney (2020) paper. His main points are:
Roodman's proposed "restriction that the various frameworks agree" makes no sense.
I'm with you. I think Roodman must disagree with the idea of giving probabilities to different - and necessarily conflicting - models of the world, but that seems like an odd position to me. I might also be missing something.
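To illustrate what "giving probabilities to conflicting models" looks like in practice, here's a minimal sketch of a credence-weighted expected value. The models, credences, and values are invented for illustration; they're not taken from Tarsney (2020) or Roodman:

```python
# Minimal sketch of putting credences on conflicting world-models and
# computing a credence-weighted expected value. Models and numbers are
# invented for illustration; they're not from Tarsney (2020) or Roodman.

models = {
    # model name: (credence, expected value of some intervention under that model)
    "cubic_growth": (0.2, 1e6),
    "exponential_then_plateau": (0.5, 1e4),
    "stagnation": (0.3, 1e2),
}

# Credences over mutually exclusive models should sum to 1.
assert abs(sum(credence for credence, _ in models.values()) - 1.0) < 1e-9

weighted_ev = sum(credence * ev for credence, ev in models.values())
print(f"credence-weighted EV: {weighted_ev:,.0f}")

# Note: the models can flatly contradict one another; the mixture doesn't
# require them to agree, which is the crux of the disagreement with the
# proposed "restriction that the various frameworks agree".
```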
Belated congrats on completing your PhD! I'm looking forward to reading the sections you've highlighted.
We discuss [...] how to develop your own inside views about AI Alignment.
See also Neel Nanda's (2022):
You might want to check out 'Research Summary: The Subjective Experience of Time' (Schukraft, 2020).
I notice a two-karma system has been implemented in at least one EA Forum post before; see the comments section of this "Fanatical EAs should support very weird projects" post.
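By "two-karma system" I mean tracking overall karma and agreement karma as independent axes. Here's a minimal sketch of the idea; the field names and API are my invention, not the Forum's actual implementation:

```python
# Minimal sketch of a two-axis karma tally: overall karma ("was this a
# good contribution?") tracked separately from agreement karma ("do I
# agree with it?"). Names are my invention, not the Forum's code.

from dataclasses import dataclass

@dataclass
class CommentKarma:
    overall: int = 0
    agreement: int = 0

    def vote(self, overall_delta: int = 0, agreement_delta: int = 0) -> None:
        self.overall += overall_delta
        self.agreement += agreement_delta

karma = CommentKarma()
karma.vote(overall_delta=+1, agreement_delta=-1)  # "well-argued, but I disagree"
print(karma)  # CommentKarma(overall=1, agreement=-1)
```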
Love the post. For more examples - some of them EA-oriented - of Fermi estimates / back-of-the-envelope calculations (BOTECs), see Botec Horseman's Tweets. A worked example also follows the note below.
(Note: Botec Horseman is neither myself nor Nuño.)
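For readers new to BOTECs, here's a classic Fermi-style example (the piano-tuner estimate); every input is a rough guess made up for illustration:

```python
# A classic Fermi estimate / BOTEC: roughly how many piano tuners work in
# Chicago? Every input below is a rough, made-up guess; the point is that
# multiplying defensible order-of-magnitude guesses can land surprisingly close.

population = 3_000_000           # people in Chicago (rough)
people_per_household = 2.5
households_with_piano = 1 / 20   # ~5% of households own a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

print(f"estimated piano tuners: {tunings_needed / tuner_capacity:.0f}")  # ~60
```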