Bio

Just a bundle of subroutines sans free will, aka flawed post-monkey #~100,000,000,000. Peace and love xo

(All my posts, except those where I state otherwise, reflect my views and not those of my employer(s).) 

Sequences (5)

Belief & Reality
Macrostrategy Miscellany
AI Strategy 101
Nuclear Risk 101
CERI SRF '22

Comments (46)

Topic Contributions (5)

The FTX Future Fund launched in February 2022, and in that same month Open Phil was hiring a program officer for its new Global Health and Wellbeing program (see here).

For context, all FTX funds go/went to longtermist causes; Open Phil currently has two grantmaking programs (see here): one in Longtermism, the other in Global Health and Wellbeing, which launched - I assume - around February 2022.

So my guess, though I'm not certain, is that the launches of the FTX Future Fund and Open Phil's Global Health and Wellbeing program were linked, and that Open Phil did increase its neartermist:longtermist funding ratio when FTX funds became available.

(It's interesting to note that, at present, my above comment is on -1 agreement karma after 50 votes. This suggests that the question of rebalancing the neartermist:longtermist funding ratio is genuinely controversial, as opposed to there being a community consensus either way.)

Thanks for the post, I appreciate the clarity it brings.

Given FTX Foundation’s focus on existential risk and longtermism, the most direct impacts are on our longtermist work. We don’t anticipate any immediate changes to our Global Health and Wellbeing work as a result of the recent news.

Would it not make sense for Open Phil to shift some of its neartermist/global health funds to longtermist causes?

Although any neartermist:longtermist funding ratio is, in my opinion, fairly arbitrary, this ratio has increased significantly following the FTX event, since FTX's funds all went to longtermist causes. Thus, it seems to me that Open Phil should perhaps consider acting to rebalance it.

(I'd be curious to hear a solid counterargument.)

See also Michael Aird's comments on this Tarsney (2020) paper. His main points are:

  • 'Tarsney's model updates me towards thinking reducing non-extinction existential risks should be a little less of a priority than I previously thought.' (link to full comment)
  • 'Tarsney seems to me to understate the likelihood that accounting for non-human animals would substantially affect the case for longtermism.' (link)
  • 'The paper ignores 2 factors that could strengthen the case for longtermism - namely, possible increases in how efficiently resources are used and in what extremes of experiences can be reached.' (link)
  • 'Tarsney writes "resources committed at earlier time should have greater impact, all else being equal". I think that this is misleading and an oversimplification. See Crucial questions about optimal timing of work and donations and other posts tagged Timing of Philanthropy.' (link)
  • 'I think it'd be interesting to run a sensitivity analysis on Tarsney's model(s), and to think about the value of information we'd get from further investigation of: 
    • how likely the future is to resemble Tarsney's cubic growth model vs his steady model
    • whether there are other models that are substantially likely, whether the model structures should be changed
    • what the most reasonable distribution for each parameter is.' (link)
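
As a rough illustration of what that sensitivity analysis could look like, here is a minimal Monte Carlo sketch in Python/NumPy. The model structure, parameter names, and distributions below are hypothetical placeholders, not Tarsney's actual models or numbers; the point is just the shape of the exercise: sample the uncertain inputs, propagate them through a toy 'cubic growth vs steady state' mixture, and check which inputs the output is most sensitive to.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # Monte Carlo samples

    # Hypothetical inputs (every distribution here is a made-up placeholder):
    p_cubic = rng.uniform(0.1, 0.5, n)         # P(future resembles the cubic-growth model)
    risk_reduction = rng.beta(2, 200, n)       # x-risk reduction bought per unit of resources
    value_cubic = rng.lognormal(6.0, 1.0, n)   # value conditional on cubic growth
    value_steady = rng.lognormal(3.0, 1.0, n)  # value conditional on a steady-state future

    # Toy expected value of longtermist spending under the two-model mixture
    ev = risk_reduction * (p_cubic * value_cubic + (1 - p_cubic) * value_steady)

    # Crude sensitivity check: correlation of each input with the output
    inputs = {"p_cubic": p_cubic, "risk_reduction": risk_reduction,
              "value_cubic": value_cubic, "value_steady": value_steady}
    for name, x in inputs.items():
        print(f"{name:15s} corr with EV: {np.corrcoef(x, ev)[0, 1]:+.2f}")

    print(f"median EV: {np.median(ev):.1f}; "
          f"90% interval: [{np.quantile(ev, 0.05):.1f}, {np.quantile(ev, 0.95):.1f}]")

Even at this toy level, the exercise makes visible which assumptions actually drive the conclusion and which barely matter, which is roughly the value-of-information point in the quoted bullet.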

Roodman's proposed "restriction that the various frameworks agree" makes no sense.

I'm with you. I think Roodman must disagree with the idea of assigning probabilities to different - and necessarily conflicting - models of the world, but to me this seems like an odd position to take. I might also be missing something.

Belated congrats on completing your PhD! I'm looking forward to reading the sections you've highlighted.

We discuss [...] how to develop your own inside views about AI Alignment.

See also Neel Nanda's (2022) post on this topic.

I notice a two-karma system has been implemented on at least one EA Forum post before; see the comments section of this "Fanatical EAs should support very weird projects" post.

Love the post. For more examples - some of them EA-oriented - of Fermi estimates / back-of-the-envelope calculations (BOTECs), see Botec Horseman's tweets.

(Note: Botec Horseman is neither myself nor Nuño.)
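
For anyone unfamiliar with the format, a BOTEC is just a chain of explicitly stated rough guesses multiplied together. Here is a minimal sketch using the classic "piano tuners in Chicago" Fermi problem; the numbers are illustrative order-of-magnitude guesses, not claims, and this example is mine rather than one of Botec Horseman's.

    # A toy Fermi estimate / BOTEC: roughly how many piano tuners work in Chicago?
    # Every number below is an explicit rough guess; the exercise is to state the
    # guesses, chain them together, and see where the answer lands.

    population = 3e6                  # people in Chicago (rough)
    people_per_household = 2.5
    pianos_per_household = 1 / 20     # guess: ~5% of households own a piano
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_day = 4
    working_days_per_year = 250

    households = population / people_per_household
    pianos = households * pianos_per_household
    tunings_demanded_per_year = pianos * tunings_per_piano_per_year
    tunings_supplied_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

    tuners = tunings_demanded_per_year / tunings_supplied_per_tuner_per_year
    print(f"~{tuners:.0f} piano tuners")  # ~60: a few dozen, i.e. the right order of magnitude

The value of writing it out like this is that each input is explicit, so a reader who disagrees with any guess can swap in their own number and immediately see how much the bottom line moves.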
