Max_Daniel

Senior Program Associate, GCR Capacity Building @ Open Philanthropy
5796 karma · Joined Feb 2016 · Working (6-15 years)

Bio

I'm part of the Global Catastrophic Risks Capacity Building team at Open Philanthropy. Previously I was the Chief of Staff at the Forethought Foundation for Global Priorities Research, participated in the first cohort of FHI's Research Scholars Programme (RSP), and then helped run it as one of its Project Managers. I also used to be the chair of the EA Infrastructure Fund.

Before that, my first EA-inspired jobs were with the Effective Altruism Foundation, where among other things I ran what is now the Center on Long-Term Risk. While I don't endorse their 'suffering-focused' stance on ethics, I'm still a board member there.

Unless stated otherwise, I post on the Forum in a personal capacity, and don't speak for any organization I'm affiliated with.

I like weird music and general abstract nonsense. In a different life I would be a mediocre mathematician or a horrible anthropologist.

Comments (568)

(See also this comment by Carl Shulman, which is already mentioned in the post.)

 

The link to Carl's comment doesn't work for me, but this one does. 

This link from the main text to the same comment also doesn't work for me:

as Carl Shulman observed many years ago

(personal views only) In brief, yes, I still basically believe both of these things; and no, I don't think I know of any other type of action that I'd consider 'robustly positive', at least from a strictly consequentialist perspective.

To be clear, my belief regarding (i) and (ii) is closer to "there exist actions of these types that are robustly positive", as opposed to "any action that purports to be of one of these types is robustly positive". E.g., it's certainly possible to try to reduce the risk of human extinction but for that attempt to be ineffective or even counterproductive (i.e., to on net increase the risk of extinction, or to otherwise cause significant harms such that I'd consider the action impermissible); it's possible for resources that were acquired for impartial welfarist purposes to eventually be misused; etc.

I made some nuanced updates about "acquiring resources for longtermist goals", but they're mostly things like becoming more or less excited about particular examples/substrategies, or developing somewhat richer views on some pitfalls of that strategy (though I don't think I became aware of qualitatively 'new' pitfalls), as opposed to sweeping updates about that whole class of actions and whether they can be robustly positive.

I don't remember, I'm afraid. I don't recall having seen the article you link to, so I doubt it was that. Maybe it was this one.

Do you have any data you can share on how the population responding to the FTX section/survey differs from the full EAS survey population? E.g., along dimensions like EA engagement, demographics, ... or anything else that could shed light on the potential selection effect at this stage? (Sorry if you say this somewhere and I missed it.) Thanks for all your work on this!

This isn't quite what you're looking for, because it's more a partial analogy to the phenomenon you point to than a realistic depiction, but FWIW I found this old short story by Eliezer Yudkowsky quite memorable.

In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.

Out of curiosity: When making claims like this, are you referring to the cost-effectiveness of farmed animal interventions when only considering the impacts on farmed animals? Or do you think this claim still holds if you also consider the indirect effects of farmed animal interventions on wild animals?


(Sorry if you say this somewhere and I missed it.)

I like this idea! Quick question: Have you considered whether, for a version of this that uses past data/conjectures, one could use existing data compiled by AI Impacts rather than the Wikipedia article from 2015 (as you suggest)?

(Though I guess if you go back in time sufficiently far, it arguably becomes less clear whether Laplace's rule is a plausible model. E.g., did mathematicians in any sense 'try' to square the circle in every year between Antiquity and 1882?)
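(For concreteness, here's a minimal sketch in Python of the kind of estimate Laplace's rule gives in this setting. The year counts are made-up numbers purely for illustration, and nothing here is specific to the AI Impacts or Wikipedia data.)

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Laplace's rule of succession: posterior probability that the next
    trial succeeds, given `successes` out of `trials` so far, under a
    uniform prior on the underlying success rate."""
    return (successes + 1) / (trials + 2)


def p_success_within(failures_so_far: int, horizon: int) -> float:
    """Probability of at least one success in the next `horizon` trials,
    given `failures_so_far` consecutive failures (same uniform prior).
    Uses P(no success in next k trials) = (n + 1) / (n + k + 1)."""
    return 1 - (failures_so_far + 1) / (failures_so_far + horizon + 1)


# Illustration with assumed numbers: treat each year a conjecture has
# stood unresolved as one failed 'trial'.
print(laplace_rule(0, 100))       # ~0.0098: chance it's resolved next year
print(p_success_within(100, 50))  # ~0.33: chance it's resolved within 50 years
```

(The longer a conjecture has stood, the lower the model's per-year resolution probability, which is what makes the "did mathematicians 'try' every year?" question matter for very old problems.)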

Tail effects in education: Since interventions have to scale, they end up being mediocre compared to "what could be possible."

 

Related: Bloom's two-sigma problem:

Bloom found that the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students educated in a classroom environment with one teacher to 30 students.

(haven't vetted the Wikipedia article or underlying research at all)
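(As a quick sanity check on the size of that effect: if outcomes were roughly normally distributed, an assumption I'm making just for illustration, scoring two standard deviations above the mean puts a student around the 98th percentile of the comparison group.)

```python
from statistics import NormalDist

# Share of a standard normal distribution below +2 standard deviations:
print(NormalDist().cdf(2))  # ~0.977, i.e. roughly the 98th percentile
```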
