Max_Daniel

Project Manager for the Research Scholars Programme at FHI. Previously part of the 2018-2020 cohort of that programme and Executive Director of the Foundational Research Institute (now Center on Long-Term Risk), a project by the Effective Altruism Foundation (but I don't endorse that organization's 'suffering-focused' view on ethics).

Comments

alexrjl's Shortform

And yes, in my view wasting or misusing resources due to competitive pressure is one of the key failure modes to be mindful of in the context of AI alignment and AI strategy. FWIW, my sense is that this belief is held by many people in the field, and that a fair amount of thought has been going into it. (Though as with most issues in this space I think we don't have a "definite solution" yet.)

alexrjl's Shortform

Yes, I think it is very likely that growth eventually needs to become polynomial rather than exponential or hyperbolic. The only two defeaters I can think of are (i) that we are fundamentally wrong about physics, or (ii) some weird theory of value that assigns exponentially growing value to sub-exponentially growing resources.
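
To spell out the physics point, here's a rough sketch (my own framing, not taken from the linked post) of why accessible resources can grow at most polynomially:

```latex
% Rough sketch (my framing): \rho is a maximal resource density and c the
% speed of light. Resources reachable by time t are bounded by the volume
% of the future light cone, which grows only polynomially:
\[
  \mathrm{Resources}(t) \;\le\; \rho \cdot \tfrac{4}{3}\pi (c t)^{3} \;\propto\; t^{3}.
\]
% Since e^{gt} eventually exceeds any polynomial for g > 0, sustained
% exponential growth in value would require exponentially growing value
% per unit of resources, i.e. defeater (ii) above.
```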

This post contains some relevant links (though note I disagree with the post in several places, including its bottom line/emphasis).

Review of FHI's Summer Research Fellowship 2020

Two Summer Research Fellows - Joshua T. Monrad and Jonas B. Sandbrink - and collaborator Neil G. Cherian have since published in Nature a paper they worked on during the SRF: Promoting versatile vaccine development for emerging pandemics.

Charges against BitMEX and cofounders

My very quick take:

  • I think you make a couple of good points, and overall I updated a fair bit in the direction of "accepting functionally anonymous donations is ~always OK, even if you know the money has morally questionable origin".
  • I'm still not fully convinced, and suspect there are realistic cases where at least initially I'd be fairly strongly opposed to taking such donations.
  • I'm not sure if I can fully justify my intuition / if the things I'm going to say are actually its main drivers, but at first glance I see two reasons to be hesitant:
    • I'd guess that in practice it can be very hard to implement the level of anonymity you suggest. E.g. relationships to large donors in practice are often handled by senior staff who do have influence over the org's strategic direction.
      • This is partly due to common (actual or perceived) donor preferences.
      • But it also makes sense from the org's perspective: e.g. knowing the details about the relationships to large donors is fairly relevant when doing risk management. But for holistic risk management you also need to look at other information that's quite dispersed throughout the org; certainly org leadership needs to be involved. So the org has an incentive that might preclude setting up the kind of "firewalls" you advocate - and even when they are in place, there will be incentives to subvert them, which seems like a bad/risky setup.
    • I think that outside perceptions are quite a significant obstacle, for reasons that go beyond "PR risks" in a narrow sense. My sense is that the stakes in the arena of "moral/political signaling" are quite high for many actors, in particular if you rely a lot on informal cooperation based on perceptions of hard-to-verify shared interests. And whom you take money from will often be quite significant in that arena.
      • One issue here is that the level of anonymization / protection from adverse incentives you advocate will often be hard to verify from the outside.
        • If I know that charity X has received a substantial donation from Y, my prior will be that Y has significant influence over X. In typical cases, it would be quite costly (in terms of time, but potentially also inside/sensitive information that would need to be shared) to convince me that this is not the case.
      • Another significant issue is a kind of "contagion" due to higher-order social reasoning: Suppose I know that charity X has received a substantial donation from Y. Suppose further that I know that X's relationship to Z is relevant for X's ability to achieve its mission (think e.g. X = MyAISafetyOrg, Z = DeepMind). Even if I'm personally not that concerned about accepting donations from Y, I might still be concerned that X made a bad move if I think that Z would disapprove of X getting funded by Y. "Bad move" here might refer to a narrow sense of competence, but also again to "moral"/influence issues: if it seems to me that X is willing to accept the cost of most others worrying that Y has influence over X, this makes it seem more likely that Y in fact has influence over X.
      • Consider also that if it would be costly for X to publicly acknowledge it accepted a donation from Y, then in virtue of this very fact accepting an anonymous donation from Y gives Y influence/leverage over X (because Y can threaten to disclose their donation).
      • My impression is that related concepts like "virtue signalling" are often discussed in a derogatory fashion in the EA sphere. I'd therefore like to add that by "moral/political signalling" I don't mean arguably excessive/pathological cases from, say, highly public party politics as central examples. I'm thinking more of the reasons why "integrity" (and, for philosophers, Strawsonian "reactive attitudes") is a thing / an everyday concept, and of credibility/"improving one's bargaining position".
        • (Similarly, note that "follow the money" is a common heuristic.)

[Link post] Are we approaching the singularity?

I think in this case mostly informal personal conversations (which can include conversations e.g. within particular org's Slack groups or similar). It might also have been a slight overstatement that the paper was "widely discussed" - this impression might be due to a "selection effect" of me having noticed the paper early and being interested in such work.

[Link post] Are we approaching the singularity?

I don't think this would be a good reaction because:

  • Nordhaus's paper has only now been formally published, but isn't substantially newer than Roodman's work. Nordhaus's paper has been available as an NBER working paper since at least 2018, and has been widely discussed among longtermists since then (e.g. I remember a conversation in fall 2018; there may have been earlier ones). [ETA: Actually, Nordhaus's paper has circulated as a working/discussion paper since at least September 2015, and was e.g. mentioned in this piece of longtermist work from 2017.]
  • I've only had the chance to skim Roodman's work, but my quick impression is that it isn't straightforwardly the case that Nordhaus's model is "more detailed and trustworthy". Rather, it seems to me that both models are more detailed along different dimensions: Roodman's model explicitly incorporates noise/stochasticity, and in this sense is significantly more mathematically complex/sophisticated. On the other hand, Nordhaus's model incorporates more theoretical assumptions, e.g. about different types of "factors of production" and their relationship as represented by a "production function", similar to typical economic growth models. (Whereas Roodman mostly fits a model to the trend of a single quantity, in a way that's more agnostic about the theoretical mechanisms generating that trend - see the toy sketch after this list.)
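
To make that contrast concrete, here's a toy sketch in Python (my own illustration with made-up numbers and simplified functional forms, not either author's actual code, data, or model; in particular, it omits the stochastic component of Roodman's model):

```python
# Toy contrast between the two modeling styles (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

# Made-up "gross world product" series (rough orders of magnitude only).
years = np.array([1700.0, 1800.0, 1900.0, 1950.0, 2000.0])
gwp = np.array([0.4, 0.7, 2.0, 5.0, 40.0])  # arbitrary units

# Roodman-style (minus the stochasticity): fit a trend to a single quantity,
# staying agnostic about mechanism. A hyperbolic form y = a / (t_sing - t)
# implies a finite-time singularity at t_sing.
def hyperbolic(t, a, t_sing):
    return a / (t_sing - t)

(a_fit, t_sing), _ = curve_fit(hyperbolic, years, gwp, p0=[1000.0, 2050.0])
print(f"Toy fit: implied singularity year ~ {t_sing:.0f}")

# Nordhaus-style (schematically): impose theoretical structure, e.g. a
# Cobb-Douglas production function Y = A * K^alpha * L^(1 - alpha), and model
# how technology A, capital K, and labor L evolve, rather than fitting
# output directly.
def cobb_douglas(A, K, L, alpha=0.3):
    return A * K**alpha * L**(1.0 - alpha)

print(f"Toy output at A=1, K=100, L=50: {cobb_douglas(1.0, 100.0, 50.0):.1f}")
```

The point is just the structural difference: the first approach asks what curve the output series itself follows, while the second builds output up from assumptions about the inputs.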
80,000 Hours one-on-one team plans, plus projects we’d like to see

Thanks! I really appreciate you sharing your thinking on this. 

(And suspect it would be good if more orgs did more of this on the margin.)

Khorton's Shortform

I really like the idea of working on a women's issue in a global context.

Me too. I'm also wondering about the global burden of period pain, and the tractability of reducing it. As with menopause (and non-gender-specific issues such as ageing), one might expect this to be neglected because of an "it's natural and not a disease, and so we can't or shouldn't do anything about it" fallacy.

Training Bottlenecks in EA (professional skills)

I'd love to hear any advice on how that charity decided which courses would be best for people to do! Also whether there are any specific ones you recommend (if any are applicable in the UK).

I'm afraid that I'm not aware of specific courses that are also offered in the UK. 

I think that generally the charity actually didn't do a great job of selecting the best courses among the available ones. However, my suspicion is that conditional on having selected an appropriate topic there often wasn't actually that much variance between courses, because most of the benefits come from some generic effect of "deliberately reflecting on and practicing X", with it not being that important how exactly this was done. (Perhaps similar to psychotherapy.)

For courses where all participants were activists from that same charity, I suspect a significant source of benefits was also just collaborative problem solving, sharing experiences, and getting peer advice from others who had faced similar problems.

Another observation is that these courses often involved in-person conversations in small groups, were quite long in total (2 hours to 2 days), and made significant use of physical media (e.g. people writing ideas on sheets of paper, which were then pinned on a wall). By contrast, in my "EA experience" similar things have been done by people spending at most one hour writing in a joint Google doc. I personally find the "non-virtual" variant much more engaging, but I don't know to what extent this is idiosyncratic.
