MichaelA

I’m Michael Aird, an Associate Researcher with Rethink Priorities. In March, I'll also start part-time as a Research Scholar with the Future of Humanity Institute. Opinions expressed are my own. You can give me anonymous feedback at this link.

With Rethink, I'll likely continue their project on nuclear risk (among other things). With FHI, I might work on these things.

Previously, I did longtermist macrostrategy research for Convergence Analysis and then for the Center on Long-Term Risk. More on my background here.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me or schedule a call.

Comments

Should marginal longtermist donations support fundamental or intervention research?

Yeah, I'd agree with this. This post is just about what to generally prioritise on the margin, not what should be prioritised completely and indefinitely.

fundamental research helps inform what kind of intervention research is useful, but intervention research also helps inform what kind of fundamental research is useful

That sentence reminded me of a post (which I found useful) on The Values-to-Actions Decision Chain: a lens for improving coordination.

While I agree with that sentence, I do think it seems likely: 

  • that fundamental research will tend to guide our intervention research to a greater extent than intervention research guides our fundamental research
  • that it'd often make sense to gradually move from prioritising fundamental research to prioritising intervention research as a field matures. (Though at every stage, I do think at least some amount of each type of research should be done.)

This also reminds me of the post Personal thoughts on careers in AI policy and strategy, which I perhaps should've cited somewhere in this post.

(Here it's probably worth noting again that I'm classifying research as fundamental or intervention research based on what its primary aim is, not things like how high-level vs granular it is.)

Should marginal longtermist donations support fundamental or intervention research?

I just read Jacob Steinhardt's Research as a Stochastic Decision Process, found it very interesting, and realised that it seems relevant here as well (in particular in relation to Section 2.1). Some quotes:

In this post I will talk about an approach to research (and other projects that involve high uncertainty) that has substantially improved my productivity. Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren't already employing a similar process.

Below I analyze how to approach a project that has many somewhat independent sources of uncertainty (we can often think of these as multiple "steps" or "parts" that each have some probability of success). Is it best to do these steps from easiest to hardest? From hardest to easiest? From quickest to slowest? We will eventually see that a good principle is to "reduce uncertainty at the fastest possible rate". [...]

Suppose you are embarking on a project with several parts, all of which must succeed for the project to succeed. [Note: This could be a matter of whether the project will "work" or of how valuable its results would be.] For instance, a proof strategy might rely on proving several intermediate results, or an applied project might require achieving high enough speed and accuracy on several components. What is a good strategy for approaching such a project? For me, the most intuitively appealing strategy is something like the following:

(Naive Strategy)
Complete the components in increasing order of difficulty, from easiest to hardest.

This is psychologically tempting: you do what you know how to do first, which can provide a good warm-up to the harder parts of the project. This used to be my default strategy, but often the following happened: I would do all the easy parts, then get to the hard part and encounter a fundamental obstacle that required scrapping the entire plan and coming up with a new one. For instance, I might spend a while wrestling with a certain algorithm to make sure it had the statistical consistency properties I wanted, but then realize that the algorithm was not flexible enough to handle realistic use cases.

The work on the easy parts was mostly wasted--it wasn't that I could replace the hard part with a different hard part; rather, I needed to re-think the entire structure, which included throwing away the "progress" from solving the easy parts. [...]
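As a quick toy illustration of that principle (this sketch and its numbers are my own, not from Steinhardt's post), here's roughly how the expected hours spent before a project's fate becomes clear can differ between doing the easy parts first and doing the riskiest part first:

```python
# Hypothetical project parts as (hours of work, probability the part succeeds).
# All parts must succeed for the project to succeed; the numbers are made up.
parts = [(8, 0.95), (10, 0.9), (20, 0.4)]

def expected_hours(ordering):
    """Expected hours worked before the project either finishes or a part fails."""
    total, p_reached = 0.0, 1.0
    for hours, p_success in ordering:
        total += p_reached * hours  # this part is only attempted if all earlier parts succeeded
        p_reached *= p_success
    return total

easiest_first = sorted(parts, key=lambda part: part[0])   # shortest/easiest part first
riskiest_first = sorted(parts, key=lambda part: part[1])  # lowest success probability first

print(expected_hours(easiest_first))   # ~34.6 hours
print(expected_hours(riskiest_first))  # ~26.9 hours
```

With these made-up numbers, tackling the riskiest part first saves several expected hours, because when that part turns out to be a dead end (60% of the time here), the easy parts were never started.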

I expect that, on the current margin in longtermism:

  • fundamental research will tend to reduce uncertainty at a faster rate than intervention research
  • somewhat prioritising fundamental research would result in fewer hours "wasted" on relatively low-value efforts than somewhat prioritising intervention research would

(Though those are empirical and contestable claims - rather than being true by definition - and Steinhardt's post wasn't specifically about fundamental vs intervention research.)

Propose and vote on potential tags

Hmm, I think I'd agree that most things which fit in both Longtermism (Cause Area) and Moral Philosophy would fit Longtermism (Philosophy). (Though there might be exceptions. E.g., I'm not sure stuff to do with moral patienthood/status/circles would be an ideal fit for Longtermism Philosophy - it's relevant to longtermism, but not uniquely or especially relevant to longtermism. But those things tie in to potential longtermist interventions.)

But now that you mention that, I realise that there might not be a good way to find and share posts at the intersection of two tags (which would mean that tags which are theoretically redundant are currently still practically useful). I've just sent the EA Forum team the following message about this:

[...]

I think the way one would currently [find and share posts at the intersection of two tags] is by going to the frontpage, selecting two tags to filter by, and choosing +25 or Required.

But when I do that for Longtermism (Philosophy) and Existential Risk at the same time (as a test), no posts come up at all at Required, even though I expect there are many relevant posts with both tags - I know at least "Crucial questions for longtermists" has both.

And when I do that at +25, I think what I get is just the regular frontpage. Or at least it's all pretty recent posts, most with neither of those tags.

Also, it'd be cool to be able to filter by a second tag from a tag page. E.g., to be on https://forum.effectivealtruism.org/tag/longtermism-philosophy , and then filter by another tag, like one could on the frontpage.

Finally, I think it'd be cool to have the active filters appear in the URL, so I can share a URL with someone to direct them right to the intersection of two tags.

Currently I send multiple people multiple tag pages after EA conferences (and sometimes at other times), as they serve as handy collections of posts relevant to what the people expressed interest in. It'd be cool to be able to do the equivalent thing for intersections of tags as well.

Just some suggestions - not sure how easy or high-priority to implement they should be :)

So I'll hold off on making a Longtermism (Cause Area) tag or converting the Longtermism (Philosophy) tag into that until I hear back from the Forum team, and/or think more or get more input on what the best approach here would be.

Propose and vote on potential tags

Longtermism (Cause Area)

We have various tags relevant to longtermism or specific things that longtermists are often interested in (e.g., Existential Risk). But we don't have a tag for longtermism as a whole. Longtermism (Philosophy) and Long-Term Future don't fit that bill; the former is just for "posts about philosophical matters relevant to longtermism", and the latter is "meant for discussion of what the long-term future might actually look like".

One example of a post that's relevant to longtermism as a cause area but that doesn't seem to neatly fit in any of the existing longtermism-related tags is Should marginal longtermist donations support fundamental or intervention research? An analogous post that was focused on global health & dev or mental health could be given the tags that cover those cause areas, and one focused on animal welfare could be given the Farm Animal Welfare and Wild Animal Welfare tags (which together seem to me to fill the role of a tag for that whole cause area).

Donation Writeup

The name doesn't seem to intuitively capture that this tag is also meant to include donation recommendations, not just writeups. So maybe this should be renamed to Donation Choice, mirroring the Career Choice tag?

I also think it'd be good for the tag (or some other tag) to more explicitly include recommendations of general areas to fund or principles to follow (e.g., funding fundamental rather than intervention research). I think Donation Choice would more intuitively capture that than Donation Writeup does.

Should surveys about the quality/impact of research outputs be more common?

Update: The Happier Lives Institute are now running a survey about the quality and impact of their research outputs. (It's linked to from their latest newsletter.)

Exciting to see another organisation join the (seemingly) very small club of organisations who do this!

ALLFED 2020 Highlights

One of the proposed projects whose longtermist theory of change I currently feel particularly unsure about is:

Building heating for losing electricity/industry — Because the loss of electricity/industry is likely to be sudden, keeping people warm is an urgent need. Options we want to investigate include retrofitting ovens to burn wood for heating.

At first glance, it's hard for me to imagine why this would be a high-priority variable to influence if we're adopting a longtermist perspective (though it's more plausible to me that it could be a decently cost-effective way to - in expectation - save lives in the coming generations).

ALLFED 2020 Highlights

Thanks for this interesting post :)

I'd be interested to hear a bit more about ALLFED's thinking regarding how doing the "Projects in need of funding" would reduce existential risk. 

For example, to what extent are they aimed at reducing risks of extinction, vs risks of unrecoverable collapse, vs risks of unrecoverable dystopia? Or perhaps some of the projects are primarily aimed at saving lives in the nearer-term (rather than longtermist concerns)? Or at helping ALLFED itself develop (e.g., by gaining credibility, networks, funding), to increase how effectively you can pursue other projects which have more direct relevance to longtermism?

And to the extent that ALLFED expects these projects to achieve longtermist goals, is that because ALLFED thinks:

  • The project would reduce the number of deaths as a fairly direct result of a sun-blocking catastrophe or loss of electricity/industry, which then fairly directly, meaningfully reduces extinction risk
    • This would imply ALLFED thinks extinction risk as a fairly direct result of those things isn't already super low
  • The project would reduce the number of deaths as a fairly direct result of one of those catastrophes, which then reduces the chance of further catastrophes and conflict, which then in turn reduces extinction risk
  • The project would reduce the length of time between the initial catastrophe and the recovery, which would reduce extinction risk
    • Perhaps because the longer that interim time is, the more chance there is of further catastrophes or conflict
      • Perhaps because that period would be anarchic, and during it there's a meaningfully increased risk of states or nonstate actors doing dangerous things that increase extinction risk
  • A version of one of the above, but focused on risks of unrecoverable collapse or unrecoverable dystopia rather than risks of extinction
  • In general, reducing numbers of deaths and global instability is just a good proxy for reducing existential risk, even if one hasn't explicitly mapped out the interim steps
  • [Something else]

(This is just a quick way of slicing up the possibilities; you don't have to categorise things that way, of course.)

(Also, I'm aware that these questions overlap a bunch with things already said in this post and in prior ALLFED-related posts and presentations. Feel free to point me to relevant parts of those things, request I make these questions more specific, etc.)

Where are you donating in 2020 and why?

That's useful info, and sounds to me like a fair point. Thanks :)

But then this strikes me as tying back into the idea that "Perhaps [ALLFED] seemingly not having been funded by the EA Long-Term Future Fund, Open Phil, and various other funders is evidence that there's some reason not to support them, which I just haven't recognised?"

Here that question can take a more concrete form: If Open Phil chose to fund a group that'd work on alternative foods that ALLFED thinks will be less promising than the alternative foods ALLFED focuses on, but didn't choose to fund ALLFED (at least so far), does that mean:

  1. Open Phil are making a mistake?
  2. ALLFED are wrong about which foods are most promising? 
    • Perhaps because they're wrong about the relative costs, or because there are other considerations which outweigh the cost consideration?
  3. ALLFED are right about which foods are most promising, but there's some other overriding reason why the other team was a better donation opportunity? 
    • E.g., perhaps at the present margin, what's most needed is more academic credibility and that team could get it better than ALLFED could?
  4. There's some alternative explanation such that Open Phil's decisions are sound but also ALLFED is a good donation opportunity? 
    • E.g., perhaps there's some reason why Open Phil in particular shouldn't fund ALLFED at this stage, even if it thought ALLFED was a good opportunity for other donors?

I don't really know how likely each of those possible implications is (and thus I don't have strong reason to believe 2 or 3 are the most likely ones). So this is just a confusing thing and a potential argument against donating to ALLFED, rather than a clearly decisive argument.

I'd be interested in your (or other people's) thoughts on this - but would also understand if this is inappropriate to discuss publicly.

(Btw, I wouldn't want readers to interpret this as a major critique or an expression of strong doubt. I'd expect to have at least some doubt or reservation with regards to basically any place I choose to donate to, work for, etc. - prioritisation is hard! - and I'm still planning to give ~4% of my income this year to ALLFED.)

Potential downsides of using explicit probabilities

Hmm, I feel like you may be framing things quite differently to how I would, or something. My initial reaction to your comment is something like:

It seems useful to conceptually separate data collection from data processing, where by the latter I mean using that data to arrive at probability estimates and decisions.

 I think Bayesianism (in the sense of using Bayes' theorem and a Bayesian interpretation of probability) and "math and technical patches" might tend to be part of the data processing, not the data collection. (Though they could also guide what data to look for. And this is just a rough conceptual divide.) 

When Ozzie wrote about going with "an approach that in-expectation does a decent job at approximating the mathematical approach", he was specifically referring to dealing with the optimizer's curse. I'd consider this part of data processing.
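As a quick toy illustration of the optimizer's curse itself, for readers less familiar with it (the setup and numbers are made up, and this isn't meant to capture Ozzie's proposed fix): if you pick whichever option has the highest noisy estimate, that estimate will tend to overstate the chosen option's true value.

```python
import random

# Ten options with identical true value, each observed only via a noisy estimate.
def average_overestimate(n_options=10, true_value=1.0, noise_sd=0.5, trials=10_000):
    """Average gap between the best-looking option's estimate and its true value."""
    gap = 0.0
    for _ in range(trials):
        estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_options)]
        gap += max(estimates) - true_value  # the "chosen" option is whichever looks best
    return gap / trials

print(average_overestimate())  # consistently well above 0 (roughly 0.7-0.8 with these numbers)
```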

 Meanwhile, my intuitions (i.e., gut reactions) and what experts say are data. Attending to them is data collection, and then we have to decide how to integrate that with other things to arrive at probability estimates and decisions.

I don't think we should see ourselves as deciding between either Bayesianism and "math and technical patches" or paying attention to my intuitions and domain experts. You can feed all sorts of evidence into Bayes' theorem. I doubt any EA would argue we should form conclusions from "Bayesianism and math alone", without using any data from the world (including even their intuitive sense of what numbers to plug in, or whether people they share their findings with seem skeptical). I'm not even sure what that'd look like.

And I think my intuitions or what domain experts say can very easily be made sense of as valid data within a Bayesian framework. Generally, my intuitions and experts are more likely to indicate X is true in worlds where X is true than where it's not. This effect is stronger when the conditions for intuitive expertise are met, when experts' incentives seem to be well aligned with seeking and sharing truth, etc. This effect is weaker when it seems that there are strong biases or misaligned incentives at play, or when it seems there might be.
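As a rough sketch of that kind of update (the specific numbers are made up purely for illustration): if I start at 40% on X, and I'd guess an expert endorsing X is about three times as likely when X is true as when it's false, then the odds form of Bayes' theorem gives:

```python
def update(prior, likelihood_ratio):
    """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio."""
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Made-up numbers: prior credence of 0.4 in X, and an expert endorsement that's
# 3x as likely when X is true as when it's false. If the expert's incentives seemed
# poorly aligned, I'd treat the endorsement as weaker evidence (e.g., only 1.5x).
print(update(0.4, 3.0))  # ~0.67
print(update(0.4, 1.5))  # 0.5
```

So the same framework naturally handles treating intuitions or expert views as stronger or weaker evidence depending on how reliable they seem.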

(Perhaps this is talking past you? I'm not sure I understood your argument.)
