MichaelA's Shortform

by MichaelA · 22nd Dec 2019 · 120 comments

Notes from a call with someone who's a research assistant to a great researcher

(See also Matthew van der Merwe's thoughts. I'm sharing this because I think it might be useful to some people by itself, and so I can link to it from parts of my sequence on Improving the EA-Aligned Research Pipeline.)

  • This RA says they definitely learned more from this RA role than they would’ve if doing a PhD
    • Mainly due to tight feedback loops
    • And strong incentives for the senior researcher to give good feedback
        • The RA is producing "intermediate products" for the senior researcher. So the senior researcher needs and uses what the RA produces. So the feedback is better and different.
        • In contrast, if the RA was working on their own, separate projects, it would be more like the senior researcher just looks at it and grades it.
  • The RA has mostly just had to do literature reviews of all sorts of stuff related to the broad topic the senior researcher focuses on
      • So the RA was incentivised, more than pretty much anyone else, to just get familiar with all the stuff under this umbrella
      • They wouldn’t be able or encouraged to do that in a PhD
  • The thing the RA hasn’t liked is that he hasn’t been producing his ow
... (read more)

For the last few years, I’ve been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you’re saying makes sense, it’s important to note that the quality of one’s experience as an RA will always depend to a very significant extent on one’s supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it’s like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it’s harder than you’d think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.

On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don’t really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, thi... (read more)

3MichaelA4moThanks, I think this provides a useful counterpoint/nuance that I think should help people make informed decisions about whether to try to get RA roles, how to choose which roles to aim for/accept, and whether and how to facilitate/encourage other people to offer or seek RA roles. Your second paragraph is also interesting. I hadn't previously thought about how there may be overlap between the skills/mindsets that are useful for RAs and those useful for research management, and that seems like a useful point to raise. Minor point: That point was from the RA I spoke to, not from me. (But I do endorse the idea that such specialisation might be a good thing.) More substantive point: It's worth noting that, while a lot of the research and research training I particularly care about happens in traditional academia, a lot also happens in EA parts of academia (e.g., FHI, GPI), in EA orgs, in think tanks, among independent researchers, and maybe elsewhere. So even if this specialisation wouldn't yield much productivity gains compared to the current state of affairs in one of those "sectors", it could perhaps do so in others. (I don't know if it actually would, though - I haven't looked into it enough, and am just making the relatively weak claim that it might.)
3HStencil4moYeah, I think it’s very plausible that career RAs could yield meaningful productivity gains in organizations that differ structurally from “traditional” academic research groups, including, importantly, many EA research institutions. I think this depends a lot on the kinds of research that these organizations are conducting (in particular, the methods being employed and the intended audiences of published work), how the senior researchers’ jobs are designed, what the talent pipeline looks like, etc., but it’s certainly at least plausible that this could be the case. On the parallels/overlap between what makes for a good RA and what makes for a good research manager, my view is actually probably weaker than I may have suggested in my initial comment. The reason why RAs are sometimes promoted into research management positions, as I understand it, is that effective research management is believed to require an understanding of what the research process, workflow, etc. look like in the relevant discipline and academic setting, and RAs are typically the only people without PhDs who have that context-specific understanding. Plus, they’ll also have relevant domain knowledge about the substance of the research, which is quite useful in a research manager, too. I think these are pretty much all of the reasons why RAs may make for good research managers. I don’t really think it’s a matter of skills or of mindset anywhere near as much as it’s about knowledge (both tacit and not). In fact, I think one difficulty with promoting RAs to research management roles is that often, being a successful RA seems to select for traits associated with not having good management skills (e.g., being happy spending one’s days reading academic papers alone with very limited opportunities for interpersonal contact). This is why I limited my original comment on this to RAs who can effectively manage people, who, as I suggested, I think are probably a small minority. 
Because good research manage
2MichaelA4moAh, thanks for that clarification! Your comments here continue to be interesting food for thought :)

One idea that comes to mind is to set up an organization that offers RAs-as-a-service. Say, a nonprofit that works with multiple EA orgs and employs several RAs, some full-time and others part-time (think, a student job). This org could then handle recruiting, basic training, employment, and some of the management. RAs could work on multiple projects with perhaps multiple different people, and tasks could be delegated to the organization as a whole, which would find the right RA for each.

A financial model could be something like EA orgs pay 25-50% of the relevant salaries for projects they recruit RAs for, and the rest is complemented by donations to the non-profit itself.

5MichaelA4moYeah, I definitely think this is worth someone spending at least a couple hours seriously thinking about doing, including maybe sending out a survey to or conducting interviews with non-junior researchers[1] to gauge interest in having an RA if it was arranged via this service. I previously suggested [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=XMXKNBdujMkNDegza] a somewhat similar idea as a project to improve the long-term future: And Daniel Eth replied there: I'm going to now flag this idea to someone who I think might be able to actually make it happen.
5MichaelA4moSomeone pointed out to me that BERI already do some amount of this. E.g., they recently hired for or are hiring for RAs for Anders Sandberg and Nick Bostrom at FHI. It seems plausible that they're doing all the stuff that's worth doing, but also seems plausible (probable?) that there's room for more, or for trying out different models. I think anyone interested in potentially actually starting an initiative like this should probably touch base with BERI before investing lots of time into it.
5EdoArad4moAh, right! There still might be a need outside of longtermist research, but I definitely agree that it'd be very useful to reach out to them to learn more. For further context for people who might potentially go ahead with this, BERI [https://existence.org/#what-we-do] is a nonprofit that supports researchers working on existential risk [https://forum.effectivealtruism.org/posts/xmy2AKaGSdEDgnXej/beri-seeking-new-collaborators-1] . I guess that Sawyer [https://forum.effectivealtruism.org/users/sawyer] is the person to reach out to.
3MichaelA4moBtw, the other person I suggested this idea to today is apparently already considering doing this. So if someone else is interested, maybe contact both Sawyer and me, and I can put you in touch with this person. And this person would do it for longtermist researchers, so yeah, it seems plausible/likely to me that there's more room for this for researchers focused on other cause areas.
5Jamie_Harris4moThese feel like they should be obvious points, and yet I hadn't thought about them before. So this was also an update for me! I've been considering PhDs, and your stated downsides don't seem like big downsides for me personally, so it could be relevant to me too. Ok, so imagine you/we (the EA community) successfully make the case and encourage demand for RA positions. Is there supply?

  • I don't recall ever seeing an RA position formally advertised (though I haven't been looking out for them per se, don't check the 80k job board very regularly, etc.)
  • If I imagine myself or my colleagues at Sentience Institute with an RA, I can imagine that we'd periodically find an RA helpful, but not enough for a full-time role.
  • Might be different at other EA/longtermist nonprofits, but we're primarily funding constrained. Apart from the sense that they might accept a slightly lower salary, why would we hire an RA when we could hire a full-blown researcher (who might sometimes have to do the lit reviews and grunt-work themselves)?
6HStencil4moI actually think full-time RA roles are very commonly (probably more often than not?) publicly advertised. Some fields even have centralized [https://www.nber.org/career-resources/research-assistant-positions-not-nber] job boards [https://predoc.org/opportunities] that aggregate RA roles across the discipline, and on top of that, there are a growing number of formalized predoctoral [https://law.stanford.edu/research/sls-fellowships/empirical-research-fellowship/] RA programs [https://www.hbs.edu/ra/Pages/default.aspx] at major research universities in the U.S. I am actually currently working as an RA in an academic research group that has had roles posted on the 80,000 Hours job board. While I think it is common for students to approach professors in their academic program and request RA work, my sense is that non-students seeking full-time RA positions very rarely have success cold-emailing professors and asking if they need any help. Most professors do not have both ongoing need for an (additional) RA and the funding to hire one (whereas in the case of their own students, universities often have special funding set aside for students’ research training, and professors face an expectation that they help interested students to develop as researchers). Separately, regarding the second bullet point, I think it is extremely common for even full-time RAs to only periodically be meaningfully useful and to spend the rest of their time working on relatively low-priority “back burner” projects. In general, my sense is that work for academic RAs often comes in waves; some weeks, your PI will hand you loads of things to do, and you’ll be working late, but some weeks, there will be very little for you to do at all. In many cases, I think RAs are hired at least to some extent for the value of having them effectively on call.
6EdoArad4moIn regards to the third bullet point, there might be a nontrivial boost to the senior researchers' productivity and well-being. Doing grunt-work can be disproportionately tiring and demotivating relative to the time it takes, and most people have some type of work that they dislike or are just not good at, which could perhaps be delegated. Additionally, having a (strong and motivated) RA might just be more fun and help with making personal research projects more social and meaningful. Regarding the salary, I've quickly checked GiveWell's salaries at Glassdoor [https://www.glassdoor.com/Salary/GiveWell-Salaries-E974290.htm]. From that I'd guess that an RA could cost about 60% as much as a senior researcher. (I'm sure that there is better and more relevant information out there)
2MichaelA4moI think you're asking "...encourage that people seek RA positions. Would there be enough demand for those aspiring RAs?"? Is that right? (I ask because I think I'm more used to thinking of demand for a type of worker, and supply of candidates for those positions.) I don't have confident answers to those questions, but here are some quick, tentative thoughts:

  • I've seen some RA positions formally advertised (e.g., on the 80k job board)
    • I remember one for Nick Bostrom and I think one for an economics professor, and I think I've seen others
    • I also know of at least two cases where an RA position was opened but not widely advertised, including one case where the researcher was only a couple years into their research career
  • I have a vague memory of someone saying that proactively reaching out to researchers to ask if they'd want you to be an RA might work surprisingly often
    • I also have a vague impression that this is common with university students and professors
    • But I think this person was saying it in relation to EA researchers
    • (Of course, a vague memory of someone saying this is not very strong evidence that it's true)
  • I do think there are a decent number of EA/longtermist orgs which have or could get more funding than they are currently able or willing to spend on their research efforts, e.g. due to how much time from senior people would be consumed by hiring rounds or managing and training new employees
    • Some of these constraints would also constrain the org from taking on RAs
    • But maybe there are cases where the constraint is smaller for RAs than for more independent researchers?
      • One could think of this in terms of the org having already identified a full researcher whose judgement, choices, output, etc. the org is happy with, and they've then done further work to get that researcher on the same page with the org, more tr
3MichaelA4moSee also 80k on the career idea "Be research manager or a PA for someone doing really valuable work [https://forum.effectivealtruism.org/posts/6x2MjPXhpPpnatJFQ/some-promising-career-ideas-beyond-80-000-hours-priority#Be_research_manager_or_a_PA_for_someone_doing_really_valuable_work] ".

Readings and notes on how to do high-impact research

This shortform contains some links and notes related to various aspects of how to do high-impact research, including how to:

  1. come up with important research questions
  2. pick which ones to pursue
  3. come up with a "theory of change" for your research
  4. assess your impact
  5. be and stay motivated and productive
  6. manage an organisation, staff, or mentees to help them with the above

I've also delivered a workshop on the same topics, the slides from which can be found here.

The document has less of an emphasis on object-level things to do with just doing research well (as opposed to doing impactful research), though that’s of course important too. On that, see also Effective Thesis's collection of Resources, Advice for New Researchers - A collaborative EA doc, Resources to learn how to do research, and various non-EA resources (some are linked to from those links).

Epistemic status

This began as a Google Doc of notes to self. It's still pretty close to that status - i.e., I don't explain why each thing is relevant, haven't spent a long time thinking about the ideal way to organise this, and expect this shortform omits many great readings and tips. But seve... (read more)

5Kat Woods4moThanks for posting this! This is a gold mine of resources. This will save the Nonlinear team so much time.
2Ramiro4moDid you consider whether this could get more views as a normal "longform" post? Maybe it's not up to your usual standards, but I think it's pretty good.
2MichaelA4moNice to hear you think so! I did consider that, but felt like maybe it's too much of just a rough, random grab-bag of things for a top-level post. But if the shortform or your comment gets unexpectedly many upvotes, or other people express similar views in comments, I may "promote" it.
2MichaelA4moMore concretely, regarding generating and prioritising research questions, one place to start is these lists of question ideas:

  • Research questions that could have a big social impact, organised by discipline [https://80000hours.org/articles/research-questions-by-discipline/]
  • A central directory for open research questions [https://forum.effectivealtruism.org/posts/MsNpJBzv5YhdfNHc9/a-central-directory-for-open-research-questions]
  • Crucial questions for longtermists [https://forum.effectivealtruism.org/posts/wicAtfihz2JmPRgez/crucial-questions-for-longtermists]
  • Some history topics it might be very valuable to investigate [https://forum.effectivealtruism.org/posts/psKZNMzCyXybcoEZR/some-history-topics-it-might-be-very-valuable-to-investigate]
    • This is somewhat less noteworthy than the other links

And for concrete tips on things like how to get started, see Notes on EA-related research, writing, testing fit, learning, and the Forum [https://forum.effectivealtruism.org/posts/J7PsetipHFoj2Mv7R/notes-on-ea-related-research-writing-testing-fit-learning].

Note: This shortform is now superseded by a top-level post I adapted it into. There is no longer any reason to read the shortform version.

Book sort-of-recommendations

Here I list all the EA-relevant books I've read or listened to as audiobooks since learning about EA, in roughly descending order of how useful I perceive/remember them being to me. 

I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because: 

  • some of the factors making these books more/less useful to me won't generalise to most other people
  • I'm including all relevant books I've read (not just the top picks)

Let me know if you want more info on why I found something useful or not so useful.

(See also this list of EA-related podcasts and this list of sources of EA-related videos.)

  1. The Precipice, by Ord, 2020
    • See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice.
    • I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good con
... (read more)
4Aaron Gertler7moI recommend making this a top-level post. I think it should be one of the most-upvoted posts on the "EA Books" tag, but I can't tag it as a Shortform post.
2MichaelA7moI had actually been thinking I should probably do that sometime, so your message inspired me to pull the trigger and do it now [https://forum.effectivealtruism.org/posts/zCJDF6iNSJHnJ6Aq6/a-ranked-list-of-all-ea-relevant-audio-books-i-ve-read] . Thanks! (I also made a few small improvements/additions while I was at it.)

Collection of EA analyses of how social movements rise, fall, can be influential, etc.

Movement collapse scenarios - Rebecca Baron

Why do social movements fail: Two concrete examples. - NunoSempere

What the EA community can learn from the rise of the neoliberals - Kerry Vaughan

How valuable is movement growth? - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)

Long-Term Influence and Movement Growth: Two Historical Case Studies - Aron Vallinder, 2018

Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?

A Framework for Assessing the Potential of EA Development in Emerging Locations* - jahying

EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020

Hard-to-reverse decisions destroy option value - Schubert & Garfinkel, 2017

These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:

It appears Animal C... (read more)

7vaidehi_agarwalla1yI have a list here that has some overlap but also some new things: https://docs.google.com/document/d/1KyVgBuq_X95Hn6LrgCVj2DTiNHQXrPUJse-tlo8-CEM/edit# [https://docs.google.com/document/d/1KyVgBuq_X95Hn6LrgCVj2DTiNHQXrPUJse-tlo8-CEM/edit#]
2MichaelA1yThat looks very helpful - thanks for sharing it here!
3Shri_Samson1yThis is probably too broad but here's Open Philanthropy's list of case studies on the History of Philanthropy [https://www.openphilanthropy.org/research/history-of-philanthropy] which includes ones they have commissioned, though most are not done by EAs with the exception of Some Case Studies in Early Field Growth [https://www.openphilanthropy.org/research/history-of-philanthropy/some-case-studies-early-field-growth] by Luke Muehlhauser. Edit: fixed links
2MichaelA1yYeah, I think those are relevant, thanks for mentioning them! It looks like the links lead back to your comment for some reason (I think I've done similar in the past). So, for other readers, here are the links I think you mean: 1 [https://www.openphilanthropy.org/research/history-of-philanthropy], 2 [https://www.openphilanthropy.org/research/history-of-philanthropy/some-case-studies-early-field-growth] . (Also, FWIW, I think if an analysis is by a non-EA by commissioned by an EA, I'd say that essentially counts as an "EA analysis" for my purposes. This is because I expect that such work's "precise focuses or methodologies may be more relevant to other EAs than would be the case with [most] non-EA analyses".)

Independent impressions

Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.

Armed with this concept, I try to stick to the following epistemic/discussion norms, and think it's good for other people to do so as well:

  • Trying to keep track of my own independent impressions separately from my all-things-considered beliefs (which also takes into account peer disagreement)
  • Trying to be clear about whether I'm reporting my independent impression or my all-things-considered belief
  • Feeling comfortable reporting my own independent impression, even when I know it differs from the impressions of people with more expertise in a topic

One rationale for that bundle of norms is to avoid information cascades.

In contrast, when I actually make decisions, I try t... (read more)

2MichaelA5moI just re-read this comment by Claire Zabel [https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty?commentId=cubpmCn7XJE5FQYEq] , which is also good and is probably where I originally encountered the "impressions" vs "beliefs" distinction. (Though I still think that this shortform serves a somewhat distinct purpose, in that it jumps right to discussing that distinction, uses terms I think are a bit clearer - albeit clunkier - than just "impressions" vs "beliefs", and explicitly proposes some discussion norms that Claire doesn't quite explicitly propose.)

Collection of sources that seem very relevant to the topic of civilizational collapse and/or recovery

Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk)

Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019

Update on civilizational collapse research - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)

Modelling the odds of recovery from civilizational collapse - Michael Aird (i.e., me), 2020

The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)

How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post)

Various EA Forum posts by Dave Denkenberger (see also ALLFED's site)

Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)

A (Very) Short History of the Collapse of Civilizations, and Why it Matters - David Manheim, 2020

A grant applic... (read more)

5gavintaylor1yGuns, Germs, and Steel [https://en.wikipedia.org/wiki/Guns,_Germs,_and_Steel] - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.
2MichaelA1yGreat, thanks for adding that to the collection!
3MichaelA1ySuggested by a member of the History and Effective Altruism Facebook group [https://www.facebook.com/groups/historyandea/]: * https://scholars-stage.blogspot.com/2019/07/a-study-guide-for-human-society-part-i.html [https://scholars-stage.blogspot.com/2019/07/a-study-guide-for-human-society-part-i.html?fbclid=IwAR20GCpfcQbkTN4uW2ot3fYrl0B-qekDgO1_NDxafgjTfrMk9BY3d8IyjIw] * Disputers of the Tao, by A. C. Graham
2MichaelA1ySee also the book recommendations here [https://forum.effectivealtruism.org/posts/RuYihnDzD75AM4B8q/book-on-civilisational-collapse] .

Collection of EA analyses of political polarisation

Book Review: Why We're Polarized - Astral Codex Ten, 2021

EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020

Adapting the ITN framework for political interventions & analysis of political polarisation - OlafvdVeen, 2020

Thoughts on electoral reform - Tobias Baumann, 2020

Risk factors for s-risks - Tobias Baumann, 2019

Other EA Forum posts tagged Political Polarization

(Perhaps some older Slate Star Codex posts? I can't remember for sure.)

Notes

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Also, I'm aware that there has also been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only analyses by EAs/EA-adjacent people here are that:

  • their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
  • links to non-EA work can be found in most of the things I list here
  • I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)

To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?

Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?

One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are presumably the things there'd be data on.

But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
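To make the proposed calculation concrete, here's a rough sketch of the core computation in Python. This assumes a hypothetical export of `(user_id, year)` activity records (one per post or comment); the `yearly_churn` function name and input format are illustrative assumptions, not an actual Forum API — only CEA would have the real data.

```python
from collections import defaultdict

def yearly_churn(events):
    """events: iterable of (user_id, year) pairs, one per post or comment.

    Returns {year: fraction of that year's active users who were not
    active the following year}. The final year is skipped, since there
    is no follow-up data for it yet.
    """
    active = defaultdict(set)
    for user, year in events:
        active[year].add(user)
    years = sorted(active)
    churn = {}
    for year in years[:-1]:
        this_year = active[year]
        next_year = active.get(year + 1, set())
        churn[year] = len(this_year - next_year) / len(this_year)
    return churn

# Toy data: alice is active in 2015 and 2016; bob only in 2015.
events = [("alice", 2015), ("bob", 2015), ("alice", 2016)]
print(yearly_churn(events))  # {2015: 0.5}
```

Note this measures year-over-year churn; to instead answer "how many 2015 users haven't used it since?", one would compare each year's active set against the union of all later years' active sets.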

Quick thoughts on Kelsey Piper's article Is climate change an “existential threat” — or just a catastrophic one?

  • The article was far better than I expect most reporting on climate change as a potential existential risk to be
    • This is in line with Kelsey Piper generally seeming to do great work
  • I particularly appreciated that it (a) emphasised how the concepts of catastrophes in general and extinction in particular are distinct and why that matters, but (b) did this in a way that I suspect has a relatively low risk of seeming callous, nit-picky, or otherwise annoying to people who care about climate change
  • But I also had some substantive issues with the article, which I'll discuss below
  • The article conflated “existential threat”/“existential risk” with “extinction risk”, thereby ignoring two other types of existential catastrophe: unrecoverable collapse and unrecoverable dystopia
    • See also Venn diagrams of existential, global, and suffering catastrophes
    • Some quotes from the article demonstrate the conflation I'm referring to:
      • “But there’s a standard meaning of that phrase [existential threat]: that it’s going to wipe out humanity — or even, as Warren implied Wednesday night, all life
... (read more)

Reflections on data from a survey about things I’ve written 

I recently requested people take a survey on the quality/impact of things I’ve written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)

Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1] 

I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you’d like me to add them in a comment.

For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common? 

(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)

The data

Q1:

Q2: 

Q3:

Q4: 

Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected ... (read more)

9HowieL1y"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive" I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad summary to be biased against them). That said, as you know, I think your summaries/collections are useful and underprovided.
2MichaelA1yGood point. Though I guess I suspect that, if the reason a person finds my original research not so useful is just because they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them. (Rather than making a generic comment about it not seeming useful.) But I guess this seems less likely in cases where: * the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or * the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful") In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.
1HowieL1ySeems reasonable

tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think we should actually be more uncertain about that; it seems plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.

(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)

---

In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could “directly” cause existential catastrophe. He writes:

An easy way to find existential risk factors is to consider stressors for humanity or for our ability to make good decisions. These include global economic stagnation… (emphasis added)

This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.

He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:

Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help prot

... (read more)

Epistemic status: Unimportant hot take on a paper I've only skimmed.

Watson and Watson write:

Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.

I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?

They go on to say:

Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing
... (read more)

I've recently collected readings and notes on the following topics:

Just sharing here in case people would find them useful. Further info on purposes, epistemic status, etc. can be found at those links.

Notes on Galef's "Scout Mindset" (2021)

Overall thoughts

  • Scout Mindset was engaging, easy to read, and had interesting stories and examples
  • Galef covered a lot of important points in a clear way
  • She provided good, concrete advice on how to put things into practice
  • So I'm very likely to recommend this book to people who aren't in the EA community, are relatively new to it, or aren't super engaged with it
    • I also liked how she mentioned effective altruism itself several times and highlighted its genuinely good features in an accurate way, but without making this the central focus or seeming preachy
      • (At least, I'm guessing people wouldn't find it preachy - it's hard to say given that I'm already a convert...)
  • Conversely, I think I was already aware of and had internalised almost all the basic ideas and actions suggested in the book, and mostly act on these things
... (read more)

Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally

This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.

In The Precipice, Ord writes:

[If a nuclear winter occurs,] Existential catastrophe via a global unrecoverable collapse of civilisation also seems unlikely, especially if we consider somewhere like New Zealand (or the south-east of Australia) which is unlikely to be directly targeted and will avoid the worst effects of nuclear winter by being coastal. It is hard to see why they wouldn’t make it through with most of their technology (and institutions) intact. 

(See also the relevant section of Ord's 80,000 Hours interview.)

I share the view that it’s unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that clos... (read more)

Collection of all prior work I found that explicitly uses the terms differential progress / intellectual progress / technological development

Differential progress / intellectual progress / technological development - Michael Aird (me), 2020

Differential technological development - summarised introduction - james_aung, 2020

Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015

Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016

Differential progress - EA Concepts

Differential technological development - Wikipedia

Existential Risk and Economic Growth - Aschenbrenner, 2019 (summary by Alex HT here)

On Progress and Prosperity - Christiano, 2014

How useful is “progress”? - Christiano, ~2013

Improving the future by influencing actors' benevolence, intelligence, and power - Aird, 2020

Differential intellectual progress - LW Wiki

Existential Risks: Analyzing Human Extinction Scenarios - Bostrom, 2002 (section 9.4) (introduced the term differential technological development, I think)

Intelligence Explosion: Evidence and Import - Muehlhauser & Salamon (for MIRI) (section 4.2) (introduced the term differentia... (read more)

Collection of all prior work I found that seemed substantially relevant to information hazards

Information hazards: a very simple typology - Will Bradshaw, 2020

Information hazards and downside risks - Michael Aird (me), 2020

Information hazards - EA concepts

Information Hazards in Biotechnology - Lewis et al., 2019

Bioinfohazards - Crawford, Adamson, Ladish, 2019

Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)

Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018

Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018

Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018

Mitigating catastrophic biorisks - Esvelt, 2020

The Precipice (particularly pages 135-137) - Ord, 2020

Information hazard - LW Wiki

Thoughts on The Weapon of Openness - Will Bradshaw, 2020

Exploring the Streisand Effect - Will Bradshaw, 2020

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018

A point of clarification on infohazard terminology - eukaryote, 2020

Somewhat less directly relevant

The Offense-Defense Balance of Scientific Knowledge: ... (read more)

MichaelA (1 karma, 2y): Interesting example: Leo Szilard and cobalt bombs

In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says [https://en.wikipedia.org/wiki/Cobalt_bomb]:

That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's [https://nickbostrom.com/information-hazards.pdf] subtypes of information hazards:

It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised - or at least not acted on - the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.

I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example with regard to Japanese bioweapons in WWII elsewhere in the book.

I was also surprised by the fact that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (the earliest?) examples of a scientist bucking standard openness norms due to, basically, concerns about information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace [https://intelligence.org/files/SzilardNuclearWeapons.pdf] states:

Collection of EA-associated historical case study research

This collection is in reverse chronological order of publication date. I think I'm forgetting lots of relevant things, and I intend to add more things in future - please let me know if you know of something I'm missing.

Possibly relevant things:

... (read more)

Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?

I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how much to focus on working at EA vs non-EA orgs, as well as which specific types of roles and orgs to focus on. 

Having write-ups on that could be more efficient than people answering similar questions multiple times. And it could make it easier for people to learn about a wider range of "typical workdays", rather than having to extrapolate from whoever they happened to talk to and whatever happened to come to mind for that person at that time.

I think such write-ups are made and shared in some other "sectors". E.g. when I was applying for a job in the UK civil service, I think I recall there being a "typical day" writeup for a range of different types of roles in and branches of the civil service.

So do such write-ups exist for EA orgs? (Maybe some posts in the Working at EA organizations series ser... (read more)

Jamie_Harris (4 karma, 4mo): Animal Advocacy Careers skills profiles are a bit like this for various effective animal advocacy nonprofit roles. You can also just read my notes on the interviews I did (linked within each profile) -- they usually just start with the question "what's a typical day?" https://www.animaladvocacycareers.org/skills-profiles

Collection of evidence about views on longtermism, time discounting, population ethics, significance of suffering vs happiness, etc. among non-EAs

Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)

The Long-Term Future: An Attitude Survey - Vallinder, 2019

Older people may place less moral value on the far future - Sanjay, 2019

Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017

The Psychology of Existential Risk: Moral Judgments about Human Extinction - Schubert, Caviola & Faber, 2019

Psychology of Existential Risk and Long-Termism - Schubert, 2018 (space for discussion here)

Descriptive Ethics – Methodology and Literature Review - Althaus, ~2018 (this is something like an unpolished appendix to Descriptive Population Ethics and Its Relevance for Cause Prioritization, and it would make sense to read the latter post first)

A Small Mechanical Turk Survey on Ethics and Animal Welfare - Brian Tomasik, 2015

Work on "future self continuity" might be relevant (I haven't looked into it)

Some evidence about the views of EA-aligned/EA-adjacent groups

Survey re... (read more)

Stefan_Schubert (4 karma, 4mo): Aron Vallinder has put together a comprehensive bibliography [https://docs.google.com/document/d/1s_bc9e-vGf7N2gixcoAeniMegYwgojE87eNIvsHljeo/edit] on the psychology of the future.
MichaelA (2 karma, 4mo): Nice, thanks. I've now also added that to the Bibliography section of the Psychology of effective altruism [https://forum.effectivealtruism.org/tag/psychology-of-effective-altruism] entry.

If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?

tl;dr I think it's "another million years", or slightly longer, but I'm not sure.

In The Precipice, Toby Ord writes:

How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million.[38] If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.

(There are various extra details and caveats about these estimates in the footnotes.)

Ord also makes similar statements on the FLI Podcast, including the following:

If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we pla
... (read more)
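One way to see why the answer could be "another million years" rather than 800,000: if species extinction were driven by a roughly constant per-year hazard rate (a simplifying assumption I'm introducing here, not something Ord states), species lifetimes would be exponentially distributed, and the exponential distribution is memoryless: the expected remaining lifespan doesn't depend on how long the species has already survived. A quick simulation sketch:

```python
import random

random.seed(0)

MEAN_LIFESPAN = 1_000_000  # years; typical mammalian species lifespan
CURRENT_AGE = 200_000      # years; roughly humanity's age so far

# Draw many hypothetical species lifetimes from an exponential distribution
# (i.e., constant per-year extinction hazard), then condition on having
# already survived to CURRENT_AGE.
lifetimes = [random.expovariate(1 / MEAN_LIFESPAN) for _ in range(200_000)]
survivors = [t for t in lifetimes if t > CURRENT_AGE]
mean_remaining = sum(t - CURRENT_AGE for t in survivors) / len(survivors)

print(f"Mean remaining lifespan given survival to {CURRENT_AGE:,} years: "
      f"{mean_remaining:,.0f}")
```

Under this assumption the conditional expectation comes out near 1,000,000 years, not 800,000. If hazard instead rises with age (species "ageing"), the expected remainder would be shorter; if it falls with age, longer still.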

My review of Tom Chivers' review of Toby Ord's The Precipice

I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)

But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, overstate/oversimplify the case for certain things, or could come across as overly alarmist.

I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential-risk-related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the same level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.

I'll now quote and... (read more)

Aaron Gertler (5 karma, 1y): This was an excellent meta-review! Thanks for sharing it. I agree that these little slips of language are important; they can easily compound into very stubborn memes. (I don't know whether the first person to propose a paperclip AI regrets it, but picking a different example seems like it could have had a meaningful impact on the field's progress.)
MichaelA (1 karma, 1y): Agreed. These seem to often be examples of hedge drift [https://www.lesswrong.com/posts/oMYeJrQmCeoY5sEzg/hedge-drift-and-advanced-motte-and-bailey], and their potential consequences seem like examples of memetic downside risks [https://www.lesswrong.com/posts/EdAHNdbkGR6ndAPJD/memetic-downside-risks-how-ideas-can-evolve-and-cause-harm].

Notes on The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous (2020)

Cross-posted to LessWrong as a top-level post. 

I recently finished reading Henrich's 2020 book The WEIRDest People in the World. I would highly recommend it, along with Henrich's 2015 book The Secret of Our Success; I've roughly ranked them the 8th and 9th most useful-to-me of the 47 EA-related books I've read since learning about EA.

In this shortform, I'll: 

  • Summarise my "four main updates" from this book
  • Share the Anki c
... (read more)
Ramiro (8 karma, 8mo): Oh, please, do post this type of stuff, especially in shortform... but, unfortunately, you can't expect a lot of karma - attention is a scarce resource, right? I'd totally like to see you blog or send a newsletter with this.
MichaelA (2 karma, 8mo): Meta: I recently made two [https://forum.effectivealtruism.org/posts/b6qNWYAiJCRRBSoDX/notes-on-schelling-s-strategy-of-conflict-1960] similar posts [https://forum.effectivealtruism.org/posts/K4qRGSAbHyNqHMYmc/notes-on-the-bomb-presidents-generals-and-the-secret-history] as top-level posts rather than as shortforms. Both got relatively little karma, especially the second. So I feel unsure whether posts/shortforms like this are worth putting in the time to make, and whether they're worth posting as top-level posts vs as shortforms. If any readers have thoughts on that, let me know. (Though it's worth noting that making these posts takes me far less time than making regular posts does - e.g., this shortform took me 45 minutes total. So even just being mildly useful to a few people might be sufficient to justify that time cost.) [Edited to add: I added the "My four main updates" section to this shortform 4 days after I originally posted it and made this comment.]
Habryka (5 karma, 8mo): I really like these types of posts. I have some vague sense that these both would get more engagement and excitement on LW than the EA Forum, so it may be worth also posting them there.
MichaelA (4 karma, 8mo): Thanks for that info and that suggestion. Given that, I've tried cross-posting my Schelling notes [https://www.lesswrong.com/posts/5gPENBkSRbuRkN9qZ/notes-on-schelling-s-strategy-of-conflict-1960] as an initial experiment.

Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?

I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of: 

  • disseminating important ideas to key decision-makers and thereby improving their decisions
    • either through the Bulletin articles themselves or through them allowing one to
... (read more)
RyanCarey (6 karma, 8mo): https://thebulletin.org/biography/andrew-snyder-beattie/ https://thebulletin.org/biography/gregory-lewis/ https://thebulletin.org/biography/max-tegmark/
MichaelA (2 karma, 8mo): Thanks for those links! (I also realise now that I'd already seen and found useful Gregory Lewis's piece for the Bulletin, and had just forgotten that that's the publication it was in.)
MichaelA (4 karma, 8mo): Here's the Bulletin's page on writing for them [https://thebulletin.org/write-for-the-bulletin/]. Some key excerpts:

And here's the page on the Voices of Tomorrow feature [https://thebulletin.org/2015/02/voices-of-tomorrow-and-the-leonard-m-rieser-award/]:

The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:

  • focus only on how the debate over patient philanthropy applies to longtermists
  • generalise the debate to also include questions about work (e.g., should I do a directly useful job now, or build career capital and do directly useful work later?)

They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations a

... (read more)
MichaelDickens (5 karma, 1y): I don't think "patient" and "urgent" are opposites, in the way Phil Trammell originally defined patience [https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf]. He used "patient" to mean a zero pure time preference, and "impatient" to mean a nonzero pure time preference. You can believe it is urgent that we spend resources now while still having a zero pure time preference. Trammell's paper argued that patient actors should give later, irrespective of how much urgency you believe there is. (Although he carved out some exceptions to this.)
MichaelA (2 karma, 1y): Yes, Trammell [https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf] writes:

And I agree that a person with a low or zero pure time preference may still want to use a large portion of their resources now, for example due to thinking now is a much "hingier"/"higher leverage" time than average, or thinking value drift will be high.

You highlighting this makes me doubt whether 80,000 Hours should've used "patient longtermism" as they did [https://forum.effectivealtruism.org/posts/Eey2kTy3bAjNwG8b5/the-emerging-school-of-patient-longtermism], whether they should've used "patient philanthropy" as they arguably did*, and whether I should've proposed the term "patient altruism" for the position that we should give/work later rather than now (roughly speaking).

On the other hand, if we ignore Trammell's definition of the term, I think "patient X" does seem like a natural fit for the position that we should do X later, rather than now. Do you have other ideas for terms to use in place of "patient"? Maybe "delayed"? (I'm definitely open to renaming the tag [https://forum.effectivealtruism.org/tag/patient-altruism]. Other people can as well.)

*80k write [https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/]:

This suggests to me that 80k is, at least in that post, taking "patient philanthropy" to refer not just to a low or zero pure time preference, but instead to a low or zero rate of discounting overall, or to a favouring of giving/working later rather than now.

Collection of sources relevant to moral circles, moral boundaries, or their expansion

Works by the EA community or related communities

Moral circles: Degrees, dimensions, visuals - Michael Aird (i.e., me), 2020

Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018

The Moral Circle is not a Circle - Grue_Slinky, 2019

The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)

Radical Empathy - Holden Karnofsky, 2017

Various works from the Sentience Institute, including:

... (read more)
Jamie_Harris (8 karma, 1y): The only other very directly related resource I can think of is my own presentation on moral circle expansion [https://www.youtube.com/watch?v=my4bqQrcXI8&feature=youtu.be&t=1], and various other short content on Sentience Institute's website, e.g. our FAQ [https://www.sentienceinstitute.org/faq], and some of the talks or videos [https://www.facebook.com/sentienceinstitute/videos/2320662534634209/].

But I think that the academic psychology literature you refer to is very relevant here. Good starting-point articles are the "moral expansiveness" article you link to above and "Toward a psychology of moral expansiveness" [https://journals.sagepub.com/doi/full/10.1177/0963721417730888]. Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet, etc.

There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:

Arguments for and against moral advocacy [https://longtermrisk.org/arguments-moral-advocacy/] - Tobias Baumann, 2017

Values Spreading is Often More Important than Extinction Risk [https://reducing-suffering.org/values-spreading-often-important-extinction-risk/] - Brian Tomasik, 2013

Against moral advocacy [https://rationalaltruist.com/2013/06/13/against-moral-advocacy/] - Paul Christiano, 2013

Also relevant: "Should Longtermists Mostly Think About Animals?" [https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals]
MichaelA (1 karma, 1y): Thanks for adding those links, Jamie! I've now added the first few into my lists above.
Aaron Gertler (3 karma, 1y): I continue to appreciate all the collections you've been posting! I expect to find reasons to link to many of these in the years to come.
MichaelA (2 karma, 1y): Good to hear! Yeah, I hope they'll be mildly useful to random people at random times over a long period :D Although I also expect that most people they'd be mildly useful for would probably never be aware they exist, so there may be a better way to do this. Also, if and when EA coordinates on one central wiki, these could hopefully be folded into or drawn on for that, in some way.

Collection of collections of resources relevant to (research) management, mentorship, training, etc.

(See the linked doc for the most up-to-date version of this.)

The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.

... (read more)

Some ideas for projects to improve the long-term future

In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.

The ideas vary in the extent to which the bottleneck... (read more)

Daniel_Eth (5 karma, 7mo): "Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI" As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of the details, it very well may be low cost. I support efforts to try to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.

Collection of some definitions of global catastrophic risks (GCRs)

See also Venn diagrams of existential, global, and suffering catastrophes

Bostrom & Ćirković (pages 1 and 2):

The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe.
... (read more)
MichaelA (7 karma, 1y): There is now a Stanford Existential Risk Initiative [https://cisac.fsi.stanford.edu/content/stanford-existential-risks-initiative], which (confusingly) describes itself as:

And they write:

That is much closer to a definition of an existential risk [https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes] (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that fact and the clash between the term the initiative uses in its name and the term it uses when describing what they'll focus on, it appears this initiative is conflating these two terms/concepts.

This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=92ejaz5s5ehAMNH4N], or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom [https://www.existential-risk.org/concept.html]).)

For further discussion, see Clarifying existential risks and existential catastrophes [https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes].

(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)
MichaelA (4 karma, 2y): Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks [https://80000hours.org/problem-profiles/global-catastrophic-biological-risks/]:

Gregory Lewis [https://80000hours.org/problem-profiles/global-catastrophic-biological-risks/], in that profile itself:

Open Philanthropy Project [https://web.archive.org/web/20200306210315/https://www.openphilanthropy.org/focus/global-catastrophic-risks]:

Schoch-Spana et al. (2017) [https://web.archive.org/web/20200306210217/https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5576209/], on GCBRs rather than GCRs as a whole:
MichaelA (2 karma, 9mo): Metaculus features a series of questions on global catastrophic risks [https://www.metaculus.com/questions/?search=cat:series--ragnarok]. The author of these questions operationalises [https://www.metaculus.com/questions/1493/ragnar%25C3%25B6k-question-series-by-2100-will-the-human-population-decrease-by-at-least-10-during-any-period-of-5-years/] a global catastrophe as an event in which "the human population decrease[s] by at least 10% during any period of 5 years or less".
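To make that operationalisation concrete, here is a hypothetical sketch (the function name and toy data are mine, not Metaculus's) that checks whether a chronological series of annual population estimates contains a ≥10% decline within any span of 5 years or less:

```python
def has_global_catastrophe(populations, max_window=5, threshold=0.10):
    """Return True if the population falls by at least `threshold` within
    any span of `max_window` years or less. `populations` is a list of
    annual population figures in chronological order."""
    for i, start in enumerate(populations):
        # Compare against each of the next max_window years.
        for later in populations[i + 1 : i + 1 + max_window]:
            if later <= start * (1 - threshold):
                return True
    return False

# Toy example (billions): a crash from 8.1 to 7.0 over two years counts;
# a slow ~11% decline spread over 20 years does not.
crash = [8.0, 8.1, 7.8, 7.0, 7.1]
slow = [8.0 * (0.994 ** year) for year in range(21)]  # ~0.6% decline per year
assert has_global_catastrophe(crash)
assert not has_global_catastrophe(slow)
```

Note that this captures how the definition excludes gradual decline: a fall of well over 10% still doesn't count if no 5-year window contains a 10% drop.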
MichaelA (2 karma, 10mo): Baum and Barrett (2018) [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3155983] gesture at some additional definitions/conceptualisations of global catastrophic risk that have apparently been used by other authors:
MichaelA (1 karma, 1y): From an FLI podcast interview [https://futureoflife.org/2019/08/01/the-climate-crisis-as-an-existential-threat-with-simon-beard-and-haydn-belfield/] with two researchers from CSER:

"Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat, and if there's any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change."

"Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you've got your head around that, different groups have slightly different understandings of what the differences between these three terms are.

So, for some groups, it's all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; a catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: maybe some people survive, but their lives are terrible.

Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us. Most of the systems — be this physiological systems, the world's ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing,
MichaelA · 1y: Sears [https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12800] writes: (Personally, I don't think I like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I'm not sure I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd be meaning something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So this sentence sounds to me like it's sort-of conflating GCRs with existential risks.)

Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?"

(This is related to the general topic of differential progress.) 

(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)

  • I think the ultimate answer to that question is really so
... (read more)

Maybe someone should make ~1 Anki card each for lots of EA Wiki entries, then share that Anki deck on the Forum so others can use it?

Specifically, I suggest that someone:

  1. Read/skim many/most/all of the EA Wiki entries in the "Cause Areas" and "Other Concepts" sections
    • Anki cards based on entries in the other sections (e.g., Organisations) would probably be less useful
  2. Make 1 or more Anki cards for many/most of those entries
    • In many cases, these cards might take forms like "The long reflection refers to... [answer]"
    • In many other cases, the cards could cover othe
... (read more)
Pablo · 2mo: Turning the EA Wiki into a (huge) Anki deck is on my list of "Someday/Maybe" tasks. I think it might be worth waiting a bit until the Wiki is in a more settled state, but otherwise I'm very much in favor of this idea. There is an Anki deck for the old LW wiki [https://www.lesswrong.com/posts/Xd8aQsZroPYN4CZXM/lesswrong-wiki-as-anki-deck]. It's poorly formatted and too coarse-grained (one note per article), and some of the content is outdated, but I still find it useful, which suggests to me that a better deck of the EA Wiki would provide considerable value.
MichaelA · 2mo: Why this might be worthwhile:

  • The EA community has collected and developed a very large set of ideas that aren't widely known outside of EA, such that "getting up to speed" can take a similar amount of effort to a decent fraction of a bachelor's degree
    • But the community is relatively small and new (compared to e.g. most academic fields), so we have relatively little in the way of textbooks, courses, summaries, etc.
    • This means it can take a lot of effort and time to get up to speed, lots of EAs have substantial "gaps" in their "EA knowledge", lots of concepts are misinterpreted or conflated or misapplied, etc.
  • The EA Wiki is a good step towards having good resources to help people get up to speed
  • A bunch of research indicates retrieval practice, especially when spaced and interleaved, can improve long-term retention and can also help with things like application of concepts (not just memory)
    • And Anki provides such spaced, interleaved retrieval practice
    • I'm being lazy in not explaining the jargon or citing my sources, but you can find some explanation and sources here: Augmenting Long-term Memory [http://augmentingcognition.com/ltm.html]
  • If one person makes an Anki deck based on the EA Wiki entries, it can then be used and/or built on by other people, can be shared with participants in EA Fellowships, etc.

Possible reasons not to do this:

  • "There's a lot of stuff it'd be useful for people to know that isn't on EA Wiki entries. Why not make Anki cards on those things instead? Isn't this a bit insular?"
    • I think we can and should do both, rather than one or the other
    • Same goes for having Anki cards based on EA sources vs Anki cards based on non-EA sources
    • Personally, I'd guess ~25% of my Anki cards are based on EA sources, ~70% are based on non-EA sources but are about topics I see as important for
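On the mechanics: the card-making step needs very little tooling, because Anki imports plain tab-separated text files (File → Import). Here's a minimal sketch of that workflow; the card fronts and backs below are placeholder paraphrases I wrote for illustration, not actual EA Wiki text:

```python
import csv

# Hypothetical (front, back) pairs; real cards would be drafted
# while reading the corresponding EA Wiki entries.
cards = [
    ("The long reflection refers to...",
     "a proposed extended period of deliberation about humanity's "
     "long-term values and goals before taking irreversible steps."),
    ("Differential progress refers to...",
     "speeding up risk-reducing technologies and insights relative "
     "to risk-increasing ones."),
]

# Anki imports tab-separated text: one card per line, front <TAB> back.
with open("ea_wiki_deck.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(cards)

print(f"Wrote {len(cards)} cards")
```

Whoever takes this on could then just share the exported deck (or the text file itself) on the Forum.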

The x-risk policy pipeline & interventions for improving it: A quick mapping

I just had a call with someone who's thinking about how to improve the existential risk research community's ability to cause useful policies to be implemented well. This made me realise I'd be keen to see a diagram of the "pipeline" from research to implementation of good policies, showing various intervention options and which steps of the pipeline they help with. I decided to quickly whip such a diagram up after the call, forcing myself to spend no more than 30 mins on it. H... (read more)

Why I think The Precipice might understate the significance of population ethics

tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated. 

This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics... (read more)

Update in April 2021: This shortform is now superseded by the EA Wiki entry on Accidental harm. There is no longer any reason to read this shortform instead of that.

Collection of sources I've found that seem very relevant to the topic of downside risks/accidental harm

Information hazards and downside risks - Michael Aird (me), 2020

Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018

How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and J... (read more)

Bottom line up front: I think it'd be best for longtermists to default to the more inclusive term "authoritarianism" rather than "totalitarianism", except when someone has a specific reason to focus on totalitarianism in particular.

I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.) 

But my understanding is that political scientists typically consider to... (read more)

If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative. 

And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.

(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do similar, I give that ... (read more)

Preferences for the long-term future [an abandoned research idea]

Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program.[1] I'm unsure how useful this idea is. But twice this week I felt it'd be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it. 

Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with be... (read more)

Collection of sources relevant to impact certificates/impact purchases/similar

Certificates of impact - Paul Christiano, 2014

The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)

The Case for Impact Purchase  | Part 1 - Linda Linsefors, 2020

Making Impact Purchases Viable - casebash, 2020

Plan for Impact Certificate MVP - lifelonglearner, 2020

Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019

Altruistic equity allocation - Paul Christiano, 2019

Social impact bond - Wikipe... (read more)

schethik · 10mo: The Health Impact Fund (cited above by MichaelA) is an implementation of a broader idea outlined by Dr. Aidan Hollis here: An Efficient Reward System for Pharmaceutical Innovation [https://www.who.int/intellectualproperty/news/en/Submission-Hollis.pdf]. Hollis' paper, as I understand it, proposes reforming the patent system such that innovations would be rewarded by government payouts (based on impact metrics, e.g. QALYs) rather than monopoly profit/rent. The Health Impact Fund, an NGO, is meant to work alongside patents (for now) and is intended to prove that the broader concept outlined in the paper can work. A friend and I are working on further broadening this proposal outlined by Dr. Hollis. Essentially, I believe this type of innovation incentive could be applied to other areas with easily measurable impact (e.g. energy, clean protein and agricultural innovations via a "carbon emissions saved" metric). We'd love to collaborate with anyone else interested (feel free to message me).
EdoArad · 3mo: Hey schethik, did you make progress with this?

What are the implications of the offence-defence balance for trajectories of violence?

Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?

Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please commen... (read more)

Update in April 2021: This shortform is now superseded by the EA Wiki entry on the Unilateralist's curse. There is no longer any reason to read this shortform instead of that.

Collection of all prior work I've found that seemed substantially relevant to the unilateralist’s curse

Unilateralist's curse [EA Concepts]

Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)

The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original pap... (read more)
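For readers new to the concept: the core model in Bostrom et al.'s paper is simple enough to simulate. Several agents each form a noisy estimate of an action's true value, and the action gets taken if even one agent judges it positive — so as the number of independent agents grows, a genuinely harmful action becomes increasingly likely to be taken by whoever happens to be most optimistic. A minimal Monte Carlo sketch (my own illustration, with made-up parameters, not code from any of the sources above):

```python
import random

random.seed(0)

def p_action_taken(true_value, n_agents, noise_sd=1.0, trials=20_000):
    """Estimate the probability that an action is taken when each of
    n_agents acts unilaterally on its own noisy value estimate."""
    taken = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise_sd) for _ in range(n_agents))
        if any(e > 0 for e in estimates):  # one optimistic agent suffices
            taken += 1
    return taken / trials

# An action with genuinely negative value (-1), estimated with noise (sd=1):
for n in (1, 5, 20):
    print(f"{n:>2} agents -> action taken with probability "
          f"{p_action_taken(-1, n):.3f}")
```

The probability climbs steeply with the number of agents, which is the curse: no individual agent is biased, but the group's decision rule is.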

Potential downsides of EA's epistemic norms (which overall seem great to me)

This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.

Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017... (read more)

Collection of work on value drift that isn't on the EA Forum

Value Drift & How to Not Be Evil Part I & Part II - Daniel Gambacorta, 2019

Value drift in effective altruism - Effective Thesis, no date

Will Future Civilization Eventually Achieve Goal Preservation? - Brian Tomasik, 2017/2020

Let Values Drift - G Gordon Worley III, 2019 (note: I haven't read this)

On Value Drift - Robin Hanson, 2018 (note: I haven't read this)

Somewhat relevant, but less so

Value uncertainty - Michael Aird (me), 2020

An idea for getting evidence on value drift in... (read more)

Collection of sources related to dystopias and "robust totalitarianism"

(See also Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?)

The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)

The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)

Reducing long-term risks from malevolent actors - David Althaus and Tobias Baumann, 2020

The Centre for the Governa... (read more)

Thoughts on Toby Ord’s policy & research recommendations

In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn’t post at the time).

Overall, I very much like Ord’s list, and I don’t think any of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.

Regarding “other anthropogenic

... (read more)

Collection of ways of classifying existential risk pathways/mechanisms

Each of the following works show or can be read as showing a different model/classification scheme/taxonomy:

... (read more)

Collection of AI governance reading lists, syllabi, etc. 

This is a doc I made, and I suggest reading the doc rather than shortform version (assuming you want to read this at all). But here it is copied out anyway:


What is this doc, and why did I make it?

AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I... (read more)

Notes on Victor's Understanding the US Government (2020)

Why I read this

... (read more)

On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:

a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.

I think we could flesh out this idea as the following argument:

  • Premise 1. We know of fewer incidents that could've escalated to nuclear war from the 70s onwards than from the 40s-60s.
  • Premise
... (read more)
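One way to sanity-check how much evidential force Premise 1 carries is a toy Poisson model: if close calls arrived at a constant per-decade rate, how surprising would the observed drop-off be? The sketch below uses hypothetical incident counts that I made up for illustration — they are not the actual figures from Baum's research:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson distribution with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

# Hypothetical counts of known close calls (NOT real data):
early = 12   # ~1940s-60s: 3 decades
late = 4     # ~1970s-2010s: 5 decades

# If the true per-decade rate never changed, the pooled estimate is:
constant_rate = (early + late) / (3 + 5)  # incidents per decade

# Probability of seeing <= `late` incidents over 5 decades at that rate:
expected_late = constant_rate * 5
p = sum(poisson_pmf(k, expected_late) for k in range(late + 1))
print(f"P(<= {late} incidents in 5 decades | constant rate) = {p:.3f}")
```

A small value would favor "the rate really fell" over "constant rate plus luck" — though this ignores the obvious confounder that more recent incidents are more likely to still be classified or unreported.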

Collection of sources relevant to the idea of “moral weight”

Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020

Preliminary thoughts on moral weight - Luke Muehlhauser, 2018

Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020

2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)

Notes

As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time... (read more)

A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.

MichaelA · 1y: Ah great, thanks! Do you happen to recall if you encountered the term "moral weight" outside of EA/rationality circles? The term isn't in the titles in the bibliography (though it may be in the full papers), and I see one that says "Moral status as a matter of degree?", which would seem to refer to a similar idea. So this seems like it might be additional weak evidence that "moral weight" might be an idiosyncratic term in the EA/rationality community (whereas when I first saw Muehlhauser use it, I assumed he took it from the philosophical literature).

The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:

  1. Capacity for welfare, which is how well or poorly a given animal's life can go
  2. Average realized welfare, which is how well or poorly the life of a typical member of a given species actually goes
  3. Moral status, which is how much the welfare of a given animal matters morally

Differences in any of those three things might generate differences in how we prioritize interventions that target different species.
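As a toy illustration (my own, not drawn from the forthcoming report) of how those three factors might jointly affect prioritization, one crude approach multiplies them into a single weight per species. All the numbers below are made-up placeholders:

```python
# Each entry: capacity for welfare, realized welfare as a fraction of
# capacity, and moral status. All values are hypothetical placeholders.
species = {
    "species_A": {"capacity": 1.0, "realized": 0.4, "status": 1.0},
    "species_B": {"capacity": 0.3, "realized": 0.1, "status": 0.8},
}

def improvement_weight(s):
    """Crude weight on raising one individual's welfare to capacity:
    (room for improvement) x (how much that welfare matters morally)."""
    room = s["capacity"] * (1 - s["realized"])
    return room * s["status"]

for name, s in species.items():
    print(name, round(improvement_weight(s), 3))
```

The point isn't the particular functional form — reasonable views could combine the factors very differently — just that a difference in any one of the three inputs changes the ranking.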

Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!

MichaelA · 1y: Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that. Looking forward to reading that!