All of Darius_M's Comments + Replies

[Creative Nonfiction] The Toba Supervolcanic Eruption

Excellent! A well-deserved second prize in the Creative Writing Contest.

Is EA compatible with technopessimism?

In my experience, many EAs have a fairly nuanced perspective on technological progress and aren't unambiguous techno-optimists. 

For instance, a substantial fraction of the community is very concerned about the potential negative impacts of advanced technologies (AI, biotech, solar geoengineering, cyber, etc.) and actively works to reduce the associated risks. 

Moreover, some people in the community have promoted the idea of "differential (technological) progress" to suggest that we should work to (i) accelerate risk-reducing, welfare-enhancing tec... (read more)

acylhalide (1mo): Thanks, this is super useful. Although I guess the question now becomes: should we improve existing institutions or build new ones in ways that allow for differential tech progress, or is it better to prevent all progress?

Arguing for utilitarianism

utilitarianism.net has also recently published an article on Arguments for Utilitarianism, written by Richard Yetter Chappell. (I'm sharing this article since it may interest readers of this post.)

omnizoid (1mo): Yeah, that has some good arguments, thank you for sharing that.

Wikipedia editing is important, tractable, and neglected

Thanks, it's valuable to hear your more skeptical view on this point! I've included it after several reviewers of my post brought it up and still think it was probably worth including as one of several potential self-interested benefits of Wikipedia editing. 

I was mainly trying to draw attention to the fact that it is possible to link a Wikipedia user account to a real person and that it is worth considering whether to include it in certain applications (something I've done in previous applications). I still think Wikipedia editing is a decent signal ... (read more)

Wikipedia editing is important, tractable, and neglected

Thanks for this comment, Michael! I agree with all the points you make and should have been more careful to compare Wikipedia editing against the alternatives (I began doing this in an earlier draft of this post and then cut it because it became unwieldy). 

In my experience, few EAs I've talked to have ever seriously considered Wikipedia editing. Therefore, my main objective with this post was to get more people to recognize it as one option of something valuable they might do with a part of their time; I wasn't trying to argue that Wikipedia editing i... (read more)

Wikipedia editing is important, tractable, and neglected

I strongly agree that we should learn our lessons from this incident and seriously try to avoid any repetition of something similar. In my view, the key lessons are something like:

  1. It's probably best to avoid paid Wikipedia editing
  2. It's crucial to respect the Wikipedia community's rules and norms (I've really tried to emphasize this heavily in this post)
  3. It's best to really approach Wikipedia editing with a mindset of "let's look for actual gaps in quality and coverage of important articles" and avoid anything that looks like promotional editing

I think it wou... (read more)

Pablo (2mo): I strongly endorse each of these points.

Wikipedia editing is important, tractable, and neglected

As an example, look at this overview of the Wikipedia pages that Brian Tomasik has created and their associated pageview numbers (screenshot of the top 10 pages below). The pages created by Brian mostly cover very important (though fringe) topics and attract ~ 100,000 pageviews every year.  (Note that this overview ignores all the pages that Brian has edited but didn't create himself.)

Wikipedia editing is important, tractable, and neglected

Someone (who is not me) just started a proposal for a WikiProject on Effective Altruism! To be accepted, this proposal will need to be supported by at least 6-12 active Wikipedia editors. If you're interested in contributing to such a WikiProject, please express "support" for the proposal on the proposal page.  

The proposal passed!! Everyone who's interested should add themselves as a participant on the official wikiproject!

Wikipedia editing is important, tractable, and neglected

This is the best tool I know of to get an overview of Wikipedia article pageview counts (as mentioned in the post); the only limitation with it is that pageview data "only" goes back to 2015.

How can we make Our World in Data more useful to the EA community?

Create a page on biological weapons. This could include, for instance,

  1. An overview of offensive BW programs over time (when they were started, stopped, funding, staffing, etc.; perhaps with a separate section on the Soviet BW program)
  2. An overview of different international treaties relating to BW, including timelines and membership over time (i.e., the Geneva Protocol, the Biological Weapons Convention (BWC), Australia Group, UN Security Council Resolution 1540)
  3. Submissions of Confidence-Building Measures in the BWC over time (including as a percentage of the
... (read more)

(This does sound useful, though I'd note this is also a relatively sensitive area and OWID are - thankfully! - a quite prominent site, so OWID may wish to check in with global catastrophic biorisk researchers regarding whether anything they'd intend to include on such a page might be best left out.)

One-year masters degrees related to biosecurity?

For many people interested in but not yet fully committed to biosecurity, it may make more sense to choose a more general master's program in international affairs/security and then concentrate on biosecurity/biodefense to the extent possible within their program.

Some of the best master's programs to consider to this end:

  1. Georgetown University: MA in Security Studies (Washington, DC; 2 years) 
  2. Johns Hopkins University: MA in International Relations (Washington, DC; 2 years)
  3. Stanford University: Master's in International Policy (2 years)
  4. King's College Lon
... (read more)
One-year masters degrees related to biosecurity?

Georgetown University offers a 2-semester MSc in "Biohazardous Threat Agents & Emerging Infectious Diseases". Course description from the website: "a one year program designed to provide students with a solid foundation in the concepts of biological risk, disease threat, and mitigation strategies. The curriculum covers classic biological threats agents, global health security, emerging diseases, technologies, CBRN risk mitigation, and CBRN security."

New Articles on Population Ethics and Theories of Well-Being

Website traffic was initially low (i.e. 21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (i.e. 40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks, and rise up the search rankings. For comparison, the Wikipedia article on utilitarianism has received ~480k pageviews in 2021 to date, which suggests substantial room for growth for the website.

kuhanj (5mo): Pageviews would also go up a lot if (as suggested in the post) articles from the website were included in intro fellowships/other educational programs. I'll discuss adding these articles/others on the site to our intro syllabi.

One potential concern with adding articles from the website is that many (new-to-EA) people (in my experience running many fellowships) have negative views towards utilitarianism (e.g. they find it off-putting, think people use it to justify selfish/horrible/misguided actions, think it's too demanding (e.g. implications of the drowning child argument), think it's naive, etc.). I think utilitarianism is often not brought up very charitably in philosophy/other classes (again, based on my impressions running fellowships). So I worry about introducing ideas through the lens of utilitarianism.

One potential solution is to include these readings in fellowship syllabi after talking about utilitarianism more broadly (for what it's worth, in our fellowship we try to present utilitarianism as we/EAs tend to interpret it and address misconceptions, but we can only do so much), or to bring them up in in-depth fellowships/non-intro programs where what I've brought up might be less of a concern.

Towards a Weaker Longtermism

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment. 

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems hi... (read more)

Towards a Weaker Longtermism

I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to more evenly distribute effort across the causes.  I think that many causes in the effective altruism sphere interact more multip

... (read more)
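The additive-versus-multiplicative distinction in the linked essay can be illustrated numerically. The sketch below is my own hypothetical illustration, not from the essay: with a fixed budget (normalized to 1) and made-up per-dollar impacts for two causes, a simple grid search shows that additive impacts favor putting everything into the higher-expected-value cause, while multiplicative impacts favor splitting the budget.

```python
# Illustrative only: the per-dollar impacts a and b are made-up numbers.

def best_split(utility, steps=1000):
    """Grid-search the fraction x given to cause A that maximizes utility(x)."""
    return max((i / steps for i in range(steps + 1)), key=utility)

a, b = 3.0, 1.0  # hypothetical per-dollar impacts of causes A and B

additive = lambda x: a * x + b * (1 - x)            # impacts add up
multiplicative = lambda x: (a * x) * (b * (1 - x))  # impacts multiply

print(best_split(additive))        # 1.0 -> all resources to the higher-EV cause
print(best_split(multiplicative))  # 0.5 -> split evenly across the two causes
```

Under the additive model the optimum sits at a corner (whichever cause has the higher per-dollar impact), whereas under the multiplicative model the product is maximized by a balanced allocation, matching the TL;DR above.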
Towards a Weaker Longtermism

Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this  model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately ... (read more)

Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.

Towards a Weaker Longtermism

No, we probably don’t. All of our actions plausibly affect the long-term future in some way, and it is difficult to (be justified to) achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional  degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly... (read more)

anonymous_ea (5mo): Phil Trammell's point in Which World Gets Saved is also relevant.
JackM (5mo): For the record, I'm not really sure about 10^30 times, but I'm open to 1000s of times. Pretty much every action has an expected impact on the future, in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea of the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea what the expected long-run impacts are, or whether they would even be positive or negative; I'm just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks. If I'm utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it's sort of like comparing 10^30 to undefined, so it does get a bit weird...). Does that make any sense?

[PR FAQ] Sharing readership data with Forum authors

Agreed, I'd love this feature! I also frequently rely on pageview statistics to prioritize which Wikipedia articles to improve.

Towards a Weaker Longtermism

There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Ch... (read more)

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community, I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> '... (read more)

Habryka (5mo): I think I believe (ii), but it's complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it's pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).

It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?

Davidmanheim (5mo): I'm unwilling to pin this entirely on the epistemic uncertainty, and specifically don't think everyone agrees that, for example, interventions targeting AI safety aren't the only thing that matters, period. (Though this is arguably not even a longtermist position.) But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).

Writing about my job: Research Fellow, FHI

I really appreciated the many useful links you included in this post and would like to encourage others to strive to do the same when writing EA Forum articles.

Thanks Darius! It was my pleasure.

How to reach out to orgs en masse?

Happy to have you here, Linda! It sounds like you have some really important skills to offer, and I hope you will find great opportunities to apply them.

LindaMartin (6mo): Thank you, Darius! I'm excited to use my skills to give back after a long career.

AMA: The new Open Philanthropy Technology Policy Fellowship

The listed application documents include a "Short essay (≤500 words)" without further details. Can you say more about what this entails and what you are looking for?

Technology Policy Fellowship (5mo): The specific prompts were included in the application form. Apologies that this was not clear. We've now added a note along those lines to the fellowship page. The prompts are:

  • Personal statement: "What do you want to get out of this fellowship? Why do you think you are a good fit? Please describe your interest in (and any experience with) policy as well as your area of focus (e.g. AI or biosecurity)."
  • Short essay: "What is one specific policy idea related to AI, biosecurity, or related emerging technology areas that you think the US government should pursue today? Why do you think this idea would be beneficial?"
  • Statement of motivation: "How do your interests and plans align with Open Philanthropy's goals related to societal impacts of technology?"
AMA: The new Open Philanthropy Technology Policy Fellowship

Are non-US citizens who hold a US work authorization disadvantaged in the application process even if they seek to enter a US policy career (and perhaps aim to become naturalized eventually)?

Technology Policy Fellowship (5mo): Non-citizens are eligible to apply for the program if they do not require visa sponsorship in order to receive a placement. For example, someone with a green card should be eligible to work at any think tank. As long as applicants are eligible to work in the roles that they are applying for, non-citizens who aspire to US policy careers will not be disadvantaged.

It's our understanding that it is difficult for non-citizens to get a security clearance, which is required for many federal government roles, and executive branch offices are generally hesitant about bringing on non-citizens. Congressional offices are legally allowed to take on permanent residents (and even some temporary visa holders), but individual offices may adopt policies favoring US citizens. Out of the three categories, we therefore expect non-citizens to have the easiest time matching with a think tank.

However, a lot depends on individual circumstances, so it is difficult to generalize. We encourage non-citizens with work authorization to apply, and would work through these sorts of questions with them individually if they reach the later stages of the application process.
What novels, poetry, comics have EA themes, plots or characters?

There is Eliezer Yudkowsky's Harry Potter fan fiction "Harry Potter and the Methods of Rationality" (HPMOR), which conveys many ideas and concepts that are relevant to EA.

Please note that there is also a fan-produced audio version of HPMOR.

MaxRa (6mo): I also really enjoyed the unofficial sequel, Significant Digits.

The EA Forum Podcast is up and running

Great initiative! Unfortunately, I cannot seem to find the podcast on either of my two podcast apps (BeyondPod and Podcast Addict). Do you plan to make the podcast available across all major platforms?

D0TheMath (7mo): Anchor sends messages to podcast platforms to get the podcast on them. They say this takes a few business days to complete. In the meantime, you can use Ben Schifman's method.
BenSchifman (7mo): On mobile (but, for some reason, not on the web version) there is a "more platforms" button that gives you an RSS feed that should work on any player.

You are allowed to edit Wikipedia

Strongly agree! I'm currently writing an EA Forum post making the case for Wikipedia editing.

ChristianKleineidam (7mo): Given the discussion here and over at LessWrong, where I crossposted this, I think that when it comes to writing a larger post to make a more effective argument, it's important to explain how Wikipedia works. It seems to me like many people think that changing Wikipedia articles is just about making an edit and hoping it doesn't get reverted. This works for smaller issues, but when it comes to big issues it needs more than one person to create change. I'm currently in a deep discussion on a contentious issue where I wrote a lot. If 3-4 people would join in and back me up, I could likely make the change, and it wouldn't take much effort for every one of those people.

When it comes to voting in an election, you don't need to explain to people that even though they didn't get what they wanted, this doesn't mean that there wasn't a democratic election. People have a mental model for how elections work, but they don't have one for how decisions on Wikipedia get made, and thus think that if they alone don't have the power to create change, it's not worth speaking up on the talk page.

I also read that people think the goal of Wikipedia is truth, when it isn't: it's to reflect what secondary sources say. While it might be great to have an encyclopedia that has truth as a goal, having a place where you find a synthesis of other secondary sources is valuable. Understanding that helps you know when it's worth speaking up and when it isn't.

The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020)

Awesome episode! I really enjoyed listening to it when it came out and was excited for Sam's large audiences across Waking Up and Making Sense to learn about EA in this way.

Open Thread: June 2021

Welcome Naghma! It is great to have you here and learn about your background and interests.

Know anyone interested in litigation?

Hi Alene! I suspect you already know Jay Shooster. In case you don't, he might be a great contact for you.

YESSSSS he's amazing and has volunteered to help me think through some things! Jay Shooster, if you're reading this: You rock. 

Help me find the crux between EA/XR and Progress Studies

Regarding your question:

What would moral/social progress actually look like?

This is a big and difficult question, but here are some pointers to relevant concepts and resources:

  • Moral circle expansion (MCE) - MCE is "the attempt to expand the perceived boundaries of the category of moral patients." For instance, this could involve increasing the moral concern in the wider public (or, more  targeted, among societal decision-makers) for non-human animals or future people. Arguably,  MCE could help reduce the risk of societies committing further atroc
... (read more)
Help me find the crux between EA/XR and Progress Studies

Regarding your question:

Does XR consider tech progress default-good or default-bad?

Leopold Aschenbrenner's paper Existential risk and growth provides one interesting perspective on this question (note that while I find the paper informative, I don't think it settles the question).

A key question the paper seeks to address is this:

Does faster economic growth accelerate the development of dangerous new technologies, thereby increasing the probability of an existential catastrophe?

The paper's (preliminary) conclusion is 

we could be living in a unique “time

... (read more)
technicalities (8mo): Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.

AMA: Working at the Centre for Effective Altruism

My impression is that as an organisation CEA has undergone substantial change over time. How might working at CEA today be different compared to working there, say, 3/5/7 years ago?

MaxDalton (8mo): I agree with a lot of Amy's/Julia's impressions. Some other thoughts:

7 years ago (I was an intern over the summer, so I'm probably missing some things): I think "CEA" was really just a legal entity for a wide variety of other projects. There was a bit more research being done in-house (e.g. Global Priorities Project), and I think basically everything was happening in Oxford. Compared to then: more cohesive, less research, people more distributed across the world.

5 years ago: things were beginning to get a bit more integrated. Different teams were coming together and trying to figure out what the internal culture was. I think CEA was also really figuring out what to focus on: there were research projects, projects promoting effective giving, EA community building, etc. Compared to then: narrower focus and more established/consistent team culture.

2 years ago: I think there was a lot of uncertainty: we were searching for new leadership, and didn't have a solid long-term strategy. However, I think we were beginning to integrate a bunch of cool hires that we made in 2018, and we had a supportive culture. We were focused on making sure we followed through on existing commitments (rather than ambitious goals/new things). We had an office in Berkeley as well as in Oxford. Compared to then: clearer goals/leadership, more focus on expansion, no Berkeley office, and more focus on remote work.

I think I listed mostly good or neutral things. When I reflect on what I miss from previous eras, the main thing is the in-person office culture (though I hope we'll get this back as we move into our new Oxford office).
Amy Labenz (8mo): I agree that a lot has changed! Five years ago CEA only had a couple of staff members and was more of an "umbrella organization" to incubate other projects. My ops role was with "CEA core" and my events role was with EA Outreach (one of the projects we supported, which no longer exists as a separate org). Julia was on the EA Outreach team at that time along with a handful of teammates; I don't think anyone else on the current team worked with us at the time. During my first 6 months or so at CEA, EAO merged with CEA core.

In terms of culture, five years ago it felt much more like CEA was a startup. I think teammates were much more stretched then: I worked two roles at once (Director of US Operations and Head of Events), and the people I managed were working on different projects. And many of the EAO teammates were part time. That meant I worked lots of hours and still couldn't always execute at the level I would have liked. Now I have a team of people helping with events (see our post, we are hiring!). And a whole different team manages Operations. This helps us improve our execution.

CEA has also shifted to a culture that emphasizes self care more. I feel more able to take time off now because I have a team around me, which helps because I have two very young kids (2 yo and 4 months old). Julia managed this back in the old days, but I don't think I could have at the time.
AMA: Working at the Centre for Effective Altruism

Do you believe the EA community's overall level of investment in community building is adequate/too low/too high?

(While this question isn't strictly about CEA itself, I'd imagine a key motivating belief for many CEA staff members would be that community building work is neglected relative to other high impact opportunities.)

Aaron Gertler (8mo): (Sharing impressions; there's no well-developed theory here.) Intuitively, I'd say somewhere between "too low" and "adequate". I'm not very involved in groups work, so my knowledge on that side is limited, but I don't have the impression that lots of potentially awesome group leaders aren't fulfilling their potential — nothing like that. But I do think that many people who don't see themselves as "community building" types should consider how they can contribute in small ways:

  • Being one more friendly/experienced face at a local event
  • Giving helpful advice to someone outside of EA who's trying to make some relevant life decision (even via something as simple as "try GiveWell, they have great stuff" or "the 80,000 Hours career tool might be helpful")
  • Sharing a quick Facebook post about their next donation, to make more of their social network aware of the general idea of "effective giving" (and to catch any of those people who might be in the very real category of "hears about EA, instantly sold")

These are all very generic ideas, but depending on other things about someone (language fluency, membership in other communities, personal network), there may be other smallish things they can do. It would be interesting to see everyone past a certain level of EA familiarity (e.g. has done a fellowship or read multiple books) spend 15 minutes asking themselves "how can I do one small thing to grow the community?"

Ben_West (8mo): I sometimes speak to people who aren't aware how many career paths in community building there are, even outside of EA. I do think this causes there to be fewer community builders than there "should" be. It feels hard to make really broad statements, though; some people's skills and interests are pretty clearly not a fit for community building, and I don't think they should try to force it.

Amy Labenz (8mo): I'll answer from an events perspective: I think we could be doing a lot more to support community building using events! This is one reason I'm so excited about the two roles I posted for my team.

In particular, I'm always inspired by how well EAGx teams do. We provide them a bit of support, mentorship, and money, and they create events that can be quite impactful. One of the events roles we posted, the Community Events Manager, is meant to take this to the next level by supporting a portfolio of community-run events (which could include a wide range of event formats!). We have seen community members take the initiative to run things on their own with very little money and support, which feels both awesome and like a shame: I'm impressed with what folks have been able to do on their own, but it seems like we are leaving value on the table by not helping community-run events more. I'm hopeful that we will find someone who can fix that!

I also think our internal events could do more with the right person, which is why we posted a pretty broad second job of Events Generalist. I think I've fallen down by not doing as much impact analysis on events as I could. In my perfect world I would add someone to the team who could help me with that. But even adding another team member who could help run targeted retreats or scale up our mentorship around events or a variety of other things would mean we could invest more in the community. I'm hopeful that we will find someone who can help us do that!

AMA: Working at the Centre for Effective Altruism

What are the most positive and/or negative aspects of your work at CEA?


  • I work with really excellent people (both at CEA, and the various people I meet through my Forum/content work).
  • Good management policy — I feel guided, encouraged, and constantly pushed to do better, but not in a draining or net-negative stressful way.
  • Lots of ways to do the job — many projects could be a good fit for my goals, there's a constant flow of new options and ideas I can try to implement.


  • Lots of ways to do the job (the dark side) — I'm aware that some optimal version of me could probably have 10x the impact in exactly this role
... (read more)

Positive: The people I work with, both at CEA as well as the wider EA community, are often impressive, talented, and kind.

Negative: I'm not a morning person, and living in Pacific time while working with Brits means I have to be up early a lot


  • I feel like I was able to create a role that played to my strengths, and I feel excited about the expected value of my career.
  • I care a lot about my work.
  • I really like my colleagues.



  • It can be stressful. I feel like I'm working on important things, and care a lot about how they go. When things don't go well or there's something time-sensitive and important to get right, it can feel stressful. This might be particularly related to my role (I handle risky situations a lot).
  • It can be hard to take my brain off of work. I'm a lot better
... (read more)

I love my job, and feel very lucky.


  • I genuinely like and trust my colleagues. I really enjoy working with people who care about very similar things and are deeply into the same ideas/culture. I've learned a lot from them.
  • Being able to (somewhat) shape the role to what I enjoy and am good at (e.g. I hate public speaking but love writing - others at CEA are the opposite, so I can write speeches for them). This is something that we try to do for everyone at CEA: to find a role that really plays to their strengths.
  • Facing a lot of open-ended and challe
... (read more)

For me it might be two sides of the same coin (particular to my role on the community health team).

The positive is getting to serve a community I really believe in, and supporting people who feel very much on the same team as me as far as big life goals.

The negative is that there's less separation between work life and community life than there would be in a lot of jobs. I'm not a normal community member in the way I was before I worked here - there are more things I have to try to be neutral on, etc. Facebook is mostly a work space for me.

9Amy Labenz8mo I love my job a lot. I think the biggest positive for me is hearing impact stories from the events, where people get some amazing connection or opportunity as a result of attending.

From a pure enjoyment standpoint, I am a sucker for the feeling when an event starts and I get to see all of the excited EAs who have come to attend this thing that my team worked so hard to make for them. There are moments when people start streaming in, and it is so busy with activity during registration, and everyone looks so happy... it feels like when you buy someone a present that you know they will like and you get to watch them open it!

I used to find criticism from the community a bit hard, but now I have a much thicker skin and a better relationship with the feedback. I think part of that comes from having more time, so that if we get negative feedback about something, it is a data point about a decision we deliberately made one way or the other. I still feel stress when I think I underperform on something, because the stakes feel so high.

In the past I traveled quite a bit for work. At the time that was a positive for me. I'm not sure if it will be as much of a positive now that I have two kids. We will see pretty soon!
How much do you (actually) work?

This is an interesting question, though I am somewhat concerned that the responses will be biased towards high numbers because people who work relatively fewer hours may be less likely to respond. I would give much more weight to an anonymous survey.

On a different note, I have personally found it useful to track my working hours using Toggl Track. This has given me a much more accurate sense of how many hours I usually work per week and how long I should expect projects to take.

FWIW, I also think it's plausible that responses will be biased towards low numbers, because people want to avoid looking like they're bragging, don't want to contribute to people's stress, etc.

(But to be clear, I'm not saying I expect those different sources of bias to cancel out - it seems hard to say what the net bias would be - and so also endorse the idea of giving more weight to an anonymous survey.)

Moral circle expansion

Suggestion to change this tag's URL from "/moral-circle-expansion-1/" to "/moral-circle-expansion/".

3Pablo6mo Thanks for flagging this. There are quite a few tag pairs following this pattern, due to the way in which some of the entries were originally imported. Changing the URL so that the -1 is removed requires manually releasing the slug without the -1, and then re-associating it to the relevant tag ("moral circle expansion", in your example). There is currently no way to automate this process, and only a few people on the tech team can perform these operations, so I'm afraid it will take a while before the issue is fixed. Hopefully we'll find a more permanent solution eventually.
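The duplicate-slug pattern Pablo describes (an imported entry landing at a "-1" suffix) can at least be detected mechanically. A minimal sketch, assuming slugs are plain strings; the Forum's actual data model and admin tooling aren't public, so the function names here are purely illustrative:

```python
import re

# A trailing "-1" left over from an import collision.
DUPLICATE_SUFFIX = re.compile(r"-1$")

def canonical_slug(slug):
    """Strip a trailing '-1' to get the intended slug."""
    return DUPLICATE_SUFFIX.sub("", slug)

def find_fixable(slugs):
    """Slugs ending in '-1' whose canonical form is free to be reclaimed.

    If the canonical slug is still taken by another entry, renaming would
    collide, so those cases are excluded.
    """
    taken = set(slugs)
    return [s for s in slugs
            if s.endswith("-1") and canonical_slug(s) not in taken]
```

For example, `find_fixable(["moral-circle-expansion-1", "existential-risk"])` flags the first slug as renamable, while a list that also contains `"moral-circle-expansion"` would not, since the clean slug is still occupied.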
Effective Altruism and Utilitarianism

Here are several more recent resources addressing the differences between effective altruism and utilitarianism/consequentialism:

Act utilitarianism: criterion of rightness vs. decision procedure

To learn more about the difference between criteria of rightness and decision procedures, and how this difference entails a distinction between "single-level utilitarianism" and "multi-level utilitarianism", please see the section "Multi-level Utilitarianism Versus Single-level Utilitarianism" in Chapter 3: Elements and Types of Utilitarianism on

On the longtermist case for working on farmed animals [Uncertainties & research ideas]

Another way to approach this is to ensure that people who are already interested in learning about utilitarianism are able to find high-quality resources that explicitly cover topics like the idea of the expanding moral circle, sentiocentrism/pathocentrism, and the implications for considering the welfare of geographically distant people, other species, and future generations. 

Improving educational opportunities of this kind was one motivation for writing this section on Chapter 3: Utilitarianism and Practical Ethics: The... (read more)

Why Hasn't Effective Altruism Grown Since 2015?

Another indicator: Wikipedia pageviews show fairly stable interest in articles on EA and related topics over the last five years.

2Ula10moGreat share!

Hi Pablo, I have only just seen your comments. Yes, of course, I am more than happy with all the changes you have made and trust your sense for how this Wiki should be designed/structured! Thank you and keep up the good work.

How many hits do the hits of different EA sites get each year?

Wikipedia pageviews could serve as a useful indicator that I expect is strongly correlated with website views.

E.g. see the following comparison of the pageviews of several EA-related Wikipedia pages in 2020. As it turns out, Peter Singer gets about 2x the number of views of Nick Bostrom, 2.5x those of effective altruism, and 12x those of FHI or GiveWell.
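For anyone who wants to reproduce this kind of comparison, Wikimedia exposes a public per-article pageviews REST API. A minimal sketch in Python; the article names and date range are illustrative, and the URL shape reflects the public pageviews API as I understand it:

```python
import json
from urllib.request import urlopen

API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def pageviews_url(article, start="20200101", end="20201231"):
    # Daily pageviews on English Wikipedia, counting human users only.
    return f"{API}/en.wikipedia/all-access/user/{article}/daily/{start}/{end}"

def total_views(response):
    # Sum the daily counts in a pageviews API response.
    return sum(item["views"] for item in response.get("items", []))

if __name__ == "__main__":
    # Note: in practice Wikimedia may expect a descriptive User-Agent header.
    for article in ["Peter_Singer", "Nick_Bostrom", "Effective_altruism"]:
        with urlopen(pageviews_url(article)) as resp:
            print(article, total_views(json.load(resp)))
```

Summing each article's daily counts over a year gives exactly the kind of ratio comparison above.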

Notes on "Bioterror and Biowarfare" (2006)

A somewhat related thought I had while reading this post: Several of the nuclear-weapon states (including, if I remember correctly, the US) retain the right to retaliate with nuclear weapons against an attack with biological, chemical, or even cyber weapons. On the one hand, this might make the overall situation more stable, because hostile actors (at least states, probably not so much terrorist groups) are deterred from using these other weapon types. On the other hand, it may be destabilising, since many more actors (including non-state ones) may trigger a nuclear conflict.

2eca1y Interesting point. Note that a requirement for retaliation is knowledge of which actor to retaliate against. This is called "attribution" and is a historically hard problem for bioweapons, which is maybe getting easier with modern ML (COI: I am a coauthor).
Books / book reviews on nuclear risk, WMDs, great power war?

On the topic of nuclear warfare, I have also read and can recommend The Bomb: Presidents, Generals, and the Secret History of Nuclear War by Fred Kaplan. The book provides a deep dive into the development of US nuclear doctrine over time, covering all administrations across 70 years and outlining in great detail many issues and arguments around nuclear policy.

If you're also interested in books on biological weapons, I particularly recommend (HT Chris Bakerlee):

1. Bioterror and Biowarfare: A Beginner's Guide by Malcolm Dando

2. Deadliest Enem... (read more)

3MichaelA1y Thanks for your recommendations! I've now listened to The Bomb. I found it interesting and useful, and would likewise recommend it to others. I also wrote some notes on it here. (And your other recommendations are on my list of books to consider reading in future.) ETA: I've now also listened to Bioterror and Biowarfare, found it useful as well, and posted some takeaways and notes.
AMA: Jason Crawford, The Roots of Progress

What are your thoughts on the desirability and feasibility of differential technological development (DTD) as a governance strategy for emerging technologies? 

For instance, Toby Ord briefly touches on DTD in The Precipice, writing that "While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones."

9jasoncrawford1y I don't know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction. In particular, I would say:

  • Technology is not inherently risk-creating or safety-creating. Technology can create safety, when we set safety as a conscious goal.
  • However, technology is probably risk-creating by default. That is, when our goal is anything other than safety (more power, more speed, more efficiency, more abundance, etc.), then it might create risk as a side effect.
  • Historically, we have been reactive rather than proactive about technology risk. People die, then we do the root-cause analysis and fix it.
  • Even when we do anticipate problems, we usually don't anticipate the right ones. When X-rays were first introduced, people had a moral panic about men seeing through women's clothing on the street, but no one worried about radiation burns or cancer.
  • Even when we correctly anticipate problems, we don't necessarily heed the warnings. At the dawn of the antibiotic age, Alexander Fleming foresaw the problem of resistance, but that didn't prevent doctors from way overprescribing antibiotics for many years.
  • We need to get better at all of the above in order to continue to improve safety as we simultaneously pursue other technological goals: more proactive, more accurate at predicting risk, and more disciplined about heeding the risk. (This is obviously so for x-risk, where the reactive approach doesn't work!)
  • I see positive signs of this in how the AI and genetics communities are approaching safety in their fields. I can't say whether it's enough, too much, or just right.

Anyway, DTD seems like a much better concept than the conventional "let's slow down progress across the board, for safety's sake." This is a fundamental error, for reasons David Deutsch describes in The Beginning of Infinity. But that's also
AMA: Jason Crawford, The Roots of Progress

What are your long-term goals for The Roots of Progress? Are you pleased with how far you have come so far (e.g. quantity and quality of content produced, page-view or subscriber numbers)?

1jasoncrawford1ySee my reply to @BrianTan on a similar question, thanks!
AMA: Jason Crawford, The Roots of Progress

How do you prioritise between the various projects you are working on? What other projects, if any, do you consider working on to advance progress studies in future?

2jasoncrawford1y It's hard to prioritize! I try to have overarching / long-term goals, and to spend most of my time on them, but also to take advantage of opportunities when they arise. I look for things that significantly advance my understanding of progress, build my public content base, build my audience, or, better, all three. Right now I'm working on two things. One is continued curriculum development for my progress course for the Academy of Thought and Industry, a private high school. The other, more long-term project is a book on progress. Along the way I intend to keep writing semi-regularly at The Roots of Progress.
Load More