Recent Discussion

Authors: John Halstead, Hauke Hillebrandt

Opinions are ours, not those of our employers.


Randomista development (RD) is a form of development economics which evaluates and promotes interventions that can be tested by randomised controlled trials (RCTs). It is exemplified by GiveWell (which primarily works in health) and the randomista movement in economics (which primarily works in economic development).

Here we argue for the following claims, which we believe to be quite weak:

  1. Prominent economists make plausible arguments which suggest that research on and advocacy for economic growth in l
... (Read more)
[9] Linch (2h): On a meta-level, in general I think your conversation with lucy is overly acrimonious, and it would be helpful to identify clear cruxes, have more of a scout's mindset, etc. My read of the situation is that you (and other EAs upvoting or downvoting content) have better global priors, but lucy has more domain knowledge in the specific areas they chose to talk about. I do understand that it's very frustrating for you to be in a developing country and constantly see people vote against their economic best interests, so I understand a need to vent, especially in the "safe space" of a pro-growth forum like this one. However, lucy likely also feels frustrated about saying what they believe to be true things (or at least well-established beliefs in the field) and then being, as they may perceive it, unjustifiably attacked by people who have different politics or epistemic worldviews. My personal suggestion is to have a stronger "collaborative truth-seeking attitude" and engage more respectfully, though I understand if either you or lucy aren't up for it and would rather tap out.

Great comment - strong upvote! :)

[8] Pablo_Stafforini (12h): I'm not sure I'm the right person to comment on this, given that I'm one of the parties involved, but I'll provide my perspective here anyway in case it is of any help or interest. I don't think you are characterizing this exchange or the reasons behind the pattern of votes accurately. Bruno asked you to provide a source in support of a claim you made four comments above. In response to that request, you provided two sources. I looked at them and found that both failed to support the assertion that "It was [China's] widespread education pre-1979 that reduced fertility", and that one directly contradicted it. I didn't downvote your comment, but I don't think it's unreasonable to expect some people to downvote it in light of this revelation. In fact, on reflection I'm inclined to favor a norm of downvoting comments that incorrectly claim that a scholarly source supports some proposition, since such a norm would incentivize epistemic hygiene and reduce the incidence of information cascades. I do agree with you that ingroup/outgroup dynamics sometimes explain observed behavior in the EA community, but I don't think this is one of those cases. As one datapoint confirming this, consider that a month or two ago, when I pointed out that someone had mischaracterized a paper, that person's comment was heavily downvoted, despite this user being a regular commenter and not someone (I think) generally perceived to be an "outsider". Moving to the object level: in your recent comment you appear to have modified your original contention. Whereas before you stated that "widespread education" was the factor explaining China's reduced fertility, now you state that education was one factor among many. Although this difference may seem minor, in the present context it is very important, because both in comments to this post and elsew

The following is a heavily edited transcript of a talk I gave for the Stanford Effective Altruism club on 19 Jan 2020. I had it transcribed, and then Linchuan Zhang, Rob Bensinger and I edited it for style and clarity, and also to occasionally have me say smarter things than I actually said. Linch and I both added a few notes throughout. Thanks also to Bill Zito, Ben Weinstein-Raun, and Howie Lempel for comments.

I feel slightly weird about posting something so long, but this is the natural place to put it.

Over the last year my beliefs about AI risk have shifted moderately; I expect tha... (Read more)

[2] EdoArad (16h): This reminds me of the discussion around the Hinge of History Hypothesis (and the subsequent discussion between Rob Wiblin and Will MacAskill). I'm not sure that I understand the first point. What sort of prior would be supported by this view? I definitely agree with the second point, and with the general point of being extra careful about how to use priors :)

Sorry, I wasn't very clear on the first point: there isn't a 'correct' prior.

In our context (by context I mean both the small number of observations and the implicit hypotheses that we're trying to differentiate between), the prior has a large enough weight that it affects the eventual result in a way that makes the method unhelpful.
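The prior-sensitivity point above can be sketched with a toy Beta-Binomial model. The priors and observation counts here are illustrative assumptions, not anything from the discussion itself:

```python
# With only a few observations, the posterior mean is dominated by the prior,
# so the conclusion hinges on a modelling choice the data cannot settle.

def beta_posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta(prior_a, prior_b) prior after Binomial data:
    the posterior is Beta(prior_a + successes, prior_b + failures)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Two defensible priors, updated on the same 3 observations (2 successes, 1 failure):
uniform = beta_posterior_mean(1, 1, 2, 1)    # Beta(1, 1) "uniform" prior -> 0.6
skeptical = beta_posterior_mean(1, 9, 2, 1)  # Beta(1, 9) skeptical prior -> ~0.23
print(uniform, skeptical)
```

With hundreds of observations the two posteriors would converge, but with three observations the prior alone moves the estimate from roughly 0.23 to 0.6, which is the sense in which the method is unhelpful in this context.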

[3] EdoArad (17h): Jaime Sevilla wrote a long (albeit preliminary) and interesting report on the topic.

Crossposted from Charity Entrepreneurship's Blog here


There has been recent discussion within effective altruism around global mental health as a new cause area. In the 2018 EA survey, it was included as a potential top cause area, and around 4% of EAs identified it as their top priority [1]. This has led us to think about whether Charity Entrepreneurship (CE) should do prioritisation research on mental health as a potentially high-impact cause area. Many of us at CE have been convinced that this is a promising enough area to investigate as one of the four areas for our 2020... (Read more)

The Procreation Asymmetry consists of these two claims together:

  1. it’s bad to bring into existence an individual who would have a bad existence, other things being equal, or the fact that an individual would have a bad existence is a reason to not bring them into existence; and
  2. it’s at best indifferent to bring into existence an individual who would have a good existence, other things being equal, or the fact that an individual would have a good existence is not a reason to bring them into existence.

However, if a bad existence can be an "existential harm" (according to c... (Read more)

And also, what interventions can be done to increase the amount of human-digestible calories (as well as various nutrients) in the ocean that would be available after some global catastrophe?

Actually, similar questions also apply to other calorie sources. For example, maybe eating insects is good on utilitarian grounds because it encourages the insect industry, which could more easily continue to thrive even if the Sun gets blocked.


You’ve also suggested that we eat bacteria. How would that work?

There are two main sources of bacteria that we looked at. There is a methane-digesting ... (Read more)

Some information I found


Could the oceans feed us?

If you look at the amount of fish that we currently eat, it's just a tiny fraction of the human diet. You can expand that much more without wiping out all the fisheries. If you have significant climate change, that will result in more upwelling [seawater rising from the ocean depths to the surface], which will be like fertilizing the ocean surface, and you get more fish. Similarly, we can purposely fertilize the ocean in order to get more fish. So then we have enough fish to feed everyone... (read more)

The Weapon of Openness is an essay published by Arthur Kantrowitz and the Foresight Institute in 1989. In it, Kantrowitz argues that the long-term costs of secrecy in adversarial technology development outweigh the benefits, and that openness (defined as "public access to the information needed for the making of public decisions") will therefore lead to better technology relative to adversaries and hence greater national security. As a result, more open societies will tend to outperform more secretive societies, and policymakers should tend strongly towards openness even in cases where secrecy

... (Read more)
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

No, I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting one's losses' - or never goin... (read more)

Artificial intelligence alignment is believed by many to be one of the most important challenges we face right now. I understand the argument that once AGI is developed it's game over unless we have solved alignment, and I am completely convinced by this. However, I have never seen anyone explain the reasoning that leads experts in the field to believe that AGI could be here in the near future. Claims that there is an X% chance of AGI in the next Y years (where X is fairly large and Y fairly small) are rarely supported by an actual argument.

I realize that for the EA community to dedicate... (Read more)

Note also that your question has a selection effect: you'd also want to figure out where the best arguments for longer timelines are. In an ideal world these two sets of things tend to live in the same place; in our world this isn't always the case.

Full Report

Summary for AIES

Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.

What is the Windfall Clause?

The Windf

... (Read more)
B.2. “The Windfall Clause will shift investment to competitive non-signatory firms.”

The concern here is that, when multiple firms are competing for windfall profits, a firm bound by the Clause will be at a competitive disadvantage because unbound firms could offer higher returns on new capital. That is, investors would prefer firms that are not subject to a “tax” on their profits in the form of the Windfall Clause. This is especially bad because it could mean that more prosocial firms (i.e., ones that have signed the Clause) wo
... (read more)

When I reflect back on my experiences at South Bay EA, the one thing that would save us so much time (that we can then use for non-scalable activities like 1:1s) is if we had high quality pre-made discussion sheets.

(To be clear, this is still an ongoing problem).

It takes us ~3-12 hours to make a typical discussion sheet for one meetup.

Naively, it would be really helpful if CEA or a crowdsourced group (e.g., this forum) worked on creating high-quality discussion sheets, which would save us 90% of the effort.

I could imagine other content being helpful as well, for example ("intro to EA" ... (Read more)

Do you have a sense of whether/how much new material is needed vs. we already have all the existing material and it's just a question of compiling everything together?

If the former, a follow-up question is which new material would be helpful. I'd be excited if you (or anybody else) also answered this related question:

[4] Linch (13h): Yeah, I guess that's the null hypothesis, though it's possible that people don't use the current resources because they're not "good" enough (e.g., insufficiently accessible, too much jargon, too much local-context-specific stuff, etc.). Another thing to consider is "curriculum", i.e., right now discussion sheets etc. are shared to the internet without tips on how to adapt them (since the local groups who wrote the sheets have enough local context/institutional knowledge on how the sheets should be used). An interesting analogy is the "instructor's edition" of textbooks, which IIRC in the US K-12 system often has almost as much supplementary material as the textbook content itself!

This shallow review was written by SoGive. SoGive is an organisation which provides services to donors to help them to achieve high impact donations.

This is a very quick, rough model of the cost-effectiveness of promoting clean cookstoves in the developing world. It suggests that:

- If a clean cookstove intervention is successful, it may have roughly the same ballpark of cost-effectiveness as a GiveWell-recommended charity

- c. 90% of the impact comes from directly saving lives, in a model reflecting both lives saved and climate impact

This is very much not intended to be a final, polished... (Read more)

Thanks for the encouragement. I think that aiming for a "perfect" write-up has been a barrier to publishing content, so I intend for us to publish more shallow reviews to address this.

To answer your question, I think the best focus areas would be the six bullet points highlighted near the start of the article, with a particular focus on the first two (are the stoves actually used, and are they actually clean?) and the last (what is the best way to fund this work?).

Also, we would further investigate the very useful comments made by MatthewDahlhaus... (read more)

I'm thinking the objective function could have constraints on the expected number of times the AI breaks the law, or the probability that it breaks the law, e.g.

  • only actions with a probability of breaking any law < 0.0001 are permissible, or
  • only actions for which the expected number of broken laws is < 0.001 are permissible.

There could also be separate constraints for individual laws or groups of laws, and these could depend on the severity of the penalties.
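A minimal sketch of the constrained-permissibility idea described above. The threshold values, estimator functions, and toy numbers are illustrative assumptions, not a proposed implementation:

```python
# Hypothetical sketch: screen candidate actions against probabilistic legal
# constraints, then maximise utility over the permissible set. The estimators
# p_break_law and expected_breaches are assumed to be supplied by the system.

def permissible(action, p_break_law, expected_breaches,
                p_max=1e-4, e_max=1e-3):
    """True iff the action satisfies both constraints from the post:
    P(breaking any law) < p_max and E[number of broken laws] < e_max."""
    return p_break_law(action) < p_max and expected_breaches(action) < e_max

def choose(actions, utility, p_break_law, expected_breaches):
    """Maximise utility subject to the constraints; None if all are excluded."""
    allowed = [a for a in actions
               if permissible(a, p_break_law, expected_breaches)]
    return max(allowed, key=utility, default=None)

# Toy usage with made-up estimates:
acts = ["a", "b", "c"]
p = {"a": 0.00005, "b": 0.2, "c": 0.00001}.__getitem__
e = {"a": 0.0005, "b": 0.5, "c": 0.0002}.__getitem__
u = {"a": 10, "b": 100, "c": 7}.__getitem__
print(choose(acts, u, p, e))  # "a": highest utility among permissible actions
```

Note that "b" has the highest raw utility but is screened out by both constraints, which is the intended behavior: the constraints act as hard filters rather than penalty terms traded off against utility, avoiding the lexicality issues mentioned above.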

Looser constraints like this seem like they could avoid issues of lexicality and prioritizing avoidance of breaking the law ... (Read more)

Cullen's argument was "alignment may not be enough, even if you solve alignment you might still want to program your AI to follow the law because <reasons>." So in my responses I've been assuming that we have solved alignment; I'm arguing that after solving alignment, AI-powered enforcement will probably be enough to handle the problems Cullen is talking about. Some quotes from Cullen's comment (emphasis mine):

Reasons other than directly getting value alignment from law that you might want to program AI to follow the law
... (read more)

We must take evolution into account when we consider animal welfare — whether we’re thinking about which animals are sentient or how animals might respond to a given intervention. In this talk, Wild Animal Initiative’s Michelle Graham presents a brief introduction to the theory of evolution (she also recommends this video for more background), explains how understanding evolution can help us conduct better research, and discusses the ways misconceptions about evolution lead us astray.

We’ve lightly edited Michelle’s talk for clarity. You may also watch it on Y
... (Read more)

Since this topic interests me and I'm killing time, I decided to comment on a few things in your post.

1. Wikipedia has a reasonable article on exaptations as an introduction. I also recommend looking at the Wikipedia article on sexual selection; in my view these topics overlap. (The sexual selection article looks less complete. I think 'Fisher's runaway process', described in that article, is most relevant, but some others prefer the 'handicap principle'.)

there are much more recent articles on these topi... (read more)

E.g., discussion sheets, syllabi, how-to-do {stuff}, etc.

Shan's Shortform