Not answering the question, but I would like to quickly mention a few of the benefits of having confidence/credible intervals or otherwise quantifying uncertainty. All of these comments are fairly general, and are not specific criticisms of GiveWell's work. 

  1. Decision making under risk aversion - Donors (large or small) may have different levels of risk aversion. In particular, some donors might prefer a higher certainty of actually making an impact at the cost of a lower expected value. Moreover, (mostly large) donors could build a portfolio of different donations to achieve a better risk profile. For that, one needs to know more about the distribution than a single point estimate.
  2. Point estimates are often done badly - It is fairly easy to make many kinds of mistakes when producing point estimates, some of which become more noticeable once you quantify uncertainty. To name one example, point estimates of cost-effectiveness typically try to estimate the expected value, which is often calculated as a product of different factors. While the expected value is indeed multiplicative (assuming the factors are independent - which is itself sometimes not the case, but that's another problem), this does not hold for other statistics, such as the median. I think it is a common mistake to use an estimate of the median, or something between the median and the mean, where the mean is needed - and for skewed distributions these can be wildly different (the sketch after this list makes this concrete).
  3. Sensitivity analysis - Quantifying uncertainty allows for sensitivity analysis, which serves many purposes, one of which is getting a more accurate (point-)estimate and reducing uncertainty. For example, by understanding which parameters are the most uncertain, one can focus further (internal and external) research on improving their certainty.
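A minimal sketch of points 2 and 3, with entirely made-up numbers: three independent lognormal factors stand in for a multiplicative cost-effectiveness model. The mean of the product lands far from its median, and a crude variance decomposition shows which factor would be the most valuable to research further.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical cost-effectiveness model: a product of three independent,
# right-skewed (lognormal) factors, each with median 1 but growing spread.
sigmas = (0.5, 1.0, 1.5)
factors = [rng.lognormal(mean=0.0, sigma=s, size=n) for s in sigmas]
product = factors[0] * factors[1] * factors[2]

# Point 2: the mean and the median of the product are wildly different,
# so plugging medians into a multiplicative model badly misestimates E[X].
print(f"mean of product:   {product.mean():.2f}")      # analytically exp(3.5/2) ~ 5.75
print(f"median of product: {np.median(product):.2f}")  # analytically 1.0

# Point 3: a crude sensitivity analysis - the share of the variance of
# log(cost-effectiveness) contributed by each factor (the shares sum to 1
# because the log-factors are independent). The most uncertain factor is
# the natural target for further research.
log_var = np.log(product).var()
for i, f in enumerate(factors, start=1):
    print(f"factor {i} share of log-variance: {np.log(f).var() / log_var:.0%}")
```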

In direct response to Hazelfire's comment, I think that even if the uncertainty spans only one order of magnitude (he mentioned 2-3, which seems reasonable to me), this could have a really large effect on resource allocation. The bar for funding is currently 8x relative to GiveDirectly IIRC, which is about one order of magnitude, so a better understanding of the uncertainty could be really important. For instance, we could learn that some interventions currently above the bar are not very clearly so, whereas others that seem slightly below the bar could turn out to be fairly certain, and thus perhaps a very safe bet (the toy calculation below illustrates the first case).
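To make this concrete, a toy calculation with invented numbers: suppose an intervention's cost-effectiveness has a median estimate of 10x GiveDirectly, with uncertainty spanning roughly one order of magnitude - say a lognormal whose 90% interval runs from about 3x to 30x. Despite nominally sitting above the 8x bar, it falls short of it with substantial probability.

```python
import numpy as np
from scipy import stats

# Hypothetical intervention: median estimate 10x GiveDirectly, with
# lognormal uncertainty whose 90% interval spans roughly 3x-30x
# (about one order of magnitude).
median = 10.0
z90 = 1.645                                  # z-score at the 90% interval's endpoints
sigma = np.log(30.0 / 3.0) / (2 * z90)       # log-space standard deviation

dist = stats.lognorm(s=sigma, scale=median)  # scale = exp(mu) = median
print(f"P(true value < 8x bar) = {dist.cdf(8.0):.2f}")  # ~ 0.37
```

Conversely, an intervention estimated just below the bar but with tight uncertainty might be a safer bet for a risk-averse donor than a noisier one nominally above it.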

I think all of these effects - on GiveWell's recommendations and donors' choices, on future research, and directly on the accuracy of the point estimates themselves - could potentially be fairly big.

Yeah, that makes sense, and is a fairly clear selection bias. Since here in Israel we have a very strong tech hub, with many people finishing their military service in elite tech units, I see the opposite selection bias: people not finding many EA (or even EA-inspired) opportunities that are of interest to them.

I failed to mention that I think your post was great, and I would also love to see (most of) these critiques fleshed out.

The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

I would like to agree with Aaron's comment and make a stronger claim - my impression is that many EAs around me in Israel, especially those with a strong technical background, don't find most direct EA work very intellectually interesting or fun (its impact aside).

Speaking for myself, my background is mostly in pure math and in cyber-security research / software engineering. Putting aside managerial and entrepreneurial roles, it seems to me that most of the roles in EA(-adjacent) organizations open to someone with a background similar to mine are:

  1. Research similar to that done at Rethink Priorities or GiveWell - This research seems to mostly involve literature review and analysis of existing research. I find this kind of work somewhat interesting, but not nearly as intrinsically interesting as the things I have done so far.
  2. Technical AI safety - This could potentially be very interesting for someone like me; however, I am not convinced by the arguments for the relatively high importance or tractability of AI safety conveyed by EA. In fact, this is where I worry the critique in question might be right: at the community level, we may be biased by motivated reasoning.
  3. Software engineering - Most of the software needs of EA(-adjacent) organizations seem to be fairly simple technically (though the product and "market fit" could be hard). As such, this is not very appealing for someone looking for research-type work or harder technical problems.

Additionally, most of these roles are neither available in Israel nor open to remote work.

In fact, I think this is a point where the EA community misses out on many highly capable individuals who could do great work, if only we had roles interesting enough for them.

I am extremely impressed by this, and this is a great example of the kind of ambitious projects I would love to see more of in the EA community. I have added it to the list on my post Even More Ambitious Altruistic Tech Efforts.

Best of luck!

I completely agree with everything you said (and my previous comment was trying to convey part of this, admittedly in a much less transparent way).

I simply disagree with your conclusion - it all boils down to what we have at hand. Doubling the cost-effectiveness also requires work; it doesn't happen by magic. If you are not constrained by the supply of highly effective projects that can use your resources, sure, go for it. As it stands, though, we have far more resources than current small-scale projects can absorb, and a lot of resources are left over. Thus, it makes sense to start allocating resources to somewhat less effective projects as well.

I agree with the spirit of this post (and have upvoted it), but I think it somewhat obscures the really simple thing going on: the (expected) impact of a project is, by definition, its cost-effectiveness (also called efficiency) times its cost (i.e. the resources spent).
A 2-fold increase in one, while keeping the other fixed, is literally the same as doing it the other way around.
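In symbols, a one-line restatement (with $I$, $e$, $c$ as my shorthand for impact, efficiency, and cost):

$$I = e \cdot c, \qquad (2e)\,c = e\,(2c) = 2I.$$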

The question, then, is which projects we are able to execute - that is, for which we can both come up with an efficient idea and find the resources to execute it. When resources are scarce, you really want to squeeze as much as you can out of the efficiency part. Now that we have more resources, we should be more lax and increase our total impact by pursuing less efficient ideas that still achieve high impact. Right now it is starting to look like there are far more resources ready to be deployed than projects able to absorb them.

I am not sure that there is actually a disagreement between you and Guy.
If I understand correctly, Guy says that insofar as a funder wants research to be conducted to deepen our understanding of a specific topic, the funder should not judge researchers by their conclusions about the topic, but by the quality and rigor of their work in the field and their contributions to the relevant research community.
This does not seem to conflict with what you said, as the focus is still on work on that specific topic.

I strongly agree with this post and its message.

I also want to respond to Jason Crawford's response. We don't necessarily need to move to a situation where everyone tries to optimize things as you suggest, but at this point it seems that almost no one tries to optimize for the right thing. I think that shifting even a few percent of entrepreneurial work or philanthropy in this direction could have a tremendous effect, without losing much of the creative spark people worry we might lose - or we might even gain more of it, as new directions open up.

That's great, thanks!
I was aware of Anthropic, but not of the figures behind it.

Unfortunately, my impression is that most funding for such projects centers on AI safety or longtermism (as I hinted in the post...). I might be wrong about this, though, and I will poke around these links and names.

Relatedly, I would love to see OPP/EA Funds fund such projects (at least a seed round or equivalent) unrelated to AI safety and longtermism, or hear their arguments against doing so.
