Giving season 2023

Donate to the Election Fund and discuss where the donations should go. Learn more in the Giving portal.

New & upvoted


Posts tagged community

Quick takes

EZ
The world of zakat is really infuriating/frustrating. There is almost NO accountability or transparency demonstrated by the orgs which collect and distribute zakat: they don't seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I've spoken to about zakat/effective zakat has said their number 1 gripe with zakat is the strong suspicion that it's being pocketed or corruptly used by these collection orgs.

Given this, there seems to be a really big niche in the market to be exploited by an EA-aligned zakat org. My feeling at the moment is that such an org should focus on, and emphasise, its ability to be highly accountable and transparent about how it stores and distributes the zakat it collects. The trick here is finding ways to distribute zakat to eligible recipients in cost-effective ways. Currently, possibly only two of the several dozen 'most effective' charities we endorse as a community would be likely zakat-compliant (New Incentives and GiveDirectly), and even then, only one or two of GiveDirectly's programs would qualify. This is pretty disappointing, because it means the EA community would probably have to spend quite a lot of money either identifying new highly effective charities which are zakat-compliant, or starting new highly effective zakat-compliant orgs from scratch.
Bumping a previous EA Forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023). This post recaps a survey about EA 'meta' topics (e.g. talent pipelines, community-building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gone by the name 'Leaders Forum.') This post received less attention than I expected, so I'm bumping it here to make it better known that this survey summary exists. All feedback is welcome!
This December is the last month in which unlimited Manifold Markets currency redemptions for donations are assured. I highly recommend redeeming your currency for donations this month, since there is orders of magnitude more currency outstanding than can be donated in future months.
There is still plenty of time to vote in the Donation Election. The group donation pot currently stands at around $30,000. You can nudge that towards the projects you think are most worthwhile (plus, the voting system is fun and might teach you something about your preferences).

You should also donate to the Donation Election fund if:

a) You want to encourage thinking about effective donations on the Forum.
b) You want to commit to donating in line with the Forum's preferences.
c) You'd like me to draw you one of these bad animals (or earn one of our other rewards).

NB: I can also draw these animals holding objects of your choice. Or wearing clothes. Anything is possible.
Capacity Market 2023: Phase 2 proposals and 10 year review - GOV.UK

Popular comments

Recent discussion

To EAs, "development economics" evokes the image of RCTs on psychotherapy or deworming. That is, after all, the closest interaction between EA and development economists. However, this characterization has prompted some pushback, in the form of the argument that all global...

Thanks for sharing this, I found it very interesting. I was curious about the sectoral transformation. Presumably we will always need some people working in agriculture. A lot of this specialisation has occurred between rural and urban areas, but might it not also make sense for some entire countries to focus on agriculture? They could focus their education systems, regulations and so on, on the industry, which might improve efficiency, rather than having smaller numbers of people in more countries doing agriculture. If so, we could see some countries getting rich entirely off agriculture, just as there are wealthy farmers in the US, Australia, etc. Pushing for sectoral transformation could then be a mistake if some countries really do have a comparative advantage in agriculture.

This also connects to your points about agricultural productivity. My impression was that many third-world countries have chronic under-utilisation of labour: you literally just have a lot of working-age men hanging around doing nothing all day. This labour slack is implicit in the model, described on the 80k podcast, for why GiveDirectly might boost economic activity. If so, productivity growth that incentivised people to stay in agriculture could be good, especially if it replaced unemployment, even though it would retard the sectoral transformation.

Finally, I was interested in the negative effects of the services transition. Could this be because a poor regulatory setup means many of the 'service' jobs are essentially rent-seeking rather than providing socially valuable services? E.g. an increase in lawyers or bureaucrats primarily creating more work for other people.

I guess the fact that no country in history has gotten rich while being agrarian gives me a very strong prior against it. And there are clear reasons why; agricultural goods are commodities that are extremely cheap, so even having an advantage in them, you can only have a slim advantage. Plus different countries will always have different comparative advantages in different crops. Compare that to manufacturing where you can make increasingly specialized and high-quality goods that generate much more profits.

My impression is that underutilization of labor i...

I’m curating this post. Karthik Tadepalli makes the point that EAs have often accepted the argument, given here, that the most cost effective global health interventions are likely to be aimed at increasing growth in LMICs rather than directly targeted at health outcomes. However, there hasn’t yet been a focus on producing growth interventions within EA global health work that is proportional to this interest. In a careful, well evidenced manner, this post outlines some factors which affect economic growth in LMICs, and which may be amenable to interventions. You can read more about the discussion of boosting economic growth as a potentially tractable cause in Global Health and Wellbeing on this topics tag, and in this recent 80k podcast episode with GiveWell’s co-founder Elie Hassenfeld.  I hope, with Karthik, that this post and the series to follow is read with “an entrepreneurial eye”, and reignites debate in this pressing question. 

Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, advisor to Open Philanthropy and a generally super-informed person about history, technology, possible futures, and a shocking number of other topics.

He has previously appeared on our show and the ...

Answer by Larks, Dec 08, 2023

Maybe if/how his thinking about AI governance has changed over the last year?

Answer by Hayven Jackson, 4h
Can you ask him whether or not it's rational to assume AGI comes with significant existential risk as a default position, or if one has to make a technical case for it coming with x-risk? 
[Meta] Forum bug: when there were no comments it was showing as -1 comments

“Association of Animal and Plant Protein Intake With All-Cause and Cause-Specific Mortality”

Study authors found that substituting 3 percent of daily calories from animal protein with plant protein was associated with a lower risk for death from all causes: a 34 percent ...


Thanks for sharing this research! As far as I know it concords well with other research showing plant-based proteins are at least as healthy as animal-based ones, if not more so. Plant proteins also come with much less suffering of conscious beings* than animal proteins do, which seems to make them the most moral option for people looking to eat ethically.

*not only the animals brutalized in factory farms, but also the human workers pushed into such jobs by economic desperation


Are we too willing to accept forecasts from experts on the probability of humanity’s demise at the hands of artificial intelligence? What degree of individual liberty should we curtail in the name of AI risk mitigation? I argue that focusing on AI’s existential risk distracts...


Welcome to the EA Forum, and thanks for the post, Zed!

Differing national levels in risk tolerance may explain the gap between public opinion on, for example, genetic engineering between the United States and Europe.

The link you have here is broken.

I'm glad you found my comment useful. I think then, with respect, you should consider retracting some of your previous comments, or at least reframing them to be more circumspect and be clear you're taking issue with a particular framing/subset of the AIXR community as opposed to EA as a whole.

As for the points in your comment, there's a lot of good stuff here. I think a post about the NRRC, or even an insider's view into how the US administration thinks about and handles nuclear risk, would be really useful content on the Forum, and also incredibly interesting! Similarly, I think a post on how a community handles making 'right-tail recommendations' when those recommendations may erode its collective and institutional legitimacy[1] would be really valuable. (Not saying that you should write these posts, they're just examples off the top of my head. In general I think you have a professional perspective a lot of EAs could benefit from.)

I think one thing where we agree is that there's a need to ask and answer a lot more questions, some of which you mention here (beyond 'is AIXR valid'):

* What policy options do we have to counteract AIXR if true?
* How does the effectiveness of these policy options change as we change our estimation of the risk?
* What is the median view on risk in the AIXR/broader EA/broader AI communities?

And so on.

1. ^ Some people in EA might write this off as 'optics', but I think that's wrong

This is the summary of the report, with additional images (and some new text to explain them). The full 90+ page report (and a link to its 80+ page appendix) is on our website.


This report forms part of our work to conduct cost-effectiveness analyses ...


So the problem I had in mind was in the parenthetical in my paragraph:

To its credit, the write-up does highlight this, but does not seem to appreciate that the implications are crazy: any PT intervention, so long as it is cheap enough, should be thought better than GD, even if studies upon it show very low effect size (which would usually be reported as a negative result, as almost any study in this field would be underpowered to detect effects as low as are being stipulated).

To elaborate: the actual data on Strongminds was a n~250 study by Bolton et al. 2003 th...

Gregory Lewis
(@Burner1989 @David Rhys Bernard @Karthik Tadepalli) I think the fundamental point (i.e. "You cannot use the distribution for the expected value of an average therapy treatment as the prior distribution for a SPECIFIC therapy treatment, as there will be a large amount of variation between possible therapy treatments that is missed when doing this.") is on the right lines, although subsequent discussion of fixed/random effect models might confuse the issue. (Cf. my reply to Jason.)

The typical output of a meta-analysis is an (~) average effect size estimate (the diamond at the bottom of the forest plot, etc.). The confidence interval given for that is (very roughly)[1] the interval in which we predict the true average effect likely lies. So for the basic model given in Section 4 of the report, the average effect size is 0.64, 95% CI (0.54, 0.74). So (again, roughly) our best guess of the 'true' average effect size of psychotherapy in LMICs from our data is 0.64, and we're 95% sure(*) this average is somewhere between (0.54, 0.74).

Clearly, it is not the case that if we draw another study from the same population, we should be 95% confident(*) the effect size of this new data point will lie between 0.54 and 0.74. This would not be true even in the unicorn case where there's no between-study heterogeneity (e.g. all the studies are measuring the same effect modulo sampling variance), and even less so when heterogeneity is marked, as here. To answer that question, what you want is a prediction interval.[2] This interval is always wider, and almost always significantly so, than the confidence interval for the average effect: in the same analysis with the 0.54-0.74 confidence interval, the prediction interval was -0.27 to 1.55.

Although the full model HLI uses in constructing informed priors is different from that presented in S4 (e.g. it includes a bunch of moderators), the priors appear to be constructed with Monte Carlo on the confidence intervals for the average, not the prediction interval for
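The confidence-interval vs. prediction-interval distinction above can be sketched numerically. The snippet below is a minimal DerSimonian-Laird random-effects meta-analysis using entirely made-up effect sizes and standard errors (not HLI's actual data); it illustrates how the 95% prediction interval for a new study is wider than the 95% confidence interval for the average effect, especially when between-study heterogeneity is marked.

```python
import numpy as np
from scipy import stats

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects meta-analysis.

    Returns the pooled effect, the 95% confidence interval for the
    *average* effect, and the 95% prediction interval for the effect
    of a *new* study drawn from the same population.
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    k = len(effects)
    w = 1.0 / ses**2                          # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)    # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance
    w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
    mu = np.sum(w_re * effects) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    z = stats.norm.ppf(0.975)
    ci = (mu - z * se_mu, mu + z * se_mu)
    # Higgins-Thompson-Spiegelhalter prediction interval (t with k-2 df)
    t = stats.t.ppf(0.975, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu**2)
    pi = (mu - half, mu + half)
    return mu, ci, pi

# Illustrative (fabricated) study effects and standard errors
mu, ci, pi = random_effects_meta([0.2, 0.5, 0.9, 0.4, 1.2, 0.1],
                                 [0.1, 0.15, 0.2, 0.1, 0.25, 0.12])
print(f"pooled effect {mu:.2f}, 95% CI {ci}, 95% PI {pi}")
```

Using the prediction interval rather than the confidence interval as the basis for a prior over a specific new intervention is exactly the substitution the comment argues for.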
Barry Grimes
Thanks Rebecca. I will delete the duplicates.

tl;dr: This document contains a list of forecasting questions, commissioned by Open Philanthropy as part of its aim to have more accurate models of future AI progress. Many of these questions are more classic forecasting questions, others have the same shape but are unresolvable...


Nice work!

In this adjacent document, we also outline a "resolution council"

The link points to this post.