A quick update on this: Good Ventures is now open to supporting work that Open Phil recommends on digital minds/AI moral patienthood. We're still figuring out where that work should slot in (including whether we'd open a public call for applications) and will update people working in the field when we do. Additionally, Good Ventures is now open to considering a wider range of recommendations in right-of-center AI policy and a couple of other smaller areas (e.g. in macrostrategy/futurism), though those will be evaluated on a case-by-case basis for now. We'll ...
Thanks Nick.
On the housing piece: we have a long internal report on the valuation question that we didn't think was particularly relevant to external folks, so we haven't published it, but we'll see about doing so later this year. Footnote 7 of this grant writeup, and the text around it, explains the basic math of a previous version of that valuation calculation, though our recent version is a lot more complex.
If you're asking about the bar math, the general logic is explained here and the move to a 2,100x bar is mentioned here.
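To make the bar arithmetic concrete (my own back-of-the-envelope from the two figures above, not an official calculation):

$$\frac{\$400{,}000 \text{ per marginal unit}}{2{,}100\text{x bar}} \approx \$190,$$

i.e. a grant clears the bar roughly when we think it causes a new unit in expectation for something under ~$200, which is where the figure in the housing quote below comes from.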
On R&D, the 70x number comes from Ma...
Thanks Ozzie, you’re definitely allowed to ask questions like this! We won’t always be able to answer but we welcome questions and critiques of our work.
Our innovation policy work is generally based on the assumption that long-run health and income gains are ultimately attributable to R&D. For example, Matt Clancy estimated in this report that the returns to general funding for scientific research ranged from 50x to 330x in our framework, depending on the model and assumptions about downside risks from scientific research. In practice we currently internally us...
First, great job getting other funders on board here; I love that. I've got a couple of queries about things that I didn't quite understand.
"We also think our housing policy work clears our internal bar for impact. Our current internal valuation on a marginal housing unit in a highly constrained metro area in the US is just over $400k (so a grant would be above the bar if we think it causes a new unit in expectation for $200)"
I don't understand what this means. Is there a report or something you could link to that explains it?
Also I read the report ...
I think this is a complicated question - it's always been the case that individual OP staff had to submit grants to an overall review process and were not totally unilateral decision makers. As I said in my post above, they (and I) will now face somewhat more constraints. I think staff would differ in terms of how costly they would assess the new constraints as being. But it's true this was a GV rather than OP decision; it wasn't a place where GV was deferring to OP to weigh the costs and benefits.
Just flagging that I think "OP [is] open to funding XYZ areas if a new funder appears who wants to partner with them to do so" accurately describes the status quo. In the post above we (twice!) invited outreach from other funders interested in some of these spaces, and we're planning to do a lot more work to try to find other funders for some of this work in the coming months.
I am skeptical that a new large philanthropist would be well-advised to do their grantmaking via OP (though I do think OP has a huge amount of knowledge and skill as a grantmaker). At least given my current model, it seems hard to avoid continuing conflict over the shared brand and the indirect effects of OP on Good Ventures.
I think any new donor, especially one that is smaller than GV (as is almost guaranteed to be the case), would still end up having their donations affect Good Ventures, and my best understanding of the things Dustin is hoping to protect...
To clarify, I did see the invitations to other funders. However, my perception was that those are invitations to find people to hand things off to, rather than to be a continuing partner like with GV. Perhaps I misunderstood.
I also want to be clear that the status quo you're articulating here does not match what I've heard from former grantees about how able OP staff are to participate in efforts to attract additional funding. Perhaps there has been quite a serious miscommunication.
No, the farm animal welfare budget is not changing, and some of the substreams GV is exiting (or not entering) are on the AI side. So any funding from substrategies that GV is no longer funding within FAW would be reallocated to other strategies within FAW (and as Dustin notes below, hopefully the strategies that GV will no longer fund can be taken forward by others).
Vestergaard has a reply on their website FWIW; can't vouch for it, just passing along: https://vestergaard.com/blogs/vestergaard-position-bloomberg-article-malaria-bed-nets-papua-new-guinea/
Hi Dustin :)
FWIW I also don't particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn't necessarily look like "democracy" per se and might look more like more regranting, forecasting tournaments, etc.
This is a great point, Alexander. I suspect some people, like ConcernedEAs, believe the specific ideas are superior in some way to what we do now, and it's just convenient to give them a broad label like "democratizing". (At Asana, we're similarly "democratizing" project management!)
Others seem to believe democracy is intrinsically superior to other forms of governance; I'm quite skeptical of that, though agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion. Perhaps in our context there might be more...
Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.
Thanks MHR. I agree that one shouldn't need to insist on statistical significance, but if GiveWell thinks the actual expected effect is ~12% of the MK result, then updating on a trial powered similarly to MK is almost updating on a coin flip, given how underpowered such a trial would be to detect the expected effect.
I agree it would be useful to do this in a more formal Bayesian framework that accurately characterizes the GiveWell priors. It wouldn't surprise me if one of the conclusions was that I'm misinterpreting GiveWell's current views, or that it's hard to articulate a formal prior that gets you from the MK results to GiveWell's current views.
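To put rough numbers on the coin-flip point (my own assumptions: a two-sided test at alpha = 0.05, a replication powered at 80% for the original MK effect, and the ~12% effect discount discussed above):

```python
from scipy.stats import norm

alpha, power_target = 0.05, 0.80
z_crit = norm.ppf(1 - alpha / 2)                # ~1.96 for a two-sided test
mk_effect_se = z_crit + norm.ppf(power_target)  # MK effect ~2.8 SEs if powered at 80%
true_effect_se = 0.12 * mk_effect_se            # expected effect ~12% of the MK result

# chance of a statistically significant result under the discounted true effect
p_signif = norm.cdf(true_effect_se - z_crit) + norm.cdf(-true_effect_se - z_crit)
print(f"P(significant | discounted effect) = {p_signif:.1%}")  # ~6%, vs 5% under the null
```

So under those assumptions, a significant result is barely more likely if the discounted effect is real than if there's no effect at all, which is what I mean by "updating on a coin flip."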
Thanks Karthik. I think we might be talking past each other a bit, but replying in order on your first four replies:
Hey Karthik, starting a separate thread for a different issue. I opened your main spreadsheet for the first time, and I'm not positive, but I think the 90% reduction claim is due to a spreadsheet error? The utility gain in B5 that flows through to your bottom-line takeaway is hardcoded as being in log terms, but if eta changes then the conversion of utility gains to dollars at the global average should change (and by the way, I think it would really matter whether you were denominating in units of global average, global median, or global poverty level). In this copy I made a change to...
You... are absolutely right. That's a very good catch. I think your calculation is correct, as the utility translation only happens twice - utility from productivity growth, which I adjusted, and utility from cash transfers, which I did not. Everything else is unchanged from the original framework.
You're definitely right that it matters whether this is global average/median/poverty level. I think that the issue stems from using productivity as the input to the utility function, rather than income. This is not an issue for log utility if income is directl...
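For anyone following along, here is a minimal sketch of the general point (a stylized illustration, not the actual spreadsheet): under isoelastic (CRRA) utility, both the utility gain and the utility-to-dollars conversion depend on eta and on the reference income, so neither can be hardcoded in log terms.

```python
import numpy as np

def crra_utility(c, eta):
    """Isoelastic (CRRA) utility; approaches log utility as eta -> 1."""
    if np.isclose(eta, 1.0):
        return np.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

def dollars_per_util(reference_income, eta):
    """Marginal utility is u'(c) = c**(-eta), so one util is worth roughly
    c**eta dollars at income c. The choice of reference income (global
    average vs. median vs. poverty line) matters a lot here."""
    return reference_income ** eta

# Example: the dollar value of one util at a $10k reference income
print(dollars_per_util(10_000, eta=1.0))  # 10,000
print(dollars_per_util(10_000, eta=1.5))  # 1,000,000 -- a wildly different scale
```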
Hey Karthik,
Thanks for the thoughtful post, I really appreciate it!
Open Phil has thought some about arguments for a higher eta but, as far as I can find, has never written them up, so I'll go through some of the relevant arguments as I see them:
Hi Nicole,
I think this is a cool choice and a good post - thanks for both! I agree with your bottom line that kidney donation can be a good choice for EAs and just wanted to flag a few additional resources and considerations:
Hi MHR,
I really appreciate substantive posts like this, thanks!
This response is just speaking for myself, doing rough math on the weekend that I haven't run by anyone else. Someone (e.g., from @GiveWell) should correct me if I'm wrong, but I think you're vastly understating the difficulty and cost of running an informative replication given the situation on deworming. (My math below seems intuitively too pessimistic, so I welcome corrections!)
If you look at slide 58 here, you'll see that the minimum detectable effect (MDE) size with 80% power can be approximated as...
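I won't retype the slide, but I believe it's the standard textbook approximation for a two-sided test at $\alpha = 0.05$ with 80% power:

$$\text{MDE} \approx (z_{1-\alpha/2} + z_{1-\beta}) \cdot SE \approx (1.96 + 0.84) \cdot SE = 2.8 \cdot SE.$$

Since $SE \propto 1/\sqrt{n}$, detecting an effect ~12% the size of the original requires roughly $(1/0.12)^2 \approx 70$ times the original sample, which is what drives the pessimistic cost math.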
Thanks so much for taking the time to read the post and for really engaging with it. I very much appreciate your comment and think it makes some really good points. But based on my understanding of what you wrote, I'm not sure I currently agree with your conclusion. In particular, I think looking at the minimum detectable effect can be a helpful shorthand, but in this case it may be misleading more than it's helping. We don't really care about getting statistical significance at p < 0.05 in a replication, especially given that the pr...
Thanks for the thorough engagement, Michael. We appreciate thoughtful critical engagement with our work and are always happy to see more of it. (And thanks for flagging this to us in advance so we could think about it - we appreciate that too!)
One place where I particularly appreciate the push is on better defining and articulating what we mean by “worldviews” and how we approach worldview diversification. By worldview we definitely do not mean “a set of philosophical assumptions” - as Holden writes in the blog post where he introduced the concept, we defi...
Thanks very much for these comments! Given that Alex - who I'll refer to in the third person from here - doesn't want to engage in a written back-and-forth, I will respond to his main points in writing now and suggest he and I speak at some other time.
Alex’s main point seems to be that Open Philanthropy (OP) won't engage in idle philosophising: they’re willing to get stuck into the philosophy, but only if it makes a difference. I understand that - I only care about decision-relevant philosophy too. Of course, sometimes the philosophy does really matter: the ...
Set point. I think setting a neutral point on a life satisfaction scale of 5/10 is somewhere between unreasonable and unconscionable.
The author doesn't argue that the neutral point is 5/10; he argues (1) that the decision about where to set the neutral point is crucial for prioritising resources, and (2) that you haven't defended a particular neutral point in public.
...and OP institutionally is comfortable with the implication that saving human lives is almost always good. Given that we think the correct neutral point is low, taking your other points on boa...
This isn't an answer to the question, but here are two additional considerations I think you're missing that point in the opposite direction, and that I think would make AMF look even better than GiveWell's estimate does, on the total view:
Thanks, I thought this was interesting!
This question you called out in "Relevance" particularly struck me: "More concretely, it could help us estimate the potential market size of effective altruism. How many proto-EAs are there? Less than 0.1% of the population or more than 20%?"
How would you currently answer this question based on the research you report here?
If a five or higher on both scales is one way to operationalize proto-EA (you said 81% of self-ID'd EAs had that or higher), do you think the NYU estimates (6%?) or MTurk estimates (14%?) are more representative of the "relevant" population?
Thank you!
If we operationalize proto-EAs as scoring five or higher on both scales, then I’d say the 14% estimate is closer to the actual number of proto-EAs in the general (US) population (though it’s not clear if this is the relevant population or operationalization, more on that below).
First, the MTurk sample is much more representative of the general population than the NYU sample. The MTurk sample is also larger (n = 534) than the NYU sample (n = 96) so the MTurk number is a more robust estimate. Lastly, the NYU sample mostly consiste...
Really liked this post, thanks.
Minor comment, wanted to flag that I think "Open Philanthropy has also reduced how much they donate to GiveWell-recommended charities since 2017." was true through 2019, but not in 2020, and we're expecting more growth for the GW recs (along with other areas) in the future.
Obv disclaimer: not a tax adviser.
Seems like yes, based on this (https://www.thebalancesmb.com/can-my-business-deduct-charitable-contributions-397602). And according to this (https://www.philanthropy.com/article/nonprofits-win-extended-charitable-deductions-and-paycheck-protection-loans-in-stimulus-bill), the recent stimulus bill increased the limit for 2021 to 25% of corporate taxable income (instead of the normal 10%).
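To put numbers on that (my own reading of those links, and again, not tax advice): a corporation with $1M of taxable income could deduct up to $250k of qualifying charitable contributions for 2021, versus $100k under the normal 10% limit.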
Re your last paragraph, I just wanted to drop @jefftk's (IMO) amazing post here: https://www.jefftk.com/p/candy-for-nets
Someone emailed me this and asked for thoughts, so I thought I'd share some cleaned-up reactions here (full disclosure: I work at Open Phil on some related issues):
Thanks for these comments Alex. I agree that it would be best to look at how growth translates into subjective wellbeing, and I'm planning to do this, or to get someone else to do it, soon. However, I'm not sure this defeats our main claim, which is that research on and advocacy for growth are likely to be better than GW top charities. There are a few arguments for this.
(1) GW estimates that deworming is the best way to improve economic outcomes for the extreme poor, in expectation. This seems to me very unlikely to be true since deworming explain...
I think this argument is wrong for broadly the reasons that pappubahry lays out below. In particular, I think it's a mistake to deploy arguments of the form, "the benefits from this altruistic activity I'm considering are lower than the proportional benefits from donations I'm not currently making, therefore I should not do this activity."
Ryan does it when he says:
...How long would it take to create $2k of value? That's generally 1-2 weeks of work. So if kidney donation makes you lose more than 1-2 weeks of life, and those weeks constitute fun
I agree, and I'd add that what I see as one of the key ideas of effective altruism, that people should give substantially more than is typical, is harder to get off the ground in this framework. Singer's pond example, for all its flaws, makes the case for giving a lot quite salient, in a way that I don't think general considerations about maximizing the impact of your philanthropy in the long term are going to.
Yes, kidney selling is officially banned in nearly every country. My preference, at least in the U.S. context, would be to have the government offer benefits to donors to ensure high quality and fair allocation: http://www.nytimes.com/2011/12/06/opinion/why-selling-kidneys-should-be-legal.html
Just wanted to say that I enjoyed reading this and the section starting with "Online:" and your concluding question really resonated with me.