Effective giving


A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive-sounding concepts (democratic, transparent, accountable) without well-thought-through mechanisms. I...

8Dustin Moskovitz3d
My decision criterion would be whether the chosen grants look likely to be better than OP's own grants in expectation. (N.b. I don't think comparing to the grants people like least ex post is a good way to do this.) So ultimately, I wouldn't be willing to pre-commit large dollars to such an experiment. I'm open-minded that it could be better, but I don't expect it to be, so that would violate the key principle of our giving.

Re: large costs to small-scale experiments, it seems notable that those are all costs incurred by the community rather than $ costs. So if the community believes in the ROI, perhaps they are worth the risk?
5Jason3d
Because the donor lottery weights by donation size, the Benefactor or a large earning-to-give donor is much more likely to win than someone doing object-level work who can only afford a smaller donation. Preferences will still get funded in proportion to each donor's financial resources, so the preferences of those with little money remain almost unaccounted for (even though there is little reason to think they wouldn't do as well as the more likely winners). Psychologically, I can understand why the current donor lottery would be unappealing to most smaller donors.

Weighting by size is necessary if you want to make the donor lottery trustless -- because a donor's EV is the same as if they donated to their preferred causes directly, adding someone who secretly wants to give to a cat rescue doesn't harm other donors. But if you employ methods of verifying trustworthiness, a donor lottery doesn't have to be trustless. Turning the pot over to a committee of lottery winners, rather than a single winner, would further increase confidence that the winners would make reasonable choices.

Thus, one moderate step toward amplifying the preferences of those with less money would be a weighted donor lottery -- donors would get a multiplier on their monetary donation amount based on how much time-commitment skin in the game they had. Of course, this would require other donors to accept a lower percentage of tickets than their financial contribution percentage, which is where people or organizations with a lot of money would come in. The amount of funding directed by Open Phil (and formerly, FTX) has caused people to move away from earning-to-give, which reduced the supply of potential entrants who would be willing to accept a significantly lower share of tickets per dollar than smaller donors. So I would support large donors providing some funds to a weighted donor lottery in a way that boosts the winning odds -- either solo or as part of a committee -- for dono
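The weighted-lottery mechanism described above can be sketched in a few lines. This is a rough illustration, not a proposal spec: the donor names, amounts, and multipliers are all hypothetical, and the "time-commitment" multiplier is just a number handed in rather than anything verified.

```python
import random

def weighted_donor_lottery(donors, seed=None):
    """Draw one winner with odds proportional to dollars times a multiplier.

    donors: list of (name, dollars, multiplier) tuples. A multiplier > 1
    boosts donors with more time-commitment "skin in the game", giving
    them more tickets per dollar than an unweighted lottery would.
    Returns the winner's name and their share of total tickets.
    """
    rng = random.Random(seed)
    tickets = [(name, dollars * multiplier) for name, dollars, multiplier in donors]
    total = sum(t for _, t in tickets)
    draw = rng.uniform(0, total)
    cumulative = 0.0
    for name, t in tickets:
        cumulative += t
        if draw <= cumulative:
            return name, t / total

# Hypothetical example: a small donor doing object-level work gets a 5x
# boost, so they hold 25k of 125k tickets instead of 5k of 105k.
donors = [
    ("large_donor", 100_000, 1.0),
    ("small_donor", 5_000, 5.0),
]
winner, share = weighted_donor_lottery(donors, seed=0)
```

The point of the sketch is the trade-off Jason names: the large donor's ticket share (80%) is below their financial share (~95%), which only works if they accept that discount deliberately.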

openbook.fyi is a new website where you can see ~4,000 EA grants from donors including Open Phil, FTX Future Fund, and EA Funds in a single place.

Why?

If you're a donor: OpenBook shows you how much orgs have...

2JaimeRV17h
This is very helpful, thanks for doing it! How is the maintenance of the site planned? Is there a person in charge of periodically checking the different sources of grants and updating the page, or is there something automated? As a possible feature it would be nice if it would show somewhere when this database was last updated :)
1Rachel Weinberg1h
Yeah, that's the hard part that I'm going to be thinking about a lot this week. My guess is some funders will be easy to update automatically because they release their grants in a CSV and I already have scripts for reading them (EA Funds, Open Phil), but others need to be done very manually, which seems super annoying (ACX). I would probably only add the donations of major funds and not scrape people's blogs or whatever Vipul/Issa did to add a lot of smaller donations, except maybe connecting with Giving What We Can for individuals' donation data. Anyway, I probably don't want to spend more than ~3 hours once per month updating the data, but I'll try to be as efficient as possible with that time!
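The CSV route described above might look roughly like this. To be clear, the column names and sample rows are invented for illustration; each funder's real export would need its own mapping to a common schema.

```python
import csv
import io

def load_grants(csv_text, funder):
    """Parse one funder's grant export into a common schema.

    The columns used here (grantee, amount, date) are hypothetical;
    a real script would map each funder's actual headers onto them.
    """
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "funder": funder,
            "grantee": row["grantee"],
            "amount": float(row["amount"]),
            "date": row["date"],
        })
    return rows

# Hypothetical exports from two funders, merged into one list.
ea_funds = "grantee,amount,date\nOrg A,10000,2023-01-05\n"
open_phil = "grantee,amount,date\nOrg B,250000,2022-11-20\n"
grants = load_grants(ea_funds, "EA Funds") + load_grants(open_phil, "Open Phil")
```

Once every automatable funder has a small adapter like this, the monthly update reduces to downloading the new CSVs and re-running the merge; only the manual sources (ACX-style) eat into the ~3-hour budget.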

UPDATE 2 FEBRUARY 2023: I've uploaded a new and streamlined version of the FI-lanthropy Calculator, based on the comments of many.

REQUEST

I am soliciting feedback on a tool I have made entitled the "FI-lanthropy Calculator". The target audience includes...

1Joseph Lemien1d
Thanks for the links. I'll explore those, and maybe even end up updating an overly conservative perspective on safe withdrawal rates and long retirements.  :)

I've been keeping an overview of the public effective giving ecosystem that I thought would be worth sharing in its own post (I've previously referred to it here). I've noticed people often aren't aware of many of...

2david_reinstein13h
That makes sense to me

I've recently come across the opportunity to influence a decent amount (20-50k) of corporate funding towards charitable causes through nominating grant recipients. The corporation is a major player in the agricultural industry. The biggest catch is that...

4Answer by BrownHairedEevee21h
CLIMATE CHANGE

Giving Green's 2023 climate recommendations [https://www.givinggreen.earth/top-climate-change-nonprofit-donations-recommendations] are:

* Clean Air Task Force
* Evergreen Collaborative
* Good Energy Collective
* The Good Food Institute (GFI)
* Industrious Labs

Most of these orgs do climate advocacy in the United States.

ANIMAL WELFARE

Animal Charity Evaluators (ACE)'s current top charities [https://animalcharityevaluators.org/donation-advice/recommended-charities/] are:

* Faunalytics
* The Humane League
* GFI
* Wild Animal Initiative

All four organizations operate in the U.S.
1Answer by Sam Battis1d
Environmental law groups like Earthjustice have quite a strong return on investment, from my limited research. Not sure if it would be too controversial, but it does check the climate change box.
6ThomasW1d
GiveDirectly has a program for the US [https://www.givedirectly.org/united-states/] that you can donate to. I don't really know how good it is, but the organization in general seems excellent.

Project for Awesome (P4A) is a charitable initiative running February 17th-19th this year (2023), and videos must be submitted by 11:59am (noon) Eastern time on Wednesday, February 15th. This is a good opportunity to raise money for...

GWWC lists StrongMinds as a “top-rated” charity. Their reason for doing so is that Founders Pledge determined they are cost-effective in its report on mental health.

I could say here, “and that report was written in 2019...

22JoelMcGuire17d
I’m belatedly making an overall comment about this post.

I think this was a valuable contribution to the discussion around charity evaluation. We agree that StrongMinds’ figures about their effect on depression are overly optimistic. We erred by not pointing this out in our previous work and not pushing StrongMinds to cite more sensible figures. We have raised this issue with StrongMinds and asked them to clarify which claims are supported by causal evidence.

There are some other issues that Simon raises, like social desirability bias, that I think are potential concerns. The literature we reviewed in our StrongMinds CEA (page 26 [https://www.happierlivesinstitute.org/report/strongminds-cost-effectiveness-analysis/]) doesn’t suggest it’s a large issue, but I only found one study that directly addresses this in a low-income country (Haushofer et al., 2020 [https://www.nber.org/papers/w28106]), so the evidence appears very limited here (but let me know if I’m wrong). I wouldn’t be surprised if more work changed my mind on the extent of this bias. However, I would be very surprised if this alone changed the conclusion of our analysis. As is typically the case, more research is needed.

Having said that, I have a few issues with the post and see it as more of a conversation starter than the end of the conversation. I respond to a series of quotes from the original post below. If there's confusion about our methodology, that’s fair, and I’ve tried to be helpful in that regard. Regarding our relationship with StrongMinds, we’re completely independent.

This is false. As we’ve explained before [https://forum.effectivealtruism.org/posts/uY5SwjHTXgTaWC85f/don-t-just-give-well-give-wellbys-hli-s-2022-charity?commentId=e2PyZq2hFqzLEdgdp], our evaluation of StrongMinds is primarily based on a meta-analysis of psychological interventions in LMICs, which is a distinction between our work and Founders Pledge that means that many of the problems mentioned apply less to our wo

This post summarizes a Founders Pledge shallow investigation on direct communications links (DCLs or "hotlines") between states as global catastrophic risks interventions. As a shallow investigation, it is a rough attempt at understanding an issue, and is...

7Greg S6d
TL;DR of the below post is that I agree with the brief remarks about 1.5 Track Dialogues made in the Founders Pledge document you cite at f/n 68.

I'd like to cheerlead briefly for a diversity of channels. That is, sometimes we'll see countries put a public freeze on one another where the most senior and high-profile figures (presidents, prime ministers, foreign affairs ministers etc) don't talk to each other for reasons of posturing over an issue. In countries where there isn't a diversity of channels (by which I mean, most interactions occur between those most senior officials and a formal diplomatic channel), this can create risky situations because there's no longer a way to clarify (that is, communications become indirect via the oblique public statements, and prone to cross-cultural and other confusion). "Hotlines" are less relevant in this situation because the point of the posturing is that the countries aren't talking to one another. Picking up the hotline would be off-message.

What reduces risk in that situation is a diversity of channels. The post discussed diplomatic channels, and I won't repeat that. We also often think about a 'back channel' in the sense of a confidant of one leader talking to a confidant of another leader - a proxy conversation that allows the posturing to continue but some more direct communication to occur. And that can help (appreciating the clarity and timeliness points made in your post).

The key thing I'd add to that point (in support of the points raised about track 2) is at-level connections within a bureaucracy (and to a lesser extent people-to-people connections). That is, where lots of officials in a country know their counterpart in the other country, a formal diplomatic freeze is of much less practical concern because the bulk of all those at-level communications means each country remains pretty much in tune with what the other is doing (there isn't likely to be a spiral of miscommunication because, when the preside

A recent post by Simon_M argued that StrongMinds should not be a top recommended charity (yet), and many people seemed to agree. While I think Simon raised several useful points regarding StrongMinds, he didn't engage with the cost-effectiveness analysis of...

2JoelMcGuire7h
I will try to summarise and comment on what I think are some possible suggestions you raise, which happen to align with your three sections.

1. Discard the results that don't result in a discount to psychotherapy [1].

If I do this, the average comparison of PT to CT goes from 9.4x --> 7x. That seems like a plausible correction, but I'm not sure it's the one I should use. I interpreted these results as indicating that none of the tests give reliable results. I'll quote myself:

I'm really unsure if 9.4x --> 7x is a plausible magnitude of correction. The truth of the perfect test could suggest a greater or smaller correction; I'm really uncertain given the behavior of these tests. That leaves me scratching my head at what principled choice to make. If we had discussed this beforehand and I had said "Okay, you've made some good points, I'm going to run all the typical tests and publish their results", would you have advised me not to even try, and instead to make ad hoc adjustments? If so, I'd be surprised, given that's the direction I've taken you to be arguing I should move away from.

2. Compare the change of all models to a single reference value of 0.5 [2].

When I do this, and again remove anything that doesn't produce a discount for psychotherapy, the average correction leads to a 6x cost-effectiveness ratio of PT to CT. This is a smaller shift than you seem to imply.

3. Fix the weighting between the general and StrongMinds-specific evidence [3].

Gregory is referring to my past CEA of StrongMinds in Guesstimate [https://www.getguesstimate.com/models/18652], where if you assign an effect size of 0 to the meta-analytic results it only brings StrongMinds' cost-effectiveness to 7x GiveDirectly. While such behavior is permissible in the model, obviously if I thought the effect of psychotherapy in general was zero or close to it, I would throw my StrongMinds CEA in the bin.

As I noted in my previous comment discussing the next version of my analysis, I
3ryancbriggs5h
I will probably have longer comments later, but just on the fixed effects point, I feel it’s important to clarify that they are commonly used in this kind of situation (when one fears publication bias or small-study-type effects). For example, here is a slide deck [http://www.meta-analysis.cz/conference/Stanley_p.pdf] from a paper [https://onlinelibrary.wiley.com/doi/abs/10.1111/ecoj.12461] presentation with three *highly* qualified co-authors. Slide 8 reads:

* To be conservative, we use ‘fixed-effect’ MA or our new unrestricted WLS—Stanley and Doucouliagos (2015)
* Not random-effects or the simple average: both are much more biased if there is publication bias (PB).
* Fixed-effect (WLS-FE) is also biased with PB, but less so; thus will over-estimate the power of economic estimates.

This is basically also my takeaway. In the presence of publication bias or these small-study-type effects, random effects "are much more biased" while fixed effects are "also biased [...] but less so." Perhaps there are some disciplinary differences going on here, but what I'm saying is a reasonable position in political science, and Stanley and Doucouliagos are economists, and Ioannidis is in medicine, so using fixed effects in this context is not some weird fringe position.

(disclosure: I have a paper under review where Stanley and Doucouliagos are co-authors)
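For readers unfamiliar with the distinction being argued over: a fixed-effect pooled estimate is just an inverse-variance weighted average, which is why it downweights small noisy studies. A rough sketch, with invented numbers (not from any actual study in this debate):

```python
def fixed_effect_estimate(effects, ses):
    """Inverse-variance weighted ('fixed-effect') pooled estimate.

    Each study is weighted by 1/SE^2, so small imprecise studies -- the
    ones most prone to publication bias -- get far less influence than
    they would under a simple average.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return pooled, se_pooled

# Illustrative pattern consistent with publication bias: two large,
# precise studies with modest effects and one small, noisy study
# with a big effect.
effects = [0.10, 0.12, 0.80]
ses = [0.05, 0.05, 0.40]
pooled, se = fixed_effect_estimate(effects, ses)
```

With these numbers the pooled estimate lands near 0.12, close to the precise studies, whereas the simple average would be 0.34 — which is the sense in which the simple average (and, to a lesser degree, random effects) is "much more biased" when small studies are inflated.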
2JoelMcGuire5h
I may respond later after I’ve read more into this, but briefly — thank you! This is interesting and something I’m willing to change my mind about. Also, I didn’t know about WAAP, but it sounds like a sensible alternative.

The Joshua Greene and Lucius Caviola article about their givingmultiplier.org work has just been published. Here's the abstract:

The most effective charities are hundreds of times more impactful than typical charities. However, most donors favor charities with personal/emotional

...

Giving What We Can members have pledged to donate at least 10% of what they earn to help others as best they can, but this is broader than it was originally. The pledge was specific to global poverty, and only...

1Lucas S.3d
Interesting — that’s not how I would have read that language.  I would have instead said that this phrase clarifies that the person pledging will repeatedly identify the most effective organizations, rather than just identifying the best organization(s) once at the time of the pledge and then donating to those entities throughout their life.

The idea of effective altruism has taken a lot of negative publicity recently due to its association with FTX. But has this backlash extended to orgs like GiveWell, Animal Charity Evaluators, and Giving Green that aren't as closely associated with FTX?

50Answer by Karolis Ramanauskas8d
I've been tracking how many new GWWC pledges are taken each month using this Colab for a while: https://colab.research.google.com/drive/1BR8nrIAVy7BdD7DpmMZ9CdcQZzBoHfUm?usp=sharing

Using this single data point, the picture is mixed, I'd say, especially once you consider that What We Owe The Future came out in August 2022 and recession fears started at a similar time. There is also some seasonality of pledging around January, so I did year-over-year % change for 2021-22. I'm not including January 2023 yet, as it's not over and it also takes some time for people to show up on the list after they take the pledge.

There is a longer-term plot in the Colab, but it's too ugly to post here and I'm too lazy to fix it. If someone could make a nicer-looking bar chart, I would appreciate it.
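The year-over-year comparison described above can be sketched as follows; the pledge counts are invented placeholders, not actual GWWC data. Comparing each month to the same month a year earlier is what removes the January seasonality.

```python
def yoy_percent_change(monthly_counts):
    """Year-over-year % change in monthly pledge counts.

    monthly_counts: dict mapping (year, month) -> count. A month only
    appears in the output if the same month in the prior year exists,
    so seasonal spikes are compared against each other.
    """
    changes = {}
    for (year, month), count in monthly_counts.items():
        prev = monthly_counts.get((year - 1, month))
        if prev:
            changes[(year, month)] = 100.0 * (count - prev) / prev
    return changes

# Hypothetical counts only: a January dip and a June rise.
counts = {(2021, 1): 200, (2022, 1): 150, (2021, 6): 50, (2022, 6): 60}
changes = yoy_percent_change(counts)
# changes[(2022, 1)] is -25.0 and changes[(2022, 6)] is 20.0
```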

I apologize in advance for asking the EA forum to help us activate a campaign, but because I believe this to be an effective, new and interesting way to build the community and get more incremental money...

Epistemic status: speculative

This is a response to recent posts including Doing EA Better and The EA community does not own its donors' money. In order to make better funding decisions, some EAs have called for democratizing EA's funding systems....

3Davidmanheim11d
I agree that this is tricky to do, because the processes aren't so well publicly documented. (Not that they should be - funders providing information about their processes make them more gameable, as most government funding is!)  I do think that you could have asked more people with knowledge of the process to review the post, and also think that the Survival and Flourishing Fund documents what they do pretty clearly, including both their writeup, and at least one forum post by a reviewer documenting it pretty extensively.
1ben.smith11d
At a pinch, I would say review might be more worthwhile for topics where the work builds on a well-developed but pre-existing body of research. So, funding a graduate to take time to learn about AI Safety full-time as a bridge to developing a project probably wouldn't benefit from a review, but an application to develop a very specific project based on a specific idea probably would. I don't have a sense on how often five-to-low-six-figure grants involve very specific ideas. If you told me they usually don't, I would definitely update against thinking a peer review would be useful in those circumstances.
3Jason11d
I have no idea, to be honest. My belief that smaller grants might not be the best trial run for cost-effectiveness is based more on assumptions that (1) highly qualified reviewers might not think reviewing grants in that range is an effective use of their time; and (2) very quick reviews are likely to identify only clearly erroneous exercises of grantmaking discretion. Either assumption could be wrong! But I think at that grant size, the cost-effectiveness profile might be more favorable for a system of peer review under specified circumstances rather than as an automatic practice. Knowing that they were only being asked when there was a greater chance their assistance might be outcome-determinative might help with attracting quality reviewers too.

One of the roles of Giving What We Can (GWWC) is to help its members and other interested people figure out where to give. If you go to their site and click "start giving" they list charitable funds, including GiveWell's All...

15Luke Freeman25d
Yep - Jeff's pretty much captured it all here. GWWC's mission is to "make giving effectively and significantly a cultural norm" and the pledge plays a big part in that, as does advocating for and educating about effective giving. Supporting donors/members in giving effectively has always been a part of GWWC but what that's looked like has changed over the years (from very detailed charity evaluation through to just linking off to GiveWell/ACE/EA Funds when there was no one working full time on GWWC).
4MatthewDahlhausen24d
Thanks for the clarification! I took the pledge in 2016 which coincided with when the research department disbanded per Jeff's comment. I think that explains why I perceived GWWC to not be in the business of doing evaluations. Glad to see "evaluate the evaluators" is working its way back in.

What is a subforum?

Subforums are spaces for discussion, questions, and more detailed posts about particular topics. Full posts in this space may also appear on the Frontpage, and posts from other parts of the EA Forum may appear here if relevant tags are applied. Discussions in this space will never appear elsewhere.

Welcome to the effective giving subforum!

This is a dedicated space for discussions about effective giving. 

Get involved: