All of Wayne_Chang's Comments + Replies

Thanks, Ben, for writing this up! I very much enjoyed reading your intuition.

I was a bit confused by your reasoning in a few places (though to be fair, I didn't read your article super carefully).

  1. Nvidia's market price can be used to calculate its expected discounted profits over time, but it can't tell us when those profits will take place. A high market cap can imply rapid short-term growth to US$180 billion of revenues by 2027 or a more prolonged period of slower growth to US$180B by 2030 or 2035. Discount rates are an additional degree of freedom. We can
... (read more)
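To make point 1 concrete, here is a minimal discounted-cash-flow sketch (all profit figures and discount rates are hypothetical, not taken from Ben's post) showing how both the timing of profits and the discount rate move the present value, so a market cap alone doesn't pin down when the profits arrive:

```python
def present_value(profits, discount_rate):
    """Discount a list of annual profits (year 1, 2, ...) back to today."""
    return sum(p / (1 + discount_rate) ** t for t, p in enumerate(profits, start=1))

# Two hypothetical profit paths in $bn: fast growth that peaks early vs
# slower growth that reaches the same level years later.
fast_path = [60, 90, 120, 120, 120]
slow_path = [30, 45, 65, 90, 120, 120, 120, 120]

for rate in (0.08, 0.12):
    print(f"discount rate {rate:.0%}: "
          f"fast path PV = {present_value(fast_path, rate):.0f}bn, "
          f"slow path PV = {present_value(slow_path, rate):.0f}bn")
```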
3
Benjamin_Todd
9h
Hi Wayne, Those are good comments!

On the timing of the profits, my first estimate is for how far profits will need to eventually rise. To estimate the year-by-year figures, I just assume revenues grow at the 5yr average rate of ~35% and check that's roughly in line with analyst expectations. That's a further extrapolation, but I found it helpful to get a sense of a specific plausible scenario. (I also think that if Nvidia revenue growth looked to be under 20% p.a. the next few quarters, the stock would sell off, though that's just a judgement call.)

On the discount rate, my initial estimate is for the increase in earnings for Nvidia relative to other companies (which allows us to roughly factor out the average market discount rate), assuming that Nvidia is roughly as risky as other companies. In the appendix I discuss how the estimate could change if Nvidia is riskier than other companies. Using Nvidia's beta as an estimate of the riskiness doesn't seem to result in a big change to the bottom line.

I agree analyst expectations are a worse guide than market prices, which is why I tried to focus on market prices wherever possible.

The GPU lifespan figures come in when going from GPU spending to software revenues. (They're not used for Nvidia's valuation.) If $100bn is spent on GPUs this year, then you can amortise that cost over the GPUs' lifespan. A 4-year lifespan would mean data centre companies need to earn at least $25bn of revenues per year for the next 4 years to cover those capital costs. (And then more to pay for the other hardware and electricity they need, as well as profit.)

On consumer value, I was unsure whether to just focus on revenues or make this extra leap. The reason I was interested in it is that I wanted to get a more intuitive sense of the scale of the economic value AI software would need to create, in terms that are closer to GDP, or % of work tasks automated, or consumer surplus. Consumer value isn't a standard term, but i
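A small sketch of the two calculations described in this reply (the $100bn spend, 4-year lifespan, and ~35% growth rate are from the reply; the starting revenue figure is an illustrative assumption, not taken from the post):

```python
# Amortising GPU capex over the hardware's lifespan (figures from the reply above).
gpu_spend = 100e9                      # $100bn spent on GPUs this year
lifespan_years = 4
annual_capital_cost = gpu_spend / lifespan_years
print(f"Revenue needed per year just to cover GPU capital: ${annual_capital_cost / 1e9:.0f}bn")

# Extrapolating revenue at the ~35% five-year average growth rate.
revenue = 60e9                         # illustrative starting revenue, not from the post
for year in range(1, 6):
    revenue *= 1.35
    print(f"Year {year}: ~${revenue / 1e9:.0f}bn revenue")
```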

What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view.


This criticism seems unfair to me:

  1. It seems applicable to any type of advocacy. Those who promote global health and poverty are likely biased toward foreign people. Those who promote longtermism are likely biased toward future people. Those who advocate for effective philanthropy are likely biased toward effectiveness and/or philanthropy.
  2. There's no effective counter-argument since, almo
... (read more)
6
Jeff Kaufman
7mo
I'm going to simplify a bit to make this easier to talk about, but imagine a continuum in how much people start off caring about animals, running from 0% (the person globally who values animals least) to 100% (values most). Learning that someone who started at 80% looked into things more and is now at 95% is informative, and someone who started at 50% and is now at 95% is more informative. This isn't "some people are biased and some aren't" but "everyone is biased on lots of topics in lots of ways". When people come to conclusions that point in the direction of their biases, others should generally find that less convincing than when they come to ones that point in the opposite direction.

What I would be most excited about seeing is for people who currently are skeptical that animals matter anywhere near as much as Rethink's current best-guess moral weights would suggest to treat this as an important disagreement and not continue just ignoring animals in their cause prioritization. Then they'd have a reason to get into these weights that didn't trace back to already thinking animals mattered a lot. I suspect they'd come to pretty different conclusions, based on making different judgement calls on what matters in assessing worth or how to interpret ambiguous evidence about what animals do or are capable of. Then I'd like to see an adversarial collaboration.

Thanks so much for such a thorough and great summary of all the various considerations! This will be my go-to source now for a topic that I've been thinking about and wrestling with for many years.

I wanted to add a consideration that I don't think you explicitly discussed. Most investment decisions done by philanthropists (including the optimal equity/bond split) are outsourced to someone else (financial intermediary, advisor, or board). These advisors face career risk (i.e. being fired) when making such decisions. If the advisor recommends something that ... (read more)

Thanks for posting this, Jonathan! I was going to share it on the EA Forum too but just haven't gotten around to it.

I think GIF's impact methodology is not comparable to GiveWell's. My (limited) understanding is that their Practical Impact approach is quite similar to USAID's Development Innovation Ventures' impact methodology. DIV's approach was co-authored by Michael Kremer so it has solid academic credentials. But importantly, the method takes credit for the funded NGO's impact over the next 10 years, without sharing that impact with subsequent funders.... (read more)

Thanks for your response, Joel!

Stepping back, CEARCH's goal is to identify cause areas that have been missed by EA. But to be successful, you need to compare apples with apples. If you're benchmarking everything to GiveWell Top Charities, readers expect your methodology to be broadly consistent with GiveWell's and their conservative approach (and for other cause areas, consistent with best-practice EA approaches). The cause areas that are standing out for CEARCH should be because they are actually more cost-effective, not because you're using a more lax me... (read more)

1
Joel Tan
1y
Just to clarify, one should definitely expect cost-effectiveness estimates to drop as you put more time into them, and I don't expect this cause area to be literally 1000x GiveWell. Headline cost-effectiveness always drops, from past experience, and it's just the optimizer's curse: over- (or under-) performance comes partly from the cause area being genuinely better (or worse) but also partly from random error that you fix at deeper research stages. To be honest, I've come around to the view that publishing shallow reports - which are really just meant for internal prioritization - probably isn't useful, insofar as it can be misleading.

As an example of how we discount more aggressively at deeper research stages, consider our intermediate hypertension report - there was a fairly large drop from around 300x to 80x GiveWell, driven by (among other things): (a) taking into account speeding-up effects, (b) downgrading confidence in advocacy success rates, (c) updating for more conservative costing, and (d) applying GiveWell-style epistemological discounts (e.g. taking into account a conservative null-hypothesis prior, or discounting for publication bias/endogeneity/selection bias etc.).

As for what our priors should be with respect to whether a cause can really be 100x GiveWell - I would say there's a reasonable case for this, if: (a) one targets NCDs and other diseases that grow with economic growth (instead of being solved by countries getting richer and improving sanitation/nutrition/healthcare systems etc.); and (b) there are good policy interventions available, because it really does matter that (i) a government has enormous scale/impact; (ii) its spending is (counterfactually) cheap relative to EA money that would have gone to AMF and the like; and (iii) policy tends to be sticky, so the impact lasts in a way that distributing malaria nets or treating depression may not.
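Since the optimizer's curse does a lot of work in this reply, here is a toy simulation of it (not CEARCH's model; all numbers are arbitrary): when many causes with the same true value are measured with noise, the one with the highest shallow estimate looks far better than it really is, and deeper research predictably revises it downwards.

```python
import random

# Minimal optimizer's-curse sketch: many causes with equal true value,
# each measured with noise; the top-ranked estimate overstates the truth.
random.seed(0)
n_causes, true_value, noise_sd = 50, 10.0, 5.0

estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_causes)]
best = max(estimates)
print(f"True cost-effectiveness of every cause: {true_value}")
print(f"Best shallow estimate across {n_causes} causes: {best:.1f}")
# Deeper research shrinks the noise, so headline figures for the 'winners' drop.
```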

Hi Joel, I skimmed your report really quickly (sorry) but suspect that you did not account for soda taxes eventually being passed anyway. So the modeled impact of any intervention shouldn't run to 2100 or beyond but only out a few years (I'd think <10 years), until soda taxes would be passed without any active intervention. You are trying to measure the impact of a counterfactual donated dollar in the presence of all the forces already at play that are pushing for soda taxes (which is why some countries already have them). This makes for a more plausible model, and I believe it's how LEEP or OpenPhil model policy intervention cost-effectiveness (I could be wrong though).

2
Joel Tan
1y
Hi Wayne, You're right! I'm currently working on the intermediate report for diabetes, and one factor we're looking at that the shallow report did not cover is the speeding-up effect, which we model by looking at the base rate from past data (i.e. country-years in which passage occurred, divided by total country-years). This definitely cuts into the headline cost-effectiveness estimate.

On a related note, one issue, I think, is whether we think of tax policy success as counterfactually mutually exclusive, or as additive. (A) For the former, as you say, the idea is that the tax would have occurred anyway. (B) For the latter, the idea is that the tax an EA or EA-funded advocacy organization pushes shifts the tax-over-time curve upwards (i.e. what the tax rate is over time; presumably this slopes upwards, as countries get stricter). In short, we're having a counterfactual effect because the next round of tax increases don't replace so much as add on to what we've achieved, and our actions ensure that the tax rate at any one point in time is systematically higher than it otherwise would have been.

I think reality is a mix between viewpoints (A) & (B) - success means draining the political capital to do more in the short to medium term, but you're probably also ensuring that the tax rate is systematically higher going forward. In practice, I tend to model using (A), just to be conservative.
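For concreteness, a rough sketch of viewpoint (A) as described above, where advocacy only brings forward a tax that would have passed anyway and the baseline is set by the country-year base rate; all numbers are placeholders, not CEARCH's:

```python
# Rough sketch of viewpoint (A): advocacy only speeds up a tax that would
# pass anyway, so impact = annual benefit x expected years of advancement.
# Numbers are hypothetical, not from the CEARCH report.

annual_benefit = 1_000_000      # value of the tax being in force for one year (arbitrary units)
base_rate = 0.05                # passages per country-year from historical data
expected_years_to_passage = 1 / base_rate    # ~20 years without advocacy
years_if_advocacy_succeeds = 3               # assumed
p_advocacy_success = 0.10                    # assumed

years_brought_forward = expected_years_to_passage - years_if_advocacy_succeeds
expected_impact = p_advocacy_success * years_brought_forward * annual_benefit
print(f"Expected counterfactual impact: {expected_impact:,.0f}")
```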

Got it. But I think the phrasing for the number of animals that die is confusing then. Since you say "100 other human [sic] would probably die with me in that minute," the reference is to how many animals would also die during that minute. I think what you want to say is, for every human death, how many animals would die, but that's not the current phrasing (and by that logic, the number of humans that would die per human death would be 1, not 100).

I'd suggest making everything consistent on a per-second basis as smaller numbers are more relatable. So  1 other human would die with you that second, along with 10 cows, etc.

2
rosehadshar
2y
I've changed the wording to make it clearer that I mean deaths per human per minute. I don't want to change it to second; for me dying in the next minute is easier to imagine/take seriously than dying in the next second (though I imagine this varies between people).

Thanks for writing this! The very last sentence seems off. Did you mean to say every second (instead of minute)? Also, the number of farm animals that die every second should be 1/60 (not 1/120) of that in the “minute” table above.

This last sentence was quite shocking for me to read. It’s sad…but very powerful.

2
rosehadshar
2y
Thanks for picking this up Wayne! The mistake I made was the number of people: it should have read 115 other people, not one. I did mean minute, and the number of animals is 1/116 to get a number of animals per human, rather than 1/60 to get a number of animals per second. I've corrected the number now. (Thanks also to someone else who messaged me about the error.)
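To keep the unit conversions in this thread straight, here is a tiny check (the 116 deaths per minute, i.e. you plus 115 others, comes from the corrected comment; the animal figure is a placeholder, not from the post):

```python
# Unit-conversion check for the figures discussed above. The 116 human
# deaths per minute comes from the corrected comment; the animal figure
# below is a placeholder, not taken from the post.

human_deaths_per_minute = 116
animal_deaths_per_minute = 150_000   # placeholder

per_second = animal_deaths_per_minute / 60                             # divide by 60 for per-second
per_human_death = animal_deaths_per_minute / human_deaths_per_minute   # divide by 116 for per-human-death
print(f"Animals per second: {per_second:,.0f}")
print(f"Animals per human death: {per_human_death:,.0f}")
```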

Minor suggestion: in your title and summary, please just write out "10 k" as 10,000. No need to abbreviate when people may be unsure that it's actually 10,000 (given that it's such a large difference). 

2
Vasco Grilo
2y
Thanks. I have updated it in the title, and clarified it in the Summary.

I agree with Michael that concrete examples would be very helpful, even for researchers.  A post should be informative and persuasive, and examples almost always help with that. In this case, examples can also make clear the underlying logic, and where the explanation can be confusing. 

For example, let's think about investing in alternative protein companies as a way to tackle animal welfare. Assume that in a future state where lots more people eat real meat (bad world state), the returns for alt-proteins in that state are low but cost-effectiven... (read more)

7
jh
2y
Thank you Wayne and Michael for the helpful nudges and encouragement. I agree that the table at the bottom of the post was at best ambiguous. I have now deleted it from this post, revised it and turned it into this new post with several examples. This current post then, without the table, remains to make the point that 'mission hedging' is just a subset of 'mission correlated investing'. And that mission correlation research needs to focus on forecasting cost-effectiveness, not whether the world is 'good' or 'bad'.

This post (and the series it summarizes) draws on the scientific literature to assess different ways of considering and classifying animal sentience. It persuasively takes the conversation beyond an all-or-nothing view and is a significant advancement for thinking about wild animal suffering, as well as farm animal welfare beyond just cows, pigs, and chickens.

Thanks for the clarification, Owen! I had misunderstood 'investment-like' as simply having return-compounding characteristics. To truly preserve optionality though, these grants would need to remain flexible (can change cause areas if necessary; so grants to a specific cause area like AI safety wouldn't necessarily count) and liquid (can be immediately called upon; so Founder's Pledge future pledges wouldn't necessarily count). So yes, your example of grants that result "in more (expected) dollars held in a future year (say a decade from now) by careful t... (read more)

Hi Owen, even if you're confident today about identifying investment-like giving opportunities with returns that beat financial markets, investing-to-give  can still be desirable. That's because investing-to-give preserves optionality. Giving today locks in the expected impact of your grant, but waiting allows for funding of potentially higher impact opportunities in the future.

The secretary problem comes to mind (not a perfect analogy but I think the insight applies). The optimal solution is to reject the initial ~37% of all applicants and then accep... (read more)
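A quick simulation of the classic secretary problem rule referenced above, under the standard assumption that the goal is to maximise the chance of picking the single best candidate (this is an illustrative sketch, not part of the original comment):

```python
import random

# Quick simulation of the classic secretary problem: skip the first ~37%
# (1/e) of candidates, then take the first one better than everything seen.
# Success = hiring the single best candidate.
random.seed(1)

def hire(n, skip_fraction):
    ranks = list(range(n))          # 0 is the best candidate
    random.shuffle(ranks)
    cutoff = int(n * skip_fraction)
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:           # first candidate better than the observed window
            return r == 0
    return ranks[-1] == 0           # forced to take the last one

n, trials = 100, 20_000
for frac in (0.2, 0.37, 0.5):
    wins = sum(hire(n, frac) for _ in range(trials))
    print(f"skip {frac:.0%}: hire the best {wins / trials:.1%} of the time")
```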

5
Owen Cotton-Barratt
3y
But the investment-like giving opportunities also preserve optionality! This is the sense in which they are investment-like. They can result in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values than if we just make financial investments now.
3
RyanCarey
3y
If I recall correctly (and I may well be wrong), the secretary problem's solution only applies if your utility is linear in the ranking of the secretary that you choose - I've never come across a problem where this was a useful assumption.
2
kokotajlod
3y
Interesting! The secretary problem does seem relevant as a model, thanks! FWIW, many of us do think that. I do, for example.
2
[comment deleted]
3y

I highly recommend the Founder's Pledge report on Investing to Give. It goes through and models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving-later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.  

2
kokotajlod
3y
Thanks Wayne, will read!

I think it's implausible that the optimal giving rate today could be 0%. This is because many giving opportunities function as a form of investment, and we're pretty sure that the best of those outperform the financial market. (I wrote more about ~this in this post: https://forum.effectivealtruism.org/posts/Eh7c9NhGynF4EiX3u/patient-vs-urgent-longtermism-has-little-direct-bearing-on )

Have you compared your analysis to this previous EA Forum post? Are there different takeaways? Have you done anything differently and if so, why? 

1
AppliedDivinityStudies
3y
This is very specifically attempting to compile some existing analysis on whether it's better to eat chicken or beef, incorporating ethical and environmental costs, and assuming you choose to offset both harms through donations. In the future, I would like to aggregate more analysis into a single model, including the one you link.

As I understand it (this might be wrong), what we have currently is a bunch of floating analyses, each mostly focused on the cost-effectiveness of a specific intervention. Donors can then compare those analyses and make a judgement about where best to give their money. Where the GiveWell-style monolithic CEAs succeed is in ensuring that a similar approach is used to produce analysis that is genuinely comparable, and in giving readers the opportunity to adjust subjective moral weights. That's my ultimate goal with this project, but it will likely take some time. This was maybe a premature release, but so far the feedback has already been useful.

Here’s the math on moral/financial fungibility:

...

You’re probably better off eating cow beef and donating the $6.03/kg to the Good Food Institute 


Is refraining from killing really morally fungible with killing + offsetting? Would it be morally permissible for someone to engage in murder if they agreed to offset that life by donating $5,000 to Malaria Consortium? I don't mean to be offensive with this analogy, but if we are to take seriously the pain/suffering that factory farming inflicts on animals, we should morally regard it in a similar lens t... (read more)

2
AppliedDivinityStudies
3y
Yes that's a good point, as Scott argues in the linked post: GiveWell notes that their analysis should only really be taken as a relative measure of cost-effectiveness. But even putting that aside, you're right that it doesn't imply human lives are cheap or invaluable.

Actually, I pretty much agree with all your points. But a better analogy might be "is it okay to murder someone to prevent another murder?" That's a much fuzzier line, and you can extend this to all kinds of absurd trolley-esque scenarios. In the animal case, it's not that I'm murdering someone in cold blood and then donating some money. It's that I'm causing one animal to be produced, and then causing another animal not to be. So it is much closer to equivalent.

To be clear again, the specific question this analysis addresses is not "is it ethical to eat meat and then pay offsets". The question is "assuming you pay for offsets, is it better to eat chicken or beef?"

And of course, there are plenty of reasons murder seems especially repugnant. You wouldn't want rich people to be able to murder people effectively for free. You wouldn't want people getting revenge on their coworkers. You wouldn't want to allow a world where people have to live in fear, etc etc etc. So I don't think it's a particularly useful intuition pump.

Thanks, Sanjay, I’m sharing a basic model I’ve written that highlights the trade-off for impact investments that seek both social impact and financial returns. This isn’t specifically about ESG but the key ideas still apply. The upshot: the investment must produce annually one percent of a same-sized grant’s social benefit for every one percent concession on its financial return. I construct impact investing’s version of the Security Market Line and quantitatively define what ‘impact alpha’ means.

This model was written a couple of years ago but since then,... (read more)
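As a rough numerical illustration of the rule of thumb stated above (the model itself is in the shared file; these specific numbers are made up):

```python
# Sketch of the rule of thumb stated above: each 1 percentage point of
# financial return given up must be matched by ~1% of a same-sized grant's
# annual social benefit. Numbers are illustrative.

investment_size = 1_000_000
market_return = 0.07               # assumed benchmark return
impact_return = 0.04               # assumed concessionary return
grant_social_benefit = 500_000     # assumed annual benefit of a same-sized grant

concession_pct_points = (market_return - impact_return) * 100   # 3 points conceded
required_annual_benefit = (concession_pct_points / 100) * grant_social_benefit
print(f"Return concession: {concession_pct_points:.0f} points")
print(f"Social benefit the investment must generate per year: ${required_annual_benefit:,.0f}")
```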

I agree with Michael that a 70% allocation to US stocks is way too high. US stocks' outperformance against international developed stocks can almost entirely be explained by the increase in the US market's valuation (which shouldn't be assumed to continue and indeed, is more likely to reverse). See AQR's analysis on pg 6 here. Also, what about Emerging Market stocks? This should certainly get some allocation as well, especially if you're focused on the next 100 years. China and India will increasingly be key economic players and have capital markets that w... (read more)
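The valuation point rests on the standard decomposition of equity returns into yield, earnings growth, and change in the valuation multiple; a minimal sketch with made-up numbers (these are not AQR's figures):

```python
# Rough return decomposition behind the valuation point above (standard
# identity, illustrative numbers only): total return ~= dividend yield
# + earnings growth + change in the valuation multiple.

dividend_yield = 0.02
real_earnings_growth = 0.02
valuation_change = 0.03      # annualised multiple expansion; assumed, not from AQR

total_return = dividend_yield + real_earnings_growth + valuation_change
print(f"Return: {total_return:.1%}, of which {valuation_change:.1%} is multiple expansion")
print("If the multiple stops expanding, expected return falls to "
      f"{dividend_yield + real_earnings_growth:.1%}")
```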

This paper is relevant to your question.

Abstract: This article asks how sustainable investing (SI) contributes to societal goals, conducting a literature review on investor impact—that is, the change investors trigger in companies’ environmental and social impact. We distinguish three impact mechanisms: shareholder engagement, capital allocation, and indirect impacts, concluding that the impact of shareholder engagement is well supported in the literature, the impact of capital allocation only partially, and indirect impacts lack empirical su... (read more)

1
sbowman
4y
Thanks, Wayne! This looks like a good starting point for further research, but it's hard to take much that's actionable from this without more background in finance. Is there anything you'd take away as advice to a smallish-scale individual investor?

I don’t think it makes sense to compound the model distributions (e.g. from 1 year to 10 years). Doing so leads to non-intuitive results that are difficult to justify.

1) Compounded model results (e.g. 10x impact in 10 years) are highly sensitive to the arbitrarily assumed shape, range, and skewness parameters of the variable distributions. These results will also vary wildly from simulation to simulation depending on the sequence of random draws (see the toy simulation below). This points to the model's fragility and leads to unnecessary confusion.

2) The parameter estimat... (read more)
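To illustrate point (1), a toy Monte Carlo run (not the model in question) showing how the compounded 10-year multiple swings with the assumed spread of the annual distribution:

```python
import random, statistics

# Toy illustration of point (1): compounding a skewed annual impact
# multiplier over 10 years makes the result very sensitive to the assumed
# spread, and unstable between runs. Not the model under discussion.
random.seed(2)

def ten_year_multiple(sigma, n_draws=10_000):
    results = []
    for _ in range(n_draws):
        m = 1.0
        for _ in range(10):                       # compound 10 annual draws
            m *= random.lognormvariate(0.05, sigma)
        results.append(m)
    return statistics.mean(results), statistics.median(results)

for sigma in (0.2, 0.4, 0.6):
    mean, median = ten_year_multiple(sigma)
    print(f"sigma={sigma}: mean 10y multiple {mean:.1f}, median {median:.1f}")
```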

A 7% real investment return over the long term is, in my opinion, highly aggressive. World real GDP growth from 1960 through 2019 averaged 3.5% per year. Since the proposed fund expects to invest over “centuries or millennia,” any growth rate faster than GDP's means the fund eventually takes over the world economy. Piketty's r > g can't work if wealth remains concentrated in a fund with no regular distributions.

Even in the shorter run, it’s unrealistic to expect the fund to implement a leveraged equity-only strategy (or analogous VC strategy):

1) A leveraged ... (read more)
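A quick compounding check on the "eventually takes over the world" point, using the 7% real return proposed for the fund and the 3.5% real GDP growth figure above (the fund's starting share of world GDP is an arbitrary assumption):

```python
# Quick check of the "faster than GDP eventually takes over the world" point:
# a fund compounding at 7% real vs world GDP growing at 3.5% real (figures above).

fund_growth, gdp_growth = 0.07, 0.035
fund_share = 1e-4          # assume the fund starts at 0.01% of world GDP (hypothetical)

for year in (50, 100, 200, 300):
    ratio = fund_share * ((1 + fund_growth) / (1 + gdp_growth)) ** year
    print(f"After {year} years the fund is ~{ratio:.2%} of world GDP")
```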

8
CarlShulman
4y
I agree risks of expropriation and costs of market impact rise as a fund gets large relative to reference classes like foundation assets (eliciting regulatory reaction) let alone global market capitalization. However, each year a fund gets to reassess conditions and adjust its behavior in light of those changing parameters, i.e. growing fast while this is all things considered attractive, and upping spending/reducing exposure as the threat of expropriation rises.

And there is room for funds to grow manyfold over a long time before even becoming as large as the Bill and Melinda Gates Foundation, let alone being a significant portion of global markets. A pool of $100B, far larger than current EA financial assets, invested in broad indexes and borrowing with margin loans or foundation bonds would not importantly change global equity valuations or interest rates.

Regarding extreme drawdowns, they are the flipside of increased gains, so are a question of whether investors have the courage of their convictions regarding the altruistic returns curve for funds to set risk-aversion. Historically, Kelly criterion leverage on a high-Sharpe portfolio could have provided some reassurance with being ahead of a standard portfolio over very long time periods, even with great local swings.

Hi Carl, thanks for your response and for posting the links. I have now retracted my initial strong downvote of your comment.

I understand and am sympathetic to the view that altruists investing to donate should be a lot more risk-seeking than when investing to fund their own future consumption. My concern was entirely based on your recommendation to invest long term in leveraged ETFs. I did not think this is a good idea because leveraged ETFs can have realized returns that deviate substantially from their underlying index in a bad and unexpec... (read more)

You should NOT be holding leveraged ETFs for long periods of time (i.e. no more than a day or two). When held for a year, a 3x leveraged ETF will not deliver 3x the returns of the underlying index. In fact, it is quite possible, given high current volatility, that the ETF delivers negative returns even when the underlying index is positive. For more info, see 'Why Leveraged ETFs Are Not a Long Term Bet.'
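A toy simulation of the daily-rebalancing effect described in that article (illustrative parameters only): in a volatile market with no drift, a 3x daily-leveraged product ends up well below 3x the index's return.

```python
import random

# Toy simulation of volatility decay in a daily-rebalanced 3x leveraged ETF.
# In a volatile year where the index ends roughly flat, the 3x product can
# end well below 3x the index return, even negative. Illustrative only.
random.seed(3)

leverage, trading_days, daily_vol = 3, 252, 0.02

index, etf = 1.0, 1.0
for _ in range(trading_days):
    daily_return = random.gauss(0.0, daily_vol)   # zero-drift, volatile market
    index *= 1 + daily_return
    etf *= 1 + leverage * daily_return            # leverage reset every day

print(f"Index return:        {index - 1:+.1%}")
print(f"3x daily ETF return: {etf - 1:+.1%}")
print(f"3x the index return: {3 * (index - 1):+.1%}")
```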

6
CarlShulman
4y
Wayne, the case for leverage with altruistic investment is in no way based on the assumption that arithmetic returns equal median or log returns. I have belatedly added links above to several documents that go into the issues at length. The question is whether leverage increases the expected impact of your donations, taking into account issues such as diminishing marginal returns.

Up to a point (the Kelly criterion level), increasing leverage drives up long-run median returns and growth rates at the expense of greater risk (much less than the increase in arithmetic returns). The expected $ donated do grow with the increased arithmetic returns (multiplied by leverage less borrowing costs, etc.), but they become increasingly concentrated in outcomes of heavy losses or a shrinking minority of increasingly extreme gains.

In personal retirement, you value additional money less as you have more of it, at quite a rapid rate, which means the optimal amount of risk to take for returns is less than the rate that maximizes long-run growth (the Kelly criterion), and vastly less than maximizing arithmetic returns. In altruism, when you are a small portion of funding for the causes you support, you have much less reason to be risk-averse, as the marginal value of a dollar donated won't change a lot if it goes from $30M to $30M+$100k in a given year. At the level of the whole cause, something closer to Kelly looks sensible.
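For reference, a minimal sketch of the textbook Kelly-style leverage rule being invoked here, under the usual lognormal approximation (growth-optimal leverage ≈ excess return / variance); the parameters are illustrative, not recommendations:

```python
# Minimal sketch of the standard Kelly-style leverage rule for a portfolio
# with lognormal-ish returns: growth-optimal leverage ~= excess return / variance.
# Parameters are illustrative, not recommendations.

expected_return = 0.07      # assumed arithmetic expected return of the portfolio
risk_free_rate = 0.02       # assumed borrowing rate
volatility = 0.16           # assumed annualised standard deviation

kelly_leverage = (expected_return - risk_free_rate) / volatility ** 2
growth_rate = (risk_free_rate
               + kelly_leverage * (expected_return - risk_free_rate)
               - 0.5 * (kelly_leverage * volatility) ** 2)
print(f"Kelly leverage:      {kelly_leverage:.2f}x")
print(f"Approx. growth rate: {growth_rate:.1%} per year at that leverage")
```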

Hauke's calculation simply determines a standard Benefit/Cost ratio. If it costs $10 to avert a tonne of CO2 that provides benefits of $417 (in damages averted), this Benefit/Cost ratio equals 41.7. This ratio should be directly comparable to Copenhagen Consensus 'Social, economic, and environmental benefit per $1 spent.' For the Post-2015 Consensus, 'Climate Change Adaptation' is listed as providing a Benefit/Cost ratio of 2 while climate-related 'Energy Research' has a ratio of 11. I would weight these results from meta-l... (read more)

2
Hauke Hillebrandt
5y
Thank you- I've now included this in my model: "Some global development interventions have been estimated to be 17.5x more effective than cash-transfers (e.g. deworming).[34] We use this as the optimistic case."

Thanks for your response, kbog!

Animal welfare issues are plausibly getting worse, not better, so I'd be less confident in assuming they will not be an issue in the future. As the world develops and eats more meat, Compassion in World Farming estimates that the number of factory-farmed land animals killed annually could increase by 50% over the next 30 years. Assuming people's expanding moral circle will reverse this trend is dangerous when the animal welfare movement has progressed little over the past few decades (the number of vegetarians in the US has been flat; there are some a... (read more)

2
kbog
5y
I did that because I was only looking at one year of welfare improvement. One year for one year is simpler and more robust than comparing lifetimes. If you want to look at lifetimes, you have to scale up the welfare impacts as well.
2
kbog
5y
Sorry, I have little time and I'm just going to respond to the logic of offsetting right now. In utilitarianism we ordinarily maximize expected utility, so there's no need to hedge. If two actions have the same expected utility but one has a higher % chance of having a negative outcome, they're still equally good. Companies and investors need to protect certain interests, so $2 million is less than twice as good as $1 million, but in utility terms 2 million utils is exactly twice as good as 1 million utils. Of course you could deny expected utility maximization and be morally loss averse/risk averse, and then this would be a conversation to have. There are good arguments against doing that, however; it's a minority view.

Thanks for posting this, kbog! I would be interested in your recommendation for someone donating to the EA Funds. The Long-Term Future and Global Development funds focus on humans and thus potentially run into the meat-eater problem. For every dollar donated to the above funds, what would be an appropriate amount to donate to the Animal Welfare Fund to offset this issue? Thanks!

6
kbog
5y
In the long run it seems like the meat eater problem will drop off quite a bit. First because of improving welfare standards, second because of pressures to switch to more efficient plant-based calories, and third because people stop eating more meat or even eat less meat beyond a certain income. So making the US wealthier for instance is most likely good for farm animals in the long run.

For global development in the short run, we can see that $1000 in Africa cuts animal welfare by -800 (best estimate) to -4000 (high estimate) points. And I conservatively estimated that $1 to an ACE charity improves animal welfare by 10,000 points. So $1,100 donated to GiveDirectly (=~$1,000 received) should require between $0.08 and $0.40 if you want to offset to an effective animal charity. But it's rather arbitrary depending on just how conservative you want to be. I sort of assumed that the real effectiveness of ACE charities is 5x lower than their estimate.

Note that I don't think that offsetting as a practice actually makes sense, it doesn't make sense under utilitarianism, it's more of a methodological tool to put the impacts of different things in perspective with one another.
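Checking the offset arithmetic in this reply, using only the figures it states:

```python
# Re-doing the offset arithmetic from the reply above, using its own figures.

welfare_harm_per_1000 = {"best estimate": 800, "high estimate": 4000}  # welfare points lost per ~$1,000 received
ace_points_per_dollar = 10_000     # conservative welfare points gained per $1 to an ACE charity

for label, harm in welfare_harm_per_1000.items():
    offset_dollars = harm / ace_points_per_dollar
    print(f"{label}: offset ~${offset_dollars:.2f} per ~$1,000 received via GiveDirectly")
```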

A company structure to consider would be a mutual organization where all profits go to members, which in your case would be the policy holders. Profits can be retained to grow the company or policy fees can be reduced by the amounts of its profits. Mutuals have a long history and many of the most successful financial organizations in the US are mutuals (e.g. Vanguard, State Farm, Liberty Mutual, NY Life). You could develop an insurance brokerage mutual that offers products from different insurance companies. I'm not sure if there are mutuals in this s... (read more)

Hi Huwelium, thanks so much for your post! I'm also advising someone on highly cost-effective interventions, so I found your thoughtful analysis to be very interesting. My question relates to your cost-effectiveness estimates vs GiveWell's. Based on GiveWell's spreadsheet, their modeling of DDK (2017) places that program's cost-effectiveness at 0.5x – 2.5x GiveDirectly's. Their modeling of Bettinger et al (2017) places that program's at 0.2x – 1.4x GiveDirectly's. Both of these estimates are for consumption effects only and exclude non-pecuniary benefits ... (read more)

3
Huwelium
5y
Hi Wayne, Thanks for your post. I would love to get in touch and compare notes on research for advising donors. I'll try to reach you via this site's messaging. Sorry for the very late reply (I don't get alerts when someone posts here).

I believe the difference comes simply from the wide range of cost-effectiveness of education interventions. As mentioned in the Google doc, “Rachel Glennerster mentions in an 80000 Hours podcast that good interventions typically deliver at least 1 learning adjusted year of schooling (LAYS) per 100 USD spent, with some interventions delivering about 10-30 LAYS per 100 USD, and the best delivering up to 460 LAYS per 100 USD.”

For Pratham, the info I found suggested roughly 1.7 to 27.6 extra years per 100 USD. Assuming an increase in income of 8.8% for each extra year of schooling, this means an increase in income of about 15% to 243% per 100 USD donated. Comparing to DDK 2017, GiveWell cites a 24% increase in income for 541 USD spent, so a 4.4% increase in income per 100 USD spent. I don't know if this helps? I think the basic explanation is that there is a very wide range in effectiveness of education interventions, and that Pratham seems to be higher in this range than DDK, say.
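Re-tracing the arithmetic in this reply with the figures it quotes:

```python
# Re-tracing the arithmetic in the reply above, using the figures it quotes.

income_gain_per_school_year = 0.088        # 8.8% income increase per extra year of schooling

# Pratham: roughly 1.7 to 27.6 extra years of schooling per $100
for extra_years in (1.7, 27.6):
    print(f"Pratham, {extra_years} years/$100: ~{extra_years * income_gain_per_school_year:.0%} income gain per $100")

# DDK 2017 (as cited): 24% income increase for $541 spent
ddk_gain_per_100 = 0.24 * 100 / 541
print(f"DDK 2017: ~{ddk_gain_per_100:.1%} income gain per $100")
```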

I would challenge your notion that you are over-analyzing the problem and that you must make a definitive decision soon.

1. In general, better knowledge and information leads to better decision making. If you are new to the EA community or to thinking deeply about philanthropy more generally, it is very unlikely that your current notions of how to give are appropriate.

2. Once you give away money, you cannot get it back. But money you save now can always be given away later. This argues for waiting in the presence of uncertainty. For example, in the optima... (read more)