Sindy_Li


Comments

Returns Functions and Funding Gaps

On increasing and decreasing (marginal) returns:

I see that you said "claiming that expected returns are normally diminishing is compatible with expecting that true returns increase over some intervals. I think that true returns often do increase over some intervals, but that returns generally decrease in expectation."

I wasn't sure why this would be true in a model that describes the organization's behavior, so I spent some time thinking it through. Here is a way to reconcile increasing returns and decreasing expected returns, with a graph. Note that when talking about "funding" here (the x-axis of the graph) I mean "funding the organization will receive over the next planning period (e.g. the next calendar year)", and assume there's no uncertainty over funding received, the same as in Max's model.

I think it's reasonable to assume that "increasing returns" in an organization's impact often come from "lumpy investments", i.e. things with high impact and high fixed costs. In this case nothing happens until a certain level of funding is reached, at which point there is a discrete jump in impact. For the sake of argument let's assume that everything the organization does has this nature (we'll relax this later). So you'd expect the true returns function to be a step function (see the black curve on the graph).

How does the organization make decisions? First, let's assume that these "lumpy investments" (call them "projects") aren't actually 0 or 1; rather, the closer the level of funding is to the "required" level, the more likely the project is to happen (e.g. maybe AMF is trying to place an order for bed nets and the minimum requirement is 1000 nets, but it's possible that they can convince the supplier to make an order of 900 nets with probability less than 1). For simplicity let's assume the probability grows linearly (we'll relax this later). Then the expected returns function is actually the red piecewise-linear function in the graph. Note that overall the marginal returns are still weakly diminishing (though constant within each project), because given the red expected returns function the organization would choose to first do the project with the highest marginal return (i.e. slope), then the second highest, etc.
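To make this concrete, here is a minimal sketch of the greedy allocation described above. The project costs, impacts, and the linear completion probability are hypothetical illustrative numbers, not from the original model:

```python
# Sketch: expected returns from "lumpy" projects when the probability of a
# project happening grows linearly with funding up to its required cost.
projects = [(100, 50), (200, 60), (50, 10)]  # (required funding, impact if done)

# Under linear probability, each project's marginal expected return (slope)
# is impact / cost. Funding projects in decreasing order of slope makes
# overall expected marginal returns weakly diminishing.
ordered = sorted(projects, key=lambda p: p[1] / p[0], reverse=True)

def expected_impact(funding):
    """Expected total impact if `funding` is allocated greedily by slope."""
    total = 0.0
    for cost, impact in ordered:
        spend = min(funding, cost)
        total += impact * (spend / cost)  # linear completion probability
        funding -= spend
        if funding <= 0:
            break
    return total

# Marginal returns are constant within a project, then drop at each kink.
print([round(expected_impact(f), 1) for f in (0, 50, 100, 150, 250, 350)])
# → [0.0, 25.0, 50.0, 65.0, 95.0, 120.0]
```

The kinks at 100 and 300 units of funding are where the organization switches to the next-best project, which is exactly where the slope of the red curve drops.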

Note: We assumed the probability grows linearly. If we relax this assumption, things get more complicated. I illustrate the case where probabilities grow in a convex way within each project with the ugly green curves (note that this also covers the case with no uncertainty about whether the project happens, but where the project has a "continuous" nature and increasing marginal returns). It's true that you cannot call the whole thing concave (and I don't know if mathematicians have a word for something like this). But consider a donor who, IN ADDITION to the model here that assumes certainty in funding levels, faces uncertainty over how much funding the organization has: the "expected-expected" returns function they face (taking expectations over both funding level and impact) would probably be closer to the earlier piecewise-linear function, or concave. If the probabilities grow in some weird, non-convex way (this also covers the case with no uncertainty about the project happening, but where the project has a "continuous" nature and weirdly shaped, non-convex marginal returns), things may get more complicated (e.g. the organization may switch projects halfway if it always spends the next dollar on whatever has the highest marginal return) -- maybe we should set aside such possibilities since they are unintuitive.
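The donor-side smoothing claim can be illustrated with a small simulation. The single step-function project, the uniform distribution of the organization's other funding, and all numbers are illustrative assumptions of mine:

```python
import random
random.seed(0)

def true_impact(funding):
    # Step function: the single lumpy project happens only at 100 units.
    return 50.0 if funding >= 100 else 0.0

def donor_expected_impact(donation, trials=100_000):
    # The donor's expectation when the organization's other funding is
    # uncertain (uniform on [0, 150] -- an illustrative assumption).
    total = sum(true_impact(donation + random.uniform(0, 150))
                for _ in range(trials))
    return total / trials

# Averaging over funding uncertainty turns the step function into a roughly
# linear (hence weakly concave) function of the donation, until saturation.
for d in (0, 25, 50, 75, 100):
    print(d, round(donor_expected_impact(d), 1))
```

Each extra dollar raises the probability that total funding crosses the threshold by roughly the same amount, so from the donor's seat the sharp step looks like approximately constant marginal returns.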

Note: If the organization does some projects where the relationship between impact and funding looks more linear, then 1) we can still use the red piecewise-linear graph, and the organization will still start with the projects with the highest slopes; and 2) at a fine enough level things are still discrete, so we are back to (mini) step functions.

Note: We also assumed the only uncertainty here is whether a project happens at a funding level less than "required". There could also be uncertainty over impact conditional on the project happening -- this is not in our model, but my guess is it shouldn't change the main results much (of course it might depend on the shape of the new layer of uncertainty, and I haven't thought about it carefully).

All of the above is essentially based on the old idea that organizations do the highest-return things first. The main addition is to look at a model with discrete projects (with elements of increasing returns) and still arrive at the same general conclusion.

I don't know how many people will find this useful, but I was very confused by this issue (and said some incoherent things in my earlier comments, which I've deleted to avoid confusing people), and found that I had to think through what the organization actually does in the case of lumpy investments.

Other important issues that are related but out of the scope of this discussion include how organizations and donors act under uncertainty over the donations the organization will receive.

Tom and Peter:

For an early stage charity like ACE, capacity building seems to be a very important consideration (related to Ben Todd's point about the growth approach). E.g. it would allow them to move much more money later, and at the moment not moving that much money is a reason why they don't look so good in our model. Unfortunately we aren't able to incorporate this in our quantitative model (IMO another reason to look beyond quantitative models for decision making at this point, though people may have ways of incorporating it quantitatively -- it wouldn't be hard to write down a theoretical model of R&D, but fitting it empirically would be the big challenge).

On (i), Open Phil's Lewis Bollard's recommendation and ACE's own plan make it look like capacity building is something they try to do.

On (ii) and (iii), these have been true for GiveWell historically. E.g. on (iii), last year they added quite a few top charities. But I don't know ACE enough to say if they will grow in the way GiveWell did.

Returns Functions and Funding Gaps

Max, thanks for the post!

For someone like GiveWell that spends a lot of time investigating charities, they may have enough information about the charity's budget to tell when there is (something similar to) a discrete jump in the derivative of the returns function. E.g. consider the way they talk about "capacity-relevant funding" and "execution funding" in the post you linked to ("incentive funding" serves a completely different purpose with no direct relationship to returns).

Also, to fix ideas it helps to think about what the funding axis on the impact-against-funding graph, i.e. the returns function, represents. Is the function specifying the relationship between total impact and the total funding the charity expects to receive for a time period (e.g. next year), or are we looking within a time period and plotting what the charity does as (unexpected) new money comes in? In the latter case, diminishing returns seem most likely. In the former case, increasing returns are possible (but so are diminishing returns).

Ben Todd has written about increasing returns in small organizations here. I wrote here that "Whether returns are increasing or decreasing in additional funding depends on how the funding is received. Expecting a large chunk of funding (either in the form of receiving such amounts at once, or even expecting a total large amount received in small chunks if there is no lumpy investment or borrowing constraint) could enable an organization to do more risk taking, while getting unanticipated small amounts of funding at a time -- even if the total adds up to more -- will probably just lead the organization to use the marginal dollar to "fund the activity with the lowest (estimated) cost-effectiveness". ... The scenario Ben Todd has in mind probably applies more when a large funder is considering how much to give to an organization. This may be another argument to enter a donor lottery or donate through the EA fund: giving a large and certain amount of donations to a small organization enables them to plan ahead for more risky but growth-enhancing strategies, hence could be more valuable than uncoordinated small amounts even if the latter add up to the same total (because the latter may be less certain). ... This mechanism is articulated in "5.2 The funding uncertainty problem" on this page about the EA fund." (There are probably analogous economic models of firm investment under liquidity constraints and uncertainty, but I don't have one off the top of my head.)

In practice it may not be a big deal: even if the charity receives random small amounts of money during the year, that is probably at least as good as receiving the total amount all at once at the beginning of next year when they do the next round of planning. But for small organizations where earlier growth is much better, it could be much better to have small donations coordinated and committed at the same time, to help with more ambitious planning and growth. (Of course we are assuming the charity is borrowing constrained; otherwise, if earlier growth were much better, they'd borrow to achieve it and repay with later donations. Also, if the market were efficient and earlier growth really were much better, then some donors should capture the opportunity ... but of course the market may not be efficient!)
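The intuition about committed versus uncoordinated funding can be sketched in a toy model. The fixed-cost "growth investment", the effectiveness rates, and all numbers are hypothetical placeholders of mine:

```python
def impact_committed(total, growth_cost=100, base_rate=1.0, boosted_rate=2.0):
    # With a committed lump sum, a borrowing-constrained charity can pay
    # the fixed cost of a growth investment up front, then spend the rest
    # at the boosted effectiveness rate. Numbers are hypothetical.
    if total >= growth_cost:
        return (total - growth_cost) * boosted_rate
    return total * base_rate

def impact_drip(chunks, base_rate=1.0):
    # Small, uncertain chunks get spent on the current marginal activity;
    # the fixed cost is never covered at any single point in time.
    return sum(chunks) * base_rate

print(impact_committed(300))   # growth project funded: (300 - 100) * 2 = 400.0
print(impact_drip([30] * 10))  # same 300 total, uncoordinated: 300.0
```

The same 300 units of funding produce more impact when committed up front, because only then can the charity clear the fixed cost of the growth-enhancing strategy.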

My personal take on the issue is that the better we understand how the updating works (including how to select the prior), the more seriously we should take the results. Currently we don't seem to have a good understanding (e.g. see Dickens' discussion: selecting the median based on GiveDirectly seems reasonable, but there doesn't seem to be a principled way of selecting the variance, and that discussion seems to be the best effort at it so far), so these updating exercises can be used as heuristics, but the results are not to be taken too seriously, and certainly not literally (also because the input values are so speculative in some cases).
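To see why the choice of prior variance matters so much, here is a minimal conjugate normal-normal update (e.g. on log cost-effectiveness); the specific numbers are illustrative assumptions of mine, not anyone's actual inputs:

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    # Conjugate normal-normal update: the posterior mean is the
    # precision-weighted average of the prior mean and the estimate.
    precision = 1 / prior_var + 1 / estimate_var
    return (prior_mean / prior_var + estimate / estimate_var) / precision

# With a fixed estimate (3.0, variance 1.0) and prior mean 0, the posterior
# swings widely as the prior variance -- the hard-to-justify input -- changes:
for prior_var in (0.5, 2.0, 10.0):
    print(prior_var, round(posterior_mean(0.0, prior_var, 3.0, 1.0), 2))
# → 0.5 1.0
# → 2.0 2.0
# → 10.0 2.73
```

A threefold difference in the posterior from an input we have no principled way to pin down is exactly why I'd treat these exercises as heuristics rather than literal answers.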

This is just my personal view and certainly many people disagree. E.g. my team decided to use the results of Bayesian updating to decide on the grant recipient.

My experience with the project led me to doubt that it's worth investing too much in improving this quantitative approach for decision making, if one could instead spend the time gathering qualitative information (or even quantitative information that doesn't fit neatly into the framework of cost-effectiveness calculations or updating) that could be much more informative for decision making. This is along the lines of this post and seems to fit the current approach of the Open Philanthropy Project (utilizing qualitative evidence rather than relying on quantitative estimates). Of course, this is all based on the current state of such quantitative modeling, e.g. how little we understand about how updating works and how to select speculative inputs for the quantitative models (and my judgment about how hard it would be to improve on these fronts). There could be a drastically better version of such quantitative prioritization that I haven't been able to imagine.

It could be very valuable to construct a quantitative model (or parts of one), think about the inputs and their values, etc., for reasons explained here. E.g. the MIRI model (in particular some inputs by Paul Christiano; see here) has really helped me realize the importance of AI safety. So has the "astronomical waste" argument, which gives one a sense of the scale even if one doesn't take the numbers literally. Still, when I decide whether to donate to MIRI, I wouldn't rely on a quantitative model (at least not one like what I built) and would instead put a lot of weight on qualitative evidence that is likely impossible (for us, yet) to model quantitatively.

Peter, indeed your point #2 about uncertainty is what I discuss in the last point of "2) Outcome measures", under "Model limitations". I argued in a handwaving way that because 80K still causes some lower-risk, lower-return global health type interventions -- which our aggregation model seems to favor, probably due to the Bayesian prior -- it will probably still beat MIRI, which focuses exclusively on high-risk, high-return things that the model seems to penalize. But yes, we should have modeled it this way.

Robin, regarding what you quoted about increasing returns, I was thinking only of the case of labor. Overall you are right that, if the organization has been maximizing cost-effectiveness, then they probably would have used the money they had before reaching fundraising targets in a way that is more cost-effective than money coming in later (assuming they are more certain about the amount of money up to the fundraising target, and less certain about money coming in after that).

The value of money going to different groups

Something that complicates the effects is that money given to people may increase not only consumption today but also consumption tomorrow, through investment. This could be investment in physical capital (e.g. an iron roof, livestock) or human capital (e.g. health and education). Most of the time when people are given money, some is consumed and some saved/invested (and consumption itself could have investment effects too, if better nutrition improves the ability to work/learn) -- e.g. see GiveDirectly recipients.

This is relevant if we think that, for instance, poor people in Kenyan villages have more profitable investment opportunities for the cash they receive than poor people in the US -- which is probably the case, e.g. there are many more opportunities to start small businesses in Kenyan villages (or higher returns to improving nutrition because they start at such a low level, though I remember "Poor Economics" says there's not much evidence for a nutrition-based poverty trap, so probably not). In that case the benefits of giving cash to poor Kenyans (relative to giving to poor Americans) are further amplified. In fact, in GiveWell's cost-effectiveness calculation for GiveDirectly, future increases in consumption are responsible for a substantial fraction of the effect (even with discounting) if we assume some persistence in investment returns (even if they are not compounded).
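The structure of that calculation can be sketched as a simple NPV of a cash transfer. All parameter values below (invested share, return, horizon, discount rate) are hypothetical placeholders of mine, not GiveWell's actual inputs:

```python
def transfer_npv(transfer, invest_share=0.4, annual_return=0.1,
                 years=10, discount=0.04):
    # Rough NPV sketch: a share of the transfer is consumed immediately;
    # the rest is invested and yields a flat (non-compounded) consumption
    # stream for `years`, discounted back to the present.
    consumed_now = transfer * (1 - invest_share)
    yearly_flow = transfer * invest_share * annual_return
    future = sum(yearly_flow / (1 + discount) ** t for t in range(1, years + 1))
    return consumed_now + future

print(round(transfer_npv(1000), 1))  # immediate consumption + discounted stream
```

Even with these modest placeholder returns, the discounted future consumption stream accounts for roughly a third of the total value, which is why persistence of investment returns matters so much in such calculations.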

Is the community short of software engineers after all?

Ben, to recap a bit what people have said: working as a software engineer at an EA organization

  • may not be the most technically challenging/engaging job
  • may not be great for future career development
  • may not pay much

This probably applies more to EA organizations like CEA and 80,000 Hours. GiveDirectly may be different, since you'd probably work with M-Pesa, similar to Wave; and maybe New Incentives too, since they do conditional cash transfers.

And Wave is basically like a regular tech company in the above aspects (and probably better, because it's a startup, so the work could be more challenging and interesting than, say, at Google). They pay less than Google. So it's a good example that, when you have to make sacrifices in only 1 of the 3 dimensions I mentioned above, you can still find really good EA-type software engineers to join.

But for EA organizations like CEA and 80,000 Hours, you need to make sacrifices in all 3 dimensions (and pay is probably less than Wave) -- no wonder it's harder.

That being said, there's no reason it can't work. Just think about people doing jobs similar to the IT positions you want to hire for at CEA/80k -- there are plenty of people doing them at other companies or organizations. In this sense, EA organizations may pay less than the alternatives but offer the opportunity to work for an EA organization, so the EA-types among these people would be attracted to such jobs, just as EA-type Google engineers are attracted to Wave.

If the job isn't the most technically challenging/engaging, you probably shouldn't be looking for people who value that too highly, since they already need to take a pay cut, which is a sacrifice even for EA-types, and asking people to make sacrifices on more than one dimension makes it harder to attract them. Look for people who are EA-types working in similar jobs in non-EA places, or people who would at least be indifferent between working these jobs at an EA org with lower pay and working them elsewhere with higher pay.

(But maybe the jobs ARE technically challenging/engaging and great for future career prospects ... I don't know; you should probably ask actual engineers for their views on this!)

Estimating the Value of Mobile Money

Wave is really good! (I use it.) Another option is to work for a mobile money company in a developing country designing products that benefit the poor (e.g. savings and credit, which I mention in the other post), like the American guys I met at Wave Money in Myanmar (though they are still early stage and have many challenges to overcome before having an impact). (Not suggesting you should do it, though -- it involves moving to a developing country, and could be much less likely to succeed due to regulations, etc.) BTW this is the mobile credit scoring company I had in mind: http://tala.co/.
