This post is inspired by question 2 (source).
TLDR:
- The impact of retroactive funding is its impact premium over that of proactive funding.
- Funding decentralization covers a range of fields more efficiently than its centralization.
- Impact competition among funders and calibration training can mitigate biases.
- Retroactive funding participants should enjoy others’ impact.
The counterfactual impact of retroactive funding is the additionality it provides over the proactive financing landscape. Retroactive funding motivates actors who can afford to take risk to do so. The impact is the sum of the differences between the retroactively funded projects’ impact (i_rp) and the alternative-investment impact of the risk takers (i_rta), minus the impact of the funders’ alternative investments (i_rfa):

impact = Σ (i_rp − i_rta − i_rfa)

The impact is positive when the projects do more good than the total that their organizers and funders forgo, i.e. when Σ i_rp > Σ (i_rta + i_rfa).
In other words, retroactive funding is beneficial when organizers and buyers forgo relatively unimpactful alternatives and vice versa. For example, if someone works on a painting for the local museum instead of taking a gender studies course and the potential painting buyer gives up a charity donation, the impact is negative. If the risk-taker mitigates gender-based violence instead of going skiing and the potential funder forgoes buying a TV, the impact should be positive. Thus, retroactive funders should be interested in maximizing their impact.
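The accounting above can be sketched in a few lines of Python. All the numbers below are invented purely to illustrate the two examples; they are not estimates from the post.

```python
# Net counterfactual impact of retroactive funding, per the sum above.
# i_rp:  impact of the retroactively funded project
# i_rta: impact of the risk taker's (organizer's) forgone alternative
# i_rfa: impact of the funder's forgone alternative investment

def net_impact(projects):
    """Sum of (i_rp - i_rta - i_rfa) over all funded projects."""
    return sum(i_rp - i_rta - i_rfa for i_rp, i_rta, i_rfa in projects)

# Example 1: a painting instead of a course, while the funder forgoes a
# charity donation. The forgone donation dominates, so impact is negative.
print(net_impact([(2.0, 1.0, 5.0)]))   # -4.0

# Example 2: mitigating gender-based violence instead of skiing, while the
# funder forgoes a TV. The forgone alternatives are small, so impact is positive.
print(net_impact([(10.0, 0.5, 0.5)]))  # 9.0
```

The sign of the result depends entirely on how impactful the forgone alternatives were, which is the post's central point.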
Funders interested in maximizing their impact should be experts in their fields. Then they will be able to express interest in the most impactful set of projects that also have potentially interested organizers. Decentralized funding can cover a comprehensive set of fields more efficiently than centralized funding, since developing a network of well-connected experts can take more time than gaining the same participation on a platform beneficial to these experts’ organizations. However, unsupervised specialists can be more biased than managed professionals, so retroactive funding should use a bias-mitigating mechanism. Impact competition among funders can fill this role efficiently.
In reality, funders can choose certificates based on their expected value growth rather than their impact. This motivates strategic sharing of insights. For example, a funder can notify a small group of potential organizers about their interest, subsequently purchase a large proportion of certificates, and only later share a thorough impact analysis with a wider audience. Prima facie, this favors forecasting experts. However, people able to bias forecasts can also benefit. Since biased allocation of resources decreases total impact, prediction inaccuracies should be mitigated, for example by incentivizing relevant calibration training.
The possibility of speculation can influence the subjective wellbeing of project organizers and funders. Their wellbeing premiums (wp_o and wp_f) should be added to the impact sum. Then,

impact = Σ (i_rp + wp_o + wp_f − i_rta − i_rfa)
The impact is positive when

Σ (i_rp + wp_o + wp_f) > Σ (i_rta + i_rfa)
The change in the wellbeing of organizers and funders is significant relative to the other factors if a project involves a large number of these stakeholders. For example, participants can be enthusiastic about any counterfactually positive impact, including directly negative impact that serves as a learning opportunity. In this case, a structure that optimizes for retroactive funding participants’ wellbeing should be set up.
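The wellbeing-extended sum described above can be sketched the same way. The numbers are invented for illustration only; the point is that a project with directly negative impact can still come out net positive once wellbeing premiums are counted.

```python
# Impact sum extended with wellbeing premiums wp_o (organizers) and wp_f (funders).
def net_impact_with_wellbeing(projects):
    """Sum of (i_rp + wp_o + wp_f - i_rta - i_rfa) over all funded projects."""
    return sum(i_rp + wp_o + wp_f - i_rta - i_rfa
               for i_rp, wp_o, wp_f, i_rta, i_rfa in projects)

# A project with slightly negative direct impact (a "learning opportunity")
# can still be net positive if participants gain enough wellbeing from it.
print(net_impact_with_wellbeing([(-1.0, 1.5, 1.0, 0.5, 0.5)]))  # 0.5
```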
As of 2022-06-22, the certificate of this article is owned by brb243 (100%).
Oh, thank you for engaging with our question! We were delighted to see and read your post! (Have you joined our Discord already, or would you be interested in having a call with us? Your input would be great to have, for example when we’re brainstorming something!)
When I’m considering how I would apply your methodology in practice, I mostly run into the hurdle that it’s hard to assess counterfactuals. In our competition we’ve come up with the trick that we suggest particular topics that we think it would be very impactful for someone to work on, a topic so specific that we can be fairly sure that the person wouldn’t otherwise have picked it.
That allows us to assess, for example, that you would most likely have investigated something else in the time you spent writing this post. But (also per your model) that could still result in a negative impact if what you would’ve otherwise investigated would’ve been more important to investigate. But that depends on lots of open questions from priorities research and on your personal fit for the investigations, so it’s hard for us to take into account.
There is also the problem that issuers may be biased about their actual vs. counterfactual impact because they don’t want to believe that they’ve made a mistake and they may be afraid that their certificate price will be lower.
One simplification that may be appropriate in some cases is to assume that the same n projects are (1) funded prospectively or (2) funded retroactively. That way, we can ignore the counterfactual of the funders’ spending. Since retro funding will hopefully be used for the sorts of projects where it is appropriate, it’s probably by and large a valid approximation that the same projects get funded, just differently (and hopefully faster and smarter).
But a factor that makes it more complicated again is, I think, that it’ll be important to make explicit the number of investors. In the prospective case, the investors are just the founders who invest time and maybe a bit of money. But in the retrospective case, you get profit-seeking investors who would otherwise invest into pure for-profit ventures. The degree to which we’re able to attract these is an important part of the success of impact markets.
So ideally, the procedure should be one where we can measure something that is a good proxy for (1) additional risk capital that flows into the space, (2) effort that would not otherwise have been expended on (at all) impactful things, and (3) time saved for retro funders.
Our current metric can capture 2 to a very modest extent, but our market doesn’t yet support 1, and we’re the only EA retro funders at the moment, and we wouldn’t otherwise have done prospective funding.
Some random ideas that come to mind (and sorry for rambling! ^.^'):
These all have weaknesses – the first relies on people’s memory and honesty, the second is costly and we can’t really prevent people from self-selecting into the group if they prefer retro funding, and the last one is similar. I’m also worried that small methodological mistakes can ruin the analysis for us. And a power analysis will probably show that we’d need to recruit hundreds of participants to be able to learn anything from the results.
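The "hundreds of participants" worry can be made concrete with a back-of-the-envelope two-sample size estimate. The alpha level (0.05, two-sided), power (80%), and effect sizes below are my own assumptions, not numbers from the thread; this is the standard normal-approximation formula, not an actual power analysis of the proposed study.

```python
import math

# Rough per-group sample size for a two-sample comparison of means,
# using the normal approximation: n = 2 * ((z_alpha + z_beta) / d)^2,
# where d is Cohen's d, z_alpha = 1.96 (alpha = 0.05, two-sided),
# z_beta = 0.84 (80% power).
def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A small-to-medium effect already requires hundreds of participants in total:
print(n_per_group(0.3))  # 175 per group, i.e. 350 total
print(n_per_group(0.5))  # 63 per group
```

Even under these fairly generous assumptions, detecting a small effect of retro funding would need roughly the recruitment scale the comment fears.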
So yeah, I’d love some sort of quick and dirty metric that is still informative and not biased to an unknowable extent.
What do you think? Do any solutions come to mind?
Oh, in mature markets retro funders will try to estimate at what “age” an investment into impact might break even with some counterfactual financial-market investment of an investor. I have some sample calculations here. Retro funders can then announce that they’ll buy-or-not-buy at a point in time that’ll still allow the investors to make a big profit. But randomly, they can announce much later times. The investors who invest in the short term but less so in the longer term can be assumed to be mostly profit-oriented. If we can identify investors on this market, we can sum up the invested amounts weighted by the degree to which they are profit-oriented. But that’s all not viable yet at all…
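A minimal sketch of the break-even idea, with all figures invented (they are not taken from the linked sample calculations): the certificate beats the counterfactual market investment until the market investment, compounding at annual return r, catches up with the expected retro payout.

```python
# After how many whole years does a market investment of `price` at annual
# return r first reach `payout`, the expected retro purchase price?
# Before that year, buying the certificate beats the market counterfactual.
def breakeven_years(price, payout, r):
    years = 0
    market_value = price
    while market_value < payout:
        market_value *= 1 + r
        years += 1
    return years

# Certificate bought for 100, expected retro purchase at 200, vs. a 7% market
# return: the market catches up in year 11 (1.07**10 ≈ 1.97, 1.07**11 ≈ 2.10).
print(breakeven_years(100.0, 200.0, 0.07))  # 11
```

Under this toy model, a retro funder who announces a purchase before year 11 leaves profit on the table for the investor, which is the mechanism the comment describes for sorting out the mostly profit-oriented investors.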
Well, that’s all my thoughts. xD I’d be very curious what you think!
Oh yes, defs! :-)
Agreed. Preregistration also solves a lot of problems with our version of p-hacking (i-hacking ^.^).