This post is inspired by question 2 (source).
TLDR:
- The impact of retroactive funding is its impact premium over that of proactive funding.
- Funding decentralization covers a range of fields more efficiently than its centralization.
- Impact competition among funders and calibration training can mitigate biases.
- Retroactive funding participants' enjoyment of others' impact adds to the total and can be optimized for.
The counterfactual impact of retroactive funding is the additionality that it provides to the proactive financing landscape. Retroactive funding motivates actors who can afford to take risk to do so. The impact is the sum, over projects, of the difference between the retroactively funded project's impact (irp) and the alternative investment impact of the risk taker (irta), minus the impact of the funders' alternative investments (irfa): impact = Σ (irp − irta − irfa).
The impact is positive when the projects do more good than the total that their organizers and funders forgo.
In other words, retroactive funding is beneficial when organizers and buyers forgo relatively unimpactful alternatives and vice versa. For example, if someone works on a painting for the local museum instead of taking a gender studies course and the potential painting buyer gives up a charity donation, the impact is negative. If the risk-taker mitigates gender-based violence instead of going skiing and the potential funder forgoes buying a TV, the impact should be positive. Thus, retroactive funders should be interested in maximizing their impact.
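The sum above can be sketched in a few lines. This is a minimal illustration, not a valuation method: the function name is hypothetical and the "impact point" figures for the painting and gender-based-violence examples are made up purely to show the sign logic.

```python
def retro_funding_impact(projects):
    """Sum of (irp - irta - irfa) over projects: each project's impact minus
    the organizer's and the funder's forgone alternatives. Units are
    arbitrary 'impact points'; all figures below are illustrative."""
    return sum(irp - irta - irfa for irp, irta, irfa in projects)

# Painting example: modest project impact (irp = 2), organizer forgoes a
# course (irta = 1), funder forgoes a charity donation (irfa = 3).
painting = (2.0, 1.0, 3.0)      # contributes 2 - 1 - 3 = -2
# GBV-mitigation example: high project impact, low-impact alternatives
# (skiing, buying a TV).
gbv_project = (10.0, 0.5, 1.0)  # contributes 10 - 0.5 - 1 = 8.5

print(retro_funding_impact([painting]))               # → -2.0
print(retro_funding_impact([painting, gbv_project]))  # → 6.5
```

Whether the total is positive thus hinges entirely on how impactful the forgone alternatives were, which is why funders should attend to their own counterfactuals.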
Funders interested in maximizing their impact should be experts in their fields. Then they will be able to express interest in the most impactful set of projects that also have potentially interested organizers. Decentralization of funding can cover a comprehensive set of fields more efficiently than centralization, since developing a network of well-connected experts can take more time than gaining the same participation on a platform beneficial to these experts' organizations. However, unsupervised specialists can be more biased than managed professionals. Thus, retroactive funding should use a bias-mitigating mechanism. Impact competition among funders can fill this role efficiently.
In reality, funders can choose certificates based on their expected value growth rather than their impact. This motivates strategic sharing of insights. For example, a funder can notify a small group of potential organizers about their interest, subsequently purchase a large proportion of certificates, and only later share a thorough impact analysis with a wider audience. Prima facie, this favors forecasting experts. However, people able to bias forecasts can also benefit. Since biased allocation of resources decreases total impact, prediction inaccuracies should be mitigated, for example by incentivizing relevant calibration training.
The possibility of speculation can influence the subjective wellbeing of project organizers and funders. Their wellbeing premiums (wpo and wpf) should be added to the impact sum. Then,

impact = Σ (irp − irta − irfa + wpo + wpf)

The impact is positive when

Σ (irp + wpo + wpf) > Σ (irta + irfa)
The change in the wellbeing of organizers and funders is significant relative to the other factors if a project influences a large number of these stakeholders. For example, participants can be enthusiastic about any counterfactually positive impact, including directly negative impact that serves as a learning opportunity. In this case, a structure that optimizes for retroactive funding participants' wellbeing should be set up.
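The extended sum simply adds the wellbeing premiums per project. A minimal sketch (hypothetical function name, illustrative figures) of the learning-opportunity case, where a directly negative counterfactual impact is outweighed by participant wellbeing:

```python
def impact_with_wellbeing(projects):
    """Impact sum with organizer and funder wellbeing premiums (wpo, wpf)
    added per project. All numbers are illustrative."""
    return sum(irp - irta - irfa + wpo + wpf
               for irp, irta, irfa, wpo, wpf in projects)

# A 'learning opportunity': irp - irta - irfa = 1 - 1.5 - 1 = -1.5,
# but wpo + wpf = 2 turns the total positive.
print(impact_with_wellbeing([(1.0, 1.5, 1.0, 1.0, 1.0)]))  # → 0.5
```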
As of 2022-06-22, the certificate of this article is owned by brb243 (100%).
Thank you. I probably should schedule a call.
But you select topics, not people? Maybe people could select which topics work best for them, or form teams considering everyone else's expertise. This should always counterfactually optimize the allocation of resources given a set of topics, using collective intelligence.
Interest in having answers should be considered, though. If this framework just creates interest in questions that relate to impact, that is already a significant contribution. Imagine you had never written the list and I came up with this post: maybe you would not even have noticed, or would have thought "no, we do it better." So interest increases engagement, for better or worse.
They always try to buy at a good value, so they should make sure they are not making a mistake before purchasing anything. But yes, for the total impact that an issuer makes, even the non-purchased certificates' counterfactuals should be used. For instance, if someone buys only 1 of 10 certificates in which they expressed interest, and thus diverts the attention of 9 additional people from projects of different impact, then that sum of differences should be added to the impact of the one purchased certificate if the issuer seeks to evaluate the impact of their suggestions list, which they can choose to sell or not.
Hmm... yes, but maybe some organizers would actually not take the risk, so some projects would go unfunded or be delayed, and many more projects could be run but eventually not purchased (which could reduce some people's 'risk or learning' budgets, or be counterfactually beneficial).
That makes sense. I would not initially have imagined people motivated purely by profit investing in impact, but yes, that should be key.
So, (1)*(2) + (3)*(4), where (1) is the amount invested (e.g. in $), (2) the impact per $, (3) the funders' time, and (4) the funders' counterfactual productivity in impact/time. ((2) can also be a difference with respect to the counterfactual.) For (1) and (2), you can ask the investors and organizers and their networks what they would have done had they not known about this opportunity; (3) can come from funders' time tracking; and (4) from assessing their reasoning about the prioritization of the set of projects that they fund or are interested in (whether they are aware of alternatives and of the complementarity of their investments).
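The (1)*(2) + (3)*(4) decomposition is just a two-term product sum. A sketch with a hypothetical function name and invented figures, mapping each argument to its numbered quantity:

```python
def funder_impact(invested_usd, impact_per_usd, hours, impact_per_hour):
    """Funder impact as (1)*(2) + (3)*(4):
    (1) invested_usd      - money contributed, in $
    (2) impact_per_usd    - (counterfactual) impact rate of that money
    (3) hours             - funder time spent
    (4) impact_per_hour   - funders' counterfactual productivity
    """
    return invested_usd * impact_per_usd + hours * impact_per_hour

# Illustrative only: $5,000 at 0.002 impact/$ plus 20 h at 0.1 impact/h.
print(funder_impact(5000, 0.002, 20, 0.1))  # ≈ 12 impact points
```

The survey and time-tracking data described above would supply the four inputs; the arithmetic itself is trivial.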
1. Surveys make sense, but why anonymous? You could perhaps get greater sincerity in a 'safe'-group conversation. 2. Yes, that can be convincing. You could maybe observe the EAG/EAGx data on 'path to impact' before and after posting, and the time a user spent reviewing your post or whether they submitted any certificates. 3. Almost like a lab experiment ...
Well, stating assumptions and continuing to improve methods can be the way... Hm, not really; maybe around 126 participants for an RCT if you observe a greater than 50% difference (alter only E, the minimal difference in the impact metric that the research would detect).
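The 126 figure is consistent with a standard two-arm power calculation if "greater than 50% difference" is read as an effect of E = 0.5 standard deviations at 80% power and a two-sided α of 0.05 — that reading is an assumption on my part, not something stated in the thread. A stdlib-only sketch:

```python
from math import ceil
from statistics import NormalDist

def rct_sample_size(effect_sd, alpha=0.05, power=0.80):
    """Total N for a two-arm RCT powered to detect a standardized
    difference in means of `effect_sd` (in standard-deviation units).
    Uses the usual normal-approximation formula:
    n per arm = 2 * (z_{1-alpha/2} + z_{power})^2 / effect_sd^2."""
    z = NormalDist().inv_cdf
    n_per_arm = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_sd ** 2
    return 2 * ceil(n_per_arm)

print(rct_sample_size(0.5))  # → 126 (63 per arm)
```

Shrinking E is what blows the sample size up: halving E to 0.25 roughly quadruples N.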
Well, then use the funders' impact perception score, which people competitively critique. That uses a network of human brains to assign appropriate weights to the different factors.
Ask funders: "Estimate the impact of our framework compared to its non-existence. Explain your reasoning." Then summarize the quotes and see what people critique.
But why would you weight by the extent of profit-orientation? Does the real or apparent justification of the investment matter? For example, some for-profit fund managers interested in impact need to justify their actions to their clients in terms of profit, so they should not be perceived negatively, or differently from those actually interested in profit, in order to support positive dynamics.