
This post is inspired by question 2 (source).

TLDR:

  • The impact of retroactive funding is its impact premium over that of proactive funding.
  • Decentralized funding covers a range of fields more efficiently than centralized funding.
  • Impact competition among funders and calibration training can mitigate biases.
  • The wellbeing that retroactive funding participants derive from others’ impact adds to the total.

 

The counterfactual impact of retroactive funding is the additionality it provides to the proactive financing landscape. Retroactive funding motivates actors who can afford to take risks to do so. The impact is the sum of the differences between the retroactively funded projects’ impact ($i_{rp}$) and the risk takers’ alternative investment impact ($i_{rta}$), minus the impact of the funders’ alternative investments ($i_{rfa}$).
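In symbols, a minimal formalization of this definition (summing over the retroactively funded projects):

$$I = \sum \left( i_{rp} - i_{rta} - i_{rfa} \right)$$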

The impact is positive when the projects do more good than the total that their organizers and funders forgo, i.e., when $\sum i_{rp} > \sum \left( i_{rta} + i_{rfa} \right)$.

In other words, retroactive funding is beneficial when organizers and buyers forgo relatively unimpactful alternatives, and harmful when they forgo impactful ones. For example, if someone works on a painting for the local museum instead of taking a gender studies course, and the potential painting buyer gives up a charity donation, the impact is negative. If the risk taker mitigates gender-based violence instead of going skiing, and the potential funder forgoes buying a TV, the impact should be positive. Thus, retroactive funders should be interested in maximizing their impact.

Funders interested in maximizing their impact should be experts in their fields. Then, they will be able to express interest in the most impactful set of projects that also have potentially interested organizers. Decentralized funding can cover a comprehensive set of fields more efficiently than centralized funding, since developing a network of well-connected experts can take more time than gaining the same participation on a platform beneficial to these experts’ organizations. However, unsupervised specialists can be more biased than managed professionals. Thus, retroactive funding should use a bias-mitigating mechanism. Impact competition among funders can serve this purpose efficiently.

In reality, funders can choose certificates based on their expected value growth rather than their impact. This motivates strategic sharing of insights. For example, a funder can notify a small group of potential organizers about their interest, subsequently purchase a large proportion of certificates, and only later share a thorough impact analysis with a wider audience. Prima facie, this favors forecasting experts. However, people who can bias forecasts can also benefit. Since biased allocation of resources decreases total impact, prediction inaccuracies should be mitigated; for example, relevant calibration training should be incentivized.

The possibility of speculation can influence the subjective wellbeing of project organizers and funders. Their wellbeing premiums ($wp_o$ for organizers and $wp_f$ for funders) should be added to the impact sum. Then,

$$I = \sum \left( i_{rp} + wp_o + wp_f - i_{rta} - i_{rfa} \right)$$

The impact is positive when

$$\sum \left( i_{rp} + wp_o + wp_f \right) > \sum \left( i_{rta} + i_{rfa} \right)$$

The change in the wellbeing of organizers and funders is significant relative to the other factors if a project influences a large number of these stakeholders. For example, participants can be enthusiastic about any counterfactually positive impact, including a directly negative impact that serves as a learning opportunity. In that case, a structure that optimizes for retroactive funding participants’ wellbeing should be set up.

As of 2022-06-22, the certificate of this article is owned by brb243 (100%).

Comments

Oh, thank you for engaging with our question! We were delighted to see and read your post! (Have you joined our Discord already, or would you be interested in having a call with us? Your input would be great to have, for example when we’re brainstorming something!)

When I’m considering how I would apply your methodology in practice, I mostly run into the hurdle that it’s hard to assess counterfactuals. In our competition we’ve come up with the trick that we suggest particular topics that we think it would be very impactful for someone to work on, a topic so specific that we can be fairly sure that the person wouldn’t otherwise have picked it.

That allows us to assess, for example, that you would most likely have investigated something else in the time that you spent writing this post. But (also per your model) that could still result in a negative impact if what you would’ve otherwise investigated would’ve been more important to investigate. But that depends on lots of open questions from priorities research and on your personal fit for the investigations, so it’s something that is hard for us to take into account.

There is also the problem that issuers may be biased about their actual vs. counterfactual impact because they don’t want to believe that they’ve made a mistake and they may be afraid that their certificate price will be lower.

One simplification that may be appropriate in some cases is to assume that the same n projects are (1) funded prospectively or (2) funded retroactively. That way, we can ignore the counterfactual of the funders’ spending. Since retro funding will hopefully be used for the sorts of projects where it is appropriate, it’s probably by and large a valid approximation that the same projects get funded, just differently (and hopefully faster and smarter).

But a factor that makes it more complicated again is, I think, that it’ll be important to make explicit the number of investors. In the prospective case, the investors are just the founders who invest time and maybe a bit of money. But in the retrospective case, you get profit-seeking investors who would otherwise invest into pure for-profit ventures. The degree to which we’re able to attract these is an important part of the success of impact markets.

So ideally, the procedure should be one where we can measure something that is a good proxy for (1) additional risk capital that flows into the space, (2) effort that would not otherwise have been expended on (at all) impactful things, and (3) time saved for retro funders.

Our current metric can capture (2) to a very modest extent, but our market doesn’t yet support (1), and we’re the only EA retro funders at the moment, and we wouldn’t otherwise have done prospective funding.

Some random ideas that come to mind (and sorry for rambling! ^.^'):

  1. Anonymous surveys asking people what they would’ve done without impact markets – where they would’ve invested, what they would’ve investigated, etc. (Sadly, we can’t ask them what they actually did, or it might deanonymize them. But we can probably still average over the whole group.)
  2. Recruit a group of people, ask them what they’re planning to do, then offer them the promise of retro funding, and then observe what they actually end up doing.
  3. Tell people that they’ll randomly get either the small prospective or the large, conditional retrospective funding if they preregister their projects. See in which group there is more follow-through.

These all have weaknesses – the first relies on people’s memory and honesty, the second is costly and we can’t really prevent people from self-selecting into the group if they prefer retro funding, and the last one is similar. I’m also worried that small methodological mistakes can ruin the analysis for us. And a power analysis will probably show that we’d need to recruit hundreds of participants to be able to learn anything from the results. 

So yeah, I’d love some sort of quick and dirty metric that is still informative and not biased to an unknowable extent.

What do you think? Do any solutions come to mind?

Oh, in mature markets retro funders will try to estimate at what “age” an investment into impact might break even with some counterfactual financial market investment of an investor. I have some sample calculations here. Retro funders can then announce that they’ll buy-or-not-buy at a point in time that’ll allow the investors to still make a big profit. But randomly, they can announce much later times. The investors who invest in the short term but less so in the longer term can be assumed to be mostly profit-oriented. If we can identify investors on this market, we can sum up the invested amounts weighted by the degree to which they are profit-oriented. But none of that is viable yet at all…
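To make the break-even idea concrete, here’s a toy sketch (the 7% market rate, the prices, and the function are placeholders of mine, not the linked sample calculations):

```python
# Toy break-even "age": the year at which a counterfactual market
# investment compounding at `market_rate` overtakes a fixed expected
# certificate payout. All numbers are hypothetical.
def break_even_age(purchase_price: float, expected_payout: float,
                   market_rate: float = 0.07) -> int:
    years = 0
    market_value = purchase_price
    while market_value < expected_payout:
        market_value *= 1 + market_rate
        years += 1
    return years

# A $1,000 certificate expected to resell for $2,000 stops beating a
# 7% market alternative after roughly 11 years.
print(break_even_age(1_000, 2_000))  # -> 11
```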

Well, that’s all my thoughts. xD I’d be very curious what you think!

Thank you. I probably should schedule a call.

> hard to assess counterfactuals ... [suggest specific topics that the] ... person wouldn’t otherwise [pick]

But you select topics, not people? Maybe people could select which topics work best for them, or form teams considering all others’ expertise. This should always counterfactually optimize the allocation of resources given a set of topics, using collective intelligence.

> depends on lots of open questions from priorities research and on your personal fit for the investigations

Interest in having answers should be considered, though. If this framework just creates interest in questions that relate to impact, that is a significant contribution. Imagine you had never written the list and I came up with this post: maybe you would not even have noticed, or would have thought “no, we do it better.” So, interest increases engagement, for better or worse.

> issuers may be biased about their actual vs. counterfactual impact because they don’t want to believe that they’ve made a mistake and they may be afraid that their certificate price will be lower.

They always try to buy for a good value, so they should make sure they are not making a mistake before purchasing anything. But yes, for the total impact that an issuer makes, even the non-purchased certificates’ counterfactuals should be used. For instance, if someone buys only 1/10 of the certificates in which they expressed interest, and thus diverts the attention of 9 additional people from projects of different impact, then the sum of those differences should be added to that of the 1/10, if the issuer seeks to evaluate the impact of their suggestions list, which they can choose to sell or not.

> it’s probably by and large a valid approximation that the same projects get funded, just differently (and hopefully faster and smarter)

Hmm... yes, but maybe some organizers would not actually take the risk, so some projects would go unfunded or be delayed, and many more projects could be run but eventually not purchased (which could reduce some persons’ ‘risk or learning’ budgets or be counterfactually beneficial).

> number of investors ... in the retrospective case, you get profit-seeking investors who would otherwise invest into pure for-profit ventures. The degree to which we’re able to attract these is an important part of the success of impact markets.

That makes sense. I would not initially have imagined people motivated purely by profit investing in impact, but yes, that should be key.

> good proxy for (1) additional risk capital that flows into the space, (2) effort that would not otherwise have been expended on (at all) impactful things, and (3) time saved for retro funders.

So, (1)·(2) + (3)·(4), where (4) is the funders’ counterfactual productivity: (1) is, e.g., in $, (2) in impact/$, (3) in time, and (4) in impact/time. ((2) can also be a difference with respect to the counterfactual.) You can estimate (1) and (2) by asking the investors, organizers, and their networks what they would have done had they not known about this opportunity, (3) by funders’ time tracking, and (4) by assessing their reasoning about the prioritization of the set of projects that they fund or are interested in (whether they are aware of alternatives and of the complementarity of their investments).
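As a toy calculation of that formula (all numbers below are invented placeholders):

```python
# Hypothetical proxy per the formula above:
# impact ≈ (1) capital × (2) counterfactual efficiency
#          + (3) funder time saved × (4) funder productivity
additional_risk_capital = 500_000   # (1): $ that would not otherwise fund impact
counterfactual_efficiency = 0.002   # (2): impact units per $, net of alternatives
funder_time_saved = 200             # (3): hours retro funders did not spend vetting
funder_productivity = 1.5           # (4): impact units per funder hour

impact = (additional_risk_capital * counterfactual_efficiency
          + funder_time_saved * funder_productivity)
print(impact)  # -> 1300.0 impact units
```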

> random ideas

1. Surveys make sense, but why anonymous? You could perhaps get greater sincerity in a (‘safe’ group) conversation. 2. Yes, that can be convincing. You could maybe observe the EAG/EAGx data on ‘path to impact’ before and after posting, and the time a user spent reviewing your post/whether they submitted any certificates. 3. Almost like a lab experiment...

> I’m also worried that small methodological mistakes can ruin the analysis for us. And a power analysis will probably show that we’d need to recruit hundreds of participants to be able to learn anything from the results.

Well, stating assumptions and continuing to improve methods can be the way... Hm, not really; maybe around 126 participants for an RCT if you expect a greater than 50% difference (altering only E, the minimal difference in the impact metric that the research would detect).
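For what it’s worth, here is a sketch of where a figure like 126 can come from, assuming a two-arm trial powered at 80% to detect a standardized effect of 0.5 (these power and effect-size assumptions are mine, not spelled out above):

```python
# Sample size for a two-sample t-test at alpha = 0.05, power = 0.8,
# effect size d = 0.5: ~63 per arm, i.e. ~126 participants in total
# (the shortcut n ≈ 16 / d² per arm gives the same ballpark).
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(n_per_arm)             # -> ~63.8 per arm
print(2 * round(n_per_arm))  # -> ~128 in total
```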

> quick and dirty metric that is still informative and not biased to an unknowable extent.

Well, then the funders' impact perception score that people competitively critique. That uses a network of human brains to assign appropriate weights to different factors.

> What do you think? Do any solutions come to mind?

Ask funders: Estimate the impact of our framework compared to its non-existence. Explain your reasoning. Then, summarize quotes and see what people critique.

> sum up the invested amounts weighted by the degree to which they are profit-oriented

But why would you weight by the extent of profit orientation? Does the rationale or apparent justification of the investment matter? For example, some for-profit fund managers interested in impact need to justify their actions to their clients by profit, so they should not be perceived negatively, or differently from those actually interested in profit, in order to support positive dynamics.

> Maybe people could select which topics work best for them, or form teams considering all others’ expertise.

 

Oh yes, defs! :-)

> But yes, for the total impact that an issuer makes, even the non-purchased certificates’ counterfactuals should be used.

Agreed. Preregistration also solves a lot of problems with our version of p-hacking (i-hacking ^.^).

> Hmm... yes, but maybe some organizers would not actually take the risk, so some projects would go unfunded or be delayed, and many more projects could be run but eventually not purchased (which could reduce some persons’ ‘risk or learning’ budgets or be counterfactually beneficial).

I think we’d need more assumptions for this to happen. When it comes to business ventures, some are better funded from investments and some from loans. But if we imagine a world without investments, with only loans, and then offer investments, it’s not straightforward to me that that would harm the businesses that are better suited for loans. Similarly, I’d think (at first approximation at least) that the projects that are not suited for retro funding will still apply for and receive prospective funding at almost the same rate that it is available today. The bottleneck is currently more the investment of funding than the availability of funding.

> So, (1)·(2) + (3)·(4)

Right, those are multiplicative too… That makes them a bit non-robust (volatile?). Would it also be valid to conceive of each of those as a multiplier on the current counterfactual impact, e.g., a 10x increase in seed funding, a 10x improvement in the allocation of seed funding, 10x through economies of scale, etc.? Here’s a sample Guesstimate. But it feels to me like we’re much too likely to err in the upward direction this way, even if it’s just an estimate of the success (non–black swan) case. Sort of like how the multi-stage fallacy gives too-low estimates if you multiply many probabilities, multiplying all these multipliers probably also ignores lots of bottlenecks.

But that said, almost all the factors are multiplicative here except for two components of the allocation – better allocation thanks to knowledge of more languages and cultures, and thanks to being embedded in more communities. (I suppose a person is in about as many communities regardless of which language/s they speak.) The language component may seem unusually low, which is because so much of the world speaks English and because the US is such a big part of the world economy.

I’m assuming that more entrepreneurial people – people who are not sufficiently altruistic to already be motivated to start EA charities, but who will do a good job if they can get at least somewhat rich from it – will join the fray.

Finally, I’m assuming that retro funders reinvest all their savings into the retro funding and priorities research, and that priorities research still has < 10x room for improvement, which seems modest to me (given evidential cooperation in large worlds for example).
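To illustrate why I worry about multiplying the multipliers, here’s a toy Monte Carlo sketch (the lognormal spreads and the 20x bottleneck cap are invented for illustration): with wide uncertainty, the mean of a product of “10x” factors runs far above its median, and a bottleneck on any one factor pulls it back down.

```python
# Toy Monte Carlo: mean product of several uncertain "10x" multipliers,
# with and without a bottleneck capping each factor. Parameters invented.
import math
import random

random.seed(0)

def mean_total_multiplier(n_factors=3, cap=None, trials=100_000):
    total = 0.0
    for _ in range(trials):
        product = 1.0
        for _ in range(n_factors):
            factor = random.lognormvariate(math.log(10), 1.0)  # median 10x
            if cap is not None:
                factor = min(factor, cap)  # bottleneck on this factor
            product *= factor
        total += product
    return total / trials

print(mean_total_multiplier())        # uncapped: mean well above the 1000x median
print(mean_total_multiplier(cap=20))  # capped at 20x: pulled back toward it
```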

I’d be interested in what you think of this version – any other factors that need to be considered, or anything that should be additive instead of multiplicative? But we can discuss all that in our call if you like. No need to reply here.

> 1. Surveys make sense, but why anonymous? You could perhaps get greater sincerity in a (‘safe’ group) conversation.

Perhaps, but I’m worried that they may be incentivized to fib about it, because it makes their impact seem more valuable and because it makes it more likely that we’ll continue the program (from which they want to benefit again), since we might discontinue it if we don’t think it’s sufficiently impactful.

> Hm, not really; maybe around 126 participants for an RCT if you expect a greater than 50% difference

Thanks! Still a lot. I’ve updated away from this being an important factor to measure… I’d rather find a way to compare the success rates of investors vs. current prospective funders. If none of them are consistently better, they may lose interest in the system.

> Ask funders: Estimate the impact of our framework compared to its non-existence. Explain your reasoning. Then, summarize quotes and see what people critique.

Yeah, that sounds good in both cases. It’s probably not easy to talk to them but it would be valuable.

> But why would you weight by the extent of profit orientation?

If we’re just reallocating EA money, we’re not adding more. A big source of the impact stems from attracting for-profit money.

Looking forward to our call! Feel free to just respond verbally then. No need to type it all out. :-D 
