
Summary

There is no qualitative distinction between investors and retroactive funders on an impact market. Rather, they will de facto fall along a spectrum of how altruistic they are. That is because investors will (1) expect investments into well-defined prize contests to be less risky than fully speculative investments, and (2) expect more time to pass before they can exit fully speculative investments, so that a counterfactual riskless benchmark investment represents a higher threshold for them to consider impact markets at all.

Recap: Impact Markets

For a more comprehensive explanation of impact markets, see Toward Impact Markets.

In short, an altruistic retroactive funder announces that they will pay for impact (or “outcomes”) they approve of. In this way it resembles a prize competition. But (1) they’ll pay in proportion to how much they value the impact, not only for the top n submissions; (2) the impact remains resellable by default; and (3) seed investors offer to pay the people who are vying for the prizes (or provide them with anything else they need) and receive in return rights to the impact and thus to the prize money.

It is analogous to the startup ecosystem: Big companies like Google want to acquire small companies with great staff or a great product. Founders try to start these small companies but often can’t do so (as quickly) without the seed funding and network of venture capital firms. When the exit happens (if it happens), the founders may no longer own the majority of the company because they’ve sold so much of it to the investors.

The benefits are particularly strong for high-impact charities and hits-based funders:

  1. If a hits-based funder usually funds projects that have a 1 in 10 chance of success and switches to retroactive funding, they save:
    1. the money from 9 in 10 of the grants,
    2. the time from 9 in 10 of the due diligence processes, and
    3. the risk from accidentally funding projects that then generate bad PR.
  2. Investors can thus speculate on making around 10x return on their successful investments, and they can further increase their expected returns:
    1. by specializing in a narrow area (such as AI safety) to make excellent predictions about which project will succeed,
    2. by providing founders with their networks in those areas,
    3. by buying resources at a bulk discount that founders need (such as compute credits), and
    4. by finding founders that none of the other investors or funders are aware of and negotiating deals with them where they receive a large share of their impact certificates.
  3. Charities can attract top talent and align incentives with talent who may not be fully sold on the charity’s mission:
    1. by promising them a share in all impact sold,
    2. by locking that share up in a vesting contract,
    3. by (possibly) sharing rights to the impact with another company that is the current employer of the talent so that they don’t need to quit and can draw on the infrastructure of the other company. (In fact, somewhat value-aligned companies may be interested in becoming investors themselves if they want to retain talent who want to work on prosocial applications of their knowledge.)
  4. Individual researchers can attract funding for their work even without the personal ties to funders, e.g., because they are in a different geographic region and better at their research than at networking.

Profitability of Impact Markets

We think about this in terms of the riskless benchmark $b$ and the ratios $c$ and $p$. The benchmark $b$ is a return – e.g., $b = 1.1$ for a 10% profit – that an investor expects over some time period. An investment is interesting for the investor if it is more profitable than $b$. $c = \text{cost}_{\text{funder}} / \text{cost}_{\text{investor}}$ is the ratio of the costs that funder and investor face respectively. This includes, for the funder, the cost of the grant, the time cost of the due diligence, and the reputational risk if the due diligence misses something, and, for the investor, the cost of the grant minus savings thanks to shared infrastructure, economies of scale, etc. $p = p_{\text{investor}} / p_{\text{funder}}$ (note that numerator and denominator are the other way around) is the ratio of the probabilities that investor and funder respectively assign to the project’s success. The investor may specifically select projects where they have private information (e.g., thanks to their network) that gives them greater confidence in the project’s success than they expect the funder to have.

Hence, investments are interesting if $c \cdot p > b$.
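As a minimal sketch (the function and variable names here are our own, not from any existing tool), the rule can be checked mechanically:

```python
def is_interesting(cost_funder, cost_investor, p_investor, p_funder, benchmark):
    """Return True if a seed investment beats the riskless benchmark.

    cost_funder, cost_investor: total costs (grant, due diligence, ...)
        that funder and investor would respectively incur.
    p_investor, p_funder: success probabilities assigned by each party.
    benchmark: riskless return over the same period, e.g. 1.3 for 30% profit.
    """
    c = cost_funder / cost_investor  # cost ratio
    p = p_investor / p_funder        # probability ratio (investor over funder)
    return c * p > benchmark
```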

The graph shows the benchmark of an investment with 30% riskless profit ($b = 1.3$) compared to the maximum profit from various project configurations. It elucidates that an investor who can help realize a project more cheaply than the funder, or who thinks that it is more likely to succeed, can outperform the funder in a range of scenarios. These are scenarios where one or both parties can reap the gains from trade and save time or money.

The square between 0 and 1 on both axes is largely irrelevant. These are scenarios where the investor would have to pay more than the funder or is less optimistic about the project. Those are obviously uninteresting. But also just outside that square and around the edges, there are areas where the investor may not be interested because their edge (in terms of the $c$ and $p$ ratios) is too small. Then again, a riskless 30% APY is a high benchmark.

A few examples:

If a charity already has a track record of doing something really well 10 out of 10 times in the past, there is very little risk involved when they try it for an 11th time:

Maybe an investor thinks they’re 99.5% likely to succeed and the funder thinks they are at least 99% likely to succeed, and the action costs $1m for either and takes a year.

That’s $c = 1$ and $p = 0.995 / 0.99 \approx 1.005$. It is only interesting for an investor who cannot otherwise invest the money at 0.5% profit per year.

  1. It’ll be worth little to the funder: If they value the impact at 99% probability at $1m, they’ll pay up to $1m/99% ≈ $1.01m for the realized impact, so about a $10k premium.
  2. If an investor offers to carry that tiny amount of risk, they’ll want it to exceed their 10–30% benchmark after a year, or else a standard ETF investment would be more profitable to them. That’s at least a $100k premium.
  3. A bid of a $10k premium (minus the overhead of the whole transaction) from the funder but an ask of a $100k premium from the investor means that there’ll be no deal.

But consider a case where someone has no track record:

The investor thinks they are 20% likely to succeed. The funder thinks that they’re 10% likely to succeed. The action costs $1m for both and takes a year.

That’s $c = 1$ and $p = 20\% / 10\% = 2$. It’ll be interesting unless someone has a benchmark of more than 100% profit per year.

  1. The funder will pay up to $1m/10% = $10m for the riskless impact.
  2. That’s a 1000% return (or 900% profit) for the investor with 20% probability, so 100% profit in expectation, which beats most benchmark investments. Even if their riskless benchmark is as high as 30%, they’ll accept offers over 650% return. Naturally, these investors have to be fairly risk neutral or make many such investments. (If they are somewhat altruistic, they can consider the difference between the risk neutral and their actual utility in money a donation.)
  3. Funder and investor will meet somewhere at or below 1000%.
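As a quick numeric check of both examples (a sketch with our own variable names; the bid and ask bounds follow the reasoning above):

```python
cost = 1_000_000  # cost of the action, same for funder and investor

# Example 1: strong track record, nearly riskless.
p_funder, p_investor = 0.99, 0.995
funder_max_bid = cost / p_funder          # ~$1.01m, i.e. a ~$10k premium
investor_min_ask = cost * 1.3             # $1.3m at a 30% riskless benchmark
print(funder_max_bid < investor_min_ask)  # True -> no deal

# Example 2: no track record.
p_funder, p_investor = 0.10, 0.20
funder_max_bid = cost / p_funder                      # $10m for riskless impact
expected_return = p_investor * funder_max_bid / cost  # 2.0, i.e. 100% profit
print(expected_return > 1.3)              # True -> beats even a 30% benchmark
```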

It is easy to create an analogous example for the case where funder and investor make the same probability assessments but where the grant size is so small that the investor, who already knows the project, can fund it at half the price compared to the funder who would have to spend a lot of time on due diligence.

Impact Timeline

A typical product that is suitable for impact markets is a scientific paper. Papers, like many other projects, have the property that they often get stuck in the ideation phase, sometimes have to be abandoned (for other reasons than being an interesting negative result) during the research, sometimes don’t make it past the reviewers, and sometimes turn out to have been a bad idea only decades later.

When an investor wants to invest into a paper that has not been written, but which they are highly optimistic about, they may see these futures:

The x axis is the time (in years), the y axis is the Attributed Impact (proportional to dollars), blue lines are possible futures, and the red line is the median future.

There are two big clusters: all the futures in which the paper gets written, published, and read, versus all the futures in which it either never gets finished or gets read by too few people.

One to three years into the process, it becomes clear in which cluster a given future falls, particularly if it falls into the upper cluster. (Otherwise there’s a bit of a halting problem because it might still take off.) Maybe the paper has been published on arXiv and is making rounds among other researchers in the field.

After 10 years, the majority of the impact has become clear and the remaining uncertainty over the value of the Attributed Impact of the paper is low.

After 15 years, we’re asymptotically approaching something that looks like a ceiling on the Attributed Impact of the paper. Experts have hardly updated on its value in years, so their confidence increases that they’ve homed in on its “true” value. (“True” in the intersubjective sense of Attributed Impact, not in any objectivist sense.)

This is a vastly idealized example. In practice it may be that a published paper that used to be held in high regard suddenly turns out to have been wrong, an infohazard, plagiarized, etc. Or it may be that it’s suddenly noticed that a decade-old forgotten-about paper (that had high ambitions at the time but seemed to fall short) contains key answers to an important new problem.

Timing of Retroactive Funding

If an investor is a specialist in some small field and profits from economies of scale in the field (e.g., the compute credits bought in bulk that we mention above), then they may expect to make a 10× profit from each retro funding that they receive. That’s the difference between the size of the retro funding at which the retro funder breaks even (ignoring interest) and the cost to the investor. We assume for simplicity that monetary and time costs (grants and due diligence) are the same. So, $2 \cdot 10i - 2 \cdot 5i = 10i$, where $i$ is the average seed investment. (We’re using the parameters from above where retro funders save 10× from making fewer grants and 10× from saving time spent on due diligence. We also assume that a patient, well-networked, specialized investor has twice the hit rate of the generalist funder.)

If, counterfactually, they would’ve invested this money at 30% APY, the impact market ceases to be interesting for them if they expect the retro funding to take longer than 8–9 years ($\log_{1.3} 10 \approx 8.78$), per $2 \cdot \frac{1}{\text{rate}_{\text{funder}}} - 2 \cdot \frac{1}{\text{rate}_{\text{investor}}} = (1 + \text{apy})^{\text{years}}$.

Here we’re comparing investors with different hit rates to a retro funder who would otherwise have a 10% hit rate, under four counterfactual market scenarios. The impact market is profitable for any number of years less than the break-even point.

If the retro funder wants to save money, they can pay out less, but will need to do so earlier. For simplicity, the following chart is only for the scenario with a counterfactual 30% APY: $2 \cdot \frac{1}{\text{rate}_{\text{funder}}} \cdot (1 - \text{savings}) - 2 \cdot \frac{1}{\text{rate}_{\text{investor}}} = 1.3^{\text{years}}$.
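A small sketch of the break-even computation (function name and parameters are our own; `savings=0` reproduces the 8–9 years above):

```python
import math

def break_even_years(rate_funder, rate_investor, apy, savings=0.0):
    """Years after which a riskless investment at `apy` catches up with
    the expected profit from seed-investing on the impact market.

    rate_funder, rate_investor: hit rates of the funder and the investor.
    savings: fraction by which the retro funder reduces their payout.
    """
    # Profit multiple per success, in units of the average seed investment:
    profit = 2 * (1 / rate_funder) * (1 - savings) - 2 * (1 / rate_investor)
    return math.log(profit) / math.log(1 + apy)

print(break_even_years(0.10, 0.20, 0.30))  # ~8.78 years
```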

A retro funder needs to take this into account when deciding how much certainty they want to buy from the investors. More added certainty comes at a higher price. They can regulate this through the size of their retro funding or through the timing. Depending on the impact in question there are usually certain sweet spots that they can aim for, and do so transparently so that investors know what time horizons to speculate on.

When it comes to our example above, it seems fairly clear after about 2–3 years whether the paper was a success (was written, published, and read by some people). So one sweet spot may be to wait for the moment of publication (as a draft or after peer review) or for when the initial public reception can be gauged. The second is interesting because investors may be well-positioned to help with the promotion.

But there are other options – less profitable options much later.

Dissolving Retroactivity

We can imagine a chain of retro funders from a particular set of futures into the present: Someone makes a binding commitment that if they are successful in making a lot of money – say, their business is successful – they will use the money or a fraction of it to buy back impact that has previously been bought by a certain set of existing retro funders who the person trusts. They can continually add new ones to this set.

This can also be formulated as a prize contest: If I’m successful, I’ll use that budget to buy impact from my favorite retro funders at a reasonable bid price. If 1 in 5 projects still fail between the time when the previous retro funder bought them and the time when the success happens, the new retro funder may buy them at 120% of the price that the previous retro funder paid.
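To make the implied numbers concrete (our own arithmetic, not from the post): with a 1-in-5 failure rate, a 120% buyback leaves the earlier retro funder slightly below break-even in expectation, which fits the picture of funders along a spectrum accepting some altruistic discount:

```python
survival_prob = 0.8       # 1 in 5 certificates still fail before the buyback
buyback_multiple = 1.2    # the new retro funder pays 120% of the earlier price
expected_multiple = survival_prob * buyback_multiple
print(expected_multiple)  # 0.96 -> a ~4% expected donation by the earlier
                          # retro funder (1.25 would be exact break-even)
```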

Under this framing there is no qualitative difference anymore between investors and earlier retro funders. They’re all just different investors with different attitudes toward risk or preferences about how they weigh the profit vs. the social bottom line of their investments. (Some of them may choose to consume their certificates, though, to signal that they’ll never resell them.) There may even be investors who choose to invest into “whatever project person X will do next,” so earlier than the abovementioned seed investors.

A startup may be interested in making such a commitment because they have the choice between doing the research in-house (or at least paying for it immediately) and paying for it later, and only if they are successful. Since startup success is typically Pareto-distributed, they’ll have vastly more money in the futures where they are successful than they have now or in unsuccessful futures. So this deal should be interesting for most startups.

For investors it’s a question of whether they want to expose themselves more to the field or to a particular team. If they’re excited about the team behind the startup and trust that team to do well regardless of what field they go into, they’ll want to invest directly into the startup. But if they’re more agnostic about all the teams in a field but are very excited about the field, they may prefer investing into the research projects to bet on the retro funding.

Example

  1. Cultured meat (or “cell/clean/c meat”) startups may require a lot more research to be done on how to scale their production and make it cost-competitive. But they don’t yet have the money to do all of that research in-house.
  2. They commit to investing a large portion of the money they’ll make from going public into buying impact. Specifically they hash out particular terms with an organization like Founders Pledge that stipulate what impact related to cultured meat research they will buy from which retro funders.
  3. The promise of great potential future riches boosts funding and opens up hiring pools.
  4. Eventually, the now more likely future might happen, and the large budget from the exits serves to buy most of the impact from the retro funders.

Conclusion

We’ve received a grant via the Future Fund Regranting Program to work on this. If you’d like to join our discussions, please join our Discord.

Thanks to my cofounder Dony for reviewing the draft of this post! He gets 1% of the impact of it; I claim the rest.

Comments

Thanks to my cofounder Dony for reviewing the draft of this post! He gets 1% of the impact of it; I claim the rest.

If people in EA who are involved in the creation of impact markets believe that they may profit in the future from their involvement—by selling the impact certificates of their involvement—their judgment will probably be biased. And this is concerning due to the risk of impact markets causing a lot of harm.

It's an instance of the problem you called "distribution mismatch". If some future retro funders will be willing to pay a lot for the impact certificates of this post, for example, then you'll end up making a lot of money. But if everyone in the future will agree that this post was extremely harmful, you won't lose any money. So even if the latter is 90% likely and this post is very net-negative, in expectation you still make a lot of money out of this post. (And if a certificate market already existed, speculators would be willing to pay you right now for your certificates.)

(To be clear, I'm not accusing you of consciously reasoning in this manner.)

I know that you know this, but for the benefit of other readers: Our solutions to at least remove incentives like that (but not to additionally penalize it) are in the Solutions section of the article that Ofer linked.

But I think what you’re saying is that we – the Impact Markets team – may be incentivized to avoid implementing our solutions because they would prevent us from receiving a lot of money if impact markets turn out harmful? It’s hard to argue that I think we’re unlikely to be biased that way, because that’s exactly what someone who is biased in that way would argue. I think we’re rather biased toward caution and so run the risk of stalling too much. We’ve already dedicated about nine months to thinking about risk mitigation (from last June to March).

But here’s another argument that doesn’t hinge on views on our psychology. It doesn’t look like it’s clearly financially better for us to create something harmful.

|          | IMs succeed | IMs fail |
|----------|-------------|----------|
| IMs good | Windfall    | Nothing  |
| IMs bad  | Windfall    | Nothing  |

Especially now that we’ve already developed Attributed Impact and found potential retro funders who will use Attributed Impact, it’s even harder for us to roll these back and yet make IMs succeed. We could try to pivot and search instead for funders who care about the technology for markets for their own sake and disregard the terminal impact on the world, but it would be a departure from our strategy from the past months, and I don’t know whether we’ll find them. Even the people in the crypto space that we’re in touch with are quite aware of EA and care deeply about more fundamental values than market access, liquidity, and efficiency.

Safety could be an impediment to market success, too, in other ways. Maybe there’ll be a risky but well-known project at some point that it would be useful to collaborate with for marketing purposes, or maybe an amoral retro funder will contact us and we’d have to turn them down. So the argument is not clear cut. But I feel like it’s strong enough to make the case that it’s not clear that optimizing for IM success at the expense of safety is the dominant strategy.

All in all, the windfall if IMs succeed and are good is probably also greater than the windfall if IMs succeed and are bad, especially given that the funders who are interested in retro funding are sophisticated and altruistic.

(I suppose you’re not really incentivized to argue against that lest you push us to abandon safety. xD)

Oh, we could also create a windfall clause ourselves!

Our solutions to at least remove incentives like that (but not to additionally penalize it) are in the Solutions section of the article that Ofer linked

Will those solutions work?

Do you have control over who can become a retro funder after the market is launched? To what extent will the retro funders understand or care about the complicated definition of Attributed Impact? And will they be aware of, and know how to account for, complicated considerations like: "if the certificate's description is not very specific and leaves the door open to risky, net-negative, high-impact activities then we should take this fact into account when evaluating the ex ante impact"?

Could we end up with retro funders who use a much simpler criterion, e.g. just whether they "like" the impact? (Which is how altruistic retro funders are described in the OP.)

Have you resolved the unilateralist's curse? To what extent have you consulted with the EA community about creating an impact market?

Will those solutions work?

I can’t look into the future, so the most viable approach seems to me to be the one outlined in Toward Impact Markets. My current take:

  1. We first spent a couple months (or about nine months) thinking about impact markets purely theoretically to assess whether they’re desirable at all. I’ve considered this done as of late March or so.
  2. Then we start working on small-scale experiments, such as buying impact in EA Forum comments and soon hopefully buying impact in EA Forum posts in the context of a prize contest. The EA Forum has its own moderation team, so if someone posts something in the context of the contest that is harmful or likely to be harmful, we have two layers of protection against it: First the moderators can intervene, and then we and the retro funder (who seems really smart) can still catch any posts that got published anyway and decline to buy the ones we consider potentially net harmful.
  3. We’ll learn from these experiments. If it turns out that the system incentivizes harmful actions, we can try to mitigate them or discontinue the whole project.
  4. If we opt to mitigate them, we’ll run further iterations of the contest (and maybe other contests that seem similarly safe and contained to us) to test our solutions.
  5. Finally we’ll be in a better position to answer your question about whether the solutions work than we are now when we haven’t tested them at all.

That seems safer to me than even abandoning the project altogether because other groups are also working on retro funding and impact certificates, and I haven’t heard them talk much about safety in that context. Not to mention the opportunity costs. (If we come to the conclusion that impact markets are bad, however, we can also pivot to pure impact markets safety research, as it were.)

Are you really worried about people posting harmful Forum posts due to the market’s influence or only about OpenAI-level projects years down the line? But we’ll definitely ramp this all up very gradually to catch catastrophes when they’re still on the level of someone writing a post about panspermia without considering the s-risks it might cause. (Hurts me to think of that but the actual impact on the world is probably very limited.)

If we want to make progress on impact markets, I think the time has come for carefully controlled, small-scale actual experiments.

In AI safety a strong case can be made for more up-front theoretical thought because catastrophe is overdetermined – can occur through many disjunctive channels – and is in many cases final. But with impact markets we operate well within the space where we can assume that all the other actors are human, with human intuitions and restraints and human law above them. I think this relatively increases the opportunity costs of purely theoretical thought compared to the safety benefits. I don’t know to what extent, though. Exactly how careful we should be, i.e. exactly where we should strike the balance, is something we constantly think about.

Someone who surveyed many of the existing impact markets projects even noted that we have an “extreme focus on risks.”

Also note that a lot of the bad incentives we worry about already exist. The question is not so much whether impact markets might, in certain edge cases, also bear these risks but whether they’re going to exacerbate them or create them in fields where they didn’t previously exist. E.g., lots of bad stuff can be monetized, or at least it’s plausible enough that it can be monetized to attract investors. When that’s possible, it’s about as bad as IMs can ever get. (Very bad, admittedly.) And charities have monetary incentives to lie about negative study results on their intervention. That’s also not a reason to be generally skeptical of the nonprofit format, because there are also those who actually shut down or pivot when they find out that their intervention doesn’t work.

In fact, I did just that (first pivot, later shut down) when, around 2010–2015, my own charity turned out to be nonoptimal. (That’s not even a euphemism; we were probably on par with Village Reach in cost-effectiveness but not scale. Village Reach was temporarily among the GW top charities.)

 Do you have control over who can become a retro funder after the market is launched?

For the time being, “the market” is probably going to be a web2 platform that doesn’t exist yet with about the power of a spreadsheet on steroids. We can take it offline any time. The idea, however, is out there (has been for a long time, minus our thoughts on risk mitigation), so others can replicate it without our permission. Someone can simply decide to make tradeable SIBs, and we have a much more risky market on our hands than the one we’re aiming for because SIBs (always, I think) prescribe direct normativity.

So quite regardless of our marketplace, anyone can already become a retro funder. We want to be very careful with whom we recruit as retro funders, though. So, for example, we’ll focus on winning over major EA funders for the job.

To what extent will the retro funders understand or care about the complicated definition of Attributed Impact?

Here’s a very highly gisted version that we’ve written for this purpose. Honestly, I would be surprised if our market (or prize contest) caused people to write more harmful forum posts than would’ve been written anyway. Then again receiving retro funding might signal-boost a post, so it’s something that we need to be careful with anyway. I remember very few forum posts, though, that I considered likely net harmful, and most were probably inconsequential. (Some might’ve also been harmful for reasons that would’ve been very hard to predict or that, even if predicted, were so unlikely that it still seemed valuable in expectation to publish the post.)

And will they be aware of, and know how to account for, complicated considerations like: "if the certificate's description is not very specific and leaves the door open to risky, net-negative, high-impact activities then we should take this fact into account when evaluating the ex ante impact"?

(In case someone is confused: I think the idea here is similar to some forms of p-hacking. You write a vague description of what you’ve done; then you try all sorts of things that sort of fit into that vague description, including super harmful ones; finally, when one of them succeeds and is not as harmful as the others, you destroy all evidence of having had anything to do with the harmful attempts and only sell the successful one at a profit to the retro funder. E.g., you write a bunch of blog posts on stuff that is infohazardous if you’re wrong; publish them under different pseudonyms; wait for comments to come in that tell you whether you’re wrong; and then submit to the contest only the post that turned out well, with a description such as “I’ll write a thing.”)

I’m planning to read all submissions, so I hope I’ll catch potentially scammy ones. We haven’t assembled an evaluation committee yet. (Do you want to be on it? It’s one of my to-dos to recruit this group.) I think I’ll often err on the side of being lenient with issuers though, because vagueness is probably in most cases not going to be due to them trying to trick the retro funders but due to lacking experience with the completely new format.

But I think that for the foreseeable future and during our experiments the projects will be so small on average that very few will require monetary investments. The sorts of “investments” they’ll get will rather be in the form of proofreading, advice, designing graphics, etc. So to a large part the projects would’ve been possible anyway; the contest will just catalyze them. Most of the value will be in the value of information and only secondarily in that catalyzation.

We’re also interested in buying shares in exposés of certificates where Forum authors cheated in some fashion.

Could we end up with retro funders who use a much simpler criterion, e.g. just whether they "like" the impact? (Which is how altruistic retro funders are described in the OP.)

I intended “like” as an intuitive shorthand for “has sufficiently high Attributed Impact, falls into a focus area of the retro funder, is within the budget of the retro funder, is legally accessible to the retro funder, etc.” So something stricter rather than looser than Attributed Impact alone.

If a simpler criterion also does the job, that’ll be great. You’ll know about the feedback effects that I’ve baked into the Attributed Impact definition. It’s crucial for the early retro funders to uphold Attributed Impact for those feedback effects to get going. Once that cycle has started, we’ll be in a much safer position. And eventually we can also implement the “pot” to further cement the commitment of the market to Attributed Impact.

Have you resolved the unilateralist's curse?

What unilateralist’s curse do you mean?

To what extent have you consulted with the EA community about creating an impact market?

I’ve been a very active member of the community for 8 years and was a grantmaker at my own charity for 3–4 years before that. I’ve worked for EA orgs, been a board member of an EA org, started and been part of local communities, and advised managers of EA orgs, so I consider myself part of the community, and most of my friends are EAs. If anything, I’m oversharing when it comes to my EA interests. So I and my cofounders have discussed impact markets with maybe hundreds of other community members, at length, for what must be hundreds of hours at this point.

Or if you’re looking for trusted names (for good reason): I’ve coauthored IM content with Owen Cotton-Barratt, have received very helpful feedback from Nicole Ross of CEA, which she amalgamated from the feedback of many CEA staff (though her own contributions were brilliant of course), discussed it with Nonlinear, with our FTX regrantor and friends, with Ben Todd, Greg Colbourn, Justin Shovelain, and many more. In many cases I explicitly asked them to red-team the concept. This is not to say that they all endorsed all of my ideas! I usually asked for red-teaming and got red-teaming, and that’s it.

Dony has been very active in talking to countless people about IMs – at Bay Area EA meetups, at EA Global, in video calls, etc. He’s probably talked to as many EAs as I have in a shorter time span! Matt, too, has done his share of networking even though he’s only working part-time on it while bootstrapping another startup!

And, as you know, we’ve always tried to lay out all of our thinking in EA Forum posts to attract further valuable feedback such as yours!
