
Inducement prizes are cash prizes awarded to people who accomplish a particular feat specified ahead of time. They offer advantages over traditional hiring practices, since the prize is allocated according to a post-hoc evaluation of performance, rather than upfront to a specific worker. Here, I propose a specific model for a small inducement prize platform intended to facilitate the creation of high-quality effective altruism research.

The problem

Inducement prizes have a long history, and economics literature going back over a century provides empirical evidence that, in certain circumstances, they can spur innovation more efficiently than hiring researchers or engineers directly (for some examples, see this paper and this one).

Many existing organizations, such as the X Prize Foundation, have enjoyed some success by facilitating the creation of large inducement prizes. However, most inducement prizes are not facilitated on content-specific platforms. Rather, companies, non-profits, governments, and wealthy individuals interested in creating inducement prizes generally announce them through their own media, as with the Brain Preservation Technology Prize and the Methuselah Mouse Prize.

Facilitating your own inducement prize contest makes sense for large bounties, but I'm currently unaware of any platform dedicated to the procurement of smaller prizes, such as those under $10,000.

The closest platform that I'm currently aware of is the private Facebook group Bountied Rationality. However, I see a number of problems with Bountied Rationality, which should give some indication of how I think an alternative platform could improve on it:

  • The group's visibility is set to private, which makes it harder to share research produced inside the group.
  • The integrity of the group rests on personal trust within the rationalist/effective altruism community, rather than on a market-driven model of reputation.
  • There is no means of arbitrating disputes, and therefore there is no guarantee that people will be paid fairly for accomplishing the task as specified.

I still think that the group Bountied Rationality creates a lot of value. But the issues I've outlined above plausibly limit its ability to grow larger, and hamper its status as a reliable engine for producing outsourced insights and research.

The proposal

My proposed alternative is a public, market-driven bounty system aimed at procuring small inducement prizes, targeted at the effective altruism community. Below, I'll list a specific set of features which I think could help the platform to thrive.

Public

The first main difference between my model and the group Bountied Rationality is that content on the platform would be public. This feature makes the platform less suitable for personal requests, but more suitable for public research, such as inducing mathematics results, well-crafted bibliographies, well-sourced research summaries for a given topic, and in-depth investigations into potential interventions.

The Effective Altruism Forum, LessWrong, and Stack Exchange already allow for something similar, in that users can ask public questions whose answers the community then curates via upvotes and downvotes. This model has been helpful to many, but in my experience, people are often hesitant to provide long-form and well-sourced answers, probably because there are no strong incentives offered to those who give good ones.

Escrow and arbitration

The second main feature I propose is a requirement that people put their money in escrow, and that they name someone as the arbitrator for the bounty. Escrow ensures that bounty offerors cannot simply keep their money long after someone has satisfied the conditions of the bounty (a problem I have personally been acquainted with).

Requiring people to name an arbitrator serves a similar purpose as escrow. By naming a trusted third party to settle disputes, bounty offerors would be encouraged to outline very specific conditions under which they want their bounty to be distributed. This incentive, together with the fact that offerors cannot simply refuse to pay unfairly, assures bounty hunters that they will get paid if they perform the task successfully.
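To make this concrete, here is a minimal sketch of how a bounty's lifecycle might be modeled on such a platform. This is purely illustrative (a hypothetical Python data model; all names, states, and fields are my own assumptions, not a specification):

```python
from dataclasses import dataclass
from enum import Enum, auto

class BountyState(Enum):
    DRAFT = auto()      # conditions written, no money committed yet
    FUNDED = auto()     # offeror's money locked in escrow; bounty is live
    SUBMITTED = auto()  # a hunter claims to have met the conditions
    DISPUTED = auto()   # offeror rejected the claim; arbitrator must rule
    PAID = auto()       # escrow released to the hunter
    REFUNDED = auto()   # arbitrator sided with the offeror; escrow returned

@dataclass
class Bounty:
    offeror: str
    arbitrator: str   # named up front, before any submission exists
    conditions: str   # the precise, arbitrable success criteria
    escrow_cents: int = 0
    state: BountyState = BountyState.DRAFT

    def fund(self, amount_cents: int) -> None:
        # Money is locked before the bounty goes live, so the offeror
        # cannot withhold payment after the work is done.
        assert self.state is BountyState.DRAFT
        self.escrow_cents = amount_cents
        self.state = BountyState.FUNDED

    def submit(self) -> None:
        assert self.state is BountyState.FUNDED
        self.state = BountyState.SUBMITTED

    def accept(self) -> None:
        # The offeror agrees the conditions were met: escrow pays out.
        assert self.state is BountyState.SUBMITTED
        self.state = BountyState.PAID

    def dispute(self) -> None:
        # The offeror disagrees; from here, only the named arbitrator
        # can move the bounty to PAID or REFUNDED.
        assert self.state is BountyState.SUBMITTED
        self.state = BountyState.DISPUTED
```

The important property of this design is that once a bounty is funded, the offeror's only moves are to accept the work or to escalate to the arbitrator; quietly keeping the escrowed money is not a reachable state.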

My own experience on Metaculus made me realize just how important it is for platforms to build solid mechanisms for community trust. Even though people on Metaculus are not trading with real money, disputes can become agonizing, and people can get angry when questions do not resolve the way they thought they would. As a result of these issues, Metaculus moderators and admins have become very careful in how they write questions, to ensure that questions resolve unambiguously whenever possible.

One market-driven way of ensuring community trust is to openly allow users to bid to become arbitrators of particular bounties. In effect, the role of arbitrator could be something like a paid position: arbitrators would provide trust and reliability, which could then flow through the platform, promising a fair environment for bounty offerors and bounty hunters alike.
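Continuing the sketch above, arbitrator bidding might look something like the following; the fee structure and the crude experience-based ranking are assumptions for illustration, not a worked-out reputation system:

```python
from dataclasses import dataclass

@dataclass
class ArbitratorBid:
    arbitrator: str
    fee_cents: int         # paid out of escrow when the bounty settles
    disputes_handled: int  # crude public track record

def pick_arbitrator(bids: list[ArbitratorBid],
                    max_fee_cents: int) -> ArbitratorBid:
    """Choose the most experienced arbitrator the offeror can afford.

    A real platform would presumably use a richer reputation score
    (ratings from both offerors and hunters, overturned-decision rate,
    etc.); this only illustrates that arbitration becomes a priced,
    competitive service rather than a favor based on personal trust.
    """
    affordable = [b for b in bids if b.fee_cents <= max_fee_cents]
    if not affordable:
        raise ValueError("no arbitrator bid within budget")
    return max(affordable, key=lambda b: b.disputes_handled)
```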

Targeted at the effective altruism community

My limited research suggests that some bounty-like services already exist. The most common are bug-bounty platforms, which at the moment far exceed the size of the Facebook group Bountied Rationality.

However, even though some of these platforms exist (and perhaps one even uses the arbitration system I described above), a large potential drawback comes from network effects. If an EA tried to induce complex research using an existing platform, they would be unlikely to attract the people best suited to doing that research. As a result, an EA would be better off trying to induce the research more informally, either by asking people in the community to collaborate with them, or by hiring someone to perform the research directly.

My hope is that creating a platform that facilitates small-scale inducement prize contests would help solve this problem, better allowing EAs in need of research solutions to target people most likely to provide them.

Comments



I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I'm not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions of evidence that conflicts with the conclusions of the work, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to produce such works: as a result, you can pretty much assume that research on the Forum is done in good faith and is complete to the best of the author's ability.

Potential ways around this that come to mind:

  • Maybe linking user profiles on this platform to the EA Forum (kind of like the Alignment Forum and LessWrong sharing accounts) would provide sufficient trust in good intentions?
  • Maybe even without that, there's still such a strong self-selection effect anyway that we can still mostly rely on trust in good intentions?
  • Maybe this only slightly limits the scope of what the platform can be used for, and preserves most of its usefulness?

> Potential ways around this that come to mind:

Good ideas. I have a few more; a rough sketch of how a couple of these could fit together follows the list:

  • Have a feature that allows bounty offerors to charge a fee to people who submit work. This would help compensate the arbitrator who has to review the work, and would discourage people from submitting bad work in the hope of fooling people into awarding them the bounty.
  • Instead of awarding the bounty to whoever gives a summary/investigation, award the bounty to the person who provides the best summary/investigation at the end of some time period. That way, if someone thinks that the current submissions omit important information, or are badly written, they can take the prize for themselves by submitting a better one.
  • Similar to your first suggestion: have a feature that restricts people from submitting answers unless they pass certain basic criteria, e.g. "You aren't eligible unless you are verified to have at least 50 karma on the Effective Altruism Forum or LessWrong." This would ensure that only people from within the community can contribute to certain questions.
  • Use adversarial meta-bounties: at the end of a contest, offer a bounty to anyone who can convince the judge/arbitrator to change their mind about the decision they have made.
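Here is that rough sketch of how the submission fee and the eligibility criteria might combine into a single gate, again in illustrative Python (the karma lookup, the threshold, and the fee amount are all placeholders, not a proposal for exact values):

```python
from dataclasses import dataclass

@dataclass
class SubmissionPolicy:
    fee_cents: int        # charged per submission; offsets arbitrator review
    min_forum_karma: int  # e.g. 50 on the EA Forum or LessWrong

def can_submit(policy: SubmissionPolicy,
               user_karma: int,
               wallet_cents: int) -> bool:
    # Both checks must pass: the fee deters spam and low-effort entries,
    # while the karma floor restricts entry to established community members.
    return (user_karma >= policy.min_forum_karma
            and wallet_cents >= policy.fee_cents)

# Example: a bounty requiring 50 karma and a $5 submission fee.
policy = SubmissionPolicy(fee_cents=500, min_forum_karma=50)
assert can_submit(policy, user_karma=120, wallet_cents=2_000)
assert not can_submit(policy, user_karma=10, wallet_cents=2_000)
```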

Another incentive system/component I have seen is that forums allow users not only to upvote but also to give other incentives to good answers: Stack Overflow has bounties, and Reddit has coins.

I would find this useful. But:

  • What is the likely market size for this platform?
  • How much would it cost to develop? (Including escrow infrastructure?)
  • What would the fees for use be / need to be to keep the platform afloat?

> What is the likely market size for this platform?

I'm not sure, but I just opened a Metaculus question about this, and we should begin getting forecasts within a few days. 

Briefly:

  1. I like the idea
  2. Think it will work
  3. Also like the idea of using Metaculus to forecast this

Somewhat related, and potentially relevant if someone sets this up:

  1. The Nonlinear Fund wrote up why they use RFPs (Requests For Proposals). 
  2. Certificates of Impact.
  3. There is an upcoming project platform for EAs, designed to coordinate projects with volunteers. A forum post should be out soon, but meanwhile you can see a prototype here.

Thank you for this great post. In the past I have looked for such platforms and concepts, but was unaware of the term 'inducement prize' and did not find much. 

Two extensions to the concept you presented could make it even more interesting, especially for the EA community. Firstly, rather than just requests being supplied to such a platform, qualified researchers could first post offers to conduct, e.g., research, in order to gauge interest. Secondly, there is no reason why several parties/individuals couldn't pay the bounty collectively. Essentially, this would be a "reverse Kickstarter" use case, where payment is made after completion rather than at the beginning.

It seems that there are a lot of potential projects in the community with distributed interest and willingness-to-pay: literature reviews, evaluations of possible cause areas, research into personal Covid-19 risks, etc.

I think I'm not super optimistic about this idea, mainly because it seems somewhat common for coordination platforms to be built without people actually coordinating to start using them. But:

  • I've got a lot of uncertainty, and see "this is a great idea" as plausible
  • The cost of making an MVP seems probably low
  • So I think if someone's enthusiastic about making an MVP of this, it may well be worth doing so

(This is related to the general point that if you try something and it fails, you can stop putting resources into it, but if it works you can continue putting resources in and getting value out for a while.)

I've drafted a post (which I'll publish in ~3 weeks) proposing "A central, editable database to help people choose and do research projects". This would have something similar as one of its components. But it's possible that what I propose is overly complicated and it would be better to do one or more components from it in a separate, simplified way (which could then look like your proposal). In any case, anyone interested in a longwinded draft without an MVP can check it out (and leave comments if they want) here :)

I'd also be really interested to see more attempts in this direction. I suspect that there are many smaller research projects and people interested in working on such projects, and this could bring those together and result in interesting insights and learning opportunities.

Just in case you weren’t aware, LessWrong has tags on open and closed bounties that also might provide some interesting data: https://www.lesswrong.com/tag/bounties-closed https://www.lesswrong.com/tag/bounties-active
