Building a successful economy for collaborative cognitive work with high externalities

by jacobjacob · 5 min read · 24th May 2019 · 4 comments


[Epistemic status: Quite confident. Lots of this seems obvious from first principles. Though it’s far from exhaustive. Wary of carrying costs and the planning fallacy, I publish this post rough and incomplete, rather than not at all.]

Global markets are currently only (somewhat) efficient in incentivising problem-solving in areas where the benefits can be internalised, such as by earning a profit from the product one has built. ^[Ignoring various market failures such as long time horizons, large coordination problems, high initial costs, and more.]

Several people in the EA community have suggested that we should be able to use monetary mechanisms to gain similar benefits in areas with large externalities. Suggested mechanisms include prizes, bounties ([1], [2]), impact certificates and grants. (I will not focus on grants in this post, as there’s a ton of content about them on this site already.)

This spreadsheet summarises these efforts, in order to 1) allow people looking to do freelance cognitive work to find good opportunities, and 2) allow people interested in making prizes work to survey the history of approaches and why they failed/succeeded.

The current post is an attempt to analyse what is needed to make prizes work -- that is, to effectively change some people’s behaviour in a way which directly optimises for improving the long-term future (or some other goal we care about).

Examples of such behaviour changes include:

  • Someone spending a year living off of one’s savings, learning how to summarise comment threads, with the expectation that people will pay well for this ability in the following years
  • A competent literature-reviewer gathering 5 friends to teach them the skill, in order to scale their reviewing capacity to earn more prize money
  • A college student building up a strong forecasting track-record and then being paid enough to do forecasting for a few hours each week that they can pursue their own projects instead of having to work full-time over the summer
  • A college student dropping out to work full-time on answering questions on LessWrong, expecting this to provide a stable funding stream for 2+ years
  • A professional with a stable job and family and a hard time making changes to their life-situation, taking 2 hours/week off from work to do skilled cost-effectiveness analyses, while being fairly compensated
  • Some people starting a “Prize VC” or “Prize market maker”, which attempts to find potential prize winners and connect them with prizes (or vice versa), while taking a cut somehow

Etc. etc. (I expect the above to be a small subset of the space of exciting optimisation that emerges when you manage to get the incentives right.)

There are at least four main ways in which incentives affect behaviour:

  1. Conscious motivation: people deliberately change their behaviour to benefit from the incentives
  2. Reinforcement learning: people unconsciously change their behaviour in line with the incentives, due to the positive reinforcement that gives them
  3. Selection effects: people whose behaviour aligns with the incentives will tend to be more successful and influential than people whose behaviour does not, absent any actual changes in a single person’s behaviour
  4. Memetics: people or entire communities share tips, tricks, memes, norms and more to enable others to benefit from the incentives

(I have written more about these here.)

Each have separate implications for how to make prizes successful. I don’t think I have exhausted each sub-mechanism and look forward to collaboratively making more progress in the comments.


1. Conscious motivation

How can one ensure that individuals will consciously choose strategies to optimise for winning the prizes?

Clarity: It must be clear to people what they are optimising for

This property fails at the tails. Sometimes it’s better for the prize giver to be more rational than the prize taker, so that it’s too hard for the latter to Goodhart on the desires of the former; instead, the prize taker simply does the best work they can and treats the prize signal as an objective evaluation of that work.

Stability: People must be able to change their behaviour in expectation

It’s not sufficient that I get a one-time prize for something I did a year ago. I must expect that there will be a stable funding stream in the future, such that I can condition my future plans on it (e.g. dropping out of college, turning down a merely satisfactory job offer when the alternative is prize work, or building a skill in demand by prize-givers) -- while prize-givers, at the same time, condition their actions on my availability (e.g. setting aside resources to manage applications, giving feedback, building supporting infrastructure, and making sure funds are available).

One might split this into:

  1. Precommitments/reliable expectations of future funding
  2. Common knowledge (between both sides of the two-factor market): for a community to build infrastructure and make plans resting upon prizes, the existence and broad rules of the prizes should be common knowledge. This enables would-be producers (e.g. potential college dropouts) and would-be consumers (e.g. EA orgs investing time into turning research questions into an outsourceable format) to move in lockstep to the Nash equilibrium where they successfully trade resources via prizes.
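The two-sided structure above can be sketched as a toy coordination game. The payoff numbers below are purely illustrative (my own made-up values, not from the post): both sides gain if both invest, neither wants to invest alone, so "abstain" is also an equilibrium -- which is why common knowledge matters for reaching the good one.

```python
# A toy coordination game between prize-givers and prize workers.
# Payoffs are (giver, worker); the numbers are purely illustrative.
payoffs = {
    ("invest", "invest"):   (5, 5),   # prizes funded and worked on: both gain
    ("invest", "abstain"):  (-2, 0),  # giver builds infrastructure nobody uses
    ("abstain", "invest"):  (0, -2),  # worker plans around prizes that never come
    ("abstain", "abstain"): (0, 0),   # status quo
}

def best_response(my_role, other_action):
    """Return the action maximising my payoff, given the other side's action."""
    actions = ["invest", "abstain"]
    if my_role == "giver":
        return max(actions, key=lambda a: payoffs[(a, other_action)][0])
    return max(actions, key=lambda a: payoffs[(other_action, a)][1])

# Investing is a best response only if the other side also invests --
# hence both (invest, invest) and (abstain, abstain) are equilibria,
# and common knowledge is what lets both sides move to the better one.
```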

2. Reinforcement learning (unconscious motivation)

How can one ensure that prizes unconsciously affect behaviour?

Quick and smooth payout

Insofar as humans are hyperbolic discounters (whether we want to be or not), avoiding irritating paperwork and long delays should help.
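To see how sharply delay bites under the standard hyperbolic model V = A / (1 + kD), here is a small sketch. The discount parameter k is an illustrative value, not an empirical estimate:

```python
# Hyperbolic discounting: V = A / (1 + k*D), where A is the reward amount,
# D the delay (in days here), and k an individual discount parameter.
# k = 0.05 is purely illustrative.
def subjective_value(amount, delay_days, k=0.05):
    return amount / (1 + k * delay_days)

# A $1000 prize paid out within a few days vs. after three months of paperwork:
prompt_value = subjective_value(1000, 3)    # ~870: most of the reward survives
delayed_value = subjective_value(1000, 90)  # ~182: the reinforcement is largely gone
```

Under this model, a quick payout preserves most of the reward's motivational force, while a 90-day payout leaves less than a fifth of it.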

Clear credit assignment

In order for someone to do more of what worked, they need to have a good sense of what aspect of their work is being rewarded (“Did I write good comments? Did I make good predictions? Was the summary good? Was the research topic novel and interesting?” etc.)

Incentivise exploration (avoid an overly sparse reward signal); otherwise prize workers won’t learn the most effective strategies

Effectively balance intrinsic and extrinsic motivation

Section 1.5 of Kraut and Resnick’s “Building Successful Online Communities” has a useful discussion of this, including this diagram from a 2001 meta-analysis on when extrinsic rewards harm (-) vs enhance (+) intrinsic motivation:

[diagram -- I can't get the "add image" option to work]


3. Selection effects

How can one ensure that those with a comparative advantage in doing certain prize work will tend to be the ones doing that work?

I believe that to a large extent selection effects will be present whether one wants them to or not, so the main question is rather whether there are effective ways of choosing which selection effects one wants to amplify.

Here are some examples of unwanted selection effects:

  • The people who win prizes are those most eager to work for prizes, not the ones who had the highest comparative advantage in doing so
  • The incentive dynamics are set by the prize givers most generous in giving out prizes, rather than those with the best ideas of what should be funded (e.g. suppose foundation X is less careful than foundation Y in thinking about the opportunity costs of money, and so decides to award 5x as much prize money, which on the margin disincentivises the more valuable work preferred by Y)
  • The people who work for a particular prize are those who thought it was interesting/a good idea (e.g. awarding a prize for responses to “Does God exist?” and only having theologians put in the work, almost all of whom answer “Yes”)

Beyond listing these, I am uncertain about what action-guiding advice there is here.


4. Memetics

How can one ensure that people successfully communicate things like: the existence of prizes, good strategies and heuristics for prize work, promising prize workers, norms and best practices for prize design, etc.?

As with selection effects, memetics will likely have several adverse effects on a prize (for example, messages tend to lose nuance as they are shared between many people, since there are more ways to misunderstand a claim than to understand it), and it might be hard to deliberately intervene to prevent them.

“Memeifying” prizes/creating a conceptual handle

Having a simple name for something makes the difference between:

Without meme

Alice: “Hey, you seem pretty well off lately, but I haven’t noticed you getting a job or anything? What happened?”

Bob: “Oh, it’s because of [20 minute explanation of the ideas behind having a market for impact via prizes]”

And

With meme

Alice: “Hey, you seem pretty well off lately, but I haven’t noticed you getting a job or anything? What happened?”

Bob: “Yeah, I did some cognitive prize work!”

Using this conceptual handle, Alice can now quickly ask other friends “Do you know anything about how to be successful at ‘cognitive prize work’?”, she can easily Google or search her favourite blogs for posts about “cognitive prize work”, and more.

For a real life example, Wei Dai writes:

For both of the AI alignment related bounties, when a friend or acquaintance asks me about my "work", I can now talk about these prizes that I recently won, which sounds a lot cooler than "oh, I participate on this online discussion forum". :)

Producing sharable material

Ensure there is a key public reference write-up of the memes one wants to incentivise the spread of, e.g. what traits caused certain prize winners to receive their prizes (“X has the skill of being both brief and accurate”, “Y used a Guesstimate model in a helpful way”), as well as ongoing public discussion of the work (“Three things I did to improve as a prize worker”, “OpenPhil recommendations for aspiring prize workers”).

A great example here is the April 2019 EA Long-term future fund write-up.


Comments

Thanks for writing this! While I've long been a big fan of impact prize-like-mechanisms, the ideas about memetics and reinforcement learning were nice and novel - or at least if I'd thought about them properly before I'd forgotten them since. I also found the quote from Wei Dai very interesting.

I think you somewhat over-hedged in your opening paragraph, however:

Global markets are currently only (somewhat) efficient in incentivising problem-solving in areas where the benefits can be internalised, such as by earning a profit from the product one has built. ^[Ignoring various market failures such as long time horizons, large coordination problems, high initial costs, and more.]

Within this circumscribed range of problems (those whose benefits can be internalised, without long time horizons, little coordination required, and low capital needs), markets seem to be one of the most efficient mechanisms we have for motivating problem-solving. While we can think of some others that are also very effective, like intellectual curiosity for scientific discovery or parental love for child-raising, there do not seem to be many that are dramatically more effective. If markets are indeed a top-tier incentivisation mechanism, calling them only 'somewhat' efficient, rather than 'quite' or 'often', seems to rather understate the potential for market mechanisms to incentivise useful charitable work.

Secondly, I don't think issues like long time horizons or high initial costs, and many coordination problems, are commonly considered market failures:

  • There are many examples of markets dealing correctly with issues a long way in the future. For example, Amazon has operated on a very long time horizon for many years, forgoing current profitability for the sake of growth, and in financial markets there are liquid and rationally priced markets for bonds and derivatives maturing 50 years in the future.
  • Market mechanisms have incentivised investments in many large projects - a good example is semiconductor fabs, which take around a billion dollars of upfront capital investment before any return can be made.
  • Although you're right that there are some classes of coordination problems with which markets can struggle (e.g. holdouts refusing to sell land needed for a large project), there are others at which they excel, like coordinating supply chains across huge numbers of people.

A quick check on some online lists here and here confirms that these three are not generally considered examples of market failure.

As such I think if anything you are underselling the potential here. The fact that markets can deal with these sorts of problems gives me hope for very long-term impact prizes - things like financing projects with tradable impact certificates that ultimately pay off years in the future when the ultimate impact is clear.

This is a great writeup! As the person who runs this Forum and tries to incentivize cognitive work* from participants, I found it useful and will return to it.

The memetics idea was particularly novel to me, but I think I intuitively understood the point -- hence my always capitalizing "the EA Forum Prize" to make it a concrete, recognized proper noun.

If you look over the most recent Prize post, do you notice any places where you see an opportunity to improve the program? Or cases where you aren't sure if we do X, but think that X would be particularly valuable to do if we weren't yet doing it?

Also:

...e.g. awarding a prize for responses to “Does God exist?” and only having theologians put in the work, almost all of whom answer “Yes”.

I'm deeply amused by the thought of a theologian who sets out to win the "G-Prize", only to discover that he's no longer convinced by the arguments that sustain his faith once money is on the line. That would be quite the r/rational story prompt.

--

*Though I prefer to think of it as "cognitive fun" for those who like writing and helping the community and getting compliments from your friendly neighborhood moderator!

Thanks, that's great to hear.

The prize has been going on for a while, which seems important, and I think the transparency of the Prize post is really important for making common knowledge of what kind of work there is demand for. So overall it's pretty great.

The structure of feedback looks to me like: "here's the object-level content of the post, and here are 2-3 reasons we liked it". I think you could be more clear about what you want to incentivise. More precisely, the current structure doesn't answer:

  • How strong were the reasons relative to each other? (e.g. maybe removing Reason A would make the person win 2nd prize instead of 1st, but removing Reason B might make them win no prize)
  • Were the reasons only jointly sufficient to merit the prize, or might accomplishing only one of them have worked?
  • What other properties did the post display, which did not merit the prize? For example, maybe prize-meriting posts tend to be quite long -- even though length is not something you want to incentivise on the margin.
  • Why did the posts end up ordered the way they did? Beyond "the black-box voting process gave that verdict" :) Currently I don't know why SHOW was judged as deserving 4x the prize money of "The Case for the Hotel", for example.

Adding a bit of critique where appropriate seems good, since I'm the person who writes the summaries and can ask myself to put in more time.

Your other questions all come back to the black-box voting criterion, and I don't push judges to share their individual reasoning. That's largely because doing so makes being a judge more difficult; right now, voting is relatively little work, and given the small amounts of money at stake, I'm reluctant to ask for many more EA hours to go toward the project.

We may consider splitting up the prize funding differently in the future; saying "piece X deserves four times as much money as piece Y" isn't a good way to think about how the Prize works, but there's still an argument for trying to more closely match grant funding to judges' enthusiasm levels.