I think bounties and requests for proposals (RFPs) are powerful and underutilized in EA. This is just a quick experiment to see if there are any interesting ideas floating around that might be bounty-able or RFP-able.


If your idea is interesting to me, I’ll shop it around to smarter people with relevant expertise, and if it gets funded, I’ll pay you $1,000.


Ideas can either be objectively verifiable or subjectively evaluated by a judge. For example, to minimize Goodharting the bounty, you might propose that a panel of AI safety experts judge the bounty submissions.
 

The deadline is November 15th. If it goes well, I'll open it up to more submissions.
 

Email me your ideas - rough ones are fine! - at emersonspartz@nonlinear.org, DM me on Twitter @EmersonSpartz, or submit them here.


The famous computer scientist Donald Knuth offered $2.56 for "each error in his published books, whether it be technical, typographical, or historical." These rewards were highly prized despite their low numerical value.

EAs can consider something similar, offering bounties for identifying mistakes in papers, blog posts, analyses, books, etc. that we consider important. Pre-identifying which works are considered important might take some time, but it shouldn't take too long.

For example, I think some of the errors that I identified in the CE Delft report (which got a lot of publicity and AFAICT was cited pretty frequently in at least a few alternative protein circles until recently) could easily have been uncovered by a student of cultured meat research.

Having students be more skeptical and do things like this could be helpful both for improving general EA epistemics and for training students to be good researchers.

Also relevant: EA Funds say "You can also suggest that we give money to other people, or let us know about ideas for how we could spend our money. Suggest a grant."

So people submitting bounties to Emerson should maybe also suggest the same / similar ideas to EA Funds, and/or maybe Emerson should pass ideas on to EA Funds.

(Obviously bounty ideas and grant ideas are different, but there's probably still a fair bit of overlap or easy ways to modify one into the other in many cases.)

Thanks for pointing that out!

I think a disadvantage of bounties is that multiple people may do the same thing, and depending on the task that could be quite good or a lot of wasted effort, so I think in most cases I would prefer grants.

Can you see suggestions that were made to EA funds somewhere?

It seems to me as if you still kind of need to specify who should get the funding when you suggest a grant. I think it would be very good if you could just submit a suggestion for a task that someone should do, and we had an overview of suggested tasks or projects that people could see and then quickly apply for funding to do. (Maybe you could fund a bounty or grant for someone who creates such a matching-people-and-funding-to-tasks system, or perhaps EA Funds should just integrate it.)

(Thinking even bigger, it would probably be nice to have a good overview of what ideas in AI safety have already been tried and what ideas haven't been tried yet but seem promising. Though I'm not sure if that would really be helpful; you should probably ask AI safety researchers how to best improve coordination in AI safety research.)

The suggestions are visible to EA Funds staff in a spreadsheet, but I don't think they're visible to other people.

Suggesting a person to implement an idea is good but not necessary - people should definitely feel free to submit grant ideas without having a person in mind (and conversely should also feel free to suggest funding a person for something vague [e.g., "more writing"] rather than a specific project, if the person seems quite promising or whatever).

I personally do think better ways of easily seeing, filtering, searching, compiling info on, etc. a whole bunch of ideas that have been proposed (and maybe also tried) seems good, but I'm not very optimistic about attempts to do that, largely because many proposals or attempts have been made and none seems to have properly caught on. It's probably best if that's done by an actor like CEA who has the resources to do it really well and maintain it and the status/prominence to make it the central thing. See some further thoughts here and in the links (that focuses on research ideas, but similar points apply more generally): Proposal: A central, editable database to help people choose and do research projects [draft]

[Caveat: I wrote this comment quickly and don't represent anyone but myself]

Thanks, I think I will suggest some grants sometime in the next few days. :)

I agree that it is probably hard to create such a database in a way that it would be really useful and continuously used, and that it should perhaps be implemented by CEA.

(If CEA decides not to create something like that, it would still be interesting for people like me to see the suggestions, even if it is not for the purpose of task-people-matching. ^^)

And thanks for sharing the draft! I think it is helpful input because I had some similar ideas; I will look into it more thoroughly later.

Offer the world's best mathematicians (e.g. Terence Tao) a lot of money to work on AGI Safety. Say $5M to work on the problem for a year. Perhaps have it open to any Fields Medal recipient.

I imagine that they might not be too motivated by personal consumption, but with enough cash they could forward goals of their own. If they'd like more good math to be done, they could use the money to offer scholarships, grants, and prizes, or found institutes, of their own. (If $5M isn't enough -- I note Tao at least has already won $millions in prizes -- I imagine there is enough capital in the community to raise a lot more. Let them name their price.)

[Previously posted as a comment on MIRI’s Facebook page here.]

I suspect that this doesn't work as an idea, largely because of what motivates mathematicians at that level. But I'd ask Scott Garrabrant whether he thinks this would be worthwhile - given that he knows Tao, at least a bit, and has worked with other mathematicians at UCLA.

Interesting. I wonder: many people say they aren't motivated by money, but how many of them have seriously considered what they could do with it other than personal consumption? And how many have actually been offered a lot of money -- to do something different to what they would otherwise do, that isn't immoral or illegal -- and turned it down? What if it was a hundred million, or a billion dollars? Or, what if the time commitment was lower - say 6 months, or 3 months?

Good point. And yes, it seems likely that they'd change their research, but I don't think that motivation and curiosity are as transferable. Still on net not a bad idea, but I'm still skeptical it would be this easy.

If top mathematicians had an EA mindset towards money, they would most likely not be publishing pure math papers.

True. But maybe the limiting factor is just the consideration of such ideas as a possibility? When I was growing up, I wanted to be a scientist, liked space-themed Sci-Fi, and cared about many issues in the world (e.g. climate change, human rights); but I didn't care about having or wanting money (in fact I mostly thought it was crass), or really think much about it as a means to achieving ends relating to my interests. It wasn't until reading about (proto-)EA ideas that it clicked.

I suspect that the offer would at least capture his attention/curiosity. Even if he rejected the offer, he'd probably find himself curious enough to read some of the current research. And he'd probably be able to make some progress without really trying.

Idea: What if this was a fellowship? It could quickly become one of the most prestigious fellowships in the world!

Good idea about the fellowship. I've been thinking that it would need to come from somewhere prestigious. Perhaps CHAI, FLI or CSER, or a combination of such academic institutions? If it was from, say, a lone crypto millionaire, they might risk being dismissed as a crackpot, and by extension risk damaging the reputation of AGI Safety. Then again, perhaps the amounts of money just make it too outrageous to fly in academic circles? Maybe we should be looking to something like sports or entertainment instead? Compare the salary to that of e.g. top footballers or musicians. (Are there people high up in these fields who are concerned about AI x-risk?)

>I suspect that this doesn't work as an idea, largely because of what motivates mathematicians at that level.

How confident of this are you? How many mathematicians have been offered, say, $10M for a year of work and turned it down?

I spoke with someone at MIRI who I expect knows more about this, and they pointed out that there aren't good operationalizable math questions in AI safety to attack, and that the money might point them to the possibility of interesting questions, but on its own probably wouldn't convince them to change focus.

As an intuition pump, imagine von Neumann were alive today; would it be worthwhile to pay him to look into alignment? (He explicitly did contract work at extraordinary rates, IIRC.) I suspect that it would be worth it, despite the uncertainties. If you agree, then it does seem worthwhile to try to figure out who is closest to being a modern von Neumann and pay them to look into alignment.

That seems mostly right, and I don't disagree that making the offer is reasonable if there are people who will take it - but it's still different than paying someone who only does math to do AI alignment, since von Neumann was explicitly someone who bridged fields including several sciences and many areas of math.

I don't think mathematics should be a crux. As I say below, it could be generalised to being offered to anyone a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem). Or perhaps “Fields Medalists, Nobel Prize winners in Physics, other equivalent prize recipients in Computer Science, or Philosophy[?], or Economics[?]”. And we could include additional criteria, such as being able to intuit what is being alluded to here. Basically, the idea is to headhunt the very best people for the job, using extreme financial incentives. We don't need to artificially narrow our search to one domain, but maths ability is a good heuristic as a starting point.

This could be generalised to being offered to anyone a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem).

This is insufficiently meta. Consider that this very simple and vague payout scheme is probably not optimal for encouraging good bounty suggestions. I suggest going one level up and putting out a bounty for the optimal incentive structure of bounty bounties. A bounty bounty bounty, if you will.

(This is mostly a joke, but I’m not averse to getting paid if you actually decide to do it)

Edit: now that I’ve thought about it more, something in this space is probably worthwhile. A “bounty bounty bounty” is, funnily enough, both too specific and too abstract. However, a general “bounty on optimal bounty schemes” may be very valuable. How to set optimal bounty payouts for different goals, how best to split bounties among multiple participants, how best to score proposals, etc. are all important questions for bounty construction. A bounty to answer such questions makes sense.

I actually think that's an interesting idea!  I like the idea of using bounties to spur more bounty innovation. I'd love to see more bounties like this - let's try mapping the whole design space.

It's been a year - have you explored this? I'm somewhat bullish on testing the idea of an EA bounty platform, and am curious what others would think.

We built an EA bounty platform and have paid out a few dozen bounties!

https://www.super-linear.org


Are you looking for shovel-ready bounties (e.g. write them up and you are good to go) or things which might need development time (e.g. figuring out exactly what to reward, working out the strategy of why the bounty might be good, etc.)?

Shovel-ready bounties are preferred, but to avoid premature exploitation I'd just like to hear as many ideas as possible at this point. Some ideas might require back and forth, but that's ok!

Seeing the ideas coming in is already giving me lots of ideas for ways to potentially scale this.

BOUNTY IDEA (also sent in the form): Exploring Human Value Codification.

Offered to a paper or study that demonstrates a mathematical (or otherwise engineering-ready) framework to measure humans' real preference-ordering directly. Basically a neuroscience experiment or a proposal thereof.

End goal: Using this framework / results from experiment(s) done based on it, you can generate novel stimuli that seem similar to each other, and reliably predict which ones human subjects will prefer more. (Gradients of pleasure, of course, no harm being done). And, of course, the neuroscientific understanding of how this preference ordering came about.

Prize amount: $5-10k for the proposal, more to fund a real experiment; the order of magnitude is probably in the right ballpark.

As an ex-intern I should probably be excluded, but here are a few ideas:

  • Perhaps we should have a bounty for the article that best addresses a misconception in AI safety - ideally judged by members of AI safety organisations. I know that the AI safety community has generally agreed that public outreach is not a priority, but I think we should make an exception for addressing misconceptions as otherwise these could poison the well. One downside of this bounty is that if people post these publicly, they may not be able to submit them elsewhere.
  • This would be meta too - but perhaps the best article arguing for a gap in the AI safety landscape. I've been chatting to people about this recently and lots of people have lots of ideas of what needs to be done, but this would provide motivation for people to write this up and to invest more effort than they would otherwise.
  • Perhaps we could have a bounty for the most persuasive article aimed at persuading critics of AI safety research. I guess the way to test this would be to see which article critics find most persuasive (but we would want to avoid the trap where the incentive would be to agree with critics on almost everything and then argue that they should shift one little point; not saying that this is invalid, but it shouldn't be the dominant strategy). (Here's a link to an experiment that was run to convince participants to donate to charity: https://schwitzsplinters.blogspot.com/2019/10/philosophy-contest-write-philosophical.html )
  • There seems to be a shortage of video content on AI Safety. It would be good for Robert Miles to have competition to spur him on to even greater heights. So perhaps there could be a prize for the best explainer video. (Potential downside: this could result in YouTube being flooded with low-quality content - although I suspect that videos that aren't particularly good are likely to just be ignored.)
  • This is still a vague idea, but it's a shame that we can't see into the future. What if there was a competition where we asked people to argue that a certain claim will seem obvious or at least much more plausible in retrospect (say in 5 years)? In 5 years, the post that achieved this best would be awarded the prize. This wouldn't just be about making a future prediction, like prediction markets entail, but providing a compelling argument for it, where the argument is supposed to seem compelling in retrospect. As an example, imagine if someone had predicted in advance that longtermism would become dominant among highly-engaged EAs or that we'd realise that we'd overrated earning to give.
  • Perhaps a prediction market on what area of AI safety a panel of experts will consider to be most promising in 5 years?
  • Seeing as Substack seems to be the new hot thing, perhaps we could create a Substack fellowship for EAs? The fellowships Substack offers don't just provide funding, but other benefits too. Perhaps Substack might agree to provide these benefits to these fellows if EA were to fund the fellowship.

(Note: Maybe some of the projects I suggest below have already been done. I haven't thoroughly researched that. If you know that something like what I suggest has already been done, please comment!)

Some ideas for bounties (or grants) for projects or tasks:

  1. An extremely good introduction to "Why should you care about AI safety?" for people who are not stupid but have no background in AI. (In my opinion preferably as a video, though a good article would also be nice.) (I'm thinking of a rather short introduction, like 10-20 minutes.)
  2. An extremely good explanation of "Why is AI safety so hard?" for people who have just read or watched the (future) extremely good introduction to "Why should you care about AI safety?". (For people who have little idea about what AI safety is actually about.) (Should be an easily understandable introduction to the main problems in AI safety.) (I was thinking of something like a 15-30 minute read or video, though a longer and more detailed version would probably be useful as well.)
  3. A project that tries to understand what the main reasons are why people reject EA after hearing about it (through a survey and explicit questioning of people).
  4. (A bit related to 3) A project that examines the question "To what degree is effective altruism innate?" and the related questions "How many potential (highly, mid or slightly) EA-engaged people are there in the world?" and "What life circumstances or other causes lead people to become effective altruists?"
  5. A study that examines what the best way to introduce EA is. (E.g. Is it better to not use the term "effective altruism"? Is The Drowning Child and the Expanding Circle a good introduction to EA ideas, or is it rather off-putting? For spreading longtermism, should I first recommend The Precipice or HPMoR (to spread rationality first)?) (Maybe make it something like a long-term study to which many people throughout the world can contribute.)
  6. Make a good estimate of the likelihood that some individual or a small group can significantly (though often indirectly) raise x-risk (for example by creating a bioengineered pandemic, triggering nuclear war, triggering an economic crisis (e.g. through hacking attacks), triggering an AI weapons arms race, triggering a bad political movement, etc.).

I would also love to see funding for people who are just thinking about how the EA community could coordinate better, how the efficiency of research in EA-aligned causes can be increased, how EA should be developed in the future, what possible good (mega-)projects are, and how to address EA's bottlenecks (e.g. how to integrate good leadership into EA).

About 1 & 2: I don't think that should be done by just anyone, but by one or more AI researchers who have an extremely good overview of the whole field of AI safety and are able to explain it well to people without prior knowledge. I think it should be by far the best introduction - something which is clearly the thing you would recommend to someone who is wondering what AI safety is.

Those are all quite big and important tasks. I think it would be better to advertise those tasks and reach out to qualified people and then fund interested people so they do those tasks, instead of creating bounties, but bounties could work as well.

Of course, you could also create a bounty like "for every $100k you raise for EA, you get $5k" or so. (Yes, just totally convince Elon Musk of EA and become a billionaire xD.) But I'm not sure if that would be a good idea, because there could be some downside risk to EA's image from fundraisers.

Two more ideas:

  1. Create an extremely good video as an introduction to effective altruism. (Should be convincing and lead to action.)
    1. Maybe also create a very good video or article discussing objections to effective altruism (and why they may be questionable, if they are questionable).
  2. Create well-designed T-shirts with (funny) EA, longtermist or AI safety designs that I would love to walk around with in public, hoping that someone asks me about them. (I would prefer if "effective altruism" isn't printed directly on the T-shirt. Maybe something in this direction, though I don't like the robot that much, because many people associate AGI and robots far too much, but it is still kind of good.)

I like this! I'm curious why you opted for the submissions to be private instead of public (i.e. submitting by posting a comment)?

I didn't think about it much - public might be better. I assumed some people would be hesitant to share publicly and I'd get more submissions if private, but I'm not sure if that offsets the creative stimulus of sharing publicly.

If you had mentioned commenting as an option, I would probably have commented instead of sending an email.

By the way, for the future I would suggest the question format (instead of the post format), so that comments are separated as "answers to the question" and "other" :)

You could do both -- that's what I'll do if that's okay :)

Also, comments can also give you points, ya know! :P

  1. What do you forecast is the chance that at least one of the bounties you receive by Nov 15th will get funded by the end of February 2022?

  2. If you get multiple ideas you like, do you expect to try to get multiple of them funded?

  3. If yes to 2, what's your median forecast for the number of bounties you expect to get funded by the end of February 2022?

  1. I'd guess 80% chance at least one gets funded by Feb 2022.
  2. I want to fund every idea that is good enough and then figure out how to scale the bounty market making process 100x.
  3. 2 from this particular experiment, but I intend to do more experiments like this.

Bounty suggestion: Reach out to people who have had their grants accepted (or even not accepted) by the LTFF, and ask them to publish their applications in exchange for $100-$500.

  • Why is this good: This might make it easier for prospective candidates to write their applications
  • Why do this as a bounty + assurance contract:
    • Why assurance contract: I might find it kind of scary to publish my own application alone, but easier if others do as well.
    • Why bounty: It feels like there is a cost to publishing an application, because it was written by one's younger self and is slightly personal, and people have limited capacity to internalize externalities before they burn out.
  • This would require taking on some coordination costs
    • E.g., talking to the LTFF about whether the increase in the ease of applying is worth the risk of people "hacking" the application process.
    • E.g., actually enforcing strict comment guidelines about not posting comments which would make it more costly to publish applications.
    • Thinking about things which could go wrong.

Introduce important people* to the most important ideas by way of seminars they are paid to attend. I recommend Holden Karnofsky’s Most Important Century series for this as it is highly engaging, very readable, and has many jumping-off points to go into more depth; but other things are also very good. The format could be groups of 4, with a moderator (and optionally the authors of the pieces under discussion on the sidelines to offer clarity / answer questions). It could be livestreamed for accountability (on various platforms to diversify audience). Or not livestreamed for people who care about anonymity, but under the condition that anonymised notes of the conversations are made publicly available. And it should be made clear that the fee is equivalent to a “speaker's fee” and people shouldn’t feel obliged to “toe the party line”, but rather speak their opinions freely. The fee should be set at a level where it incentivises people to attend (this level could be different for different groups). In addition to (or in place of) the fee, there could be prestige incentives like having a celebrity (or someone highly respected/venerated by the particular group) on the panel or moderating, or hosting it at a famous/prestigious venue. Ideally, for maximum engagement with the ideas and ample time for discussion, the seminar should be split over multiple weeks (say one for each blog post), but I understand this might be impractical in some cases given the busy schedules important people tend to have.

(Note that this is in a similar vein to another one of my ideas, but at a much lower level.)

*Important people would include technical AI/ML experts, AI policy experts, policy makers in general, and public intellectuals; those who haven’t significantly engaged with the ideas already.

Maybe if such a thing is successfully pulled off, it could be edited into a documentary TV series, say with clips from each week's discussion taken from each of the groups, and an overarching narrative in interludes with music, graphics, stock footage (plane on runway) etc.

(Sorry if some of my ideas are fairly big budget, but EA seems to have quite a big budget these days.)

Note there have already been Most Important Century seminars hosted. I missed this one. Would be interested to hear how it went.

Are there any prior examples of this kind of thing? (I haven't found any with a quick search.)

First human to achieve some level of intelligence (as measured by some test) (prize split between the person themselves, the parents, and the genetic engineering lab if applicable) (this is more about the social incentive than the economic one, as I suppose there's already an economic one).

x-post: What effectively altruistic inducement prize contest would you like to be funded?

1M USD for the first to create a gamete (sperm and/or egg) from stem cells that results in a successful birth in one of the following species: humans, mice, dogs, pigs (probably should add more options).

(this could enable iterated embryo selection)

x-post: What effectively altruistic inducement prize contest would you like to be funded?

A way to sequence sperm/eggs non-destructively.

this would give us:

An immediate large boost of ~2 SD is possible by selecting earlier in the process, before variance has been cancelled out; this does not require any new technology other than the gamete-sequencing part.

see: https://www.gwern.net/Embryo-selection

What's the price range for the bounty?

That depends on the funders! Given enough bounties, I'd expect an optimal bounty distribution to look power-law-ish, with a few big bounties (>$10k-$1M?) and many small ones (<$10k).
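To make the power-law intuition concrete, here is a minimal, hypothetical sketch in Python. It assumes a Pareto distribution with an arbitrarily chosen shape parameter and a $1k minimum bounty; the numbers are purely illustrative, not a claim about what the real distribution would or should be.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: sample 100 bounty sizes from a Pareto
# distribution with a $1k minimum. alpha controls how heavy the tail is;
# the value here is an assumption, not derived from any real data.
alpha, min_bounty = 1.16, 1_000
bounties = min_bounty * (1 + rng.pareto(alpha, size=100))

print(f"median bounty:  ${np.median(bounties):,.0f}")
print(f"largest bounty: ${bounties.max():,.0f}")
top5_share = np.sort(bounties)[-5:].sum() / bounties.sum()
print(f"share of total money in the top 5 bounties: {top5_share:.0%}")
```

Under those assumptions, a handful of bounties end up holding a large share of the total money, which is roughly the shape described above.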
