[Update 3: The winners have been selected and notified and will be publicly announced no later than the end of September.]

[Update 2: The contest has now officially launched! See here for the announcement.]

[Update: Work posted after September 23 2022 (and before whatever deadline we establish) will be eligible for the prizes. If you are sitting on great research, there's no need to delay posting until the formal contest announcement in 2023.]

At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.

We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that the competition is no longer moving forward.

We recognize that many people have already invested significant time and thought into their contest entries. We don’t want that effort to be wasted, and we want to incentivize further work in the same vein. For these reasons, Open Phil will run its own AI Worldviews Contest in early 2023.

To be clear, this is a new contest, not a continuation of the Future Fund competition. There will be substantial differences, including:

  • A smaller overall prize pool
  • A different panel of judges
  • Changes to the operationalization of winning entries

The spirit and purpose of the two competitions, however, remain the same. We expect it will be easy to adapt Future Fund submissions for the Open Phil contest.

More details will be published when we formally announce the competition in early 2023. We are releasing this post now to try to alleviate some of the fear, uncertainty, and doubt surrounding the old Future Fund competition and also to capture some of the value that has already been generated by the Future Fund competition before it dissipates.

We are still figuring out the logistics of the competition, and as such we are not yet in a position to answer many concrete questions (e.g., about deadlines or prize amounts). Nonetheless, if you have questions about the contest you think we might be able to answer, you can leave them as comments below, and we will do our best to answer them over the next few weeks.

Comments

Thank you so much for doing this!

The implicit extended timeline (no longer due December 23) is also very welcome.


Thank you for carrying this forward! 

One comment: as with the old contest, I strongly support the decision to use a number of judges who are outside of, and independent from, EA. I fear the EA bubble is becoming something of an echo chamber: this is a great opportunity to verify whether such fears are well-founded, and if so, provide a check on this detrimental effect.

Glad to hear about this!

I have a recommendation for the structure of it. I'd recommend that anonymous reviewers review submissions and share their reviews with the authors (perhaps privately) before a rebuttal phase (also perhaps private). And then reviewers can revise their reviews, and then chairs can make judgments about which submissions to publish.

Fantastic news!!! My main question:

The Future Fund AI Worldview Prize had specific, very bold criteria, such as raising or lowering past certain thresholds the probability estimates for transformative AI timelines, or for an AI-related catastrophe given certain timelines.

Will this AI Worldview Prize have very similar criteria, or do you have any intuitions what these criteria might be?

This would be very helpful for researchers like myself deciding whether to continue on a particular line of research!

I'm not sure contests like this are a good idea, but pre-announced contests are better than spontaneous contests in cases like this, so yay.

It would be even better if you clarified that current posts are eligible, so that people don't save their posts until you announce the details.

What are the reasons against contests like this being a good idea?

I might write this up someday, but briefly:

  1. I'm skeptical that they increase quality-adjusted work going into the area much (particularly if you subtract the value of the work that people would have done if not for the contest).
  2. I'm skeptical that they distribute work within the area well.
  3. I'm skeptical that they redistribute money well.
  4. I'm skeptical that they have many other benefits.

(Edit: that said, some contests can certainly achieve #1, and some can certainly have substantial other benefits.)

As one datapoint, the time spent on my entry to the original worldview prize was strictly additive. I have a grant to do AI safety stuff part time, and I still did all of that work; the work I didn't do that week was all non-AI business.

It's extremely unlikely that I would have written that post without the prize or some other financial incentive. So, to the extent that my post had value, the prize helped make it happen.

That said, when I saw another recent prize, I did notice the incentive for me to conceal information to increase the novelty of my submission. I went ahead and posted that information anyway because that's not the kind of incentive I want to pay attention to, but I can see how the competitive frame could have unwanted side effects.

Even more specifically, it would be helpful to confirm that work published on or after the date of the Future Fund's announcement on 23rd Sep 2022 is eligible (if that is actually the case).

Thanks Jason. I can now confirm that that is indeed the case!

^seconding this question 😊

Hi Zach, thanks for the question and apologies for the long delay in my response. I'm happy to confirm that work posted after September 23 2022 (and before whatever deadline we establish) will be eligible for the prize. No need to save your work until the formal announcement.

Thank you for organizing this! I have two questions. First, is there any update regarding when the official announcement will be made? Second, will essays submitted to other competitions, or for publication, be eligible? In other other words, is there any risk that submitting research elsewhere prior to the announcement of the competition will render it ineligible for the competition?

Thanks for your questions!

We plan to officially launch the contest sometime in Q1 2023, so end of March at the latest.

I asked our in-house counsel about the eligibility of essays submitted to other competitions/publications, and he said it depends on whether by submitting elsewhere you've forfeited your ability to grant Open Phil a license to use the essay. His full quote below:

Essays submitted to other competitions or for publication are eligible for submission, so long as the entrant is able to grant Open Phil a license to use the essay. Since we plan to use these essays to inform our future research and grantmaking, we need a license to be able to use the IP. Our contest rules will state that by submitting an entry, each entrant grants a license to Open Phil to use the entry to further our mission. If you had previously submitted an essay to another contest or for publication, you should check the terms and conditions of that contest/publication to confirm they do not now have exclusive rights to the work or in any way prohibit you from granting a license to someone else to use it.

Thanks for a great answer! That's very helpful.

Hello, checking in on any updates to the Open Phil contest. I look forward to submitting an entry soon!

We are just ironing out the final legal details. The official announcement will hopefully go live by the end of next week. Thanks for checking!

Here are a few ideas relevant to an AI contest that could be helpful:

  • make the goal specific to developments in AI safety, rather than AI
    • if checking for a prediction's probability, choose a prediction that is specific to AI safety (for example, "When would P(AGI is aligned with at least one human's values|AGI is developed) > .50").
    • if asking for specific content, choose content helpful to AI safety (for example, "What cause area of AI safety research do you feel is currently neglected, tractable, and important to AI safety, and why?").
  • browse the comments on the FTX contest announcement post for ideas and complaints about that prize's requirements (for example, I think someone suggested that contestants should have up to a year to make submissions, for a substantial reward; that makes a lot of sense, and would encourage more outside entries and original research from experts in the space).
  • commit to concrete submission standards that you feel are minimum requirements for you to read each submission, whatever those might be (academic credentials, format, content requirements, research approach, etc), and publish those along with the formal announcement. Then commit to reading each entry that meets the standards.
  • make the $$ amount used for the prizes guaranteed to go to some contestant, rather than optional for you and awarded only if you think some entry deserves it. The grand prize should go to an entrant; I think that's fair and honest in a competition.

I think it's a good idea to continue the prize to the extent that it encourages AI safety research directly. My impression of the original prize was that it could encourage AGI development without necessarily encouraging AI Safety development, because its questions required more knowledge and consideration of AGI development than of AGI safety.

Awesome news, thanks! Looking forward to hearing more about the operationalization, and logistics. 

Wondering if there could be a way to incorporate the fact that doom is conditional on the year that TAI is developed (i.e., how well developed AI alignment/strategy/governance is when TAI is possible)? P(doom|TAI in year 20xx) and P(10% chance of TAI in year 20xx) are both important questions.

The FTX contest description listed "two formidable problems for humanity": 

"1. Loss of control to AI systems
Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.

2. Concentration of power
Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future."

My sense is that the contest is largely framed around (1) to the neglect of (2). Nick Beckstead's rationale behind his current views is based around a scenario involving power-seeking AI, whereas arguably scenarios related to (2) don't require the existence of AGI in the first place, which is central to the main forecasting question. It seems AI developments short of AGI could be enough for all sorts of disruptive changes with catastrophic consequences, for instance in geopolitics.

Based on my limited understanding, I'm often surprised how little focus there is within the AI safety community on human misuse of (non-general) AI. In addition to not requiring controversial assumptions about AGI, these problems also seem more tractable, since we can extrapolate from existing social science and have a clearer sense of what the problems could look like in practice. This might mean we can forecast more accurately, and my current sense is that it's not obvious AI-related catastrophic consequences are more likely to come from AGI than from human misuse (of non-AGI).

Maybe it would be helpful to frame the contest more broadly around catastrophic consequences resulting from AI. 

Supporting the community with this new competition is quite valuable.  Thanks!


Here is an idea for how your impact might be amplified: for every researcher who somehow has full-time funding to do AI safety research, I suspect there are 10 qualified researchers with interest and novel ideas to contribute, but who will likely never be funded full time for AI safety work. Prizes like these can enable this much larger community to participate in a very capital-efficient way.

But such "part time" contributions are likely to unfold over longer periods, and ideally would involve significant feedback from the full-time community in order to maximize the value of those contributions.

The previous prize required that all submissions be never-before-published work. I understand the reasoning here: they wanted to foster NEW work. Still, this rule throws a wet blanket on any part-timer who might want to gain feedback on ideas over time.

Here is an alternate rule that might have fewer unintended side effects: only the portions of one's work that have never been awarded prize money in the past are eligible for consideration.

Such a rule would allow a part-timer to refine an important contribution with extensive feedback from the community over an extended period of time. Biasing toward fewer, higher-quality contributions in a field with so much uncertainty seems a worthy goal. Biasing toward greater numbers of contributors in such a small field also seems valuable from a diversity-of-thinking perspective.

Any update on when "early 2023" will be?

Thanks both - I just added the announcement link to the top of this page.

[anonymous]

Thank you, I was struggling to finish a forecasting essay for the Future Fund's prize. I intend to submit something regardless of whether there's prize money, but prize money surely would help orient effort anyway. Resources are finite.