I'm a research fellow at Open Philanthropy. Prior to that I was a senior research manager at Rethink Priorities. And prior to that I earned a PhD in philosophy from the University of Texas at Austin.
Thanks for your questions. I’ll address the last one, on behalf of the cause prio team.
One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:
There is no particular background knowledge required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will generally do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.
Thanks for your question. Two quick points:
(1) I wouldn't model Open Phil as having a single view on these sorts of questions. There's a healthy diversity of opinions, and as stated in the "caveats" section, I think different Open Phil employees might have chosen different winners.
(2) Even for the subset of Open Phil employees who served as judges, I wouldn't interpret these entries as collectively moving our views a ton. We were looking for the best challenges to our AI worldview in this contest, and as such I don't think it should be too surprising that the winning entries are more skeptical of AI risks than we are.
Hi Paul, thanks for your question. I don't have an intrinsic preference. We encourage public posting of the entries because we believe that this type of investigation is potentially valuable beyond the narrow halls of Open Philanthropy. If your target audience (aside from the contest panelists) is primarily researchers, then it makes sense to format your entry according to the norms of the research community. If you are aiming for a broader target audience, then it may make sense to structure your entry more informally.
When we grade the entries, we will be focused on the content. The style and references won't (I hope) make much of a difference.
The details and execution probably matter a lot, but in general I'm fine with bullet-point writing. I would, however, find it hard to engage with an essay that was mostly tables with little prose explaining the relevance of the tables.
Thanks for your question. It's a bit difficult to answer in the abstract. If your ideas hang together in a nice way, it makes sense to house them in a single entry. If the ideas are quite distinct and unrelated, it makes more sense to house them in separate entries. Another consideration is length. Per the contest guidelines, we're advising entrants to shoot for a submission length of around 5,000 words (though there are no formal word limits). All else equal, I'd prefer three 5,000-word entries to one 15,000-word entry, and I'd prefer one 5,000-word entry to ten 500-word entries.
Hope this helps.
Thanks both - I just added the announcement link to the top of this page.
Thanks for your comment. I am also concerned about groupthink within homogeneous communities. I hope this contest is one small push against groupthink at Open Phil. By default, I do, unfortunately, expect most of the submissions to come from people who share the same basic worldview as Open Phil staff. And for submissions that come from people with radically different worldviews, there is the danger that we fail to recognize an excellent point because we are less familiar with the stylistic and epistemic conventions within which it is embedded.
For these sorts of reasons, we did explicitly consider including non-Open Phil judges for the contest. Ultimately, we decided that didn’t make sense for this use case. We are, after all, hoping that submissions update our thinking, and it’s harder for an outside judge to represent our point of view.
But this contest is not the only way we are stress-testing our thinking. For example, I’m involved in another project in which we are engaging directly with smart people who disagree with us about AI risk. We hope that as a result of that adversarial collaboration, we can generate a consensus of cruxes so that we have a better handle on how new developments ought to change our credences. I hope to be able to share more details on that project over the summer.
If you want to chat more about groupthink concerns, shoot me a DM. I believe it’s a somewhat underappreciated worry within EA.
Hi Phil - just to clarify: the entries must be entirely the original work of the author(s). You can cite others, and you can use AI-generated text as an example, but we will assume that everything not explicitly flagged as someone else's work is original to the author.
Thanks for your questions. We're interested in a wide range of considerations. It's debatable whether human-originating civilization failing to make good use of its "cosmic endowment" constitutes an existential catastrophe. If you want to focus on more recognizable catastrophes (such as extinction, unrecoverable civilizational collapse, or dystopia) that would be fine.
In a similar vein, if you think there is an important scenario in which humanity suffers an existential catastrophe by collectively losing control over an ecosystem of AGIs, that would also be an acceptable topic.
Let me know if you have any other questions!
We are just ironing out the final legal details. The official announcement will hopefully go live by the end of next week. Thanks for checking!