Jason Schukraft

Program Officer @ Open Philanthropy
2950 karma · Working (6-15 years) · Roanoke, VA, USA

Bio

I'm a program officer on the AI governance team at Open Philanthropy.

Sequences

Moral Weight Series
Invertebrate Sentience

Comments

Hi Vaipan,

Thanks for your questions. I’ll address the last one, on behalf of the cause prio team.

One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:

  • We’re working on a constellation of projects that will help us compare our grantmaking focused on risks from advanced AI systems to our grantmaking focused on improving biosecurity and pandemic preparedness.
  • We’re producing a slew of new BOTECs (back-of-the-envelope calculations) across different focus areas. If it goes well, this exercise will help us be more quantitative when evaluating and comparing future grantmaking opportunities.
  • As you can imagine, the result of a given BOTEC depends heavily on the worldview assumptions you plug in. There isn’t an Open Phil house view on key issues like AI timelines or p(doom). One thing the cause prio team might do is periodically survey senior leaders in the GCR (global catastrophic risk) space on important questions so that we better understand the distribution of answers.
  • We’re also doing a bunch of work that is aimed at increasing strategic clarity. For instance, we’re thinking a lot about next-generation AI models: how to forecast their capabilities, what dangers those capabilities might imply, how to communicate those dangers to labs and policymakers, and ultimately how to design evals to assess risk levels.

No particular background knowledge is required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will generally do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.


 

Hi Chris,

Thanks for your question. Two quick points:

(1) I wouldn't model Open Phil as having a single view on these sorts of questions. There's a healthy diversity of opinions, and as stated in the "caveats" section, I think different Open Phil employees might have chosen different winners.

(2) Even for the subset of Open Phil employees who served as judges, I wouldn't interpret these entries as collectively moving our views a ton. We were looking for the best challenges to our AI worldview in this contest, and as such I don't think it should be too surprising that the winning entries are more skeptical of AI risks than we are.

Hi Paul, thanks for your question. I don't have an intrinsic preference. We encourage public posting of the entries because we believe that this type of investigation is potentially valuable beyond the narrow halls of Open Philanthropy. If your target audience (aside from the contest panelists) is primarily researchers, then it makes sense to format your entry according to the norms of the research community. If you are aiming for a broader target audience, then it may make sense to structure your entry more informally.

When we grade the entries, we will focus on the content. The style and reference format won't (I hope) make much of a difference.

Hi Nicholas,

The details and execution probably matter a lot, but in general I'm fine with bullet-point writing. I would, however, find it hard to engage with an essay that was mostly tables with little prose explaining the relevance of the tables.

Hi Nicholas,

Thanks for your question. It's a bit difficult to answer in the abstract. If your ideas hang together in a nice way, it makes sense to house them in a single entry. If the ideas are quite distinct and unrelated, it makes more sense to house them in separate entries. Another consideration is length. Per the contest guidelines, we're advising entrants to shoot for a submission length of around 5,000 words (though there are no formal word limits). All else equal, I'd prefer three 5,000-word entries to one 15,000-word entry, and I'd prefer one 5,000-word entry to ten 500-word entries.

Hope this helps.

Jason

Thanks both - I just added the announcement link to the top of this page.

Hi David,

Thanks for your comment. I am also concerned about groupthink within homogeneous communities. I hope this contest is one small push against groupthink at Open Phil. By default, I do, unfortunately, expect most of the submissions to come from people who share the same basic worldview as Open Phil staff. And for submissions that come from people with radically different worldviews, there is the danger that we fail to recognize an excellent point because we are less familiar with the stylistic and epistemic conventions within which it is embedded.

For these sorts of reasons, we did explicitly consider including non-Open Phil judges for the contest. Ultimately, we decided that didn’t make sense for this use case. We are, after all, hoping that submissions update our thinking, and it’s harder for an outside judge to represent our point of view.

But this contest is not the only way we are stress-testing our thinking. For example, I’m involved in another project in which we are engaging directly with smart people who disagree with us about AI risk. We hope that, as a result of that adversarial collaboration, we can converge on a shared set of cruxes so that we have a better handle on how new developments ought to change our credences. I hope to be able to share more details on that project over the summer.

If you want to chat more about groupthink concerns, shoot me a DM. I believe it’s a somewhat underappreciated worry within EA.

Hi Phil - just to clarify: the entries must be entirely the original work of the author(s). You can cite others, and you can use AI-generated text as an example, but we will assume that everything not explicitly flagged as someone else's work is original to the author.

Hi David,

Thanks for your questions. We're interested in a wide range of considerations. It's debatable whether human-originating civilization failing to make good use of its "cosmic endowment" constitutes an existential catastrophe. If you want to focus on more recognizable catastrophes (such as extinction, unrecoverable civilizational collapse, or dystopia), that would be fine.

In a similar vein, if you think there is an important scenario in which humanity suffers an existential catastrophe by collectively losing control over an ecosystem of AGIs, that would also be an acceptable topic.

Let me know if you have any other questions!

We are just ironing out the final legal details. The official announcement will hopefully go live by the end of next week. Thanks for checking!
