
Jam Kraprayoon

497 karma · Joined April 2022

Comments (7)

Hi Matt, I think it’s right that there’s some distinction between domestic and international governance. Unless otherwise specified, our project ideas were usually developed with the US in mind. When evaluating the projects, our overall view was (and still is) that the US is probably the most important national actor for AI risk outcomes, and that international AI governance is substantially less tractable, since effective international governance will need to involve China. I’d probably favour more effort going into field-building focused on the US, then the EU, then the UK, in that order, before turning to field-building initiatives aimed at international orgs.

In the short term, prospects for international governance on AI seem low, given the political gridlock in the UN since the Russian invasion of Ukraine. I think there could be some particular international governance opportunities that are high-leverage, e.g. making the OECD AI incidents database very good, but we haven’t looked into that much.

I think it’s probably true that teams inside major labs are better placed to work on AI lab coordination broadly, and this post was published before news of the Frontier Model Forum came out. Still, I think there is room for coordination between labs to promote AI safety outcomes, e.g. something that brings together open-source actors. However, this project area is probably less tractable and neglected now than when we originally shared this idea.

Thanks for the question. At the time we were generating the initial list of ideas, it wasn’t clear that AI safety was funding-constrained rather than talent-constrained (or even idea-constrained). As you’ve pointed out, it seems more plausible now that finding additional funding sources could be valuable for a couple of reasons:

  1. Helps respond to the higher funding bar that you’ve mentioned
  2. Takes advantage of new entrants to AI-safety-related philanthropy, notably the mainstream foundations that have now become interested in the space.

I don’t have a strong view on whether additional funding should be used to start a new fund or whether it would be more efficient to direct it towards existing grantmakers. I’m pretty excited about new grantmakers like Manifund, which was set up recently and is trying out new ways for grantmakers to be more responsive to potential grantees. I don't have a strong view about whether ideas around increasing funding for AI safety are more valuable than those listed above. I'd be pretty excited about the right person doing something around educating mainstream donors about AI safety opportunities.

Hi, the General Longtermism Team at Rethink Priorities is currently looking to facilitate faster and better creation of entrepreneurial longtermist projects – that is, new organizations, infrastructure, programs, and services that we believe will cost-effectively contribute to reducing existential risk. Some of these projects are likely to be oriented around AI safety.

I'll DM you our expression of interest form to be a founder/co-founder for one of these projects.

We used the following terms:

Germicidal light vs. Germicidal UV light

Low wavelength light vs. Far-UVC

Upper room germicidal light vs. upper room UVC

These terms and their accompanying descriptions (which were otherwise kept the same) can be found in Appendix item IV, 'Description of GUV light systems'.

Hi Yonatan, thanks for your question! As described in the brief, the list you've mentioned, which includes 'vetting grant opportunities' and 'improving EIP's knowledge management and organizational learning processes', is a list of our organization-wide priorities during the expected duration of the fellowship.

As these positions are aimed at early career people, we don't expect fellows to independently do these things; rather, it's an opportunity for those who are interested to get exposure to these complex tasks and do work that contributes to these priorities (in an environment where lots of support and feedback is given). We don't expect experienced grantmakers and operations people to apply for these roles, though; as mentioned in the brief, the baseline pay is $15/hour and can go up to $30/hour depending on performance on work tests during the application process.

Effective Institutions Project is hiring full-time or part-time early career fellows to support our research, strategy, and field-building work from mid-September through mid-December 2022 (precise dates negotiable).

Exact duties will be assigned from week to week based on current needs and candidate interests and skills. Some of our priorities for this fall include:

  • Sourcing and vetting grantmaking opportunities related to improving institutions
  • Conducting outreach to engage high-level experts in our institutional research
  • Improving EIP’s knowledge management and organizational learning processes
  • Monitoring and tracking important developments across key institutions and relevant research literature
  • Supporting main operational functions and helping establish new systems
  • Helping organize meetups, reading groups, and other community events

The baseline compensation for fellows is $15/hr; candidates with outstanding performance on work tests during the application process may be offered higher rates up to $30/hr. (We expect to have one short unpaid test for candidates who pass an initial screening and one longer paid exercise for finalist candidates.) Candidates may work remotely, and our goal is not to let nationality/citizenship be a barrier to collaborating with us.

To be considered, please submit an application no later than September 9.