
ALTER is an organization in Israel that works on several EA priority areas and causes. This semiannual update is intended to inform the community about what we have been doing and to provide a touchpoint for those interested in engaging with us. Since the last update at the beginning of 2023, we have made progress in a number of areas and have ambitious ideas for future projects.

Progress to Date

Since its founding, ALTER has started and run a number of projects.

  1. Organized and managed an AI safety conference in Israel, AISIC 2022, hosted at the Technion. The conference brought in several international speakers, including Stuart Russell, to highlight existential-risk- and global-catastrophic-risk-focused AI safety to researchers and academics in Israel. It was successful in raising the profile of AI safety in Israel and in helping identify prospective collaborators and researchers.
  2. Supported Vanessa Kosoy’s Learning-Theoretic AI Safety Agenda, including an ongoing prize competition and work to hire researchers working in the area.
  3. Worked with Israel’s foreign ministry, Israeli academics, and various delegations to and organizations at the Biological Weapons Convention to find avenues to promote Israel’s participation.
  4. Launched our project to get the Israeli government to mandate salt iodization, to mitigate or eliminate the iodine deficiency that we estimate causes an expected loss of 4 IQ points for the median child born in Israel today.
  5. Worked on mapping the current state of metagenomic sequencing usage in Israel, in order to prepare for potential widespread use of metagenomic monitoring to detect novel pathogens.
  6. Organized and hosted a closed Q&A with Eliezer Yudkowsky while he was visiting Israel, for 20 people in Israel working on or interested in contributing to AI safety. This was followed by a larger LessWrong meetup with additional attendees.

Current and Ongoing Work

We have a number of ongoing projects related to both biorisk and AI safety. 

  1. Fellowship program. We started this program to support researchers interested in developing research agendas relevant to AI safety. Ram Rahum, whom we met through our AI safety conference, is our inaugural funded AI safety fellow. Since then, he has co-organized a conference in London on rebellion and disobedience in AI, jointly with academics in Israel, the US, and the UK. As a fellow, he is continuing to work with academics in Israel, as well as a number of researchers at DeepMind, on understanding strategic deception and multi-agent games and dynamics in ML systems. His research home is here and monthly updates are here. Rona Tobolsky is a policy fellow and is working with us on policy, largely focused on biorisk and iodization.
  2. Support for Vanessa Kosoy’s Learning-Theoretic AI Safety Agenda. To replace the former FTX funding, we have been promised funding from an EA donor lottery to fund a researcher working on the learning-theoretic safety agenda. We are working on recruiting a new researcher and are excited about expanding this work. Relatedly, we are helping support a singular learning theory workshop.
  3. Biosecurity. David Manheim and Rona Tobolsky attended the Ninth Review Conference of the Biological Weapons Convention and have continued looking at ways to push for greater participation by Israel, which is not currently a member. David will also attend a UNIDIR conference on biorisk in July. We are also continuing to explore additional pathways for Israel to contribute to global pandemic preparedness, especially around PPE and metagenomic biosurveillance.
  4. AI field building. Alongside other work to build AI safety efforts in Israel, ALTER helped initiate a round of the AGI Safety Fundamentals 101 program in Israel and will run a second round this year. We are also collaborating with EA Israel to host weekly co-working sessions on AI safety, which we hope to continue expanding. David Manheim has also worked on a number of small projects in AI governance, largely in collaboration with other groups.

Potential Future Projects and Expansion

We are currently fundraising to continue our existing work and to embark on several new initiatives, including expanding our fellowship program, expanding engagement on biorisk, and building out a more extensive program focused on the learning-theoretic alignment agenda: hiring researchers and a research manager, and running an internship program and/or academic workshop(s). All of these plans are very tentative, and the specifics will depend on both feedback from advisors and funding availability.

Challenges and Missteps

  1. Our initial hire to work on Vanessa’s Learning-Theoretic agenda was not as successful as hoped. In the future, Vanessa plans both to provide more interaction and guidance and to hire people only once we understand their concrete plans for work in the area. We are considering how to better support and manage research in order to expand this research portfolio. (We do not yet have funding for a research manager; the position is critical, and it may be difficult to find an appropriate candidate.)
  2. Identifying whether ML-based AI safety research is strongly safety-dominant (rather than capabilities-dominant) can be challenging. This is a general issue rather than an ALTER-specific challenge. David Manheim pre-screens research and research agendas, but has limited ability to make these determinations, especially where risks are non-obvious. We have relied on informal advice from AI safety researchers at other organizations to screen the work being done, and on matching fellows with mentors who are better able to oversee the research, but this remains a bottleneck for promoting such research.
  3. Banking issues following the collapse of FTX and difficulty navigating the Israeli banking system, including difficulty receiving other grants. (This is now largely resolved.)
  4. Work on mandatory salt iodization in Israel has stalled somewhat, due in part to Israeli political conditions. Despite indications of support from the manufacturer, the Israeli Health Ministry has not prioritized this. We are pursuing several ideas for a path forward, but are unsure whether the current government is likely to allow progress.


Comments

Quick note to say that I appreciate short, readable updates like this, and I'm excited to hear about the progress of your org!
