
ALTER has continued operating in a chaotic environment in Israel over the past half year. Despite this, we have continued to cultivate domestic groups and individuals interested in AI safety, including work in multiagent settings, and have continued our AI policy work in the international domain. Our US-based coordination and funding of work on the learning-theoretic AI agenda has also continued, with promising directions but no additional concrete outputs. There is also a new project in which David will be consulting for the RAND Corporation as part of his work for ALTER, working primarily on biorisk and on risks at the intersection of AI and biorisk (AIxBio).

AI Policy and Standards

We have recently joined the NIST AI Safety Institute Consortium (AISIC), and will continue work in this area as opportunities present themselves. Asher Brass from IAPS has recently agreed to work with us as a standards fellow, focused on NIST and cybersecurity. (This will be in conjunction with his work at IAPS.)

We are excited to have recently co-hosted a private event on AI standards-making and how organizations can contribute to standards-setting for safety. We were joined by speakers from the Simon Institute, Georgetown CSET, SaferAI, and the UC Berkeley Center for Long-Term Cybersecurity, along with participants from a number of other organizations interested in standards-setting.

We have a new preprint available, “The Necessity of AI Audit Standards Boards,” which partly follows from our earlier work on safety culture, and we are continuing to engage with experts and policymakers on these topics. (Unfortunately, our planned attendance at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) is no longer happening, due to its location and the cancellation of our original flight from Israel.)

Mathematical AI Safety

Amid MIRI's shift in focus away from mathematical AI safety, ALTER-US, our sister project supporting learning-theoretic AI safety, has recently hired Alex Appel (Diffractor), and Gergely Szucs continues to work on infra-Bayesian approaches, including his recent post on infra-Bayesian physicalism as an interpretation of quantum physics. There is also recent work from MATS scholars on time complexity for deterministic string machines and on infra-Bayesian haggling. We are also attempting to help others find pathways forward for the broader mathematical AI safety research community, and are very excited about the new UK ARIA funding stream. (Note that this work stream is being split off more fully going forward, as it is not part of ALTER.)

Biorisk

ALTER is continuing to engage in dialogues about metagenomic biosurveillance approaches. Our paper analyzing the costs of such a system in Israel was accepted to Global Health Security 2024 in Australia in June, and the lead author, Isabel Meusel, will attend to present it (Day 2, noon, P38). Sid Sharma, another co-author, will be presenting the Threatnet paper, which our work builds on.

We are also continuing to engage with the Biological Weapons Convention as an NGO. Israel’s geopolitical situation is at present far less conducive to positive engagement with the BWC (and elsewhere), but in our view this makes the prospects for significant change in the coming few years more plausible, rather than less. At the same time, the environment for progress on this project is volatile, and work is currently on hold.

Public Health

Our work on salt iodization in Israel has continued. The exact path forward is complex and still under discussion, and this will continue as a small side project alongside our main research and policy work.

Funding

As noted last time, the combined SFF / Lightspeed grant left ALTER’s core work not fully funded for 2024. The RAND contract has greatly improved our funding position, and will also generate income for ALTER that can be used for, among other things, our salt iodization policy work. In addition, we are in the process of applying for funding from other organizations, partly for Vanessa’s learning-theoretic AI work, as well as for ALTER itself, both for core operations and to run conferences and support future ALTER fellows.


 

