
This semiannual update is intended to inform the community about what we have been doing and to provide a touchpoint for those interested in engaging with us. Since the last update in mid-2023, the past several months have been a tumultuous time in Israel, and this has affected our work in a variety of ways, as outlined in several places below.

People

  • @Yonatan Cale and Shahar Avin have joined the ALTER board of directors, joining current board members @Vanessa Kosoy, @Joshua Fox, @EdoArad, @GidiKadosh, Daniel Aronovich, and Ezra Hausdorff.
  • @Rona Tobolsky has been a policy fellow during the second half of 2023, continuing her work with ALTER. She has been working on a number of things, including iodization and biosecurity, especially focused on metagenomic sequencing for surveillance. She also started a master's program in disaster preparedness and management at Tel Aviv University's School of Public Health, and is considering next steps.
  • Ram Rachum has completed his fellowship with ALTER, during which he focused on multi-agent cooperation and multipolar AI scenarios. He has co-run a conference on disobedience in AI, as well as written several papers on how agents cooperate. His latest paper was just accepted to AAMAS 2024. He is currently seeking funding or support for his next steps as an affiliate researcher.
  • A new, independent program based in the US, under Ashgro fiscal sponsorship, was started to promote mathematical and learning-theoretic alignment research. This is independent from ALTER, but we are supporting its work. The project has hired Gergely Szucs and will continue work on that agenda. See the section below for further updates.

Ongoing and New Projects

  • Our work on infectious disease policy, on the BWC, and on salt iodization in Israel is at a near-complete standstill, as almost all governmental attention is on the war. 
  • We are working on a paper with Isabel Meusel on applying a model for metagenomic sequencing to Israel. This is part of a broader plan to promote biomonitoring in Israel, and we are hoping to have the paper complete and ready for submission later this month.
  • The AI safety coworking day at the EA office, which ALTER encouraged, has been successful, as has the reading group. Several members have also applied for external funding to continue this work, and at least one has received it. Unfortunately, these activities are on hold due to current logistical issues and the war.
  • EA Israel and members of the AI safety group are potentially working on a cybersecurity and AI safety education project. This is still being developed.
  • David has worked on a few AI policy projects: safety culture for AI (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4491421), the debate on "pausing" AI, and in-progress work on audit standards boards and other topics with the Transformative Futures Institute.
  • On public engagement, David has facilitated several reading groups for BlueDot in both Biosecurity and AI policy, and is working on a new project to do public communication about current and future AI benefits and risks. We have also successfully made connections with several individuals in Israel working on biosecurity-relevant projects.

Learning-Theoretic / Mathematical AI Alignment

(Largely via Ashgro fiscally-sponsored Affiliate):

  • Gergely Szucs is working on completing a project in infra-Bayesian physicalism, and tentatively plans to start work on a project on regret bounds in infra-Bayesian reinforcement learning, possibly related to Decision-Estimation Coefficients.
  • Vanessa Kosoy is going to be mentoring scholars in MATS, and potentially in ATHENA, to focus on other aspects of the Learning-Theoretic Agenda.
  • We recently gathered a list of those who have expressed interest in mathematical AI alignment, and received well over 100 responses.
    We have begun putting people in touch on the basis of that list, and hope to do more in that vein. If you know of individuals doing relevant work who have not filled in this form, whether or not they think we already know about them, please encourage them to do so!

Funding

  • We reached a settlement with FTX Debtors that allowed ALTER to return all unspent funds. (This excludes the roughly one-third of the initial grant that had already been committed or spent before the FTX bankruptcy.)
  • Including incoming grants, ALTER will have enough cash on hand to fund core operations until the end of the 2024 calendar year, but the allocation of funds is unclear, and beyond work on Learning Theory, below, there is no funding for additional programming or projects. (We have a grant application outstanding which may change this.)
  • We have been awarded a Survival and Flourishing Fund grant totalling $339,900, consisting of two overlapping grants: $316,900 from Lightspeed Grants, focused on learning-theory research and mathematical alignment, and a marginal $23,000 from SFF. Note that we are still deciding how to allocate this funding between projects: the SFF allocation was for general expenses and then learning theory, but is treated as marginal funding over the Lightspeed amount, whereas the Lightspeed grant was specific to learning-theory work.
  • As noted, the earlier funding for hiring an additional alignment researcher for Vanessa is being managed by a fiscally sponsored project run by Ashgro. This project will be used for funding learning theory work outside of Israel, and we may recommend that Lightcone direct some of the Lightspeed grant to that project.