
This is a retrospective of the AIADM 2024 Conference, Retreat, and Co-working in London.

Tl;dr: ~130 people joined together over the span of three days to learn, connect, and make progress toward making AI safe for nonhumans.

Attendees from the onsite AI, Animals, and Digital Minds 2024 Conference outside LSE

Background

This event followed in the footsteps of the October 2023 Artificial Intelligence, Conscious Machines, and Animals: Broadening AI Ethics conference held at Princeton by Peter Singer, Tse Yip Fai, Leonie Bossert, and Thilo Hagendorff. It was planned in a [formerly private] AI Coalition channel on the Hive Slack, to which many attendees of the original conference were invited for the purpose of continuing the conversation. It was there that I discovered that running the conference in 2024 would be highly counterfactual: Peter Singer was retiring from Princeton, and none of the previous organizers were planning on repeating it. We were able to get permission to hold the second iteration of the conference and even received a promotional endorsement from Peter Singer. Planning took place over several months and was independently funded.

Objective

The goal for this event was to explore how we can develop AI technologies in a way that protects and benefits nonhuman animals and potentially sentient AI. We had the dual purpose of increasing the salience of the field of AI and Nonhumans and also getting potential leaders to network with one another. 

To advance the former goal, the content and programming needed to be highly accessible: we made the event hybrid, recorded as much as practical, created website pages and social media assets to help speakers with pre-conference promotion and future SEO, and added more event space as the RSVPs crept up. Attendance was open to anyone who could offer value to furthering this field, including, but not limited to, thought leaders, researchers, industry workers, funders, and hopeful future contributors.

Timing and location

All in-person events took place in London immediately after EAG London (May 31 - June 2).

  • June 3: Conference (hybrid: onsite - live room and overflow room, offsite - livestreaming, and virtual)
  • June 4-5: Retreat (virtual and in-person options that only synced up during an hour of livestreamed lightning talks)
  • June 6-11: Co-working (in person)

Attendees

There were 260+ applications for attendance. A directory was created of people who consented to have their application answers shared with other attendees. These included answers to questions such as: What is your experience or demonstrated interest in the topic? Why do you want to attend? What can you offer to other attendees?


Conference

The 1st day was a hybrid conference:

  • It took place at the London School of Economics (LSE), split between 2 rooms which had around 50-60 total attendees. It was also streamed live to Newspeak House, which had 10-15 attendees, and around 50-70 people attended via Zoom. 
  • We had a very tight schedule and prioritized having more time for Q&A, so each talk was only 10-20 minutes long and we grouped similar talks together as panels.
     
Bob Fischer presenting Animal Friendly AI = Misaligned AI 

Sessions

AI for Animals

Digital Minds

Interspecies Communication

  • Reimagining Our Future Relationships with Nonhumans with AI panel by Jane Lawton and Gal Zanir (watch session, view slides)

After the conference, we traveled 30 minutes by bus from LSE to Newspeak House, where the onsite and offsite attendees could interact over dinner.

Onsite and offsite conference attendees joined for dinner at Newspeak House

Retreat 

The 2-day in-person retreat was run as an unconference.

  • It took place following the AIADM conference at Newspeak House in London and had ~40 attendees
  • Each session was 30 min long and we would ring a gong to indicate time to switch. There were 2-3 sessions going on simultaneously in different areas of the venue. Most of these were not pre-planned.
Retreat attendees writing down the sessions they planned to facilitate
  • 1 hour was spent in the morning and afternoon of each day on “Planning and Pitching.” People wrote down their ideas for unconference discussion topics and then pitched them to the group. People were encouraged to ask clarifying questions, modify topics based on feedback, and reschedule if too many people were interested in attending simultaneous sessions.
    • Topics included: 
      • AI+Wild Animals, Optimal Stakeholder Positioning for Influencing Precision Livestock Farming, Finding Digital Mind Interventions Robust to Agential Backfire, Nailing Down Concrete AI/ML Projects to Help Animals, Earth Species Project: How Can Unlocking Communication in Animals Create Greatest Change, and Conflict between AI Alignment and the Interests of Nonhumans.
  • Lightning talks were held on both days for 1 hour at the end of the unconference. The first day’s presenters were scheduled in advance, and the session was livestreamed on YouTube to sync with the virtual retreat. The second day was spontaneous, and people signed up that day for their slots. Both worked well.
  • After lightning talks, we had free-form interaction. Many participants used this time to continue having conversations from the day or have dinner at the venue. People left at various times, though many stayed until midnight.
  • Throughout the day, people would come and go as they pleased. They could even sit out of discussions altogether and use the time to catch up on some work or have side conversations with other attendees. The unconference format made it easy for participants to conserve energy and only jump into discussions that interested them. 
     
2 day Unconference for the AI, Animals, and Digital Minds Retreat

Lightning Talks

  • Soenke Ziesche | AI alignment for nonhuman animals (view slides)
  • Adrià Moret | A Conflict Between AI Safety and AI Welfare: Should We Control Near-Future AI Systems? (watch talk)
  • Ronen Bar | A Slip to the Tongue: How Language Unwittingly Shapes Bias Towards Animals (watch talk)
  • Ali Ladak | Some key empirical findings on digital minds (watch talk)
  • Alex Schwalb | Neuromorphic engineering as a potential enabling technology for digital minds (watch talk)
  • Jane Lawton | AI to Decode Animal Communication (watch talk)
  • James Faville | An AMS (Attribution, Moral, Strategic) Model of Advocacy (watch talk)
  • Jeff Sebo | Updates from the NYU Center for Mind, Ethics, and Policy (watch talk)

Virtual retreat

  • Held in Gatheround for 5 hours on 1 day, with a scheduled pause to attend the livestreamed lightning talks
  • We provided a summary of the conference talks in case virtual retreat participants needed a refresher or weren’t able to attend. It was generated with AI from the Zoom transcript, and because the transcript itself was poor, the summary’s quality suffered.
Pre-generated discussion prompts were loaded into Gatheround for the virtual retreat

Co-Working

The following 6 days were reserved for relaxed co-working. It was designed for those who chose to stay around London and wanted additional opportunities to network with other attendees while also getting back to work. The co-working space was generously provided by the Center on Long-term Risk. Around 3-6 people came for co-working each day, and we scheduled relaxed board games at the end of most nights.

6 day co-working reserved for attendees at Center on Long-term Risk

Follow-up Opportunities

Subscribe to AI for Animals Newsletter

We used the conference as an opportunity to gather interest to launch the AI for Animals newsletter. The AIADM application form asked whether applicants would like to sign up for the newsletter, resulting in 232 initial signups. After discussing with other stakeholders, we decided to include a section on Digital Minds. The newsletter will be mostly written by Max Taylor (Animal Charity Evaluators), with feedback and contributions from experts in relevant fields.

Join the AI Coalition on Hive

The coalition consists of a newly public communication channel, #s-ai-coalition, on the Hive (formerly Impactful Animal Advocacy) Slack focused on active field building in AI and Animals. There are also monthly meetings on the last Sunday of every month. You must be a member of the Hive Slack to join.

Work with us

We are hiring a part-time project manager for the AI for Animals Coalition at Hive. See the job description and submit your interest here. We will send out an application form in the next 1-2 weeks. Currently, we are fundraising (see below), so there is a good (>85%) possibility that this part-time role will expand into a full-time role with provisions for attending conferences and more creative work.

We are fundraising for the full-time role for AI, Animals, and Digital Minds for 1 year and have a gap of $48k (46% of total). See our proposal and consider making a donation. For donations over $1k or questions, email hello@joinhive.org.

Tangible Outcomes

  • AI, Animals and Digital Minds 2024 Conference/Retreat Tangible Outcomes
  • Continued momentum from the first conference in 2023
    • The Earth Species Project was in the audience for the last conference and went on to speak at this year’s conference. They are not usually involved in EA or animal advocacy spaces, but were able to have a platform at this event to speak to these audiences and find new collaborators.
    • Amber Sheldon, a PhD student at Brown University, wrote part of her dissertation as a rebuttal to a talk about Precision Livestock Farming (PLF) by Walter Veit at the 2023 conference. During this year’s conference, they joined a panel together to discuss their different viewpoints.
  • Established potential funding for projects at this intersection
    • There were at least 2 funding conversations that happened as a result of attendees being able to talk at the retreat, with one known to have been actualized at $45k.
  • Grew interest/literature/collaboration in the field
    • Zach Brown, a research assistant in Economics at MIT, attended the conference and wrote a blog post about PLF inspired in part by the ideas presented by the speakers at the event who were cited in the post. 
    • Thomas Manandhar-Richardson, a data scientist for Bryant Research, is collaborating on a paper with Jonathan Birch and his PhD student, Eva Read, by helping them analyze survey data they collected about what animal welfare researchers think the goals/methods of their field should be (i.e. activist, scientist, collaboration with industry, get involved in politics etc). He says the connection would never have happened without the event.
  • Created epistemic updates
    • Many people said they updated their thinking in response to talks or discussions from the event. This was especially true for Bob Fischer’s talk Animal-friendly AI = Misaligned AI and conversations during the retreat with James Faville where he talked about backfire risks of certain interventions and s-risks with digital minds.
  • Connected with other groups working in the space

Feedback from participants

On the last day of the retreat, around 15 participants gathered to give feedback on the event and provide thoughts on the future of the AIADM community. We also sent out an event feedback form that received 19 responses. 

Here is a summary of the main points:

  • There was some uncertainty around whether to keep Animals and Digital Minds together as a community.
    • On the one hand, these communities are quite small and could benefit from mingling and sharing resources. On the other hand, the lack of focus could drive some people away and dilute efforts to create more concrete outcomes.
  • There was a lack of practical outcomes for advocates.
    • The conference seemed useful for academics to exchange ideas and write new papers, but there were no tangible practical next steps for people who saw their roles primarily as advocates.
  • The unconference format during the retreat was more engaging and valuable than the lecture style conference.
    • This finding is likely biased, as it was mostly reported by the participants who stayed until the end.
  • Some people had difficulty with the logistics.
    • Some people didn’t know to sign up for the event on Luma, didn’t get the Zoom link automatically, didn’t get an email confirmation of their invite, didn’t realize the off-site conference at Newspeak House would just be a livestream, didn’t see the comprehensive guides on Notion, etc. 
    • Other people said the instructions and emails were quite clear.
  • Scheduling the day after EAG prevented many from fully appreciating the conference because they felt exhausted and/or got sick.
  • Diversity and scheduling of talks for the conference was great.
    • We scheduled the less cognitively demanding and more inspirational talks about reimagining our future with animals and nature at the end to give people a break.
  • Venue was too packed.
    • There were some people who sat on the floor to try and consolidate into one space. While the room at LSE was a great setup for a hybrid zoom event (great microphones to pick up audience questions), it was extremely tight and people did not have a desk to put their laptops on.
  • Virtual retreat was too long.
    • People could not commit 5 full hours to attending a virtual event and kept coming in and out of the Gatheround, which was not good for creating a feeling of cohesiveness. 

Interested in learning more?
This behind the scenes document outlines the challenges and processes of planning, budgeting, managing applications, communicating with attendees, promoting, and coordinating an event, along with lessons learned and tools used. 

Thank you to Allison Agnello, @Max Taylor, and @Antoine de Scorraille for providing feedback on this document.

Comments



Thanks for this writeup! I especially found the linked doc, in the category of "nuts and bolts of event organizing", to be quite interesting and helpful; as a sometimes-organizer myself, it's cool to read about the design decisions and rationale that goes into other events. I was also impressed to see that you self-funded this event with ~$5.7k -- I'd be interested in providing some retroactive funding to help cover this, if you want to put up this retrospective doc on https://manifund.org/ !

Thanks for everything you've done, Austin! I'm especially grateful to the Manifold community for having raised $1,203 for Shrimp Welfare Project (to date); it's been one of the most popular charities on the platform.

Hey Austin, thanks for reading this so thoroughly, making the suggestion to put it up on Manifund, and generously offering to contribute to retroactive funding. This seems like a great idea and I just made a grant request page. :)

You're welcome! Reasoning transparency is a strong part of our org culture, if it wasn't obvious enough :) We'll add Manifund to our tasks list, thanks for the flag!

I really loved the event! Organizing it right after EA Global was probably a good idea to get attendees from outside of the UK.

At the same time, being right after EA Global without a break prevented me from attending the retreat part. 6 days in a row full of intense networking was a bit too much, both physically and mentally, so I only ended up attending the first day.

But thanks a lot for organizing, I got a lot of value from it in terms of new cutting edge research ideas.

Glad you enjoyed it and sad you weren't able to attend the retreat.

tbh, I was also quite tired after EAG and skipped out on some after-conference events, which was quite suboptimal. Next year, I'm thinking about doing it before EAG and giving folks 1-2 days of rest before EAG starts.

Hi Rafael! Glad you were able to attend the first day. And we appreciate the feedback, thank you! You aren't the first to mention the post-EAG overwhelm; we'll be taking this into consideration for future conferences. 

Executive summary: The AI, Animals, and Digital Minds 2024 Conference, Retreat, and Co-working event in London brought together around 130 people to learn, connect, and make progress on developing AI technologies that protect and benefit nonhuman animals and potentially sentient AI.

Key points:

  1. The event included a 1-day hybrid conference with talks on AI for animals, digital minds, and interspecies communication.
  2. A 2-day in-person unconference retreat followed, allowing attendees to discuss topics of interest in an unstructured format and share lightning talks.
  3. A 5-day co-working period provided networking opportunities for attendees who stayed in London.
  4. Follow-up opportunities included subscribing to the AI for Animals Newsletter, joining the AI Coalition on Hive, a potential job opening, and fundraising for future work.
  5. Tangible outcomes encompassed continued momentum from the previous year's conference, potential project funding, increased interest in the field, epistemic updates, and connections with other groups working in the space.
  6. Participant feedback highlighted the engaging unconference format, great diversity of talks, and some logistical challenges to address for future events.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
