
Arcadia Impact is a non-profit organisation that enables individuals in London to use their careers to tackle the world’s most pressing problems. We operated for over a year as the London EA Hub (LEAH) and recently rebranded the organisation as Arcadia Impact.

Our current projects: EA Group Support, Safe AI London, and the LEAH Coworking Space (detailed below).

We're also currently hiring for two roles (details below).

EA Group Support

We support EA groups at Imperial, UCL, KCL, and LSE[1], which includes mentoring student organisers, encouraging collaboration between groups, and running events such as retreats.

All four universities are ranked in the top 50 globally and have over 114,000 students collectively, presenting significant potential to build capacity to address pressing global problems. London offers a unique concentration of highly talented students, and therefore an exciting opportunity for EA groups to benefit from collaboration and coordination. Additionally, London is the world's largest EA hub, with an extensive network of professionals working on various causes. Despite this, London university groups have historically lacked consistent organiser capacity relative to comparable universities.

Since we were founded last year, the groups have reached hundreds of students, with over 200 applying to reading groups. Students who joined our programmes have started full-time roles, attended research programmes, or continued studying with the goal of contributing to a range of EA cause areas. Given the size and potential of the universities, we think there is still significant room to expand and improve our work.

Safe AI London

Through Safe AI London (SAIL), we support AI safety field-building activities, helping individuals in London find careers that reduce risks from advanced artificial intelligence.

We do this by:

  1. Running targeted outreach to technical courses at Imperial and UCL, due to the concentration of talent in Computer Science and related courses.
  2. Educating people on the alignment problem, through technical and governance reading groups and speaker events.
  3. Up-skilling people in machine learning through our own programmes or by encouraging them to apply to programmes such as ARENA.
  4. Allowing them to test their fit for research through MARS London research sprints, and connecting them to other research opportunities such as MATS.
  5. Creating a community of people in London and connecting people to opportunities within the field through socials and retreats.

London is Europe’s largest hub for AI talent and is becoming an increasingly relevant location for AI safety, with Google DeepMind, Anthropic, and OpenAI opening offices here, and AI safety researchers at MATS, Conjecture, and the Center on Long-Term Risk. The UK Government has also launched the AI Safety Institute, which works on AI safety research within government.

AI safety university groups have shown promising results over the last year, and London universities have a unique concentration of talented students relevant to AI safety, with Imperial and UCL ranked in the top 25 globally for computer science.

LEAH Coworking Space

The LEAH Coworking Space is an office space in central London used by professionals and students working on impactful projects. The office aims to provide value from:

  1. Improving the productivity of professionals doing impactful work. In our most recent user survey, users reported an average of 6.3 additional productive hours per week from using the space.
  2. Facilitating impactful connections and interactions between users.
  3. Supporting the wider community in various ways:
    1. Allowing other organisations to use the space for events.
    2. Enabling in-person meetings and coworking for remote organisations.

We also benefit from using the space to host many of our events including research sprints, reading groups, and socials.

Since we moved offices in May 2023 (~8.5 months ago), we have recorded a total of >21,000 person-hours across >2,800 visits to our space and 191 unique users including visitors[2] and guests. 

If you are interested in applying to join the space, you can apply using this form.

We’re Hiring

We are currently hiring for two roles: 

  • Head of Groups: Lead our work supporting Effective Altruism groups at Imperial, UCL, KCL, and LSE, including mentoring group leaders, running inter-university events, and connecting the groups with opportunities in the EA community.
  • Head of AI Safety: Lead our work on AI Safety field-building through Safe AI London including running reading groups, hackathons, machine learning upskilling programmes, and designing programmes for more advanced students.

Apply by 19th January. 

We expect the ideal candidates for these roles will be highly familiar with the relevant ideas and feel comfortable working independently. 

If you’d like to learn more about the roles, we are running a drop-in session at 5 pm GMT on the 5th of January (add to calendar; video call link).

If you have any questions or want to find out more about our work, then get in touch!
 

  1. ^

    We also work closely with EA Northeastern London.

  2. ^

    We estimate the actual person-hours and number of visits are higher than this, due to users forgetting or not signing in and out.

Comments



Neat! As someone who's not on the ground and doesn't know much about either initiative, I'm curious what Arcadia's relationship is to the London Initiative for Safe AI (LISA)? Mostly in the spirit of "if I know someone in AI safety in London, in what cases should I recommend them to each?"

Thanks for the question! We aren't connected in any official capacity and haven't collaborated on any projects.
The events we run are focused on students and young professionals who haven't engaged with AI safety arguments or the community before, whereas LISA is more focused on those already doing relevant research. As office spaces, the majority of our users attend as individuals (working independently or as the only person from their organisation), while LISA hosts organisations and up-skilling programmes. Our office has a wider focus than just AI safety, although I expect there is some overlap in the people we would accept, and a small number of users are signed up to both offices.
