FAR Labs is a coworking hub in downtown Berkeley for organizations and individuals working on AI safety and related issues. Since opening the space in March 2023, we have grown to host approximately 30 members. Our members are primarily drawn from four anchor organizations, but we also host a number of independent researchers and research teams.

Now that our initial setup is complete, we are pleased to announce an open call for applications for individuals or organizations who wish to work from this space.

Our initial aims for FAR Labs:

  • First and foremost, it should be a place to do great work. Our members are working on challenging problems, and we want to improve their effectiveness, reduce distractions, and provide a professional environment for them to work in. That includes providing a variety of workspaces (private offices, dedicated desks and hot desks, general areas), catering, and other office amenities such as a gym.
  • A warm, intellectually generative culture. Having interesting and fun conversations is one of the best parts of working in a shared environment, and championing a culture that enables those interactions is incredibly important to us.
  • Supporting collaborations between members, other alignment organizations, and outside collaborators (e.g. academics, or industry researchers). While membership is tied to actively working on AI safety (technical or governance) or related areas (e.g. field building, advocacy, fundraising), we also want to make a space that’s welcoming to many viewpoints, which we expect to benefit both members and visitors.

FAR AI’s broader mission is to support research and initiatives that promote trustworthy and safe AI systems. FAR Labs is an investment in operations and coordination. By creating research environments and good operational scaffolding, we can accelerate safety research and x-risk reduction across projects and orgs.

For the past six months, that has looked like setting up the space and getting the basics in place (office, food, equipment). Moving into 2024, the Labs team will begin offering programs for members – as well as others in the AI safety ecosystem – for developing relevant skills for research and operational excellence. We’re particularly excited about identifying best practices and providing training to help members build and scale high-performing teams.

FAR Labs runs at cost (or a slight loss)[1]; we’re aiming for a fully member-supported office and community space.

We are now open for new membership applications. We currently hope to onboard one to three alignment-oriented organizations, and perhaps a handful of independent members[2], aiming for a total membership of 40–50 people. If you’re interested in working from FAR Labs, or would like to learn more, please reach out or apply.

  1. Programs, external visitors, and workshops will be grant-funded, while our ongoing day-to-day office costs are covered by member dues.

  2. While we host several independent researchers, we prioritize organizations.
