
Below is CEA's strategic update for February 2017. I'm going to take over sharing these updates. If you'd like to receive them by email, just comment or contact me to let me know. I will be posting these updates on the CEA blog going forward; I'm sharing this one here for anyone who missed it on our blog, and so you know where to find them in future.

Since our last update, we have launched EA Funds, hosted a team retreat and continued to learn a lot at Y Combinator (YC). As a result, we’ve updated and clarified our strategy, and I’d like to share more of our bigger-picture thinking going forward. In these updates, I’ll aim to explain more of the why of our overall approach, to provide more context for what we’re working on.


CEA’s Vision and Mission

During our team retreat, we clarified and updated our overall vision and mission. Our vision is to create an optimal world. We don’t yet know exactly what an optimal world looks like. There are some things which we think are robustly good, such as ending death from malaria or abolishing factory farming, and other areas where we are highly uncertain. For this reason, we want to see effective altruism do for the pursuit of good what the Scientific Revolution did for the pursuit of truth. We want to build a community focused on figuring out what this optimal world looks like, and how we get there.

To help make this vision a reality, we plan to first focus on building capacity to do more good in the future. In particular, we need three key resources as a community:

  1. groundbreaking ideas,
  2. talented, motivated people, and
  3. the money needed to put the best ideas into practice.

In the past, we’ve talked a lot about whether we’re bottlenecked by ideas, talent or money, but we’ve realised that, in absolute terms, we need vastly more of all three of these key resources. We want to promote and strengthen effective altruism as an idea and a community, with the aim of increasing the total value of all resources effectively aimed at robustly doing good now, and figuring out how to do even more good in the future.


CEA’s Objectives for Q1

To better pursue our mission of supporting the community, we are setting quarterly goals, each focused on one of the key community resources (ideas, talent or money). This quarter, our primary focus is improving the infrastructure for giving effectively. We developed the EA Funds concept after speaking to both highly engaged donors and people who were quite new to the community. Beyond that, this quarter we are also building faster feedback loops to ensure we focus our future efforts where they are most valuable. Next quarter we aim to focus on consolidation, and on building better infrastructure for developing and sharing core ideas within effective altruism.


Our specific Q1 Objectives:

Test Effective Altruism Funds as a concept. We will consider this project a success if feedback from the community is positive, and the amount we have raised for the funds in the first quarter exceeds $1M. We think that the fund managers will be able to recommend a more representative series of grants if they have at least $200K to allocate in the first round. You can keep up with how much money we've raised using this dashboard. We hope to raise additional funds through new contacts at Y Combinator. We chose to focus on money moved during YC in part because the partners and founders there are particularly good at giving advice in this area. You can read more about why we launched EA Funds in our launch post, and if you’ve yet to provide feedback on EA Funds, we’d really appreciate your thoughts in this quick survey.

Establish models of how to evaluate the impact of our activities. This involves both establishing a framework for evaluating Executive Office activities and evaluating how our existing channels (such as our Effective Altruism Global conferences and social media channels) add value. We’re building some rough quantitative models to compare these different approaches, as we expect some projects to be many times more effective per dollar than others; a toy sketch of this kind of per-dollar comparison appears after these objectives.
Maintain and grow a positive relationship with the broader effective altruism community. This includes providing greater transparency about what CEA is working on and why.
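For illustration, here is a minimal sketch of the kind of rough per-dollar comparison mentioned above. The channel names and all figures are hypothetical placeholders, not CEA’s actual estimates:

```python
# A toy cost-effectiveness comparison; not CEA's actual model.
# Channel names and all figures are hypothetical placeholders.

channels = {
    "EA Global conferences": {"value": 500_000, "cost": 100_000},
    "social media": {"value": 60_000, "cost": 30_000},
    "chapter support": {"value": 200_000, "cost": 80_000},
}

# Rank channels by estimated value created per dollar spent.
for name, c in sorted(channels.items(),
                      key=lambda kv: kv[1]["value"] / kv[1]["cost"],
                      reverse=True):
    print(f"{name}: {c['value'] / c['cost']:.1f}x value per dollar")
```

Even a model this crude makes the motivation clear: if one channel creates several times more value per dollar than another, reallocating effort towards it matters more than marginal improvements within the weaker channel.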

 

Changes to CEA’s organisational structure

Our renewed focus means some changes to the organisational structure of CEA, in line with our goal of testing projects, measuring their impact and updating to focus on those which perform best.

1. All remaining CEA teams in one division

To allow us to coordinate better, we’ve moved everyone (marketing, events, chapter support, community liaisons and Will MacAskill’s Executive Office) into one division, working towards agreed metrics and objectives. Find out more about the people who work at CEA on our team website.

 
2. The dissolution of our Special Projects Division

As part of CEA’s internal reorganization in July 2016, we created a Special Projects Division to house a number of discrete, pre-existing research-related projects:

  • Philanthropic advising
  • Policy
  • The Oxford Institute for Effective Altruism (OIEA)
  • Fundamentals research

In line with our aim to narrow CEA’s focus, we’ve been reviewing which of the projects in this division to scale up, scale down or discontinue. We have therefore decided to do the following:

  • discontinue the philanthropic advising project (while shifting some of that work to Founders Pledge);
  • move our policy work on existential and technological risks to the Future of Humanity Institute (FHI);
  • move OIEA fully into Oxford University; and
  • allow the fundamentals research team to operate independently as part of a collaborative Oxford-based research community that includes OIEA and FHI.

In light of these changes, there is no longer a need for a Special Projects Division at CEA. Although CEA will continue to sponsor the fundamentals research stream and facilitate the development of OIEA, its organizational focus will be on developing and strengthening the effective altruism community.

Below is more information on the next steps for each of the research projects that previously formed part of the Special Projects Division.


Philanthropic advising

The philanthropic advising project, which was originally part of Giving What We Can, has focused on (i) research into new, effective giving opportunities, and (ii) providing tailored giving recommendations to wealthy individuals and foundations. We recently decided to discontinue this project at CEA for the following reasons:

We moved less money through wealthy individuals and foundations than we expected. Although we had hoped that Founders Pledge members would provide a reliable stream of clients, we underestimated the inefficiencies to Founders Pledge of relying on a third party for consulting services. We believe it would be more effective for Founders Pledge to provide these services itself. Marinella Capriati, formerly of our philanthropic advising team, will be joining Founders Pledge to help it develop its own philanthropic advising capacity.

Our research wasn’t able to add enough value beyond GiveWell and the Open Philanthropy Project. Our model involved conducting research into areas that GiveWell/Open Philanthropy Project had not fully explored and were unlikely to explore anytime soon. However, our team’s areas of expertise overlapped considerably with those of GiveWell/Open Philanthropy Project. Unless we ventured well beyond our areas of expertise, there were fewer opportunities to provide value here than we expected. Although this might not always be the case, we believe that research that is within the focus areas of GiveWell/Open Philanthropy Project is most efficiently conducted within those organizations. James Snowden, formerly of our philanthropic advising team, will be joining GiveWell, where we believe his research will have a greater impact.

Our philanthropic advising work was insufficiently complementary to CEA’s core strategy. In the philanthropy domain, CEA’s plan is to develop, and move money through, the new EA Funds platform. We believe this platform will accomplish several of the goals we had for the philanthropic advising team, reducing the value of CEA doing its own charity research.


Policy

As we mentioned in last month’s update, Seb Farquhar, who led our policy advising work (previously as the Executive Director of Global Priorities Project), is moving across the hall to FHI, where he will continue his work on existential and technological risk policy. We discussed our decision not to expand our policy focus in our year-end review.

 
Oxford Institute for Effective Altruism

OIEA is an academic institute, founded by Hilary Greaves and Will MacAskill, that we expect to go live in fall 2017. During its initial, grant-writing stage, OIEA has been housed within CEA, which has funded its first employees (Michelle Hutchinson and Jon Courtney). Having received its first (small) grant, OIEA is now at the stage where it can operate as part of Oxford University, independent of CEA.


Fundamentals research

The remaining research team within CEA will focus on pursuing research that helps to improve the intellectual community around effective altruism. This may include:

  • searching for insights that could help improve understanding of object-level questions about movement norms or strategy;
  • understanding issues that cut across different cause areas, or relate to how they might work together; and
  • producing resources that help individuals engage more thoughtfully with effective altruism.

This team consists of two full-time researchers (Stefan Schubert and Max Dalton), and one part-time researcher (Ben Garfinkel). In addition, Owen Cotton-Barratt will be joining part-time to lead research direction. This team will work closely with FHI and OIEA as part of a collaborative Oxford-based research community.

Comments



I was just looking at the EA Funds dashboard. To what extent do you think the money coming into EA Funds is EA money that was already going to be allocated to similarly effective charities?

I saw the EA Funds post on Hacker News. Are you planning to continue promoting EA Funds outside the existing EA community?

[anonymous]

To what extent do you think the money coming into EA Funds is EA money that was already going to be allocated to similarly effective charities?

We expect that most of the money donated so far is not counterfactual. We'll have impact to the degree that the fund managers make better donation decisions than individuals would have made otherwise.

I saw the EA Funds post on Hacker News. Are you planning to continue promoting EA Funds outside the existing EA community?

Yes. We've been quite happy with the reception to EA Funds from the EA community. Over the next few months we plan to run some experiments to see if EA Funds is purely a niche EA product or whether we can get some traction with people new to EA.

Glad to see the plans laid out.

I think it'd have made more sense to do the "EA Funds" experiment in Quarter 4, when it would tie in better with people's annual giving habits.

I do think it may be valuable to try even if the donations are not counterfactual (for purposes of being able to coordinate donations better).

This was insightful for me. I'd especially be interested in the impact evaluation models.

Please add me to the mailing list: remmeltellenis{at}gmail{dot}com
