
I made the following infographic adapting the Introduction to Effective Altruism post. The goal I set for myself was to make the post more accessible and easier to digest, broadening its potential audience, and to make the ideas more memorable through graphics and data visualizations.

You can also view it in high resolution here or download a PDF here.

More info on the thought and creative process below!

Why did I make this?

I'm a graphic designer with a background in scientific and technical illustration, specializing in making complex concepts more accessible. I am now trying to pivot my career towards communicating higher-impact ideas, and I wanted to create something that could show how I can apply my skills to EA-related topics. The Intro to EA post seemed like an excellent place to start, because it condenses a lot of general information about EA and also includes a few statistics that I could readily visualize.

Process and strategy

The idea

I set myself a brief to make the post more accessible and appealing to a broader audience, specifically people who are not used to reading long-form content on rather technical subjects. This should also make it easier to share with someone who is not yet into EA but could be interested. Finally, I wanted to stay faithful to the content and structure of the original post, since you can clearly see that a lot of thought went into crafting the narrative in order to provide an accurate introductory picture of EA.

My approach

My goal was to make the ideas presented in the post more memorable and easier to grasp by combining visuals with minimal text, since we know that combining text and (relevant) graphics improves comprehension, retention, and engagement. So I basically tried to visualize as much of the post as I could, and to reduce the text to what was strictly necessary to convey the key messages.

For the introductory section, I focused on showing the dual aspect of EA as both a research field and a community, immediately answering the question in the title, "What is effective altruism?". I also introduced the EA logo at a large size to give viewers a strong visual anchor to associate with EA.

The examples section was the one I spent the most time on. For each cause area, I decided to write a short introduction on why the problem is important, accompanied by two data visualizations to "show, not tell" (for more details about the data visualizations, see the section below). I also included a timeline with examples of what has been done, using diamonds for events with a specific date and gradients for ongoing efforts. I avoided adding more icons to prevent visually cluttering this section.

For the values section, I chose to represent the four values with icons on a compass, each accompanied by a short explanation. A compass is often associated with moral guidance, so placing it as the central element should help viewers remember that the community is united by a set of values.

For the section on taking action, I visualized the different possibilities as branching paths a person can take. Once again, this depiction of paths should help reinforce that there are different types of action one can take, and EA is not, e.g., just about donations.

The final call-to-action section could actually be adapted depending on the context in which the infographic is shown. For this project, I went for a link to the original post and two broad links to learn more about EA.

Time spent

In total, it took me 17 hours to complete this infographic, from research to planning to final execution.

Design choices

Overall style

For colors and typography, I followed the Effective Altruism Style Guide quite closely to help build a sense of trust and brand consistency, especially since this is supposed to be an introduction to the movement and first impressions matter.

I also kept the visual style flat and minimal, again to communicate a sense of trust and the importance of the topic.

Data visualizations

For the data visualizations that were already present in the original post, I wanted to make them even more impactful and compelling. While bar charts are certainly more effective at showing the scope of the data than numbers in tables, I find that visualizations that include pictograms can show scope differences even more effectively. For example, you can immediately see that there are about 40 deaths from COVID for each death from terrorism, and seeing 180 stick figures can help you imagine the actual people that could be saved with $1M.

I also wanted to add one data visualization to the AI alignment section and two to the decision-making section for better visual consistency with the other sections. 

  • For AI alignment, I adapted the "Computation used to train AI models with bioanchors" visualization by Taylor Jones that I saw in this report: I downloaded the updated dataset from Our World In Data, stripped out some details, and chose to show all models from 2010 through the end of 2024. I included this visualization to show the speed of progress in AI capabilities.
  • The decision-making section was the trickiest, because decision-making is a more abstract concept and thus harder to visualize. First, I tried to estimate how many people the average US politician can influence. This should be a decent estimate,[1] but I'd be happy to hear about more accurate ones. Then I decided to present a split timeline with examples of good and bad decision-making throughout history. I'm a little concerned that the visual representation might be interpreted as all of those decisions coming from just one person, although it should be fairly obvious that this can't be the case if one reads the text. I'm also aware that classifying events as either "good" or "bad" is a little too simplistic, but this was the best I could come up with. If you have better ideas, I'd be happy to hear them!

In general, I aimed to keep the data visualizations as simple as possible, e.g. by removing unit scales from axes where they are not necessary to grasp the relationships between different parts of a chart, while being careful not to compromise understandability.

Feedback welcome!

I would love to hear your thoughts on my work! 

Please let me know if you have any ideas for improvements or if you think anything is problematic.

It would also be helpful to know whether you think any specific parts of my approach are particularly effective (or ineffective) for sharing this kind of EA-related information.

Going forward

As part of my effort to build a more impactful career using my design skills, I'm looking to create more infographics, data visualizations, diagrams, illustrations, etc. on EA-related topics to show how graphic design can benefit the movement. If there's any specific post, report, or concept you would like to see me work on, I would be happy to hear about it. Bonus points if it's in AI safety, but I'm really open to anything at the moment.

  1. ^

    I looked up the number of US politicians of various ranks from this website. Then, my reasoning was the following: there is one president, who can influence the entire US population; there are 100 senators, so each senator can influence, on average, 1/100 of the entire US population; and so on for each category. My estimate is the weighted average of the influence across the different categories, with weights corresponding to the number of politicians in each category. Please note that I'm not from the US, so I might have missed some nuances that could invalidate the reasoning; if so, please let me know!
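
    To make the arithmetic concrete, here is a minimal sketch of the weighted-average calculation in Python. The category counts below are illustrative placeholders rather than the exact figures from the website I used, so the output should be read as an order-of-magnitude check, not my final estimate:

    ```python
    # Sketch of the footnote's weighted-average estimate.
    # Category counts are illustrative assumptions; the actual source
    # may use different categories and figures.

    US_POPULATION = 335_000_000  # rough 2024 US population

    # category -> number of officeholders (illustrative)
    politicians = {
        "president": 1,
        "senators": 100,
        "house_representatives": 435,
        "governors": 50,
        "state_legislators": 7_386,
    }

    # Assumption: each officeholder in a category influences an equal
    # share of the population, i.e. US_POPULATION / (category size).
    influence = {cat: US_POPULATION / n for cat, n in politicians.items()}

    # Weighted average across all politicians, weighting each category
    # by how many politicians it contains.
    total = sum(politicians.values())
    avg_influence = sum(n * influence[cat] for cat, n in politicians.items()) / total

    print(f"Average influence per politician: ~{avg_influence:,.0f} people")
    # With these counts: (5 categories x 335M) / 7,972 politicians ≈ 210,000 people.
    ```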
