
I made the following infographic adapting the Introduction to Effective Altruism post. The goal I set for myself was to make the post more accessible and easier to digest, broadening its potential audience, and to make the ideas more memorable through graphics and data visualizations.

You can also view it in high resolution here or get a PDF here.

More info on the thought and creative process below!

Why did I make this?

I'm a graphic designer with a background in scientific and technical illustration, specializing in making complex concepts more accessible. I am now trying to pivot my career towards communicating higher-impact ideas, and I wanted to create something that could show how I can apply my skills to EA-related topics. The Intro to EA post seemed like an excellent place to start, because it condenses a lot of general information about EA and also includes a few statistics that I could readily visualize.

Process and strategy

The idea

I set myself a brief to make the post more accessible and appealing to a broader audience, specifically people who are not used to reading long-form content on rather technical subjects. This should also make it easier to share with someone who is not yet into EA but could be interested. Finally, I wanted to stay faithful to the content and structure of the original post, since you can clearly see that a lot of thought went into crafting the narrative in order to provide an accurate introductory picture of EA.

My approach

My goal was to make the ideas presented in the post more memorable and easier to grasp by combining visuals with minimal text, since we know that combining text and (relevant) graphics improves comprehension, retention, and engagement. So I basically tried to visualize as much of the post as I could, and to reduce the text to what was strictly necessary to convey the key messages.

For the introductory section, I focused on showing the dual aspect of EA as both a research field and a community, immediately answering the question in the title "What is effective altruism?". I also introduced the EA logo at a large size to give viewers a strong visual anchor to associate with EA.

The examples section was the one I spent the most time on. I decided, for each cause area, to write a short introduction on why the problem is important, accompanied by two data visualizations to "show, not tell" (for more details about the data visualizations, see the section below). I also included a timeline with examples of what has been done, using diamonds for events with a specific date and gradients for ongoing efforts - I avoided adding more icons to prevent visually cluttering this section.

For the values section, I chose to represent the four values with icons on a compass, each accompanied by a short explanation. The central element of the compass, often associated with moral values, should help viewers remember that the community is united by a set of values.

For the section on taking action, I visualized the different possibilities as branching paths a person can take. Once again, this depiction of paths should help reinforce that there are different types of action one can take, and EA is not, e.g., just about donations.

The final call-to-action section could actually be adapted depending on the context in which the infographic is shown. For this project, I went for a link to the original post and two broad links to learn more about EA.

Time spent

In total, it took me 17 hours to complete this infographic, from research to planning to final execution.

Design choices

Overall style

For colors and typography, I followed the Effective Altruism Style Guide quite closely to help build a sense of trust and brand consistency, especially since this is supposed to be an introduction to the movement and first impressions matter.

I also kept the visual style flat and minimal, again to communicate a sense of trust and the importance of the topic.

Data visualizations

For the data visualizations that were already present in the original post, I wanted to make them look even more impactful and compelling. While bar charts are certainly more effective at showing the scope of the data than just presenting numbers in tables, I find that visualizations that include pictograms can show scope differences even more effectively. For example, you can immediately see that there are about 40 deaths from COVID for each death from terrorism, and seeing 180 stick figures helps you imagine the actual people who could be saved with $1M.

I also wanted to add one data visualization to the AI alignment section and two to the decision-making section for better visual consistency with the other sections. 

  • For AI alignment, I adapted the "Computation used to train AI models with bioanchors" visualization by Taylor Jones that I saw in this report: I downloaded the updated dataset from Our World In Data, stripped out some details, and chose to show all models from 2010 to the end of 2024. I included this visualization to show the speed of progress in AI capabilities (see the sketch after this list for a rough idea of how the chart can be rebuilt from the data).
  • The decision-making section was the trickiest, because decision-making is a more abstract concept and thus harder to visualize. First, I tried to estimate how many people the average US politician can influence. This should be a decent estimate,[1] but I'd be happy to hear about more accurate ones. Then I decided to present a split timeline with examples of good and bad decision-making throughout history. I'm a little concerned that the visual representation might be read as implying that all of those decisions came from just one person, although it should be fairly obvious this can't be the case if one reads the text. I'm also aware that classifying events as either "good" or "bad" is a little simplistic, but this was the best I could come up with. If you have better ideas, I'd be happy to hear them!
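For anyone curious to recreate the AI compute chart, here is a minimal Python sketch of the filtering and plotting steps. It assumes you have exported the CSV from the Our World in Data chart page yourself; the file name and column names below are placeholders and will likely differ from the actual export.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed: a CSV exported manually from the Our World in Data chart
# "Computation used to train notable artificial intelligence systems".
# The file name and column names are placeholders - adjust them to your export.
df = pd.read_csv("ai_training_computation.csv", parse_dates=["publication_date"])

# Keep models published between 2010 and the end of 2024, as in the infographic.
df = df[(df["publication_date"] >= "2010-01-01") & (df["publication_date"] <= "2024-12-31")]

# Training compute spans many orders of magnitude, so a log scale is essential.
fig, ax = plt.subplots(figsize=(8, 5))
ax.scatter(df["publication_date"], df["training_compute_petaflop"], s=12, alpha=0.7)
ax.set_yscale("log")
ax.set_xlabel("Publication date")
ax.set_ylabel("Training compute (petaFLOP, log scale)")
ax.set_title("Computation used to train notable AI models, 2010-2024")
plt.tight_layout()
plt.show()
```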

In general, I aimed to keep the data visualizations as simple as I could, e.g. by removing unit scales from plot axes when they are not really necessary to grasp the relationship between different areas of the graph, while being careful not to compromise understandability.

Feedback welcome!

I would love to hear your thoughts on my work! 

Please let me know if you have any ideas for improvements or if you think anything is problematic.

It would also be helpful to know whether you think any specific parts of my approach are particularly effective (or ineffective) for sharing this kind of EA-related information.

Going forward

As part of my effort to build a more impactful career using my design skills, I'm looking to create more infographics, data visualizations, diagrams, illustrations, etc. on EA-related topics to show how graphic design can benefit the movement. If you have any specific post, report, concept, etc. you would like to see me work on, I would be happy to hear about it. Bonus points if it's in AI safety, but I'm really open to anything at the moment.

  1. ^

    I looked up the number of US politicians of various ranks from this website. Then, my reasoning was the following: there is one president, who can influence the entire US population; there are 100 senators, so each senator can influence, on average, 1/100 of the entire US population; and so on, for each of the categories. My estimate is the weighted average of the influence of the different categories, with weights corresponding to the number of politicians in each category. Please note that I'm not from the US, so I might have missed some nuances that could invalidate the reasoning; if so, please let me know!
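
    In formula form, the estimate is the weighted average below, where $P$ is the US population, $n_i$ is the number of politicians in category $i$ (president, senators, representatives, and so on), and $I_i \approx P / n_i$ is the estimated influence of a single politician in that category:

    $$\text{average influence} = \frac{\sum_i n_i \, I_i}{\sum_i n_i}, \qquad I_i \approx \frac{P}{n_i}$$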

Comments (8)



I love seeing this kind of initiative, and it is great that the skills you have allow you to contribute in such a clear way.

Thank you Joseph, really appreciate it!

Lovely infographic :) you do have the Good Decision Making - Bad Decision Making labels on the wrong side, for the "Improve Decision-making" box. At least, I hope they're the wrong sides, lol.

Thank you! Thanks for pointing out that mistake as well, I've just fixed it :)

First, nice infographic!

Second, I think there's a slight mistake where good decision-making and poor decision-making are flipped, unless I'm missing something.

Thanks a lot! 

And thank you for catching that, good and poor decision-making were indeed flipped. I've just updated the post and the Drive files with the correct version :)

I really like the variety of cause areas you chose. Simple, appealing descriptions that draw in someone who hasn't encountered EA before. 17 hours is really short for such quality information!

Thank you so much! I didn't do most of the research myself, though - I drew upon the existing intro post for most of the content and structure as well.
