
I made the following infographic adapting the Introduction to Effective Altruism post. My goal was to make the post more accessible and easier to digest, broadening its potential audience, and to make its ideas more memorable through graphics and data visualizations.

You can also view it in high resolution here or get a PDF here.

More info on the thought and creative process below!

Why did I make this?

I'm a graphic designer with a background in scientific and technical illustration, specializing in making complex concepts more accessible. I am now trying to pivot my career towards communicating higher-impact ideas, and I wanted to create something that could show how I can apply my skills to EA-related topics. The Intro to EA post seemed like an excellent place to start, because it condenses a lot of general information about EA and also includes a few statistics that I could readily visualize.

Process and strategy

The idea

I set myself a brief to make the post more accessible and appealing to a broader audience, specifically people who are not used to reading long-form content on rather technical subjects. This should also make it easier to share with someone who is not yet into EA but could be interested. Finally, I wanted to stay faithful to the content and structure of the original post, since you can clearly see that a lot of thought went into crafting the narrative in order to provide an accurate introductory picture of EA.

My approach

My goal was to make the ideas presented in the post more memorable and easier to grasp by combining visuals with minimal text, since we know that combining text and (relevant) graphics improves comprehension, retention, and engagement. So I basically tried to visualize as much of the post as I could, and to reduce the text to what was strictly necessary to convey the key messages.

For the introductory section, I focused on showing the dual aspect of EA as both a research field and a community, immediately answering the question in the title, "What is effective altruism?". I also introduced the EA logo at a large size to give viewers a strong visual anchor to associate with EA.

The examples section was the one I spent the most time on. I decided, for each cause area, to write a short introduction on why the problem is important, accompanied by two data visualizations to "show, not tell" (for more details about the data visualizations, see the section below). I also included a timeline with examples of what has been done, using diamonds for events with a specific date and gradients for ongoing efforts - I avoided adding more icons to prevent visually cluttering this section.

For the values section, I chose to represent the four values with icons on a compass, each accompanied by a short explanation. The central element of the compass, often associated with moral values, should help viewers remember that the community is united by a set of values.

For the section on taking action, I visualized the different possibilities as branching paths a person can take. Once again, this depiction of paths should help reinforce that there are different types of action one can take, and EA is not, e.g., just about donations.

The final call-to-action section could actually be adapted depending on the context in which the infographic is shown. For this project, I went for a link to the original post and two broad links to learn more about EA.

Time spent

In total, it took me 17 hours to complete this infographic, from research to planning to final execution.

Design choices

Overall style

For colors and typography, I followed the Effective Altruism Style Guide quite closely to help build a sense of trust and brand consistency, especially since this is supposed to be an introduction to the movement and first impressions matter.

I also kept the visual style flat and minimal, again to communicate a sense of trust and the importance of the topic.

Data visualizations

For the data visualizations that were already present in the original post, I wanted to make them look even more impactful and compelling. While bar charts are certainly more effective at showing the scope of the data than just presenting numbers in tables, I find that visualizations that include pictograms can show scope differences even more effectively. For example, you can immediately see that there are about 40 deaths from COVID for each death from terrorism, and seeing 180 stick figures can make you more easily imagine the actual people that could be saved with $1M.

I also wanted to add one data visualization to the AI alignment section and two to the decision-making section for better visual consistency with the other sections. 

  • For AI alignment, I adapted the "Computation used to train AI models with bioanchors" visualization by Taylor Jones I saw in this report: I downloaded the updated dataset from Our World In Data and stripped down some details, while I chose to show all models from 2010 to the end of 2024. I included this visualization to show the speed of progress in AI capabilities.
  • The decision-making section was the trickiest, because decision-making is a more abstract concept and thus harder to visualize. First, I tried to estimate how many people the average US politician can influence. This should be a decent estimate,[1] but I'd be happy to hear about more accurate ones. Then I decided to present a split timeline with examples of good and bad decision-making throughout history. I'm a little concerned that the visual representation might be interpreted as all of those decisions coming from just one person, although it should be fairly obvious this can't be the case if one reads the text. I'm also aware that classifying events as either "good" or "bad" is a little too simplistic, but this was the best I could come up with. If you have better ideas, I'd be happy to hear them!
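As a minimal sketch of the data-preparation step described above for the AI alignment chart: filtering an OWID-style dataset of training runs down to models published between 2010 and the end of 2024. The column names and the tiny inline dataset here are illustrative assumptions, not the actual Our World In Data schema.

```python
# Sketch of filtering an OWID-style AI training-compute dataset to
# models from 2010 through the end of 2024, as done for the infographic.
# Column names and values below are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "system": ["Perceptron", "AlexNet", "GPT-2", "GPT-4"],
    "publication_date": ["1958-01-01", "2012-09-30", "2019-02-14", "2023-03-15"],
    "training_compute_flop": [7.0e5, 4.7e17, 1.5e21, 2.1e25],
})
df["publication_date"] = pd.to_datetime(df["publication_date"])

# Keep only the 2010–2024 window shown in the chart.
recent = df[(df["publication_date"] >= "2010-01-01")
            & (df["publication_date"] <= "2024-12-31")]
```

Because training compute spans many orders of magnitude, a chart like this is typically drawn with a logarithmic compute axis.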

In general, I aimed to keep the data visualizations as simple as possible, e.g. by removing unit scales from plot axes where they are not necessary to grasp the relationships between different areas of the graph, without compromising understandability.

Feedback welcome!

I would love to hear your thoughts on my work! 

Please let me know if you have any ideas for improvements or if you think anything is problematic.

It would also be helpful to know whether you think any specific parts of my approach are particularly effective (or ineffective) for sharing this kind of EA-related information.

Going forward

As part of my effort to build a more impactful career using my design skills, I'm looking to create more infographics, data visualizations, diagrams, illustrations, etc. on EA-related topics to show how graphic design can benefit the movement. If there is a specific post, report, or concept you would like to see me work on, I would be happy to hear about it. Bonus points if it's in AI safety, but I'm really open to anything at the moment.

  1. ^

    I looked up the number of US politicians of various ranks from this website. Then, my reasoning was the following: there is one president, who can influence the entire US population; there are 100 senators, so each senator can influence, on average, 1/100 of the entire US population; and so on, for each of the categories. My estimate is the weighted average of the influence of the different categories, with weights corresponding to the number of politicians in each category. Please note that I'm not from the US, so I might have missed some nuances that could invalidate the reasoning; if so, please let me know!
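The footnote's estimate can be sketched in a few lines of code. The officeholder counts and population figure below are illustrative assumptions, not the figures from the website the author used; note that the weighted average simplifies to (number of categories × population) / total politicians.

```python
# Sketch of the footnote's weighted-average estimate of how many
# people the average US politician can influence.
# All counts below are illustrative assumptions.
US_POPULATION = 335_000_000  # assumed, approximate

# category -> number of officeholders (assumed counts)
counts = {
    "president": 1,
    "senators": 100,
    "representatives": 435,
    "governors": 50,
    "state_legislators": 7_386,
}

# Each officeholder in a category is assumed to influence an equal
# share of the whole population: US_POPULATION / n.
influence = {k: US_POPULATION / n for k, n in counts.items()}

# Weighted average over all politicians, weighted by category size.
total_politicians = sum(counts.values())
avg_influence = sum(n * influence[k] for k, n in counts.items()) / total_politicians
# Algebraically: len(counts) * US_POPULATION / total_politicians,
# i.e. roughly a couple hundred thousand people per politician
# under these assumed counts.
```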

Comments (9)



I love seeing this kind of initiative, and it is great that the skills you have allow you to contribute in such a clear way.

Thank you Joseph, really appreciate it!

Great work!

Please note, if you copied substantial portions of the "Intro to effective altruism" article, you should include a link to the CC-BY 4.0 license in your PDF, as it is required by the license terms. This helps inform users that the content you used is free to use. Thanks for helping build the digital commons!

Lovely infographic :) you do have the Good Decision Making - Bad Decision Making labels on the wrong side, for the "Improve Decision-making" box. At least, I hope they're the wrong sides, lol.

Thank you! Thanks for pointing out that mistake as well, I've just fixed it :)

First, nice infographic!

Second, I think there's a slight mistake here, where good decision-making and poor decision-making are flipped here, unless I'm missing something:

Thanks a lot! 

And thank you for catching that, good and poor decision-making were indeed flipped. I've just updated the post and the Drive files with the correct version :)

I really like the variety of cause areas you chose. Simple, appealing descriptions that draw in someone who hasn't encountered EA before. 17 hours is really short for such quality information!

Thank you so much! I didn't do most of the research myself, though - I drew on the existing intro post for most of the content and structure as well.
