One way I think EA fails to maximise impact is by favouring actions with legible, clear, and attributable impact over actions whose impact is extremely difficult to estimate.

Writing Wikipedia articles on and around important EA concepts (except perhaps on infohazardous bioterrorism incidents) has low downside risk and extremely high upside risk: it makes these ideas much easier to understand for policymakers and other people in positions of power who may come across them and google them. However, the feedback loops are virtually non-existent and the impact is highly illegible.

For example, there is currently no dedicated Wikipedia page for either “Existential Risk” or “Global Catastrophic Biological Risk”.

Writing Wikipedia pages could be a particularly good use of time for people new to EA and people in university student groups who want to gain a better understanding of EA concepts or of EA-relevant policy areas.

Some other ideas for creating new Wikipedia articles or adding more detail to existing ones:

International Biosecurity and Biosafety Initiative for Science

Alternative Proteins

Governance of Alternative Proteins

Global Partnership Biological Security Working Group

Regulation of gain-of-function biological research by country

Public investment in alternative proteins by country

Space governance

Regulation of alternative proteins

UN Biorisk Working Group

Political Representation of Future Generations

Political Representation of Future Generations by Country

Political Representation of Animals

Joint Assessment Mechanism

Public investment in AI Safety research by country

International Experts Group of Biosafety and Biosecurity Regulators

Tobacco taxation by country

Global Partnership Signature Initiative to Mitigate Biological Threats in Africa

Regulations on lead in paint by country

Alcohol taxation by country

Regulation of dual-use biological research by country

Joint External Evaluations

Biological Weapons Convention funding by country

Comments

I broadly agree with this and have also previously made a case for Wikipedia editing on the Forum: https://forum.effectivealtruism.org/posts/FebKgHaAymjiETvXd/wikipedia-editing-is-important-tractable-and-neglected

As a caveat, there are some nuances to Wikipedia editing to make sure you're following community standards, which I've tried to lay out in my post. In particular, before investing a lot of time writing a new article, you should check if someone else tried that before and/or if the same content is already covered elsewhere. For example, there have been previous unsuccessful efforts to create an 'Existential risk' Wikipedia article. Those attempts failed in part because relevant content is already covered on the 'Global catastrophic risks' article.
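To make that check quick, here is a minimal sketch (my addition, not from the original comment) that queries the MediaWiki Action API to see whether candidate titles already exist on English Wikipedia and whether they redirect elsewhere; the example titles and the `requests` dependency are assumptions:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def check_titles(titles):
    """Report whether each title exists on English Wikipedia and flag redirects."""
    params = {
        "action": "query",
        "titles": "|".join(titles),
        "redirects": 1,       # resolve redirects so we see where a title actually points
        "format": "json",
        "formatversion": 2,   # cleaner JSON: pages come back as a list
    }
    data = requests.get(API, params=params, timeout=10).json()
    for r in data["query"].get("redirects", []):
        print(f"REDIRECT: {r['from']} -> {r['to']}")
    for page in data["query"]["pages"]:
        status = "MISSING" if page.get("missing") else "EXISTS"
        print(f"{status}: {page['title']}")

# Per the comment above, 'Existential risk' points at existing coverage of
# global catastrophic risk -- exactly the overlap worth catching before writing.
check_titles(["Existential risk", "Space governance", "UN Biorisk Working Group"])
```

Running something like this over a list of proposed titles takes seconds and avoids duplicating content that already lives under another name.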

Could this also be a good opportunity for pages written in languages other than English?

Yes, very good point!

This article is several years old, but as of 2019 their machine translation tool was quite poor, and in my experience articles can have vastly different levels of depth in different languages, so simply getting French/Spanish/etc. articles up to the level of their English-language analogues might be an easy win.
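As a rough illustration of gauging that depth gap (my own sketch, not from the comment above), one can compare an article's size in bytes across language editions via the MediaWiki API's `prop=info`; the titles below are illustrative guesses, and byte count is only a crude proxy for depth:

```python
import requests

def page_length(lang, title):
    """Return the article's size in bytes on a given language edition, or None if missing."""
    api = f"https://{lang}.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "prop": "info",      # page metadata, including 'length' in bytes
        "titles": title,
        "format": "json",
        "formatversion": 2,
    }
    page = requests.get(api, params=params, timeout=10).json()["query"]["pages"][0]
    return None if page.get("missing") else page["length"]

# Titles differ between editions; the English article's interlanguage links
# (prop=langlinks) give the local names. These examples are illustrative.
for lang, title in [("en", "Effective altruism"),
                    ("fr", "Altruisme efficace"),
                    ("es", "Altruismo eficaz")]:
    print(lang, title, page_length(lang, title))
```

A large gap between the English byte count and a major language's count is a cheap signal that translation or expansion there could be a quick win.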

[anonymous]

Thank you for your comment.

I believe that translators of EA articles should have a quality mindset, not only a mindset of translating x articles or y words in z time. Translators should work from the articles with the most depth, and those articles are mostly in English. Current article pageviews may help set priorities, but we also need depth of content on the subject, not only a handful of articles predicted to get more pageviews in the target language.

Translating articles about EA is low-hanging fruit, especially in Wikipedia language versions with more than several million speakers. We should not underestimate that the articles we translate today, whether one or a hundred, will most likely remain on Wikipedia for decades if not centuries, even if editors completely rework them along the way.

There is a visibility gap for effective altruism on the Internet in general and on Wikipedia specifically. Neither this gap nor the impact of Wikipedia as a source of knowledge for the general public, policymakers, and decision-makers should be ignored.

What I vehemently recommend is that there be no promotion of or investment in paid editing. If individual EAs insist on this path, EA could end up labelled as a paid-editing operation on Wikipedia. Paid editing has a very bad reputation within the Wikipedian community and outside of it; it would stain EA and repel people. Volunteer translators are perhaps harder to come by, but that should give EA communities an even stronger will to reach out to their members and argue for voluntary work on this. EA communities should promote edit-a-thons, but with clear guidelines for unpaid, neutral-point-of-view (NPOV) editing.

Edited: corrected several typos on my part.

Note that it's much easier to improve existing pages than to add new ones.

More EA-relevant Wikipedia articles that don't yet exist:

  • Place premium
  • Population Ethics pages
    • Sadistic conclusion
    • Critical-threshold approaches
  • Cantril Ladder
  • Axelrod's Meta-Norm
  • Open-source game theory
  • Humane Technology
  • Chris Olah
  • Machine Learning Interpretability
    • Circuit
    • Induction head
  • Lottery Ticket Hypothesis
  • Grokking
  • Deep Double Descent
  • Nanosystems: Molecular Machinery, Manufacturing, and Computation
  • Global Priorities Institute
  • Scaling Laws for Large Language Models

Some of these articles are about AI capabilities, so perhaps not as great to write about.

Additionally, the following EA-relevant articles could be greatly improved:

That hasn’t been entirely my experience. In fact, when I made the page for the Foreign Dredge Act of 1906, I was pleasantly surprised at how quickly others jumped in to improve on my basic efforts - it was clearly a case of just needing the page to exist at all before it started getting the attention it deserved.

By contrast, I’ve found that trying to do things like good article nominations, where you’re trying to satisfy the demands of self-selected nonexpert referees, can be frustrating. The same is true for trying to improve pages already getting a lot of attention. Even minor improvements to the Monkeypox page during the epidemic were the subject of heated debate and accusations on the talk page. When a new page is created, it doesn’t have egos invested in it yet, so you don’t really have to argue with anybody very much.

I’d be interested in learning more about the experiences that lead you to say it’s harder to create pages than to improve them. I’m not a total novice, but you seem to have a lot more experience than me.

Epistemic status: ~150 Wikipedia edits, of which 0 are genuine article creations (apart from redirects). I've mostly done slight improvements on non-controversial articles. Dunno about being a novice, but looking at your contributions on WP you've done more than me :-)

I was thinking mostly of the fact that you need to be autoconfirmed to create articles, i.e. your account must be more than 4 days old and have made ≥10 edits. I also have the intuition that creating an article is more likely to be wasted effort than improving an existing one, because of widespread deletionism. One example of this deletionism is the Harberger tax article, which was nearly removed, much to my dismay.

Perhaps this is more true for the kind of article I’m interested in: relatively obscure concepts from science (with less heated debate) rather than current events (where edits might be more difficult due to controversy and edit wars).

I have also encountered deletionism. When I was improving the aptamer article for a good article nomination, the reviewer recommended splitting a section on peptide aptamers into a separate article. After some thinking, I did so. Then some random editor who I’d never interacted with before deleted the whole peptide aptamer article and accused me of plagiarism/copying it from someplace else on the internet, and never responded to my messages trying to figure out what he was doing or why.

It’s odd to me because the Foreign Dredge Act is a political issue, while peptide aptamers are an extremely niche topic. And the peptide aptamer article contained nothing but info that had been on Wikipedia for years, while I wrote the Dredge Act article from scratch. Hard to see rhyme or reason, and very frustrating that there’s no apparent process for dealing with a vandal who thinks of themselves as an “editor.”

Here are a couple of social science papers giving evidence that (well-written) Wikipedia articles have an impact on real-world outcomes:

I think the main caveat (also mentioned in other comments) is that these papers are predicated on high-quality edits or page creations that align with Wikipedia's standards.

[anonymous]

I honestly never thought I would read a post about Wikipedia on this forum. To my delight, I found out today that there is talk of Wikipedia here! :)

Great work!!

[anonymous]

Thank you! :) Have a good day!
