Note: I didn’t have the time to write this up in as detailed/compelling a fashion as I would have liked, but erred on the side of submitting it. 

I think that strong votes should be removed from the EA Forum, or at least lowered in strength. I believe that the negatives of the current system outweigh its benefits, and that it would be beneficial for community health to change the current system.

As far as I can tell, strong votes are 4x stronger than regular votes for at least some users (the multiplier appears to vary, but this was difficult to pin down; I couldn't find clear information on how votes are weighted on the Forum, which was itself frustrating). Ultimately, this amplifies the power of people who express their opinions more strongly and diminishes the power of those who use the strong vote more sparingly. There are no obvious guidelines for when it is appropriate to use strong vs. regular votes; there may well be some in the Forum guidelines, but I think it's safe to say that most Forum users have not encountered a standard guide for when strong votes are appropriate. In the absence of such guidance, people use their own intuitions/judgements.

As with so many things, people’s intuitions vary! I think there’s a solid analogy here to a bunch of people chatting at a get-together. People are assessing whether they want to speak up, and how strongly they want to frame their arguments if so. In my experience, some people are naturally much more forceful about how they frame their arguments, or are calibrated differently about how strongly they put things. Sometimes (not always!) this is shaped by their race/gender/class/other aspects of their identity. I personally (a woman) have come to realize that I am more tentative in sharing my opinions than many of my male friends are, or that I express a comparably certain opinion with a lower level of forcefulness.

The community norms in place shape the extent to which certain people’s voices end up dominating the conversation. Again, at some get-togethers there may be an implicit norm that it’s rude to interrupt, while at others there may be a sense that expressing your views forcefully and often indicates strength of conviction.

Given that we’re operating from behind a screen, choices about when to strong vote are made based on diffuse, rarely discussed judgements, which I think makes it more likely that they’re based on gut intuitions than on well-considered, widely-shared norms. And in turn, I think gut intuition differences mean that in practice, some people are strong upvoting everything that they themselves post/comment, while others are never doing so because it feels impolite. Some people are likely strong upvoting everything by their friends/colleagues, while others may abstain from doing so because it feels unfair. At the end of the day, that means that some people end up having more voting power in practice than others, solely based on their intuitions.

One or two strong upvotes can make a big difference in where a comment or post ranks. And because of the momentum associated with upvotes, small initial differences (do you strong upvote your own post?) can compound into large ones.

It also creates weird effects with co-authored posts. When I co-authored a post, the Forum automatically applied a strong upvote from each of us, meaning that it started with, if I recall correctly, 8 or 9 karma. That meant it ranked far higher than other posts published around the same time.
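To make the momentum point concrete, here is a minimal sketch. It assumes a Hacker-News-style time-decay ranking formula, which is only a stand-in for whatever the Forum actually uses; the point is just that a post starting at 9 karma sits well above an otherwise identical post starting at 1 for as long as the gap persists.

```typescript
// Rough illustration only: this is NOT the Forum's real ranking code.
// It assumes a Hacker-News-style time-decay score to show how a small
// difference in starting karma creates a large early ranking gap.
function frontPageScore(karma: number, ageInHours: number): number {
  return karma / Math.pow(ageInHours + 2, 1.5);
}

// Post A starts at 9 karma (automatic strong upvotes from two co-authors);
// Post B starts at 1 karma (a single weak self-upvote).
for (const age of [0, 1, 3, 6]) {
  const a = frontPageScore(9, age).toFixed(2);
  const b = frontPageScore(1, age).toFixed(2);
  console.log(`after ${age}h: A=${a}, B=${b}`);
}
// A's score is always 9x B's, so A sits higher on the front page, gets more
// views, and plausibly more votes — the compounding described above.
```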

I suspect it also heightens things on contentious topics. When people are more emotional, they’re more likely to use strong votes.

Additionally, many people have recently been discussing the importance of maintaining space for a diversity of beliefs within EA discourse. Strong votes make it easier for dissenting voices to be effectively shut out, if it only takes one or two strong votes to severely knock down a comment or post.

There are some benefits to the system. It lets people express their preferences in a more granular fashion, and undoubtedly helps some “really great” posts get more traction vs. “decent” posts. But I believe that a system with evenly valued votes would do that as well, with the strongest posts receiving upvotes from more people.

In my opinion, strong votes should be eliminated. But I’m not familiar with the nuances of this issue, and it seems like there are other alternatives that could also be workable. Strong votes could also be halved in strength, such that they’re worth only double regular votes rather than quadruple. Alternatively, clearer norms could be created/disseminated about when strong votes are appropriate to use. 

Comments (15)


I was also not sure how the strong votes worked, but found a description from four years ago here. I'm not sure if the system is still up to date.

Normal votes (one click) will be worth

  • 3 points – if you have 25,000 karma or more
  • 2 points – if you have 1,000 karma
  • 1 point  – if you have 0 karma

Strong Votes (click and hold) will be worth

  • 16 points (maximum) – if you have 500,000 karma
  • 15 points – 250,000
  • 14 points – 175,000
  • 13 points – 100,000
  • 12 points – 75,000
  • 11 points – 50,000
  • 10 points – 25,000
  • 9 points  – 10,000
  • 8 points  – 5,000
  • 7 points  – 2,500
  • 6 points  – 1,000
  • 5 points  – 500
  • 4 points  – 250
  • 3 points  – 100
  • 2 points  – 10
  • 1 point – 0
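For concreteness, a minimal sketch of how that lookup might be implemented (in TypeScript, using only the thresholds listed above; the live Forum code may differ):

```typescript
// Vote-strength lookup reconstructed from the four-year-old description
// above. Each entry is [minimum karma, vote strength]; a user's vote is
// worth the first strength whose threshold their karma meets.
const NORMAL_VOTE: [number, number][] = [
  [25000, 3], [1000, 2], [0, 1],
];

const STRONG_VOTE: [number, number][] = [
  [500000, 16], [250000, 15], [175000, 14], [100000, 13],
  [75000, 12], [50000, 11], [25000, 10], [10000, 9],
  [5000, 8], [2500, 7], [1000, 6], [500, 5],
  [250, 4], [100, 3], [10, 2], [0, 1],
];

function votePower(karma: number, table: [number, number][]): number {
  for (const [minKarma, power] of table) {
    if (karma >= minKarma) return power;
  }
  return 1;
}

// Example: a user with ~14,000 karma would cast +2 normal votes and
// +9 strong votes under this table.
console.log(votePower(14000, NORMAL_VOTE)); // 2
console.log(votePower(14000, STRONG_VOTE)); // 9
```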

If you create a search on the Forum and then edit it to be empty, you can see a list of the highest-karma users. The first two pages:

  • Aaron Gertler (8y) – 16,729 karma
  • Linch (7y) – 14,385 karma
  • Peter Wildeford (8y) – 11,209 karma
  • MichaelA (4y) – 10,836 karma
  • Kirsten (5y) – 8,430 karma
  • John G. Halstead (6y) – 8,310 karma
  • Habryka (8y) – 7,922 karma
  • Larks (8y) – 7,740 karma
  • Pablo (8y) – 7,735 karma
  • Julia_Wise (8y) – 7,288 karma
  • RyanCarey (8y) – 6,760 karma
  • NunoSempere (4y) – 6,677 karma
 

That seems accurate to me: my normal upvote is +2 and my strong upvote is +8.

I think that's right, except that weak upvotes are never worth 3 points anymore (though this doesn't matter on the EA Forum, given that no one has 25,000 karma), based on this LessWrong GitHub file linked from the LW FAQ.

I thought about this topic a bit at some point, and my thoughts were:

  • The strength of the strong upvote depends on the karma of the user (see other comment)
  • Therefore, the existence of a strong upvote implies that users who have gained more Karma in the past, e.g. because they write better or more content, have more influence on new posts.
  • Thus, the question of the strong upvote seems roughly equivalent to the question "do we want more active/experienced members of the community to have more say?"
  • Personally, I'd say that I currently prefer this system over its alternatives because I think more experienced/active EAs have more nuanced judgment about EA questions. Specifically, I think that there are some posts that fly under the radar because they don't look fancy to newcomers and I want more experienced EAs to be able to strongly upvote those to get more traction.
  • I think strong downvotes are sometimes helpful but I'm not sure how often they are even used. I don't have a strong opinion about their existence. 
  • I can also see that strong votes might lead to a discourse where experienced EAs just give other experienced EAs lots of Karma due to personal connections, but most people I know use their strong upvotes based on how important they think the content is, not on how much they like the author. 
  • In conclusion, I think it's good that we give more experienced/active members who have produced high-quality content in the past more say. I think one can discuss the size of the difference, e.g. maybe the current scale is too liberal or too conservative. 

I use my strong downvote to hide spam a couple times a month. I pretty rarely use it for other things, although I'll occasionally strong downvote a comment or post that's exceptionally offensive. (I usually report those posts as well.)

Yeah same. I don't even strong downvote egregiously bad reasoning, though I do try my best to downvote it. 

A smaller change that I think would be beneficial is to eliminate strong upvotes on your own comments. I really don't see what use those have at all.

And in turn, I think gut intuition differences mean that in practice, some people are strong upvoting everything that they themselves post/comment, while others are never doing so because it feels impolite.

I'd be surprised if many people are strong-upvoting all their comments. The algorithmic default is to strong upvote your own posts but weak upvote your own comments, and I very rarely see a post with 1 vote above 2 karma. If I had to guess, my median estimate would be that zero frequent commenters strong upvote more than 5% of their comments.
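As an illustrative sketch of that default (not the actual ForumMagnum code), assuming the author's strong-vote and weak-vote strengths have already been looked up from their karma:

```typescript
// Illustrative only: publishing a post casts an automatic strong self-vote,
// publishing a comment casts only a weak self-vote. The strengths passed in
// would come from a karma-based lookup like the one sketched earlier.
type ContentType = "post" | "comment";

interface SelfVote {
  kind: "strong" | "weak";
  power: number;
}

function defaultSelfVote(
  contentType: ContentType,
  strongPower: number,
  weakPower: number
): SelfVote {
  return contentType === "post"
    ? { kind: "strong", power: strongPower }
    : { kind: "weak", power: weakPower };
}

console.log(defaultSelfVote("post", 8, 2));    // { kind: "strong", power: 8 }
console.log(defaultSelfVote("comment", 8, 2)); // { kind: "weak", power: 2 }
```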

I do think it would not be unreasonable to ban strong-upvoting your own comments.

I do think it would not be unreasonable to ban strong-upvoting your own comments.

Agreed.

I went to strong-upvote this post and then I was like '....hang on, wait' :p 

But yeah, this is a really good point. The strong-upvotes system requires a lot of trust that people aren't just going to liberally strong-upvote anything they agree with, and since votes are anonymous we can't tell if people are doing this. 

fwiw there is some info on how people are supposed to use strong upvotes in the Forum norms guide, but I agree that many people won't have read this, and it's pretty subjective and fuzzy. 

I think any issues with abuse of strong upvotes are tempered by the fact that someone has to spend a lot of time writing posts and getting upvotes from other Forum members before they can have much influence with their votes, strong or otherwise. So in practice this is probably not a problem: the trust is earned through the months and years of writing the posts and comments that earn one a lot of karma.
