Note: I didn’t have the time to write this up in as detailed/compelling a fashion as I would have liked, but erred on the side of submitting it. 

I think that strong votes should be removed from the EA Forum, or at least lowered in strength. I believe the negatives of the current system outweigh its benefits, and that changing it would be good for community health.

As far as I can tell, strong votes are 4x stronger than regular votes for at least some users (the multiplier appears to vary, but it was difficult to pin down; I couldn't find clear information on how votes are weighted on the Forum, which was itself frustrating). Ultimately, this amplifies the power of people who express their opinions more strongly and diminishes the power of those who use the strong vote more sparingly. There are no obvious guidelines for when a strong vote is appropriate rather than a regular one; there may well be some in the Forum guidelines, but I think it's safe to say that most Forum users have never encountered a standard guide for when strong votes are appropriate. In the absence of such guidance, people rely on their own intuitions and judgements.

As with so many things, people's intuitions vary! I think there's a solid analogy here to a bunch of people chatting at a get-together. People are assessing whether they want to speak up, and if so, how strongly they want to frame their arguments. In my experience, some people are naturally much more forceful about how they frame their arguments, or are calibrated differently about how strongly they put things. Sometimes (not always!) this is shaped by their race/gender/class/other aspects of their identity. I personally (a woman) have come to realize that I am more tentative in sharing my opinions than many of my male friends, or that I express a comparably certain opinion with less forcefulness.

The community norms that exist shape the extent to which certain people's voices end up dominating the conversation. Again, at some get-togethers there may be an implicit norm that it's rude to interrupt, while at others there may be a sense that expressing your views forcefully and often indicates strength of conviction.

Given that we're operating from behind a screen, choices about when to strong vote are made based on diffuse, rarely discussed judgements, which I think makes it more likely that they're based on gut intuition than on well-considered, widely shared norms. And in turn, I think differences in gut intuition mean that in practice, some people are strong upvoting everything that they themselves post/comment, while others are never doing so because it feels impolite. Some people are likely strong upvoting everything by their friends/colleagues, while others abstain from doing so because it feels unfair. At the end of the day, that means some people end up having more voting power in practice than others, solely based on their intuitions.

One or two strong upvotes can make a big difference in where a comment or post ranks. And because of the momentum associated with upvotes, small initial differences (do you strong upvote your own post?) can compound into large ones.

It also creates weird effects with co-authored posts. When I coauthored a post, it was automatically strong upvoted by both of us, meaning that it started with, if I recall correctly, 8 or 9 points. That meant it ranked far higher than other posts published at the same time.
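As a rough illustration of why a head start of a few points matters, here is a toy HN-style time-decayed score. This is a sketch under my own assumptions, not necessarily the Forum's actual ranking algorithm; the function name `hot_score` and the gravity constant are purely illustrative.

```python
# Illustrative only: a generic HN-style time-decayed ranking score, NOT
# necessarily the EA Forum's actual algorithm. More points and less age
# rank a post higher.

def hot_score(points: float, age_hours: float, gravity: float = 1.8) -> float:
    """Toy ranking score: points discounted by the post's age."""
    return points / (age_hours + 2) ** gravity

# Two posts published at the same moment: one starts at +1 (a single weak
# self-upvote), the other at +9 (automatic strong upvotes from two co-authors).
for age in (1, 3, 6):
    print(f"{age}h: {hot_score(1, age):.3f} vs {hot_score(9, age):.3f}")
```

On any model like this, until other voters weigh in, the second post scores several times higher, and that early visibility gap can then compound as more readers see (and vote on) the higher-ranked post.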

I suspect it also heightens things on contentious topics: when people are more emotional, they're probably more likely to use strong votes.

Additionally, many people have recently been discussing the importance of maintaining space for a diversity of beliefs within EA discourse. Strong votes make it easier for dissenting voices to be effectively shut out when it takes only one or two strong votes to severely knock down a comment or post.

There are some benefits to the system. It lets people express their preferences in a more granular fashion, and it undoubtedly helps some “really great” posts get more traction than merely “decent” posts. But I believe that a system with evenly valued votes would accomplish that as well, with the strongest posts simply receiving upvotes from more people.

In my opinion, strong votes should be eliminated. But I'm not familiar with all the nuances of this issue, and there are alternatives that could also be workable: strong votes could be halved in strength, so that they're worth only double regular votes rather than quadruple, or clearer norms could be created and disseminated about when strong votes are appropriate to use.

Comments (15)



I was also not sure how the strong votes worked, but found a description from four years ago here. I'm not sure whether that system is still current.

Normal votes (one click) will be worth

  • 3 points – if you have 25,000 karma or more
  • 2 points – if you have 1,000 karma
  • 1 point  – if you have 0 karma

Strong Votes (click and hold) will be worth

  • 16 points (maximum) – if you have 500,000 karma
  • 15 points – 250,000
  • 14 points – 175,000
  • 13 points – 100,000
  • 12 points – 75,000
  • 11 points – 50,000
  • 10 points – 25,000
  • 9 points  – 10,000
  • 8 points  – 5,000
  • 7 points  – 2,500
  • 6 points  – 1,000
  • 5 points  – 500
  • 4 points  – 250
  • 3 points  – 100
  • 2 points  – 10
  • 1 point – 0

If you create a search on the Forum and then edit it to be empty, you can see a list of the highest-karma users. The first two pages:

Aaron Gertler 8y 16729 karma

Linch 7y 14385 karma

Peter Wildeford 8y 11209 karma

MichaelA 4y 10836 karma

Kirsten 5y 8430 karma

John G. Halstead 6y 8310 karma

Habryka 8y 7922 karma

Larks 8y 7740 karma

Pablo 8y 7735 karma

Julia_Wise 8y 7288 karma

RyanCarey 8y 6760 karma

NunoSempere 4y 6677 karma
 

That seems accurate to me: my normal upvote is +2 and my strong upvote is +8.

I think that's right, except that weak upvotes are never worth 3 points anymore (although this doesn't matter on the EA Forum, given that no one has 25,000 karma), based on this lesswrong github file linked from the LW FAQ.
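To make the scheme quoted above easier to scan, here is a minimal sketch in Python. It assumes the thresholds from the four-year-old LessWrong description, plus the weak-vote cap at 2 noted in the preceding comment; the function name `vote_weight` and the threshold tables are illustrative, not part of the Forum's codebase, and the live values may differ.

```python
# Hypothetical sketch of the vote-weight scheme described in this thread.
# Numbers come from a four-year-old description and may be out of date.

WEAK_VOTE_THRESHOLDS = [
    # (25_000, 3) reportedly no longer applies; weak votes are capped at 2.
    (1_000, 2),
    (0, 1),
]

STRONG_VOTE_THRESHOLDS = [
    (500_000, 16), (250_000, 15), (175_000, 14), (100_000, 13),
    (75_000, 12), (50_000, 11), (25_000, 10), (10_000, 9),
    (5_000, 8), (2_500, 7), (1_000, 6), (500, 5),
    (250, 4), (100, 3), (10, 2), (0, 1),
]

def vote_weight(karma: int, strong: bool) -> int:
    """Return the point value of a vote for a user with the given karma."""
    thresholds = STRONG_VOTE_THRESHOLDS if strong else WEAK_VOTE_THRESHOLDS
    for minimum_karma, points in thresholds:
        if karma >= minimum_karma:
            return points
    return 1

# Example: a user with ~8,000 karma casts a weak vote worth 2 points and a
# strong vote worth 8 -- the 4x ratio mentioned in the post.
print(vote_weight(8_000, strong=False))  # 2
print(vote_weight(8_000, strong=True))   # 8
```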

I thought about this topic a bit at some point, and my thoughts were:

  • The strength of the strong upvote depends on the karma of the user (see other comment).
  • Therefore, the existence of a strong upvote implies that users who have gained more karma in the past, e.g. because they write better or more content, have more influence on new posts.
  • Thus, the question of the strong upvote seems roughly equivalent to the question "do we want more active/experienced members of the community to have more say?"
  • Personally, I'd say that I currently prefer this system over its alternatives because I think more experienced/active EAs have more nuanced judgment about EA questions. Specifically, I think there are some posts that fly under the radar because they don't look fancy to newcomers, and I want more experienced EAs to be able to strongly upvote those to get more traction.
  • I think strong downvotes are sometimes helpful, but I'm not sure how often they are even used. I don't have a strong opinion about their existence.
  • I can also see that strong votes might lead to a discourse where experienced EAs just give other experienced EAs lots of karma due to personal connections, but most people I know cast their strong upvotes based on how important they think the content is, not how much they like the author.
  • In conclusion, I think it's good that we give more say to experienced/active members who have produced high-quality content in the past. One can discuss the size of the difference, e.g. maybe the current scale is too liberal or too conservative.

I use my strong downvote to hide spam a couple times a month. I pretty rarely use it for other things, although I'll occasionally strong downvote a comment or post that's exceptionally offensive. (I usually report those posts as well.)

Yeah, same. I don't even strong downvote egregiously bad reasoning, though I do try my best to downvote it.

A smaller change that I think would be beneficial is to eliminate strong upvotes on your own comments.  I really don't see how those have a use at all.

And in turn, I think gut intuition differences mean that in practice, some people are strong upvoting everything that they themselves post/comment, while others are never doing so because it feels impolite.

I'd be surprised if many people are strong-upvoting all their comments. The algorithmic default is to strong upvote your own posts but weak upvote your own comments, and I very rarely see a post with 1 vote above 2 karma. If I had to guess, my median estimate would be that zero frequent commenters strong upvote more than 5% of their comments.

I do think it would not be unreasonable to ban strong-upvoting your own comments.

I do think it would not be unreasonable to ban strong-upvoting your own comments.

Agreed.

I went to strong-upvote this post and then I was like '....hang on, wait' :p 

But yeah, this is a really good point. The strong-upvotes system requires a lot of trust that people aren't just going to liberally strong-upvote anything they agree with, and since votes are anonymous we can't tell if people are doing this. 

fwiw there is some info on how people are supposed to use strong upvotes in the Forum norms guide, but I agree that many people won't have read this, and it's pretty subjective and fuzzy. 

I think any issues with abuse of strong upvotes are tempered by the fact that someone has to spend a lot of time writing posts and getting upvotes from other Forum members before they can have much influence with their votes, strong or otherwise. So in practice this is probably not a problem, because the trust is earned over the months and years of writing posts and comments, and receiving votes, that it takes to accumulate a lot of karma.
