
As many of you know, on LessWrong there is now:

two axes on which you can vote on comments: the standard karma axis remains on the left, and the new axis on the right lets you show how much you agree or disagree with the content of a comment.

I was thinking we should have this on EA Forum for the same reasons ... to avoid (i) agreement with the claim/position being confounded with (ii) liking the contribution to the discussion/community.

Reading the comments over there, it seems there are mixed reviews. Some key critiques:

  1. Visual confusion and mental overload (maybe improvable with better formats)
  2. It's often hard to discern what 'agree with the post' means.

My quick takes:

A. We might consider this for the EA Forum after LW works out the bugs (and the team is probably already considering it)

B. Perhaps the 'agreement' axis should be something that the post author can add voluntarily, specifying the claim people can indicate agreement/disagreement with? (This might also work well with the Metaculus prediction link that is in the works, afaik.)

What are your thoughts...? [1]

  1. On two-factor voting for EA Forum overall
  2. On "post author chooses what the agreement target ('central claim') is"
  3. On whether the considerations here are different for EA Forum vs. LessWrong

  1. Meta: wasn't sure whether to post this as a link post or question post ↩︎

Comments (18)

I would find this helpful - I'm tired of being downvoted when I provide useful information to support an argument that people overall disagree with!

Seems useful, especially for critical posts. I may want to upvote them to show my appreciation and to have more people read them, even though I still disagree with, e.g., the conclusion they draw.

Big support!

  1. By making agreement a separate axis, people will feel safer upvoting something for quality/novelty/appreciation with less of a risk that it's confounded with agreement. Unpopular opinions that people still found enlightening should get marginally more karma. And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with.[1]
  2. We now have an opinion poll included for every comment/post. This just seems like a vast store of useful-but-imperfect information. Karma alone doesn't provide it, since it has more confounders.

But observing how it empirically plays out is just going to matter way more than any theoretical arguments I can come up with.

  1. Toy model here, but: The health of an epistemic community depends on, among other things, an optimal ratio between the transmission coefficients of technical (gears-level) evidence vs testimonial (deference) evidence. If the ratio is high, people are more likely to be exposed to arguments they haven't heard yet, increasing their understanding and ability to contribute to the conversation. If the ratio is low, people are mainly interested in deferring to what other people think, and understanding is of secondary importance.

I think it would be better if the agreement was expressed as a percentage rather than a score, to make it feel more distinct and easier to remember which axis is which.

Interesting point. 

I guess it could be useful to be able to see how many have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.

Yeah, to proxy this, maybe I'd imagine something like adding five virtual upvotes and five virtual downvotes to each comment to start it near 50%, so that an extreme value is a strong signal.

Maybe that's a bad idea, though; it makes it harder to notice when something's controversial (you'd need to hover).
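
A rough sketch of that pseudo-count idea (the five virtual votes per side and the function name are just illustrative assumptions, not anything the Forum implements):

```python
def smoothed_agreement(agrees: int, disagrees: int, prior_votes: int = 5) -> float:
    """Agreement percentage after adding `prior_votes` virtual votes to each side.

    With no real votes the result sits at 50%; extreme percentages therefore
    require many real votes, which is the 'strong signal' property above.
    """
    total = agrees + disagrees + 2 * prior_votes
    return 100 * (agrees + prior_votes) / total

# Example: 3 agrees vs 1 disagree reads as ~57%, not 75%;
# 30 vs 10 reads as 70%, closer to the raw ratio.
print(round(smoothed_agreement(3, 1)))    # 57
print(round(smoothed_agreement(30, 10)))  # 70
```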

I seem to recall that some places, when sorting things by average rating, use something like the lower bound of a 90% confidence interval on the mean. This doesn't solve which number to display, though, as it is not a very user-intuitive number to read.
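
For illustration, here's a minimal sketch of that kind of bound using the Wilson score interval; the formula and the one-sided 90% z-value are my guesses at what such sites use, not something specific from this thread:

```python
import math

def wilson_lower_bound(positive: int, total: int, z: float = 1.28) -> float:
    """Lower bound of the Wilson score interval for the fraction of positive votes.

    z = 1.28 corresponds roughly to a one-sided 90% confidence level.
    Returns 0.0 when there are no votes yet.
    """
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    spread = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (centre - spread) / denom

# 75% agreement either way, but the bound separates 3/4 from 30/40:
print(round(wilson_lower_bound(3, 4), 2))    # ~0.43
print(round(wilson_lower_bound(30, 40), 2))  # ~0.65
```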

I have been extremely excited about this feature for a long time now, as part of a vision for a "Better Social Network".

Even more than the object-level idea, I'm excited about the meta-level approach of trying things like this sometimes and seeing how they go (while keeping the experiments cheap and not accidentally breaking anything important). I am guessing most of the results won't be predicted in advance anyway. Still, this specific feature seems very promising to me.

I'll add: this kind of feature is really complicated and not something we'll solve in a post; this is what product people are for.

(But, cough cough, CEA are hiring a product manager)

I strongly disagree with the first part (by which I mean I'm not excited), and strongly agree with the second (cheap exploration is good and consequences are hard to foresee).

I wanted to write that I couldn't decide whether to upvote or not, because it fits the narrative nicely, but eventually I did.

lol

you know what would really help? emoji replies!

Imagine all the emojis you'd use to reply to this comment of mine right now!!

Although, if it's not too late, maybe 'two-factor' could use a better name? I suspect many people get confused because they associate it with two-factor authentication.

Thanks for the post! With 80 karma, this is surprisingly (to me) popular! I've been watching LessWrong experiment with multiple very different forms of multi-factor voting, and they now seem to have settled on this one. As you note, there have been bugs, but they have recently fixed some obvious UI issues. (And we really appreciate all their work!) This now seems like an appropriate time for the Forum to try it. We plan on testing it out with some comment-heavy posts, and we'll see how it goes from there.

Is 2-factor voting popular, or did they love my epistemic rigor and rhetorical clarity? :)

Seriously, though, this is exciting and I'm eager to see how it goes. It seems to me to be very much on-brand for the EA forum.


+1 to "post author chooses what the agreement target ('central claim') is"

I don't support having two rating systems. For one, it seems overly complicated and a hindrance to communication (particularly for newcomers).

Second, I don't think agreement and "liking the contribution to the discussion" are that easy to distinguish to begin with, particularly for a person judging their own views. We're biased, political creatures, and trying to contain that will only result in a superficial improvement that masks the bias and politicization that will still exist in all ratings and content.

I agree it's not a panacea, but I could imagine it helping mitigate bias/politicization in a few ways:

  • It prompts people to think about 'liking' and 'agreeing' as two separate questions at all. I don't expect this to totally de-bias either 'liking' or 'agreeing', but I do expect some progress if people are prompted like this.
  • Goodwill and trust are generated when people are upvoted in spite of having an unpopular-on-the-forum view. This can create virtuous cycles, where those people reciprocate and, in general, there are fewer comment sections that turn into 'one side mass-downvotes the other side, the other side retaliates, etc.'.

Example: Improving EA Forum discourse by 8% would obviously be worth it, even if this is via a "superficial improvement" that doesn't fix the whole problem.

I can't think of many examples where I agreed with a position but didn't want to see it, or wanted to see a position that I disagreed with. I think I've only experienced the latter case when I want to see discussion about the topic. In those cases I feel you should balance the good and the bad when upvoting and choose between the five levels (if you count the strong votes and not voting) that the current system provides. Also, if you believe a topic you want to talk about (and believe others do too) is going to be divisive, you can just write "Let's discuss X" and then reply to it with your opinion.

I read examples in the comments that I disagreed with, and I feel more comfortable counterarguing them all in this comment:

  • Useful information for an argument that people disagree with: Then how is it useful?
  • Critical posts which you disagree with but appreciate and want other people to read: Then why do you appreciate them? It seems you like them in part but not fully; I would just not vote on them. And why do you want people to read them? Seems like a waste of time.
  • Voting for something for quality, novelty or appreciation: I believe the voting system works better as a system where you vote for what you want other people to read or what you enjoy seeing. And I think we should appreciate each other in other ways or places (like in the comments).
  • Unpopular opinions that people still found enlightening should get marginally more karma: That sounds like opinions that change the minds of some people but get little karma or even negative points. I don't know how the people who disagree with them would downvote them less than other opinions they disagree with. In other words, I don't know how exactly the "enlightenment" is seen by the ones blind to it, lol, or what "enlightening" would mean.
  • And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with: How is that useful? I'm not that familiar with the rationalist community, so maybe this is obvious, or maybe I'm misunderstanding. Are you saying that you agree with some arguments (so you update your beliefs) but not all of them, and you don't change the conclusion? That would probably mean no vote at all from me, or, depending on the specifics, a weak upvote or downvote.
  • It prompts people to distinguish between liking and agreeing: Why would you like a contribution to a discussion when you don't agree with the contribution?
  • There would be fewer comment sections that turn into 'one side mass-downvotes the other side, the other side retaliates, etc.': Why would there be a difference with this new axis?

Agree with:

  • Goodwill and trust is generated when people are upvoted in spite of having an unpopular view.

But I believe that the downsides are worse. If you encourage people to upvote unpopular views, they could end up with even more points than popular views, no matter how "bad" they are. There could also be more bad arguments at the top than good ones. That sounds pretty confusing and annoying, honestly. I think better options are to reply to those comments and upvote good replies, and to not show scores below 0 or hide those comments.

Also:

  • It sounds to me like voting in a two-vote system would mean voting on something, then thinking about whether I agree or disagree with the comment, then voting again for roughly the same thing >95% of the time (agree after like, disagree after dislike), and then either seeing the same number repeated or seeing a difference between the two and wondering what it means and whether it only exists because some people are voting on just one axis.
  • Really bad for new people.
  •  There are cases where there isn't anything to agree or disagree with, like information and jokes.
     