Starting today we're activating two-factor voting on all new comment threads. 

Now there are two axes on which you can vote on comments: the standard karma axis remains on the left, and the new axis on the right lets you show how much you agree or disagree with the content of a comment.

How the system works

For the pre-existing voting system, the most common interpretation of up/down-voting is "Do I want to see more or less of this content on the site?" As an item gets more or fewer votes, its visibility changes, and the author's karma-weighting is eventually adjusted as well.

Agree/disagree is just added on to this system. Here's how it all hooks up.

  • Agree/disagree voting does not translate into a user's or post's karma — its sole function is to communicate agreement/disagreement. It has no other direct effects on the site or content visibility (i.e. no effect on sorting algorithms).
  • For both regular voting and the new agree/disagree voting, you have the ability to normal-strength vote and strong-vote. Click once for normal-strength vote. For strong-vote, click-and-hold on desktop or double-tap on mobile. The weight of your strong-vote is approximately proportional to your karma on a log-scale (exact numbers here).
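As a rough illustration of that weighting, here is a minimal sketch in Python; the thresholds below are hypothetical placeholders, not the site's actual numbers (those are in the link above).

```python
import math

def strong_vote_weight(karma: int) -> int:
    """Hypothetical strong-vote weight that grows roughly with log(karma).

    Illustrative placeholders only, not LessWrong's actual weighting table.
    """
    if karma < 0:
        return 1
    # log10 grows slowly, so doubling your karma adds only a little weight.
    return 1 + math.floor(math.log10(karma + 1))

print(strong_vote_weight(5))       # -> 1 (new user)
print(strong_vote_weight(1000))    # -> 4
print(strong_vote_weight(100000))  # -> 6 (long-time contributor)
```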

Ben's personal reasons for being excited about this split

Here are a couple of reasons that are alive for me.

  • I personally feel much more comfortable upvoting good comments that I disagree with or whose truth value I am highly uncertain about, because I don’t feel that my vote will be mistaken as setting the social reality of what is true.
  • I also feel very comfortable strong-agreeing with things while not up/downvoting on them, so as to indicate which side of an argument seems true to me without my voting being read as “this person gets to keep accruing more and more social status for just repeating a common position at length”.
  • Similarly to the first bullet, I think that many writers have interesting and valuable ideas whose truth-value I am quite unsure about or even disagree with. This split allows voters to repeatedly signal that a given writer's comments are of high value, without building a false consensus that LessWrong has high confidence that the ideas are true. (For example, many people have incompatible but valuable ideas about how AGI development will go, and I want authors to get lots of karma and visibility for excellent contributions without this ambiguity.)
  • There are many comments I think are bad but am averse to downvoting, because it is ambiguous whether the person is being downvoted because everyone thinks their take is unfashionable, or because they are wasting the commons with their behavior (e.g. belittling, starting bravery debates, not doing basic reading comprehension, etc.). With this split I feel more comfortable downvoting bad comments without making everyone else who states that position worry that they'll also be downvoted.
  • I have seen some comments that previously would have been "downvoted to hell" now sitting at positive karma and instead "disagreed to hell". I won't point them out to avoid focusing on individuals, but this seems like an obvious improvement in communication ability.

I could go on but I'll stop here.

Please give us feedback

This is one of the main voting experiments we've tried on the site (here's the other one). We may try more changes and improvements in the future. Please let us know about your experience with this new voting axis, especially in the next 1-2 weeks.

If you find it concerning/invigorating/confusing/clarifying/other, we'd like to know about it. Comment on this post with feedback (I'll give you an upvote, and maybe others will give you an agree-vote!), or let us know via the Intercom button in the bottom right of the screen.

We've rolled it out on many (15+) threads now (example), and my impression is that it's worked as hoped and allowed for better communication about the truth.

One virtue of the new voting axis is that it allows users to clearly express great appreciation for high-quality comments that many users disagree with.


Still somewhat sad about this, as it feels to me like a half-solution that's pumping against human nature.

My claims in previous discussions about alternate voting systems were that:

  • It was going to be really important to have a single click, not multiple clicks (i.e. any two-separate-votes system was going to have people overwhelmingly just using one of the votes and largely ignoring the second one)
  • It was going to be really important to use visual cues and directionality and not just have two things side by side

I wanted something like the following:

... where users would single-click one of the four buttons (and could click-and-hold to strong-vote), and with a single click would show:

  • Upvote this and also it's true (the dark blue one)
  • This seems true but slight downvote/it's not helping/it's making things worse (the light blue one)
  • This seems false or sketchy but slight upvote/it's helping/I'm glad it's here (the light orange one)
  • Downvote this and also it's false (the dark orange one).

True-false is on the forward/backward axis, in other words, and good/bad is on the vertical axis, as usual.

The display for aggregation could look a lot of different ways; please don't hate the below for its ugly... (read more)

display an aggregated "what's this user's rep" function

Agreement votes must never be aggregated; otherwise there is an incentive for uncontroversial commenting.

I agree with the sentiment here, but I think you have too little faith in some people's willingness to be disagreeable... especially on LessWrong!  Personally I'd feel fine/great about having a high karma and a low net-agreement score, because it means I'm adding a unique perspective to the community that people value.

6Andrew_Critch2y
... and, I'd go so far as to bet that the large amount of agreement with your comment here is representative of a bunch of users who would feel similarly, but I'm putting this in a separate comment so it accrues a separate agree/disagree score. If lots of people disagree, I'll update :)
9Sean H2y
Absolutely! Agree 100%.

I suspect it's a half-solution that will decay back to mostly-people-just-use-the-first-vote

Regardless of whether it's a bad solution in other respects, I predict that people will use the agree/disagree vote a ton, reliably, forever.

I don't think it lets me grok the quality of the reaction to a comment at a glance; I keep having to effortfully process "okay, what does—okay, this means that people like it but think it's slightly false, unless they—hmm, a lot more people voted up-down than true-false, unless they all strong voted up-down but weak-voted tru—you know what, I can't get any meaningful info out of this."

I mostly care about agree/disagree votes (especially when it comes to specifics). From my perspective, the upvotes/downvotes are less important info; they're mostly there to reward good behavior and make it easier to find the best content fast.

In that respect, the thing that annoys me about agree/disagree votes isn't any particular relationship to the upvotes/downvotes; it's that there isn't a consistent way to distinguish 'a few people agreeing strongly' from 'a larger number of people agreeing weakly', 'everyone agrees with this but weakly' from 'some agree strongly but ... (read more)

I predict that people will use the agree/disagree vote a ton, reliably, forever.

I feel zero motivation to use it. I feel zero value gained from it, in its current form. I actually find it a deterrent, e.g. looking at the information coming in on my comment above gave me a noticeable "ok just never comment on LW again" feeling.

(I now fear social punishment for admitting this fact, like people will decide that me having detected such an impulse means I'm some kind of petty or lame or bad or whatever, but eh, it's true and relevant. I don't find downvotes motivationally deterring in the same fashion, at all.)

EDIT: this has been true in other instances of looking at these numbers on my other comments in the past; not an isolated incident.

More detail on the underlying emotion:

"Okay, so it's ... it's plus eight, on some karma meaning ... something, but negative nine on agreement? What the heck does this even mean, do people think it's good but wrong, are some people upvoting but others downvoting in a different place—I hate this. I hate everything about this. Just give up and go somewhere where the information is clear and parse-able."

Like, maybe it would feel better if I could see something that at least confirmed to me how many people voted in both places? So I'm not left with absolutely no idea how to compare the +8 to the -9?

But overall it just hurts/confuses and I'm having to actively fight my own you'd-be-happier-not-being-here feelings, which are very strong in a way that they aren't in the one-vote system, and wouldn't be in either my compass rose system or Rob's heart/X system.

7Vladimir_Nesov2y
The parent comment serves as a counterexample to this interpretation: It seems natural to agreement-downvote your comment to indicate that I don't share this feeling/salient-impression, without meaning to communicate that I believe your feeling-report to be false (about your own impression). And to karma-upvote it to indicate that I care for existence of this feeling to become a known issue and to incentivise corroboration from others (with visibility given by karma-upvoting) who feel similarly (which might in part be communicated with agreement-upvoting).
6[DEACTIVATED] Duncan Sabien2y
I think you're confusing "this should make sense to you, Duncan" with "therefore this makes sense to you, Duncan" (or more broadly, "this should make sense to people" with "therefore, it will/will be good.") I agree that there is some effortful, System-2 processing that I could do, to draw out the meaning that you have spelled out above.
2Vladimir_Nesov2y
The important distinction is about existence of System-1 distillation that enables ease, which develops with a bit of exposure, and of the character of that distillation. (Is it ugly/ruinous/not-forming, despite the training data being fine?) Whether a new thing is immediately familiar is much less strategically relevant.
9[DEACTIVATED] Duncan Sabien2y
This function has been available, and I've encountered it off and on, for months. This isn't a case of "c'mon, give it a few tries before you judge it." I've had more than a bit of exposure.
4M. Y. Zuo2y
If being highly upvoted yet highly disagreed with makes you feel deterred and never want to comment again, wouldn't that also be the case if you see a lot of light orange beside your comments? It seems unlikely you'll forget your own proposal or what the colours correspond to. In fact it may hasten your departure, since bright colours are a lot more difficult to ignore than a grey number.
2[DEACTIVATED] Duncan Sabien2y
I do not have a model/explanation for why, but no, apparently not. I've got pretty decent introspection and very good predicting-future-Duncan's-responses skill and the light orange does not produce the same demoralization as negative numbers. Though the negative numbers also produce less demoralization if the prompt is changed in accordance with some suggestions to something like "I could truthfully say this or something close to it from my own beliefs and experience."
2Vladimir_Nesov2y
Their role is different: it's about quality/incentives, so the appropriate way of deciding visibility (comment ordering) and aggregating into user's overall footprint/contribution. Agreement clarifies attitude to individual comments without compromising the quality vote, in particular making it straightforward/convenient to express approval/incentivization of disagreed-with comments. In this way agreement vote improves fidelity of the more strategic quality/incentives vote, while communicating an additional tactical fact about each particular comment.

It was going to be really important to have a single click, not multiple clicks (i.e. any two-separate-votes system was going to have people overwhelmingly just using one of the votes and largely ignoring the second one)

I feel like it's slightly less work for me to consider one axis and up/downvote it, and then consider the second axis and up/downvote it, than it'd be to vote on two axes with a single click. The former lets me consider the two separately, "make one decision and then forget about it", whereas the latter requires me to think about both at the same time. That means that I'm (slightly) more likely to cast two votes on a multiple-click system than on a single-click system. 

Though I do also consider it a feature if the system allows me to only cast one vote rather than forcing me to do both. E.g. in situations where I want to upvote a domain expert's comment giving an explanation about a domain that I'm not familiar with, so don't feel qualified to cast a vote on its truth even though I want to indicate that I appreciate having the explanation.

After using the new system for a couple of days, I now believe that a single-click[1] system, like the one Duncan describes, would probably be preferable for interaction efficiency / satisfaction reasons. (Having to click on two different UI widgets in two different screen locations—i.e., mouse move, click, another mouse move, click—is an annoyance.)

One downside of Duncan’s proposed widget design would be that it cannot accommodate the full range of currently permissible input values. The current two-widget system has 25 possible states (karma and agreement can each independently take on any of five values: ++, +, 0, −, −−), while the proposed “blue and orange compass rose” single-widget system has only 9 possible states (the neutral state, plus two strengths of vote × four directions).
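A quick sketch of that count, with placeholder labels standing in for the five per-axis values and the four compass directions (purely illustrative, not part of either proposal's spec):

```python
from itertools import product

# Five possible values on each axis under the current two-widget system.
AXIS_VALUES = ["--", "-", "0", "+", "++"]

# Current system: karma and agreement vary independently -> 5 x 5 = 25 states.
current_states = list(product(AXIS_VALUES, AXIS_VALUES))
assert len(current_states) == 25

# Proposed single-widget compass: neutral, plus 4 directions x 2 strengths = 9 states.
DIRECTIONS = ["good+true", "good+false", "bad+true", "bad+false"]
STRENGTHS = ["normal", "strong"]
compass_states = [("neutral",)] + [(d, s) for d in DIRECTIONS for s in STRENGTHS]
assert len(compass_states) == 9
```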

It is not immediately obvious to me what an ideal solution would look like. The obvious solution (in terms of interaction design) would be to construct a mapping from the 25 states to the 9, wherein some of the 25 currently available input states should be impermissible, and some sets of the remainder of the 25 should each be collapsed into one of the 9 input states of the proposed widget. (I haven’t... (read more)

5gwern1y
Hypothetically, you could represent all of the states by using the diamond, but adding a second 'diamond' or 'shell' around it, and making all of the vertexes and regions clickable. To express a +/+ you click in the upper right region; to express ++/++, the uppermost right region; to express 0/++, you click on the right-most tip; to express ++/0, you click on the bottom tip; and so on. The regions can be colored. (And for users who don't get strong votes, it degrades nicely: you should omit the outer shell corresponding to the strong votes.) I'm sure I've seen this before in video games or something, but I'm not sure where or what it may be called (various search queries for 'diamond' don't pull up anything relevant). It's a bit like a radar chart, but discretized. This would be easy to use (as long as the vertexes have big hit boxes) since you make only 1 click (rather than needing to click 4 times or hold long twice for a ++/++) and use the mouse to choose what the pair is (which is a good use of mice), and could be implemented even as far back as Web 1.0 with imagemaps, but somewhat hard to explain - however, that's what tooltips are for, and this is for power-users in the first place, so some learning curve is tolerable.
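A rough sketch of the hit-testing such a widget might need: map a click's angle to a direction and its distance from the center to weak vs. strong. The radii, sector boundaries, and labels are assumptions, and this only covers the four combined regions, not the pure single-axis tips also described above.

```python
import math

def classify_click(x: float, y: float, inner_radius: float = 1.0,
                   outer_radius: float = 2.0):
    """Map a click (relative to the widget center) to a (direction, strength) pair.

    Directions and radii are illustrative assumptions about a nested-diamond
    layout, not taken from any actual implementation.
    """
    r = math.hypot(x, y)
    if r > outer_radius:
        return None  # click landed outside the widget
    strength = "weak" if r <= inner_radius else "strong"
    angle = math.degrees(math.atan2(y, x)) % 360
    # Four 90-degree sectors, one per combined vote.
    if angle < 90:
        direction = "upvote+agree"
    elif angle < 180:
        direction = "upvote+disagree"
    elif angle < 270:
        direction = "downvote+disagree"
    else:
        direction = "downvote+agree"
    return direction, strength

print(classify_click(0.5, 0.5))    # ('upvote+agree', 'weak')
print(classify_click(-1.5, 1.0))   # ('upvote+disagree', 'strong')
```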
7Rana Dexsin2y
I find all of the four-way graphical depictions in this subthread to be horribly confusing at an immediate glance; indeed I had to fight against reversing the axes on the one you showed.

I already know what a karma system is, as imperfect as that is, and I already know what agreement and disagreement are—and being able to choose to only use one of the axes at any given moment is an engagement win for me, because (for instance) if I start out wanting to react “yes, I agree” but then have to think about “but also was the post good in a different sense” before I can record my answer, or vice versa, that means I have to perform the entire other thought-train with the first part in my working-memory stack. And my incentive and inclination to vote at all doesn't start out very high. It's like replacing two smaller stairs with one large stair of combined height.

A more specific failure mode there is lack of representation for null results on either axis, especially “I think this was a good contribution, but it makes no specific claims to agree or disagree with and instead advances the conversation some other way” and “I think this was a good contribution, but I will have to think for another week to figure out whether I agree or disagree with it on the object level, and vaguely holding onto the voting-intention in the back of my mind for a week gives a horrible feeling of cruftifying my already-strained attention systems”.

To try to expand the separate-axes system in this exact case, I have upvoted your comment here on the quality axis and also marked my disagreement (which I don't like the term “downvote” for, as described elsewhere), because I think it's a good thing that you went to the effort of thinking about this and posting about it, and I think the explanation is coherent and reasonable, but I also think the suggestion itself would be more complicated and difficult and overall worse than what's been implemented, largely because of differing impressions of the act
6Raemon2y
I think many of the UI ideas here are potentially interesting, but one major issue is the amount of space we have to work with. The design here is particularly cool because it's a compass rose which matches the LW logo, but... I don't see how we could fit a version into every comment that actually worked. (Maybe if it was little but got big when you hovered over it?)

(to be clear these seem potentially fixable, just noting that that's where my attention goes next in a problem-solving-y way)

FYI In my mind there's still some radically different solutions that might be worth trying for agree/disagree, I'm still pretty uncertain about the whole thing.
1[DEACTIVATED] Duncan Sabien2y
Yeah, the UI issues seem real and substantial. In my mind, the thing is roughly as tall as the entire box holding the current vote buttons.
3[DEACTIVATED] Duncan Sabien2y
3Raemon2y
hmm, I could see that working. Click-target seems small-ish but maybe fixable with some-UI magic (and maybe it's actually just fine?)
5Rob Bensinger2y
I found the current UI intuitive. I find the four-pointed star you suggested confusing (though mayyyybe I'd like it if I got used to it?). I tend to mix up my left and my right, and I don't associate left/right with false/true at all, nor do I associate blue with "truth". (if anything, I associate blue more with goodness, so I might have guessed dark-blue was 'good and true' and light-blue was 'good and false')

A version of this I'm confident would be easier for me to track is, e.g.:

It's less pretty, but:

  • The shapes give me an indication of what each direction means. ✔ and ✖ I think are very useful and clear in that respect: to me, they're obviously about true/false rather than good/bad.
  • Green vs. red still isn't super clear. But it's at least clearer than blue vs. red, to me; and if I forget what the colors mean, I have clear indicators via 'ah, there's a green X, but no green checkmark, because the heart is the special "good on all dimensions" symbol, and because green means "good" (so it would be redundant to have a green heart and a green checkmark)'.
  • The left and right options are smaller and more faded. Some consequences:
    • (a) This makes the image as a whole feel less overwhelming, because there's a clear hierarchy that encourages me to first pay attention to one thing, then only consider the other thing as an afterthought. In this case, I first notice the heart and X, which give me an anchor for what green, red, and X mean. Then I notice the smaller symbols, which I can then use my anchors to help interpret. This is easier than trying to parse four symbols at the exact same moment, especially when those symbols have complicated interactions rather than being primitives.
    • I think this points at the core reason Duncan's proposal is harder for me to fit in my head than the status quo: my working memory can barely handle four things at once, and the four options here are really ordered pairs. At least, my brain thinks of them as ordered pai
9Rob Bensinger2y
Here's a version that's probably closer to what would actually work for me:

Now all four are closer to being conceptual primitives for me. 💚 is 'good on all the dimensions'; ❌ is 'bad on all the dimensions'.

The facepalm emoji is meant to evoke a specific emotional reaction: that exasperated feeling I get when I see someone saying a thing that's technically true but is totally irrelevant, or counter-productive. (Colored purple because purple is an 'ambiguous but bad-leaning' color, e.g., in Hollywood movies, and is associated with villainy and trolling.)

The shaking-head icon is meant to evoke another emotional reaction: the feeling of being a teacher who's happy with their student's performance, but is condescendingly shaking their head to say "No, you got the wrong answer". (Colored blue because blue is 'ambiguous but good-leaning' and is associated with innocence and youthful naïveté.)

Neither of these emotional reactions capture the range of situations where I'd want to vote (true,bad) or (false,good). But my goal is to give me a vivid, salient handle at all on what the symbols might mean, at a glance; I think the hard part for me is rapidly distinguishing the symbols at all when there are so many options, not so much 'figuring out the True Meaning of the symbol once I've distinguished it from the other three'.
5Rob Bensinger2y
I don't like my own proposals, so do the disagree-votes mean that you agree with me that these are bad proposals, or do they mean you disagree with me and think they're good? :P
4Rob Bensinger2y
(I should have phrased this as a bald assertion rather than a question, so people could (dis)agree with it to efficiently reply. :P)
2habryka2y
For me it meant "I think this is a bad proposal".
2gjm2y
For what it's worth, the head icon doesn't read to me at all like a condescending head-shake. My brain parses it as "contented  face plus halo".
4Richard_Kennaway2y
With two axes, each on a scale -strong/-weak/null/weak/strong, there are 24 non-trivial possibilities. Why have you chosen these four, excluding such things as "this is an important contribution that I completely agree with", or "this is balderdash on both axes"?
2[DEACTIVATED] Duncan Sabien2y
Subjective sense of what would make LessWrong both a) more a place I'm excited to be, and b) (not unrelatedly) more of a place that helps me be better according to my own goals and values.
4MondSemmel2y
I'm also struggling to interpret cases where karma & agreement diverge, and would also prefer a system that lets me understand how individuals have voted. E.g. Duncan's comment above currently has positive karma but negative agreement, with different numbers of upvotes and agreement votes. There are many potential voting patterns that can have such a result, so it's unclear how to interpret it. Whereas in Duncan's suggestion, a) all votes contain two bits of information and hence take a stand on something like agreement (so there's never a divergence between numbers of votes on different axes), and b) you can tell if e.g. your score is the result of lots of voters with "begrudging upvotes", or "conflicted downvotes" or something.

Whereas in Duncan's suggestion, a) all votes contain two bits of information and hence take a stand on something like agreement

I didn't notice that! I don't want to have to decide whether to reward or punish someone every time I figure out whether they said a true or false thing. It seems like it would also severely exacerbate the problem of "people who say things that most people believe get lots of karma".

4Vladimir_Nesov2y
The alternative solutions you are gesturing at do communicate the problems of the current solution, but I think they are worse than the current solution, and I'm not sure there is a feasible UI change that's significantly better than the current solution (among methods for collecting the data with the same meaning, quality/agreement score). Being convenient to use and not using up too much space are harsh constraints.

For what it's worth, I quite dislike this change. Partly because I find it cluttered and confusing, but also because I think audience agreement/disagreement should in fact be a key factor influencing comment rankings.

In the previous system, my voting strategy roughly reflected the product of (how glad I was some comment was written) and (how much I agreed with it). I think this product better approximates my overall sense of how much I want to recommend people read the comment—since all else equal, I do want to recommend comments more insofar as I agree with them more.

all else equal, I do want to recommend comments more insofar as I agree with them more

It's a fair point. Sometimes the point of a thread is to discuss and explore a topic, and sometimes the point of a thread is to locally answer a question. In the former I want to reward the most surprising and new marginal information over the most obvious info. In the latter I just want to see the answer.

I'll definitely keep my eye out for whether this system breaks some threads, though it seems likely to me that "producing the right answer in a thread about answering a question" will be correctly upvoted in that context.

I almost wonder if there should be a slider bar for post authors to set how much they want to incentivize truth-as-evaluated-by-LWers vs. incentivizing debate / spitballing / brainstorming / devil's advocacy / diversity of opinion / uncommon or nonstandard views / etc. in their post's comment section.

Setting the slider all the way toward Non-Truth would result in users getting 0 karma for agree-votes. Setting the slider all the way toward Truth would result in users getting lots of karma (and would reduce the amount of karma users get from normal Upvotes a bit, so people are less inclined to just pick the 'Truth' option in order to maximize karma). Nice consequences of this:

  • It gives users more control over what they want to see in their comment section. (Similar to how users get to decide their posts' moderation policies.)
  • Over time, we'd get empirical evidence about which system is better overall, or better for certain use cases. If the results are sufficiently clear and consistent, admins could then get rid of the slider and lock in the whole site at the known-to-be-best level.
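As a minimal sketch of how the slider described above might blend the two kinds of karma (the 0.5 discount and the linear blend are assumptions for illustration only, not part of the proposal):

```python
def karma_from_votes(upvote_karma: int, agree_karma: int, truth_slider: float) -> float:
    """Blend karma from regular upvotes and agree-votes based on a per-post slider.

    truth_slider = 0.0 -> agree-votes grant no karma (fully 'Non-Truth').
    truth_slider = 1.0 -> agree-votes grant full karma, and regular upvotes
                          are discounted somewhat, as suggested above.
    All weights here are illustrative placeholders.
    """
    upvote_weight = 1.0 - 0.5 * truth_slider  # assumed discount on normal upvotes
    agree_weight = truth_slider
    return upvote_weight * upvote_karma + agree_weight * agree_karma

print(karma_from_votes(10, 6, 0.0))  # 10.0 -- only regular upvotes count
print(karma_from_votes(10, 6, 1.0))  # 11.0 -- agree-votes count, upvotes discounted
```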
2MikkW2y
I agree that having such a slider could be good, but I think it should only impact visibility of comments in that post's comments section, and shouldn't impact karma (only quality-axis votes should impact karma even if the slider is set to give maximum visibility to high-'agree' comments).
2Rob Bensinger2y
Hm, I'd have guessed the opposite was better.

Partly because I find it cluttered and confusing, but also because I think audience agreement/disagreement should in fact be a key factor influencing comment rankings.

I have a different ontology here. I'd say that "truth-tracking" is pretty different from "true". A comment section with just the audience's main beliefs highly upvoted is different from one where the conversational moves that seem truth-tracking are highly upvoted. The former leans more easily into an echo-chamber than the latter, which better rewards side-ways moves and thoughtful arguments for positions most people disagree with.

6Andrew_Critch2y
I mostly agree with Ben here, though I think Adam's preference could be served by having a few optional sorting options available to the user on a given page, like "Sort by most agreement" or "Sort by most controversial". Without changing the semantics of what you have now, you could even allow the user to enter a custom sorting function (Airtable-style), like "2*karma + 3*(agreement + disagreement)", and sort by that. These could all be hidden under a three-dots menu dropdown to avoid clutter.
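As a sketch of what such a custom sorting function could look like (the field names and sample data are assumptions for illustration):

```python
# Hypothetical comment records; field names are illustrative assumptions.
comments = [
    {"id": "a", "karma": 12, "agree_votes": 3, "disagree_votes": 11},
    {"id": "b", "karma": 5, "agree_votes": 8, "disagree_votes": 0},
    {"id": "c", "karma": 20, "agree_votes": 1, "disagree_votes": 1},
]

def custom_score(c: dict) -> int:
    # The example formula from the comment above:
    # 2*karma + 3*(agreement + disagreement), i.e. controversy counts for a lot.
    return 2 * c["karma"] + 3 * (c["agree_votes"] + c["disagree_votes"])

# "Sort by most controversial"-style ordering under the custom formula.
for c in sorted(comments, key=custom_score, reverse=True):
    print(c["id"], custom_score(c))
```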

I could imagine this sort of fix mostly solving the problem for readers, but so far at least I've been most pained by this while voting. The categories "truth-tracking" and "true" don't seem cleanly distinguishable to me—nor do e.g. "this is the sort of thing I want to see on LW" and "I agree"—so now I experience type error-ish aversion and confusion each time I vote.

4Ben Pace2y
I see. I'd be interested in chatting about your experience with you offline, sometime this week.

I would be extremely surprised if karma does not track with agreement votes in the majority of cases. I only expect them to diverge in a narrow range of cases like excellently stated arguments people disagree with,  extremely banal comments that are true but don't really add anything, actual voting, and high social conflict posts. If we can operationalize this prediction I'm interested in a bet.

I used to think this and now disagree! (See e.g. the karma vs. agree/disagree on this post)

Would be open to operationalizing this (just to be clear, I of course still expect them to be correlated).

1Alex Caswen1y
I agree or disagree based on content, and upvote or downvote based on vibes. Historically, socially toxic nerds have been excused for being assholes if they were smart, knowledgeable, and always right. With this system, I can agree with what they say but downvote how they say it. In my opinion LW ironically is the community that needs this the least, but YMMV. I am not surprised it is the community that has implemented it.
5Vladimir_Nesov2y
Even a completely wrong claim occasionally contributes relevant ideas to the discussion. A comment can contain many claims and ideas, and salient wrongness of some of the claims (or subjective opinions not shared by the voter) can easily coexist with correctness/relevance of other statements in the same comment. So upvote/disagree is a natural situation. Downvote/correct corresponds to something true that's trivial/irrelevant/inappropriate/unkind. Being forced to collapse such cases into a single scale is painful, and the resulting ranking is ambiguous to the point of uselessness.
6Vladimir_Nesov2y
Bug report: At the moment, the parent comment says that it has 2 votes on the karma box for the total score of +2 (the karma self-vote is the default +2), and 1 vote on the agreement box for the total score of +2 (there is no agreement self-vote). When I remove the default self-upvote, it still says that there are 2 votes on the karma box (for the total score of 0). For the old karma-only comments removing the self-vote results in decrementing the number of votes displayed, and a comment with removed self-upvote that nobody else voted on says that it has 0 votes.

I believe here one other user agreement-upvoted the comment with strength +2, and nobody karma-voted except for the default +2 self-karma-upvote. So in this example I expect to see that the number of karma votes displayed after removal of the default self-upvote is 0, not 2. And I expect to see that the number of karma votes when self-upvote remains is 1, not 2.

(I did reload the page in both voting states in a logged-off context to check that it's not just a local javascript or same-user-observation issue.)
2habryka2y
Yeah, I noticed this myself. We should fix this.

I'm currently pretty dissatisfied with the icons for Agree/Disagree. They look ugly and cluttered to me. Unfortunately all the other icons I can think of ("thumbs up?", "+ / - "?) come with an implication of general positive affect that's hard to distinguish from upvote/downvote.

Curious if anyone has ideas for alternate icons or UI stylings here.

I think I'd change the left/right for regular karma to up/down, to match common usage. I share the dissatisfaction with the agree/disagree icons, but I'm not sure what's better. Perhaps = and ≠, but that's not perfect either. Perhaps a handshake for agree, but I don't know of an opposite for disagree.

edit: I'd also swap the icons.  Good on the left, bad on the right.  Only works if the votes are no longer less-than/greater-than symbols, though.

The problem with doing up/down is mostly just that this is hard to combine with the bigger arrows we use for strong-votes. If you just rotate them naively, the arrows stick out from the comment when strong-voted, or we have to add a bunch of padding to the comment to make it fit, which looks ugly and reduces information density.

5the gears to ascension2y
what if you rotate the arrows' icons to icon-up, icon-down, but don't move them into a vertical column?
3RyanCarey2y
I would do thumbs up/down for good/bad, and tick/cross for correct/incorrect.
2Rob Bensinger2y
Weak disagree
Disagree, they don't bother me
Yeah plausibly, if the switch is made
4Rob Bensinger2y
I kinda like that the site 'LessWrong' uses a 'less' symbol for downvotes, and 'more' for upvotes. I also like how this gestures at the intended interpretation of voting (an indication of whether you want less or more of the thing, not necessarily of the comment's inherent goodness or badness). I think the current symbols for agree / disagree are fine. Maybe there's a version that does the 'less vs. more' thing too, though. (Here referring to 'less true/probable' vs. 'more true/probable'.) E.g., ⩤ and ⩥, or ⧏ and ⧐, or ◀ and ▶.

Aesthetically speaking, this current implementation still looks rather ugly to me. Specific things I find ugly:

  • Left-right arrows in the comments vs. down-up arrows on LW posts.
  • The visible boundary box around normal votes & agree-disagree votes.
    • I might understand vertical lines between date & normal upvotes, and between normal upvotes & agree-disagree votes. But why do we need boundary lines at the top & bottom?
    • And rather than even vertical lines, maybe just extra whitespace between the various votes might already be enough?
  • The boundary boxes even seem to push some of the other UI elements around by a few pixels:
    • See this screenshot from desktop Firefox: the boundary box creates a few pixels of extra whitespace above and below the comment headline. This creates undesirable wasted space.
    • Also, the comment menu button on the right (the three vertical dots) are not aligned with the text on the left, but rather with the upper line of the boundary box.
  • None of the comment hover tooltips are aligned: That is, when hovering over comment username, date, normal downvote & upvote button, normal karma, agree & disagree vote button, and agreement karma, the tooltips just seem to pop up at semi-random but inconsistent positions.
6MondSemmel2y
And while I'm already in my noticing-tiny-things perfectionist mode: The line spacings between paragraphs and bulleted lists of various indentation levels seem inconsistent. Though maybe that's good typographical practice? See this screenshot from desktop Firefox: there seem to be 3+ different line spacings with little consistency. For example:

  • big spacing between an unindented paragraph and a bullet point
  • medium spacing between bullet points of the same indentation level
  • medium spacing between a bullet point of a higher indentation level, followed by one with a lower indentation level
  • tiny spacing between a bullet point of a lower indentation level, followed by one with a higher indentation level
  • big spacing between the end of a comment and the "Reply" button
6Dustin2y
What about something like text buttons? When I'm designing a UI, I try to use text if there is not a good iconographic way of representing a concept. Something like:

AGREE (-12) DISAGREE

I'm not sure how that would look with the current karma widget. Would require some experimentation.
2Raemon2y
Huh, are all the disagreement votes here meaning "the current icons are not cluttered looking?" I'm hella surprised, I was not expecting this to be a controversial take since the current UI was whipped up really quickly.
3Ben Pace2y
My upvote-disagree meant "Current UI is not that bad, though am supportive of a thread of dissatisfied folks exploring alts".
2Rob Bensinger2y
UI looks fine to me! There might be improvements available, but I'd need to see the alternatives to know whether I think they're better.

Should we show the agreement number as a ratio rather than a sum? Regular votes can be summed, because "low total" doesn't matter much whether it's a mix of up and down, or just low engagement overall. But for agreement, I want to know how agreed it was among those who bothered to have an opinion. Not having an opinion is not a negative on agreement.

I think I'd either show total and number of votes (as 20 / 12), or just the ratio (1.66).  

edit: I may get this in the current setup from looking at agreement compared to karma, once I get used to it.  But that makes it worth aligning the default self-votes for the two, so comments don't start out controversial.
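A minimal sketch of the two display options described above, assuming the raw agreement total and the number of agreement votes are both available:

```python
def agreement_display(total: int, num_votes: int, as_ratio: bool = False) -> str:
    """Format an agreement score either as 'total / num_votes' or as their ratio.

    Mirrors the two options suggested above; the zero-vote behavior is an assumption.
    """
    if num_votes == 0:
        return "n/a" if as_ratio else "0 / 0"
    if as_ratio:
        return f"{total / num_votes:.2f}"
    return f"{total} / {num_votes}"

print(agreement_display(20, 12))                 # '20 / 12'
print(agreement_display(20, 12, as_ratio=True))  # '1.67' (rounded here)
```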

8habryka2y
I think this is worth some experiments at least. I do think any number that is visible on every comment really needs to pass a very high bar, though this one seems like it could plausibly pass it.
4Rob Bensinger2y
+1 I don't think this is reliable enough.

Pulling together thoughts from a variety of subthreads:

I expect this to meaningfully deter me/create substantial demoralization and bad feelings when I attempt to participate in comment threads, and therefore cause me to do so even less than I currently do.

This impression has been building across all the implementations of the two-factor voting over the past few months.

In particular: the thing I wanted and was excited about from a novel or two-factor voting system was a distinction between what's overall approved or disapproved (i.e. I like or dislike the addition to the conversation, think it was productive or counterproductive) and what's true or false (i.e. I endorse the claims or reasoning and think that more people should believe them to be true).

I very much do not believe that "agree or disagree" is a good proxy for that/tracks that. I think that it doesn't train LWers to distinguish their sense of truth or falsehood from how much their monkey brain wants to signal-boost a given contribution. I don't think it is going to nudge us toward better discourse and clearer separation of [truth] and [value].

It feels like it's an active step away from that, and therefore it makes me sa... (read more)

I very much do not believe that "agree or disagree" is a good proxy for that/tracks that. I think that it doesn't train LWers to distinguish their sense of truth or falsehood from how much their monkey brain wants to signal-boost a given contribution. I don't think it is going to nudge us toward better discourse and clearer separation of [truth] and [value].

See my other comment. I don't think agree/disagree is much different from true/false, and am confused about the strength of your reaction here. I personally don't have a strong preference, and only mildly prefer "agree/disagree" because it is more clearly in the same category as "approve/disapprove", i.e. an action, instead of a state.

I think the hover-over text needs tweaking anyways. If other people also have a preference for saying something like "Agree: Do you think the content of this comment is true?" and "Disagree: Do you think the content of this comment is false?", then that seems good to me. Having "approve/disapprove" and "true/false" as the top-level distinction does sure parse as a type error to me (why is one an action, and the other one an adjective?).

I also think we should definitely change the hover for the karma-vote dimension to say "approve" and "disapprove", instead of "like" and "dislike", which I think captures the dimensions here better.

4Vladimir_Nesov2y
Apart from equivocation of words with usefully different meanings, I think it's less useful to extract truth-dimension than agreement-dimension, since truth-dimension is present less often, doesn't help with improving approval-dimension, and agreement-dimension becomes truth-dimension for objective claims, so truth-dimension is a special case of the more-useful-for-other-things agreement-dimension.

I think the karma dimension already captures the-parts-of-the-agreement-dimension-that-aren't-truth.

9Vladimir_Nesov2y
I think this is false. Subjective disagreement shouldn't imply disapproval, capturing subjective-disagreement by disapproval rounds it off to disincentivization of non-conformity, which is a problem. Extracting it into a separate dimension solves this karma-problem. It is less useful for what you want because it's contextually-more-ambiguous than the truth-verdict. So I think the meaningful disagreement between me and you/habryka(?) might be in which issue is more important (to spend the second-voting-dimension slot on). I think the large quantity of karma-upvoted/agreement-downvoted comments to this post is some evidence for the importance of the idea I'm professing.
4Rana Dexsin2y
To derive from something I said as a secondary part of another comment, possibly more clearly: I think that extracting “social approval that this post was a good idea and should be promoted” while conflating other forms of “agreement” is a better choice of dimensionality reduction than extracting “objective truth of the statements in this post” while conflating other forms of “approval”.

Note that the former makes this change kind of a “reverse extraction” where the karma system was meant to be centered around that one element to begin with and now has some noise removed, while the other elements now have a place to be rather than vanishing. The last part of that may center some disapprovals of the new system, along the lines of “amplifying the rest of it into its own number (rather than leaving it as an ambiguous background presence) introduces more noise than is removed by keeping the social approval axis ‘clean’” (which I don't believe, but I can partly see why other people might believe).

Of Strange Loop relevance: I am treating most of the above beliefs of mine here as having primarily intersubjective truth value, which is similar in a lot of relevant ways to an objective truth value but only contextually interconvertible.
8habryka2y
Hmm, what about language like "Agree: Do you think the content of this comment is true? (Or if the comment is about an emotional reaction or belief of the author, does that statement resonate with you?)" It sure is a mouthful, but it feels like it points towards a coherent cluster.
2Vladimir_Nesov2y
I think the thing Duncan wants is harder to formulate than this, it has to disallow voting on aspects of the comment that are not about factual claims whose truth is relevant. And since most claims are true, it somehow has to avoid everyone-truth-upvotes-everything default in a way that retains some sort of useful signal instead of deciding the number of upvotes based on truth-unrelated selection effects. I don't see what this should mean for comments-in-general, carefully explained, and I don't currently have much hope that it can be operationalized into something more useful than agreement.
5[DEACTIVATED] Duncan Sabien2y
I am self-aware about the fact that this might just mean "this isn't your scene, Duncan; you don't belong" more than "this group is doing something wrong for this group's goals and values." Like, the complaint here is not necessarily "y'all're doing it Wrong" with a capital W so much as "y'all're doing it in a way that seems wrong to me, given what I think 'wrong' is," and there might just be genuine disagreement about wrongness.

But I think "agree/disagree" points people toward yet more of the same social junk that we're trying to bootstrap out of, in a way that "true/false" does not. It feels like that's where this went wrong/that's what makes this seem doomed-from-the-start and makes me really emotionally resistant to it.

I do not trust the aggregated agreement or disagreement of LW writ large to help me see more clearly or be a better reasoner, and I do not expect it to identify and signal-boost truth and good argument for e.g. young promising new users trying to become less wrong.
5[DEACTIVATED] Duncan Sabien2y
e.g. a -1 just appeared on the top-level comment in the "agree/disagree" category and it makes me want to take my ball and go home and never come back. I'm taking that feeling as object, rather than being fully subject to it, but when I anticipate fighting against that feeling every time I leave a comment, I conclude "this is a bad place for me to be."

EDIT: it's now -3. Is the takeaway "this comment is substantially more false than true"?

EDIT: now at -5, and yes, indeed, it is making me want to LEAVE LESSWRONG.
8Valentine2y
This means you're using others' reactions to define what you are or are not okay with.

I mean, if you think this -1 -3 -5 is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance? And if you think it's not reflecting something true, doesn't your reaction highlight a place where your reactions need calibrating?

I'm pretty sure you're actually talking about collective incentives and you're just using yourself as an example to point out the incentive landscape. But this is a place where a collective culture of emotional codependence actively screws with epistemics.

Which is to say, I disagree in a principled way with your sense of "wrongness" here, in the sense you name in your previous comment: I think a good truth-tracking culture acknowledges, but doesn't try to ameliorate, the discomfort you're naming in the comment I'm replying to.

(Whether LW agrees with me here is another matter entirely! This is just me.)
5[DEACTIVATED] Duncan Sabien2y
No, not quite. There's a difference (for instance) between knowledge and common knowledge, and there's a difference (for instance) between animosity and punching. Or maybe this is what you meant with "actually talking about collective incentives and you're just using yourself as an example to point out the incentive landscape."

A bunch of LWers can be individually and independently wrong about matters of fact, and this is different from them creating common knowledge that they all disagree with a thing (wrongly). It's better in an important sense for ten individually wrong people to each not have common knowledge that the other nine also are wrong about this thing, because otherwise they come together and form the anti-vax movement.

Similarly, a bunch of LWers can be individually in grumbly disagreement with me, and this is different from there being a flag for the grumbly discontent to come together and form SneerClub.

(It's worth noting here that there is a mirror to all of this, i.e. there's the world in which people are quietly right or in which their quiet discontent is, like, a Correct Moral Objection or something. But it is an explicit part of my thesis here that I do not trust LWers en-masse. I think the actual consensus of LWers is usually hideously misguided, and that a lot of LW's structure (e.g. weighted voting) helps to correct and ameliorate this fact, though not perfectly (e.g. Ben Hoffman's patently-false slander of me being in positive vote territory for over a week with no one speaking in objection to it, which is a feature of Old LessWrong A Long Time Ago but it nevertheless still looms large in my model because I think New LessWrong Today is more like the post-Civil-War South (i.e. not all that changed) than like post-WWII-Japan (i.e. deeply restructured)).)

What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming. It is up in the air whether RightnessAndWrongness
6Valentine2y
Mmm. It makes sense. It was a nuance I missed about your intent. Thank you. Abstractly that seems maybe good. My gut sense is you can't do that by targeting how coalitions form. That engenders Goodhart drift. You've got to do it by making truth easier to notice in some asymmetric way. I don't know how to do that. I agree that this voting system doesn't address your concern. It's unclear to me how big a problem it is though. Maybe it's huge. I don't know.
7habryka2y
I think other people are saying "the sentences that Duncan says about himself are not true for me" while also saying "I am nevertheless glad that Duncan said it". This seems like great information for me, and is like, quite important for me getting information from this thread about how people want us to change the feature.
5Vladimir_Nesov2y
And if you change agreement-dimension to truth-dimension, this data will no longer be possible to express in terms of voting, because it's not the case that Duncan-opinion is false.
2[DEACTIVATED] Duncan Sabien2y
The distinction between "not true for me, the reader" and "not true at all" is not clear. And that is the distinction between "agree/disagree" and "true/false."
8habryka2y
Hmm, I do sure find the first one more helpful when people talk about themselves. Like, if someone says "I think X", I want to know when other people would say "I think not X". I don't want people to tell me if they really think whether the OP accurately reported on their own beliefs and really believes X.
4[DEACTIVATED] Duncan Sabien2y
Yeah. Both are useful, and each is more useful in some context or other. I just want it to be relatively unambiguous which is happening—I really felt like I was being told I was wrong in my top-level comment. That was the emotional valence.

I'm sad this is your experience!

I interpret "agree/disagree" in this context as literally 'is this comment true, as far as you can tell, or is it false?', so when I imagine changing it to "true/false" I don't imagine it feeling any different to me. (Which also means I'm not personally opposed to such a change. 🤷)

Maybe relevant that I'm used to Arbital's 'assign a probability to this claim' feature. I just think of this as a more coarse-grained, fast version of Arbital's tool for assigning probabilities to claims.

When I see disagree-votes on my comments, I think I typically feel bad about it if it's also downvoted (often some flavor of 'nooo you're not fully understanding a thing I was trying to communicate!'), but happy about it if it's upvoted. Something like:

  • Amusement at the upvote/agreevote disparity, and warm feelings toward LW that it was able to mentally separate its approval for the comment from how much probability it assigns to the comment being true.
  • Pride in LW for being one of the rare places on the Internet that cares about the distinction between 'I like this' and 'I think this is true'.
  • I mostly don't perceive the disagreevotes as 'you are flatly telling me to my face
... (read more)
6[DEACTIVATED] Duncan Sabien2y
It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction. ... resonated pretty strongly. Yes. Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three. Worth noting that I was on board with the def of approve/disapprove being "I could truthfully say this or something close to it from my own beliefs and experience."
6Said Achmiz2y
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this, is that… “what people take to be the meaning of a vote (of any kind and in any direction)” is not something that you (as the hypothetical system’s designer) can control, or determine, or hold stable, or predict, etc. Indeed it’s not only possible, but likely, that:

  • different people will interpret votes differently;
  • people who cast the votes will interpret them differently from people who use the votes as readers;
  • there will be difficult-to-predict patterns in which people interpret votes how;
  • how people interpret votes, and what patterns there are in this, will drift over time;
  • how people think about the meaning of the votes (when explicitly thinking about them) differs from how people’s usage of the votes (from either end) maps to their cognitive and affective states (i.e., people think they think about votes one way, but they actually think about votes another way);

… etc., etc.

So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism (“boo”/“yay”), simply because it’s not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
2TekhneMakre2y
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users' feedback they want to modulate their experience).
2Said Achmiz2y
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting? (And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
2TekhneMakre2y
That's why it's a hard problem. The idea would be to get leverage by letting you say "I trust this user's judgement, including about whose judgement to trust". Then you use something like (personalized) PageRank / eigenmorality https://scottaaronson.blog/?p=1820 to get useful information despite the circularity of "trusting who to trust about who to trust about ...", leveraging all the users' ratings of trust.
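A minimal sketch of that kind of trust propagation, assuming a tiny hypothetical trust graph and a standard damping factor (an illustration of the eigenvector idea, not anyone's actual implementation):

```python
# Tiny hypothetical trust graph: trust[u] lists the users u has marked as trusted.
trust = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": ["bob"],
    "dave": [],  # trusts no one explicitly
}

def trust_scores(trust: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """PageRank-style scores: trust flows through 'who to trust about who to trust' links."""
    users = list(trust)
    score = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new = {u: (1 - damping) / len(users) for u in users}
        for u in users:
            targets = trust[u] or users  # users who trust no one spread trust uniformly
            for v in targets:
                new[v] += damping * score[u] / len(targets)
        score = new
    return score

print(trust_scores(trust))  # bob and carol accumulate most of the trust mass
```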
2[DEACTIVATED] Duncan Sabien2y
I agree, but I find something valuable about, like, unambiguous labels anyway? Like it's easier for me to metabolize "fine, these people are using the button 'wrong' according to the explicit request made by the site" somehow, than it is to metabolize the confusingly ambiguous open-ended "agree/disagree" which, from comments all throughout this post, clearly means like six different clusters of Thing.
2Said Achmiz2y
Did you mean “confusingly ambiguous”? If not, then could you explain that bit?
2[DEACTIVATED] Duncan Sabien2y
I did mean confusingly ambiguous, which is an ironic typo. Thanks. I think we should be in the business of not setting up brand-new motte-and-baileys, and enshrining them in site architecture.
4Said Achmiz2y
Yes, I certainly agree with this. (I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing things are also motte-and-baileys… or something like them, anyway—is there even an “official” meaning of the karma vote buttons? Probably there is, but it’s not well-known enough to even be a “motte”, it seems to me… well, anyhow, as I said—maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)
3Rana Dexsin2y
I think an expansion of that subproblem is that “agreement” is determined in more contexts and modalities depending on the context of the comment. Having only one axis for it means the context can be chosen implicitly, which (to my mind) sort of happens anyway.

Modes of agreement include truth in the objective sense but also observational (we see the same thing, not quite the same as what model-belief that generates), emotional (we feel the same response), axiological (we think the same actions are good), and salience-based (we both think this model is relevant—this is one of the cases where fuzziness versus the approval axis might come most into play).

In my experience it seems reasonably clear for most comments which axis is “primary” (and I would just avoid indicating/interpreting on the “agreement” axis in case of ambiguity), but maybe that's an illusion? And separating all of those out would be a much more radical departure from a single-axis karma system, and impose even more complexity (and maybe rigidity?), but it might be worth considering what other ideas are around that.

More narrowly, I think having only the “objective truth” axis as the other axis might be good in some domains but fails badly in a more tangled conversation, and especially fails badly while partial models and observations are being thrown around, and that's an important part of group rationality in practice.
2Kaj_Sotala2y
If the labels were "true/false", wouldn't it still be unclear when people meant "not true for me, the reader" and when they meant "not true at all"?
4[DEACTIVATED] Duncan Sabien2y
I've gone into this in more detail elsewhere. Ultimately, the solution I like best is "Upvoting on this axis means 'I could truthfully say this or something close to it from my own beliefs and experience.'"
7[DEACTIVATED] Duncan Sabien2y
I think I experience silent, contentless net-disagreement as very hard to interface with. It doesn't specify what's wrong with my comment, it doesn't tell me what the disagreer's crux is, it doesn't give me any handholds or ways-to-resolve-the-disagreement. It's just a "you've-been-kicked" sign sitting on my comment forever. Whereas "the consensus of LW users asked to evaluate this comment for truth is that it is more false than true" is at least conveying something interesting. It can tell me to, for instance, go add more sources and argument in defense of my claims.
5habryka2y
Yeah, I think this is a problem, but I think contentless net-disapproval is substantially worse than that (at least for me, I can imagine it being worse for some people, but overall expect people to strongly prefer contentless net-disagreement to contentless net-disapproval). Like, I think one outcome of this voting system change is that some contentless net-disapproval gets transformed into contentless net-disagreement, which I think has a substantially better effect on the discourse (especially if combined with high approval, which I think carves out a real place for people who say lots of stuff that others disagree with, which I think is good).
2[DEACTIVATED] Duncan Sabien2y
(I added a small edit after the fact that you may not have seen.)
2habryka2y
Ah, indeed. Seems like it's related to a broader mismatch on agree/disagree vs. true/false that we are discussing in other threads.
3Rana Dexsin2y
(Preamble: I am sort of hesitant to go too far in this subthread for fear of pushing your apparent strong reaction further. Would it be appropriate to cool down for a while elsewhere before coming back to this? I hope that's not too intrusive to say, and I hope my attempt below to figure out what's happening isn't too intrusively psychoanalytical.)

I would like to gently suggest that the mental motion of not treating disagreement (even when it's quite vague) as “being kicked”—and learning to do some combination of regulating that feeling and not associating it to begin with—forms, at least for me, a central part of the practical reason for distinguishing discursive quality from truth in the first place. By contrast, a downvote in the approval sense is meant to (but that doesn't mean “will consistently be treated as”, of course!) potentially be the social nudge side—the negative-reinforcement “it would have been better if you hadn't posted that” side.

I was initially confused as well as to how the four-pointed star version you suggested elsewhere would handle this, but combining the two, I think I see a possibility, now. Would it be accurate to say that you have difficulty processing what feels like negative reinforcement on one axis when it is not specifically coupled with either confirmatory negative or relieving positive reinforcement on the other, and that your confusion around the two-axis system involves a certain amount of reflexive “when I see a negative on one axis, I feel compelled to figure out which direction it means on the other axis to determine whether I should feel bad”? Because if so, that makes me wonder how many people do that by default.
2[DEACTIVATED] Duncan Sabien2y
I think it's easy for me to parse approval/disapproval, and it's easy for me to parse assertions-of-falsehood/assertions-of-truth. I think it's hard for me to parse something like "agree/disagree" which feels set up to motte-bailey between those.
4Rana Dexsin2y
Okay. I think I understand better now, and especially how this relates to the “trust” you mention elsewhere. In other words, something more like: you think/feel that not locking the definition down far enough will lead to lack of common knowledge on interpretation combined with a more pervasive social need to understand the interpretation to synchronize? Or something like: this will have the same flaws as karma, only people will delude themselves that it doesn't?
2[DEACTIVATED] Duncan Sabien2y
Yes to both of your summaries, roughly. 
1Rana Dexsin2y
Strange-Loop relevant: this very comment above is one where I went back to “disagree” with myself after Duncan's reply. What I meant by that is that I originally thought the idea I was stating was likely to be both true and relevant, but now I have changed my mind and think it is not likely to be true, but I don't think that making the post in the first place was a bad idea with what I knew at the time (and thus I haven't downvoted myself on the other axis). However, I then remembered that retraction was also an option. I decided to use that too in this case, but I'm not sure that makes full sense here; there's something about the crossed-out text that gives me a different impression I'm not sure how to unpack right now. Feedback on whether that was a “correct” action or not is welcome.
3Vladimir_Nesov2y
Disagreement is not necessarily about truth, it's often about (not) sharing a subjective opinion. In that case resolving it doesn't make any sense, the things in disagreement can coexist, just as you and the disagreer are different people. The expectation that agreement is (always) about truth is just mistranslation, the meaning is different. Of course falsity/fallaciousness implies disagreement with people who see truth/validity, so it's some evidence about error if the claims you were making are not subjective (author-referring). For subjective claims, the alternative to disagreement being comfortable is emotional experience of intolerance, intuitive channeling of conformance-norm-enforcement (whether externally enacted, or self-targeted, or neither).
2[DEACTIVATED] Duncan Sabien2y
Right. I'm advocating that we do have a symbol for agreement/disagreement about truth, and leave the subjective stuff in the karma score.
4Vladimir_Nesov2y
When the comment is about truth, then agreement/disagreement is automatically about truth. There are comments that are not about truth, being about truth is a special case that shouldn't be in the general interface, especially if it happens to already be the intended special case of this more general thing I'm pointing at.
4[DEACTIVATED] Duncan Sabien2y
I definitely don't think that "When the comment is about truth, then agreement/disagreement is automatically about truth" is a true statement about humans in general, though it might be aspirationally true of LWers? theyhatedhimbecausehetoldthemthetruth.meme
2Rana Dexsin2y
One particularly useful thing I think this idea points in the direction of (though I think Duncan would say that this is not enough and does nothing to fix his central problem with the new system) is that the ability to default-hide each axis separately would be a good user-facing option. If a user believes they would be badly influenced by seeing the aggregated approval and/or agreement numbers, they can effectively “spoiler” themselves from the aggregate opinion and either never reveal it or only reveal it after being satisfied with their own thought processes.
4gjm2y
You would prefer, if I am understanding you right (I remark explicitly that of course I might not be), a world where the thing people do besides approving/disapproving is separating out specific factual claims and assessing whether they consider those true or false. I think that (1) labelling the buttons agree/disagree will not get you that, (2) there are important cases in which something else, closer to agree/disagree, is more valuable information, (3) reasonable users will typically use agree/disagree in the way you would like them to use true/false except in those cases, and (4) unreasonable users would likely use true/false in the exact same unhelpful ways as they would use agree/disagree.

Taking those somewhat out of order:

On #2: as has been mentioned elsewhere in the thread, for comments that say things like "I think X" or "I like Y" a strict true/false evaluation is answering the question "does the LW readership agree that Duncan thinks X?" whereas an agree/disagree evaluation is answering the question "does the LW readership also think X or like Y?", and it seems obvious to me that the latter is much more likely to be useful than the former.

On #4: some people don't think very clearly, or aren't concerned with fairness, or have a grudge against a particular other user, or are politically mindkilled, or whatever, and I completely agree with you that those people are liable to abuse an agree/disagree button as (in effect) another version of approve/disapprove with extra pretensions. But I would expect those people to do the same with true/false buttons. By definition, they are not trying hard to use the system in a maximally helpful way, attending to subtle distinctions of meaning. Hence #1: labelling the buttons true/false will not in fact make those people use them the way you would like them to be used.

On #3: Users who are thinking clearly, trying to be fair, etc., will I think typically interpret agree/disagree buttons as asking whether they agree
4[DEACTIVATED] Duncan Sabien2y
I think a single vote system baaasically boils down to approve/disapprove already. People do some weighted sum of how true and how useful/productive they find a comment is, and vote accordingly. I think a single vote already conveys a bunch of information about agreement. Very very few people upvote things they disagree with, even on LW, and most of the time they do, they leave a disambiguating comment (I've seen Rob and philh and Daystar do this, for instance). So making the second vote "agree/disagree" feels like adding a redundant feature; the single vote was already highly correlated with agree/disagree. (Claim.)

What I want, and have bid for every single time (with those bids basically being ignored every time, as far as I can tell) is a distinction between "this was a good contribution" and "I endorse the claims or reasoning therein." The thing I would find most useful is the ability to separate things out into "[More like this] and also [endorsed as true]," "[More like this] but [sketchy on truth]," "[Less like this] though [endorsed as true]," and "[Less like this] and [sketchy on truth]." I think that's a fascinatingly different breakdown than the usual approve/disapprove that karma represents, and would make LessWrong discussions a more interesting and useful place.

I don't want these as two separate buttons; I have argued vociferously each time that there should be a single click that gives you two bits. Given a two-click solution, though, I think that there are better/more interesting questions to pose to the user than like-versus-agree, especially because (as I've mentioned each time) I don't trust the LW userbase to meaningfully distinguish those two. I trust some users to do so most of the time, but that's worse than nothing when it comes to interpreting e.g. a contextless -5 on one of my posts, which means something very different if it was put there by users I trust than by users I do not trust.

On your #2, the solution I've endorsed in a fe
4philh2y
I was surprised by this because I don't remember doing it. After a quick look:

* I didn't find any instances where I said I upvoted something I disagreed with.
* But I did find two comments that I upvoted (without saying so) despite disagreeing, because I'd asked what someone thought and they'd answered and I didn't want to punish that.

I feel like I have more often given "verbal upvotes" for things I disagree with, things like "I'm glad you said this but", without actually voting? I don't vote very much for whatever reason.
4[DEACTIVATED] Duncan Sabien2y
I must've swapped in a memory of some other LWer I've been repeatedly grateful for at various points.
4philh2y
<3
2cata2y
I am not very knowledgeable about a lot of things people post about on LW, so my median upvote is on a post or comment which is thought-provoking but which I don't have a strong opinion about. I don't know if I am typical, but I bet there are at least many people like me.
2ambigram2y
In a two-factor voting system, what happens if I'm not sure if I agree or disagree, e.g. because I am still thinking about it? If agree means "I endorse the claims or reasoning and think that more people should believe them to be true", I would probably default to no (I would endorse only if I'm pretty sure about something, and not endorsing doesn't mean I think it's wrong), so it's more like +1/0 voting. But if agree means "I think this is true", disagree would then mean saying "I think this is false", i.e. more like +1/-1 voting, so I would probably abstain?
2[DEACTIVATED] Duncan Sabien2y
Yeah, I think if you're torn you just don't vote yet.

I look forward to seeing what it feels like once it's just part of things.  Currently, it feels like complexity and distraction for pretty low information value.  

Also, why the indirect metaphor text of "agreement up/down vote", rather than the much more straightforward "agree/disagree" labels?  I'm not sure about the x/check icons - I can't think of anything better, though it doesn't quite feel right, especially because it's next to the left/right voting icons, which never seemed weird to me, but now they kind of do. I do like the detailed hover text, and it makes me continue to be grateful that I'm not usually on mobile on this site.

Also, also - it's a bit confusing that karma defaults to a normal upvote by the poster, but the agreement defaults to none (but it can be added by the poster if they actually agree with themselves)?

> Also, also - it's a bit confusing that karma defaults to a normal upvote by the poster, but the agreement defaults to none (but it can be added by the poster if they actually agree with themselves)?

On this point, I suggest making it so that people cannot vote agree/disagree on their own comments. It's one thing to say "I find my own comment here so valuable that I use a strong upvote on it so more people see it" - that's weird and somewhat discouraged by the community, but at least carries some information.

But what's the equivalent supposed to be for agreement? "I find my own comment so correct that I strongly agree with it"? Just disallow that in the software.

9Kaj_Sotala2y
Now I'm imagining someone writing a devil's advocate kind of comment they themselves disagree with, and then strong-downvoting agreement.
7Rana Dexsin2y
As someone who regularly has the almost-habit of unrolling multiple perspectives but finds it difficult to express outwardly, and does in fact have different levels of agreement with (or, if you want to get first-person, “confidence in” perhaps?) things I write, I would appreciate the ability to signal this. On the karma axis, I have also retrospectively gone back and weak-downvoted my own comments on occasion when I changed my mind about whether they were net good for (my idea of what) the site (intends to be)—including ones that a number of other people had upvoted.

> Currently, it feels like complexity and distraction for pretty low information value.

Strong disagree

> Also, why the indirect metaphor text of "agreement up/down vote", rather than the much more straightforward "agree/disagree" labels?

Agree

4habryka2y
Yeah, this does just seem weird. Agree that we should just label them "agree/disagree".
7Rana Dexsin2y
Aside from finding the wording confusing to start with, there is a sign error in the disagreement text where it still says “for strong upvote” below.

I am pretty uncertain about whether this change is good, and I don't think anyone can confidently say it is or isn't good. But no other forum with voting does this (AFAIK), so it's good to try it and see what happens.

Something to think about: What sorts of observations might constitute evidence in favor of or against this system?

3Ben Pace2y
Something I'm hoping to see (and that would constitute positive evidence for me) would be a comment with a high/low agree score, and someone responding along the lines of "Huh, seems like lots of people agree/disagree with this comment, which seems wrong to me, let me flesh out a counterargument here", with that reply leading to many users changing their minds, and future comments about that point getting a very different agree/disagree score.

It might be good to explicitly state in the hover text over the upvote and downvote buttons that they mean "would like to see more of this" and "would like to see less of this", rather than the mysterious and vague "like" and "dislike".

More radically, instead of vague "agree" and "disagree", one could imagine placing a small probability distribution in each comment, with votes consisting of marking how much credence you have in whatever that comment is saying. This is more confusing if the comment makes multiple claims, but that's a failure mode of agree/disagree voting as well.

Perhaps it should be possible to highlight sections of a comment and mark them with probability distributions that pop up when you hover over them and which also subtly color the highlight (divide probabilities into three ranges: red=0-33%, green=33-67%, blue=67-100%, then weight the RGB values by the number of votes in each range), as well as putting a small unobtrusive icon shaped like the probability distribution (perhaps in the margin?) when not hovering...
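Roughly, the color-mixing I have in mind would be something like this (an illustrative sketch only, not a concrete proposal for the site's code):

```python
def highlight_color(credences):
    """Blend a highlight color from a list of credence votes in [0, 1].

    Votes in 0-33% pull the highlight toward red, 33-67% toward green,
    67-100% toward blue, each channel weighted by its share of the votes.
    """
    low = sum(1 for p in credences if p < 1/3)
    mid = sum(1 for p in credences if 1/3 <= p < 2/3)
    high = sum(1 for p in credences if p >= 2/3)
    total = low + mid + high
    if total == 0:
        return (0, 0, 0)              # no votes yet: no highlight
    return (
        round(255 * low / total),     # red channel
        round(255 * mid / total),     # green channel
        round(255 * high / total),    # blue channel
    )
```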

I just made a bunch of claims all at once... that is indeed a failure mode of this system which is going to regularly occur.

3Rob Bensinger2y
It's also more confusing if the original comment made a claim like "The sky is blue, with 70% probability." Then if a user assigns 40% probability to that comment, it's not clear whether they mean:

* I think it's 40% likely that the sky is blue.
* I think it's 40% likely that you assign 70% probability to the sky being blue. (E.g., maybe you're going back and forth about what your true belief is, and I want to weigh in on what I think your view is.)
* I think it's 40% likely that you're correct in assigning 70% probability to the sky being blue. (E.g., maybe I think you're underconfident and the true probability is 90%; or maybe I think you're overconfident and the true probability is 50%; etc.)

I think the current system isn't ideal, but I don't particularly mind this specific issue. It's already a problem for upvotes/downvotes, and I think upvotes/downvotes are a good feature on net in spite of this. (And it's at least plausible to me that adding more UI complexity in order to let someone upvote/downvote parts of posts/comments would be net-negative.)

Part of why I'm fine with this issue is that I think it's just good for people to be separately tracking agree/disagree and good/bad. Even if they don't end up voting 'agree/disagree' that often, I expect positive effects from the mental activity alone. (E.g., prompting people to think in this mode might cause them to notice that they agree with the first half of a comment but not the second half; in which case we're already making good things happen, whether or not they write a follow-up comment explicitly saying 'I agree with the first half but not the second'.)

Excellent change and the icons look nice. Do keep it!

To combat the negativity bias that internet comments have (you only comment if something is wrong/bad/broken), I'll state that I find the current design intuitive, aesthetically pleasing, useful, and on the whole a big step up from the site's past voting norms, to the point that I don't have any ideas for how that particular piece of LessWrong could be improved.

4niplav1y
After 3 months, I still stand by this assessment.

This comment is an experiment. I'm trying out a variant of the proposed idea of voting by headings/block quotes: this comment contains my comment, and the replies below contain claims extracted from my comment for agree/disagree voting. 

Agree/disagree buttons incentivize knee-jerk, low-effort reactions rather than deliberate, high-effort responses

Something I like about LW's system of upvotes meaning "things you want to see more of" and having no agree/disagree button is that there's no simple way of expressing agreement or disagreement. This means that when there's something I disagree with, I'm more incentivized to write a comment to express it. That forces me to think more deeply because I need to be able to state clearly what it is I'm agreeing or disagreeing with, especially since it can be quite nuanced. It also feels fairer because if someone went to the effort of writing a comment, then surely it's only fair that I do likewise when disagreeing. (Unless of course it was a low effort comment, in which case I could always just downvote.)

I suspect that if there's an agree/disagree button, the emotional part of me would be satisfied with clicking the disagree button, ... (read more)

0ambigram2y
Claim 2: Agree/disagree buttons are confusing or even harmful for comments that are making multiple claims. This is significant enough that there should not be an agree/disagree button for comments where agree/disagree buttons are not suitable.

* Agree: The negative consequences are significant enough that there should not be agree/disagree buttons for certain types of comments. For example, authors may be able to decide if they will allow agree/disagree votes on their comment.
* Disagree: It is acceptable to have agree/disagree votes even for posts/comments where this does not make sense, e.g. because people will adjust accordingly. We can add in a feature to disable agree/disagree votes for certain comments, but it is also okay if we don't.
0ambigram2y
Claim 1A: Agree/disagree buttons disincentivize productive conversations because clicking the disagree button satisfies the need for expressing disagreement (or agreement) at lower cost (less effort & no reputational cost, since votes are anonymous) than writing out a reply. This is a significant enough concern that we should consider its effects when deciding whether or not to go with the new voting system.

* Agree: This matches my experience: I am less likely to write replies expressing agreement/disagreement because I am now able to vote agree/disagree.
* Disagree: This does not match my experience: If I was already going to write a reply, I would still write one even if I can just vote agree/disagree.
0ambigram2y
Claim 1C: See claim 1A.

* Agree: I may or may not think that I/other users have this experience, but I think the effects are negative and significant enough, or have the potential to be significant enough, that we should see if there are ways to address this when designing a new voting system.
* Disagree: I may or may not think that I/other users have this experience, but I think that the effects are not negative or are negligible enough that we do not need to factor this into the design of a new voting system.
0ambigram2y
Claim 1B: See claim 1A.

* Agree: This may or may not match my experience, but I believe that the majority (>50%) of LW users are less likely to write replies expressing agreement/disagreement because they can now vote agree/disagree.
* Disagree: This may or may not match my experience, but I believe that the majority (>50%) of LW users would still write a reply even if they can just vote agree/disagree.

Shortform posts created in the past don't have agreement voting for new top level comments, which are otherwise intended to be analogous to new posts/threads.

5habryka2y
It's true! Not fully clear how to fix this, since the whole architecture we've chosen kind of assumes the voting-system is set at the post-level.

I appreciate this voting system in controversial threads, but find it a bit overkill otherwise.

Maybe you could make this option "enabled by default", so if a thread creator doesn't think it's a good fit for a post, they can opt out of it by unchecking a box?

6tutor vals2y
Giving a post's creator the option to enable/disable this secondary axis voting seems valuable. A post creator will probably know when his post will generally need nuanced comments with differing opinions, or is more lightweight (e.g. "what's your favourite ice cream?") and would appreciate the lighter UI.

Sounds good. I'm confused that you can only agree/disagree with comments, not posts.

Posts tend to make a lot of claims at the same time, such that "agree/disagree" becomes less meaningful, and it also comes with more substantial UI challenges (having a whole second number visible from the frontpage per post would add a lot of clutter).

9Dagon2y
It would be super-cool to have a "claim delineation" feature, where someone could set up voting independently on each separable idea in a post.
4MondSemmel2y
Even when it comes to comments, I often wish people would break up their long comments more so I could vote separately on different claims.
3Rob Bensinger2y
What about a feature where you can mark block quotes in your own comment with 'strong agree', 'weak agree', 'weak disagree', or 'strong disagree'?
6Ben Pace2y
Being able to optionally add agree/disagree voting UI to block quotes sounds sweet.
2MondSemmel2y
Another option would be heading-based voting, i.e. if you use headings in your comments, each one of those could become votable, or be treated internally as separate comments to vote on and reply to. However, one problem with all such approaches (besides the big issue of increased UI complexity, of course) is that they're kind of incompatible with the ability to edit one's own comments - what if someone votes on a block quote or heading in your comment, and then you edit that part, or remove it altogether?
1MSRayne2y
As someone who thinks out loud, I probably would annoy the heck out of you. I regularly make like five or six different orthogonal claims in one comment. My standard is "stream of consciousness, then edit like five times as I think of more things to say / better ways to say them". It's kind of a bad habit though and I should just make more comments. Anyway, my point is that I agree and would like to be able to delineate claims in my comments too.
5Ben Pace2y
(I have changed the post title from saying it's on all new posts to saying it's on all new comment threads.)
0bigbird2y
too radical

Probably most GW users have already noticed this, but just in case any have not:

GreaterWrong now supports the new agreement voting feature.

Comment vote buttons on GreaterWrong

(As usual, double-click for strong-vote.)

Instead of a single value that shows the sum of all agreement upvotes and downvotes, what’s displayed by default is a ratio of the number of ‘agree’ to the number of ‘disagree’ votes (that’s the “10:1” in the screenshot).

You can also hover over the ratio to see the aggregated total, same as the way it would appear on Less Wrong (that’s the “Epistemic Status: 19” in the screenshot below), plus some more details:

Agreement vote tooltip on GreaterWrong
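For the curious, the displayed values are derived from the raw votes roughly like so (an illustrative sketch, not GreaterWrong's actual code; the vote-list shape is made up):

```python
from math import gcd

def agreement_display(votes):
    """Sketch of a GreaterWrong-style agreement display.

    `votes` is a list of signed vote weights (e.g. +1, -1, +6 for a strong
    agree). The default view shows a reduced ratio of agree-voters to
    disagree-voters; the hover shows the weighted sum, as on LessWrong.
    """
    agrees = sum(1 for v in votes if v > 0)
    disagrees = sum(1 for v in votes if v < 0)
    divisor = gcd(agrees, disagrees) or 1
    ratio = f"{agrees // divisor}:{disagrees // divisor}"
    return ratio, sum(votes)
```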

Have any other online forums tried something similar? If so, knowing what results they had seems decently valuable. I say decently instead of a stronger word because what works for one community doesn't necessarily work for another, especially one as unique as LessWrong.

Hmm didn't really find anything similar, but here are some examples of rating systems I found that looked interesting (though not necessarily relevant):

2-factor rating systems

SaidIt: (1) Insightful & (2) Fun

SaidIt is a Reddit alternative which seeks to "create an environment that encourages thought-provoking discussion". SaidIt has two types of upvotes to choose from: 1) insightful, and 2) fun.[1]

Goodfilms: (1) quality & (2) rewatchability

Goodfilms is a movie site for users to rate, review, share films and find movies to watch. Users rate movies on two dimensions: quality and rewatchability. The ratings are displayed as a scatterplot,  giving users a better sense of the type of movie (e.g. most people agree it is highly rewatchable, but there is disagreement on its quality => may not be very good, but is fun to watch).[2]

Scatterplot for Starship Troopers ratings

Suggestion by Majestic121: (1) Agree/Disagree & (2) Productive/Unproductive

A Hacker News comment by Majestic121 suggests a 2-factor voting system:

Up/Down: Agree/Disagree
Left/Right: Makes the discussion go backward/forward

This way you could express disagreement while acknowledging that the point is inte

... (read more)

I am opposed to this change, because it makes voting more cognitively expensive. Now I feel forced to produce two judgments on each comment, which in turn makes me think about the exact difference between them. A single "liked this / disliked this" requires much less thinking. Multiply it by the number of comments each day.

Doing this for a short time may be an interesting experiment, but if this feature stays, I will probably just try to ignore it and only use the first button. But then, from my perspective, the UI just got cluttered.

I would appreciate if... (read more)

Update a couple days in: I do find myself being a little annoyed at having to decide whether to click an extra button, and confused about the norms about whether/when I should.

I tentatively quite like this.

Quite a bit of what I say gets lots & lots of karma votes but sort of middle ground net karma. It'd be helpful to know if this is about people being split on whether they agree with what I'm saying, or if it's a split on whether it belongs on LW whatsoever.

…although maybe folk won't make a very careful distinction between those when voting, so maybe I still don't get to know!

A minor suggested tweak for this experiment: Maybe change the "overall karma" hovertext to say something more along the lines of "How much do you thin... (read more)

Enabling it on all new comment threads is an experiment (you can always disable it if it ends up not working out). Getting the results of that experiment is valuable. The value of those experimental results seems like it's the more important consideration.

This so far feels really good for me to use, as a reader. It's almost immediately obvious-to-me how it works (as a reader), and I feel relief and satisfaction when I get to separate out my agreement from my upvote. 

I wonder how it'll be as a commenter and poster!

I think I'm confused why the chosen distinction is something like "good/bad vs. agree/disagree" rather than "approve/disapprove vs. true/false."

I do not have faith that people will use the agree/disagree voting for assessments of truth, which was the thing I personally wanted added to our voting system. Right now it feels like there are just two S1 monkey buttons and no nudge toward teasing out different strands of value.

5habryka2y
Is there a difference between "agree/disagree" and "true/false"? They definitely parse in my mind as the same mental action (I mean, there are some very minor associative differences, but they do really point towards the same mental action for me). I am pretty open to renaming the dimensions "approve/disapprove" vs. "true/false". That's pretty close to how I am referring to them in my mind. I think it's also currently how almost everyone I've interviewed seems to interpret the current buttons, though you disagreeing is definitely evidence there is a broader distribution.
4[DEACTIVATED] Duncan Sabien2y
I do not trust the mean, median, and modal LW users to reliably use "agree/disagree" to mean "true or mostly containing truth/false or mostly containing falsehood." So I don't trust the aggregate of a lot of people using those buttons to be good signal rather than noise.
4habryka2y
nods I think in that case we don't disagree at all about the intention of the feature (the feature is intended to point people at true/false, so in as much as you picked up something else from the wording, seems good to clarify that). I do think we disagree about what the median user will think. I do actually think we should definitely say the words "true" and "false" in the hover-over (or maybe "correct" and "incorrect", though I feel a bit confused about that one). Does putting the thing in the hover-over just resolve your crux?
6TekhneMakre2y
(As a sort of sad but seems-good-to-share datapoint, on reading this comment it has 2 karma and -10 agrees, and I felt I had to explicitly undo my "woah, status punch!" reaction. On mousing over the agrees, it turns out it was only 1 vote, and that seemed to make it easier to undo the reaction; it was just one person, not the social winds.)
4[DEACTIVATED] Duncan Sabien2y
(I feel like I should be clear that it wasn't me.)
2[DEACTIVATED] Duncan Sabien2y
I think so? Depending on the exact wording or phrasing, but yeah: if it's clear that the agreement or disagreement requested is an evaluation of truth/accuracy, then that resolves it.
4habryka2y
There are some specific edge-cases that we hit on in another thread. In particular, I would like to somehow have a more principled distinction on whether pressing agree on sentences like "I believe X" means "I think you accurately report your beliefs" vs. "I would also report the same belief". I think we almost always want to do the latter (since it's more useful information), but "true" feels like it points a bit more toward the former. Maybe we can somehow massage that into the hover-over, or at least the FAQ. Curious about your takes here. My sense is we are mostly on the same page on this distinction being important (and confusion between them seems like it could pretty easily cause a bunch of hurt).
5[DEACTIVATED] Duncan Sabien2y
I think that the correct norm is: The second button is an assessment of truth or falsehood, and in order to make that happen, we generally don't click it one way or the other on somebody saying "I believe X." If I want to note that I would also report the same belief, I do a karma upvote and leave a comment.
2habryka2y
Hmm, I think this would get rid of ~80% of the value for me, and also produce a lot of voting inconsistency, since it's kind of author-specific how much they insert "I think X" vs. just saying "X", and take the "I think" implicit. I much prefer getting data on whether people agree with X in that case, and would really value that information.
6[DEACTIVATED] Duncan Sabien2y
80% of the value in those cases, or of the button overall? 'Cos if the latter, it seems like that's our real disagreement.
4habryka2y
Button overall. Like, I think I approximately never make a comment that doesn't preface almost all of my (edit: not obviously correct) beliefs with "I think", so this would cause no agree/disagree voting to happen on my comments.

I think this is why this button will be a very strong pressure away from LW, for me.

If the button claims to be about evaluating the truth or falsehood of the content of a comment, and also my comment has said a bunch of true stuff, and has a -17 on it or something, I will absolutely find this emotionally relevant and be Sad about it and want to spend much much less time on LW.

And if the button is not about the truth or falsehood of the content, and is just a signal of ... how Other I am, versus how much I am Like the rest of the monkeys reading it, I expect to very frequently be receiving blunt You Are Not Like Us signals, all the time, and to have those signals permanently inscribed on all of my commentary ("look at what the guy that everybody disagrees with thinks!") and to find this sad and alienating.

Like, I really cannot overstate the strength of the deterrent of the -n numbers on my comments on this post, alone. I'm keeping my hand on the hot stove because this feels important, but it does not feel good.

If this change sticks as it currently is, it will be really really difficult and painful for me to be on LW. Or, to be more specific: it's already quite difficult and painful ... (read more)

So someone can make a statement: "X". X might be indexical or not. Indexical statements refer to the speaker, like "I think that probabilities are cool" or "I see a parrot". Non-indexical statements don't, like "Probabilities track priors + evidence" or "There are parrots in the world". The line is blurry: is "Probabilities are cool" implicitly indexical? Agree/disagree with X could be taken to mean "It would be true if I said X, with the index pointing to me", while true/untrue means "X is the case". If X is non-indexical, asserting agree/disagree is the same as asserting true/untrue. If X is indexical, they're not the same; disagreeing with "I see a parrot" means "I (the disagree-er) don't (myself) see a parrot", while saying "'I (the original speaker) see a parrot' is untrue" means "No, you don't see a parrot".

Duncan, what would you think about a button that means agree/disagree in that sense, i.e., "I could also say this truthfully"? (As opposed to, it would be good for me to say this, or I would actually say this.) Is there a way to make that meaning clear? habryka, would that button get the value for you?

8habryka2y
I like the sentence "I could also say this truthfully", and I feel like it points towards the right generator that I have for what I would like "agree/disagree" to mean. The tooltip of "Agree: Do you agree with the statements in this comment? Would the statements in this comment ring true if you said them yourself?" feels possibly good, though sure is a bit awkward and am not fully sure how reliably it would get the point across.
2Dagon2y
I'm pretty cynical about the ability to encourage any nuanced interpretation of such a simple input.  Enough people will just use their first impression based on the icons and a quick reading of the labels that you will never be sure what the votes ACTUALLY mean, regardless of how clear your text guidance is.   I hope that people will just not use the agree/disagree voting for comments where it's ambiguous what an entry would mean.  If it doesn't provide useful information about my reaction to the comment, why wouldn't I just let my karma vote stand alone?
7[DEACTIVATED] Duncan Sabien2y
I find the solution of "I could also say this truthfully" to be pretty clever and my gut sense is that it would resolve the distress.
2Kaj_Sotala2y
I'm confused by this, since to me it's not even a question of trust, to me it seems like "agree/disagree" means "I think this is true/false". In my head, to agree with a claim means that you think it's true, and to disagree with a claim means that you think it's false. (Of course, that also means that I'd be fine with changing the names.) Of course, "agree" does have some other meanings too (like "I agree to these terms of service"), but all of them seem clearly inapplicable to this context?
9Said Achmiz2y
Consider the following hypothetical posts: 1: 2: 3: 4: 5: 6: 7: Would you say that, for each of these posts, “agree” means “I think this is true”? If so, what would it mean to “disagree” with any of these? They are (with one partial exception) simply reports of the commenter’s views. Does “disagree” mean “You are lying or mistaken about what you claim to believe”? If not, then it seems to me that “disagree” must (at least sometimes!) mean something different from, or at least something more subtle/nuanced than, merely “I think this is false”.
3Kaj_Sotala2y
These were useful examples, thanks.
4[DEACTIVATED] Duncan Sabien2y
I think that you and similar people who are confused at my reaction (e.g. Oli, e.g. at least a little bit Rob) are basically ... colorblind to something? Like, I think that because it seems so obvious to you that agree/disagree is just about true/false that you're not seeing how many many LWers would not and are not using it in that manner. On a forum made up of just Kajs, Olis, and Robs, I would not have negative feelings about the way the second vote is used. But I think that its current agree/disagree label is much more ambiguous for people unlike yourselves, and so you're not seeing why it needs to be more carefully specified (if we want distress like mine to be less in the mix).
2Kaj_Sotala2y
It's certainly possible that we're colorblind to something, that's why I was hoping for examples of what those alternative meanings could be so I could better understand what that something is. (And feel like I got them from Said's response.)
2Pattern2y
Agree/Disagree are weird when evaluating your comment. Agree with you asking the question (it's the right question to ask) or disagree with your view?   I read Duncan's comment as requesting that the labeling of the buttons be more explicit in some way, though I wasn't sure if it was your way. (Also Duncan disagreeing with what they reflect).
4Pattern2y
Upvote (Like**)
* Quality*

Agreement (Truth)
* Veracity

Not present***: Value? Judgement? (Good/Bad)
* Good/Bad

**This is in ()s because it's the word that shows up in bold when hovering over a button.
*How well something is written?
***That is a harsher bold than I was going for.
2[DEACTIVATED] Duncan Sabien2y
I guess a different point is that, given what I understand to be the goals of LessWrong, I'm confused about valid reasons for liking something other than either:

* This just seems true, irrespective of any of its other properties (e.g. whether it reduces the heat of a conversation or not)
* This just seems like it moves the conversation in a better/more productive direction, irrespective of any of its other properties (e.g. whether it's true or not)

Writing quality is a good one to mention; I suppose I have upvoted things purely on the grounds that I wanted to incentivize [more like this] for a comment that was clear and clearly effortful.
4Pattern2y
Yeah. When something is very unclear, it's like: Is it good or bad? Impossible to decipher, I can't tell. Is it true or false? No way to tell. (It doesn't happen often, but when it does, it's usually downvoted.)

ETA: I'm not sure at the moment what other aspects there are.

I love this change, for most of the same reasons as Ben.  Thanks, LessWrong team!  Some ideas for further ways to empower finer-grained epistemics at the community level:

  1. (additional metrics) I think it'd be nice to have a drop-down or hover-over to see more fine-grained statistics on a post or comment (a rough sketch of how these might be computed is below), such as:
    1. total_upvotes := the total number of upvotes (ignoring downvotes)
    2. total_downvotes
    3. voting_activity := total_upvotes + total_downvotes
    4. voting_controversy := min(total_upvotes, total_downvotes)
    5. total_agreement
    6. total_disagreement
    7. agreement_activi
... (read more)
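A minimal sketch of how the fully-specified metrics above might be computed (the vote-list shape is made up for illustration, the agreement totals are defined by analogy with the karma ones, and the truncated items are omitted):

```python
def vote_stats(votes):
    """Aggregate per-item voting statistics of the kind suggested above.

    `votes` is a list of (karma_delta, agreement_delta) pairs, one per
    voter; this is an illustrative data shape, not the site's schema.
    """
    total_upvotes = sum(1 for k, _ in votes if k > 0)
    total_downvotes = sum(1 for k, _ in votes if k < 0)
    return {
        "total_upvotes": total_upvotes,
        "total_downvotes": total_downvotes,
        "voting_activity": total_upvotes + total_downvotes,
        "voting_controversy": min(total_upvotes, total_downvotes),
        "total_agreement": sum(1 for _, a in votes if a > 0),
        "total_disagreement": sum(1 for _, a in votes if a < 0),
    }
```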

I feel quite happy to see this implemented site-wide; ever since I saw it selectively enabled on some posts, I've thought that it worked really well and felt it incomprehensible that it wasn't already in use everywhere.

If there's going to be an agreement-disagreement axis, maybe reconsider how and whether voting strength interacts with it. I saw a comment in this thread which got to -10 agreement from one vote. Which is, if not laughably absurd, certainly utterly unintuitive. What is that even supposed to mean?

As with most things in life: this seems like it could be a real improvement, it's great that we're testing it and finding out!

People previously didn't like being downvoted into the negatives. I wonder whether the same will be true along the disagreement axis. On the one hand "I disagree with this comment" isn't really saying something contentious in the same way that "I dislike this comment" is, so being "in the red" along the disagreement axis shouldn't really feel too bad. On the other hand, I have a feeling that being "in the red" along any dimension just has a certain inherent social disapproval kinda feeling that is pretty uncomfortable.

If the latter is true (people find bei... (read more)

i think, in retrospect, this feature was a really great addition to the website.

Bug: When comment threads are loaded as a permalink, comment sorting is wrong or at least influenced by agreement karma.

Example: This comment thread. In this screenshot, the comment with 2 karma and 1 agreement is sorted above the comment with 8 karma and 0 agreement.

4Ruby2y
It's possible there's a bug in the comment ordering here that we should look into, but it's very unlikely to be because the agreement voting is being taken into account.
2MondSemmel2y
There's definitely a bug / inconsistency here: the linked comments are in a different order when viewed as a permalink vs. when viewed in the thread itself. But yeah, I was way too quick to assume, based on a single data point, that this was a) a new problem and b) caused by or influenced by agreement karma or the related recent website update. Oops. I thought these things were likely related because, as stated in this thread, only karma (but not agreement) is supposed to dictate how things are ordered; so when I saw a wrong ordering with differing agreement scores, I figured the latter had to be the reason. Anyway, do you want me to post this issue as a github ticket, or is this comment enough? Actually, I have the same question for my two aesthetics-related comments here and here.

I notice myself wanting to vote on things I like, and being confused about whether to upvote or agree.

My guess at what's happening: the part of me that forms an opinion is basically driven by status (reward good thought vs slap down bad thought) and I've tuned this well enough that it's a good judge of quality. Two-axis karma forces me to go to system 2, which is sometimes good, but my system 1 is already pretty good at flagging things like "just because they disagree with you doesn't mean they suck or you suck".

I'll probably get used to it.

Thread for changes that might be good for the voting system. (I'm going to make my ideas separate comments so people can agree-vote on ones they like)

i didn't intend to comment, but then i read the comment about fighting negativity bias and decided the commenter was right, so I'm doing it too - this new feature is really good. i encountered it in the wild, found it intuitive (except for which side is which, but when i get it wrong the colors clarify it and i fix it immediately), and it's basically a very good and useful feature. in my model of the world, 70%+ of users like this feature and don't say so, so the result is the comment section below.

i also find it much better than Duncan's suggestion below, for reaso... (read more)

Would it be possible to add a forum-wide search/sorting option for comments with unusually high [disagreement*karma]?

Usually, karma is strongly correlated with agreement on some level, even with this system. So if a comment has high disagreement and high karma, the karma has been deconfounded and seems much more likely to have been caused by people having updated on it, or by readers who otherwise thought the arguments were underappreciated. And if a high proportion of people updated on it, then it's more likely that I will too. (A rough sketch of the sort I have in mind is below.)

Finding comments like this is a great ... (read more)
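Concretely, the sort would look something like this (a rough sketch only; the field names are made up, not the site's schema):

```python
def underappreciated(comments):
    """Rank comments by karma * net disagreement, highest first.

    Assumes each comment is a dict with a 'karma' score and a signed
    'agreement' score; only comments with positive karma and net
    disagreement qualify, per the deconfounding argument above.
    """
    candidates = [c for c in comments if c["karma"] > 0 and c["agreement"] < 0]
    return sorted(
        candidates,
        key=lambda c: c["karma"] * -c["agreement"],
        reverse=True,
    )
```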

1Emrik2y
FWIW, if you ever reintroduce multi-axis voting of some sort, the primary axis I'd like to see is "novelty" (can't be downvoted), for the same reasons as above.
3Pattern2y
I think some aspects of 'voting' might benefit from being public. 'Novelty' is one of them. (My first thought when you said 'can't be downvoted' was 'why?'. My filtering desires for this might be...complex. The simple feature being: I want to be able to sort by novelty. (But also be able to toggle 'remove things I've read from the list'. A toggle, because I might want it to be convenient to revisit (some) 'novel' ideas.))
2Ben Pace2y
Hm, k, have added an edit.

How do sorting algorithms (for comments) work now?

2Ben Pace2y
The same as always. Karma score, with a hint of magic (i.e. putting new comments higher for a period on the order of a few hours). As it says in the OP section titled "How the system works", agree/disagree voting has no effect on sorting.
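(For anyone curious what that could look like mechanically, here's a rough sketch with made-up numbers; it is not the actual site formula:)

```python
import time

def comment_sort_key(comment, now=None, boost_hours=3, boost=5):
    """Rough sketch: order by karma, with a temporary boost for new comments.

    The boost size and window are invented for illustration; the agreement
    score is deliberately ignored, matching the description above.
    """
    now = time.time() if now is None else now
    age_hours = (now - comment["posted_at"]) / 3600
    return comment["karma"] + (boost if age_hours < boost_hours else 0)

# comments.sort(key=comment_sort_key, reverse=True)
```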
2Pattern2y
It didn't state that explicitly re sorting, but looking at: I see what you mean. (This would have been less of a question in a 'magic-less sorting system'.)

I'm confused about why you can agree with your own post. What is that supposed to do?

(I strong-agreed with this post.)

2Pattern2y
Upvoting/downvoting self
* Sorting importance

'Agreeing'/'Disagreeing'
* 'I have discovered that this (post (of mine)) is wrong in important ways'
* or
* Looking back, this has still stood the test of time.

These methods aren't necessarily very effective (here). Arguably, this can be done better by having them be public (likely in text). What you think of your work is also important. ('This is wrong. I'm leaving it up, but also see this post explaining where I went wrong, etc.')

See the top of this article for an example: https://www.gwern.net/Fake-Journal-Club ("certainty: log, importance: 4")
2Ben Pace2y
Nothing much. It's probably the right call to just remove self-agreeing.

I agree this.

  • Agree/disagree voting does not translate into a user's or post's karma — its sole function is to communicate agreement/disagreement. It has no other direct effects on the site or content visibility.

That's not much incentive for me to stop upvoting/downvoting stuff I agree/disagree with, then, is it?

[/pro-echo-chamber-jack-sparrow]

1tutor vals2y
If you're really into manipulating public opinion, you should also consider strong upvoting posts you disagree with but that are weakly written, so as to present an easily defeated strawman. I'd say you're correct that this new addition doesn't change much about the existing incentives to manipulate comment visibility, but that's not the point of this addition, so it's not a negative of this update. [Edited for clarity thanks to Pattern's comment]
2Pattern2y
Consider replacing this long phrase (above) with 'consider'.
1tutor vals2y
Partially agreed on replacing 'have to be thinking about' with 'consider', i.e.: If you're really into manipulating public opinion, you should also consider strong upvoting [...] Disagreed on replacing the "should also" part, because it reminds you this is only hypothetical and not actually good behaviour.

Displaying the combined agreement score loses context.

It may be more helpful to split the information out:

< 45 > 6 people agree, 42 people disagree.

Needless to say, a lot of people won't simply vote reflecting their own agreement or disagreement, but aim at the net amount of agreement minus disagreement they think the comment should have.

I started a link post to this on the EA Forum (https://forum.effectivealtruism.org/posts/e7rWnAFGjWyPeQvwT/2-factor-voting-karma-agreement-for-ea-forum) to discuss if it makes sense over there.

One thing I suggested as a variation of this:



> B. Perhaps the 'agreement' axis should be something that the post author can add voluntarily, specifying what is the claim people can indicate agreement/disagreement with? (This might also work well with the metaculus prediction link that is in the works afaik).


 

2Ben Pace2y
My first thought against would be that it would end up pretty misleading. Like, suppose the recent AGI lethalities post had this, and Eliezer picked "there is at least a 50% chance of extinction risk from AGI" as the claim. Then I think many people would agree with it, but that would look (on first glance) like many people agreed with the post, which actually makes a way more detailed and in-depth series of claims (and stronger claims of extinction), creating a false consensus. (I personally think this is the neatest idea so far, that allows the post author to make multiple truth-claims in the post and have them independently voted on, and doesn't aggregate them for the post overall in any way.)
[+][comment deleted]2y10