
As many of you know, on LessWrong there is now:

two axes on which you can vote on comments: the standard karma axis remains on the left, and the new axis on the right lets you show how much you agree or disagree with the content of a comment.

I was thinking we should have this on the EA Forum for the same reasons: to avoid (i) agreement with the claim/position being confounded with (ii) liking the contribution to the discussion/community.

Reading the comments over there, it seems there are mixed reviews. Some key critiques:

  1. Visual confusion and mental overload (maybe improvable with better formats)
  2. It's often hard to discern what 'agree with the post' means.

My quick takes:

A. We might consider this for the EA Forum after LW works out the bugs (and the team is probably already considering it)

B. Perhaps the 'agreement' axis should be something that the post author can add voluntarily, specifying the claim people can indicate agreement/disagreement with? (This might also work well with the Metaculus prediction link that is in the works, afaik.)

What are your thoughts...? [1]

  1. On two-factor voting for EA Forum overall
  2. On "post author chooses what the agreement target ('central claim') is"
  3. On whether the considerations here are different for EA Forum vs. LessWrong

    1. Meta: I wasn't sure whether to post this as a link post or a question post ↩

Comments

I would find this helpful - I'm tired of being downvoted when I provide useful information to support an argument that people overall disagree with!

Seems useful, especially for critical posts. I may want to upvote them to show my appreciation and have more people read them, even though I still disagree with e.g. the conclusion they draw.

Big support!

  1. By making agreement a separate axis, people will feel safer upvoting something for quality/novelty/appreciation with less of a risk that it's confounded with agreement. Unpopular opinions that people still found enlightening should get marginally more karma. And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with.[1]
  2. We now have an opinion poll included for every comment/post. This just seems like a vast store of useful-but-imperfect information. Karma doesn't already provide it, since it has more confounders.

But, observing how it empirically plays out is just going to matter way more than any theoretical arguments I can come up with.

  1. ^

    Toy model here, but: The health of an epistemic community depends on, among other things, an optimal ratio between the transmission coefficients of technical (gears-level) evidence vs testimonial (deference) evidence. If the ratio is high, people are more likely to be exposed to arguments they haven't heard yet, increasing their understanding and ability to contribute to the conversation. If the ratio is low, people are mainly interested in deferring to what other people think, and understanding is of secondary importance.

I think it would be better if agreement were expressed as a percentage rather than a score, to make it feel more distinct / easier to remember which axis is which.

Interesting point. 

I guess it could be useful to be able to see how many have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.

Yeah, to proxy this, maybe I'd imagine something like adding five virtual upvotes and five virtual downvotes to each comment to start it near 50%, so it's a strong signal if you see something with an extreme value.

Maybe that's a bad idea; makes it harder (you'd need to hover) to notice when something's controversial.
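
For concreteness, here's a minimal sketch of what that virtual-vote prior could look like (the five-vote count comes from the comment above; the function name and numbers are purely illustrative, not anything the Forum actually implements):

```python
def smoothed_agreement(agrees: int, disagrees: int, prior_votes: int = 5) -> float:
    """Agreement percentage after adding `prior_votes` virtual agree and disagree
    votes, so comments with few real votes start near 50% rather than at 0% or 100%."""
    return 100 * (agrees + prior_votes) / (agrees + disagrees + 2 * prior_votes)

# One real agree-vote barely moves the display off 50%...
print(round(smoothed_agreement(1, 0), 1))    # 54.5
# ...while forty votes at 75% raw agreement show through much more clearly.
print(round(smoothed_agreement(30, 10), 1))  # 70.0
```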

I seem to recall some places, when sorting things by average rating, will use something like the lower bound of a 90% confidence interval on the mean. This doesn't solve which number to display, though, as it is not a very user-intuitive number to read.
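
A rough sketch of that kind of sorting rule, using the lower bound of the Wilson score interval as one common way to implement it (the function and numbers below are illustrative, not anything the Forum actually uses):

```python
import math

def wilson_lower_bound(agrees: int, total: int, z: float = 1.64) -> float:
    """Lower bound of the Wilson score interval for the agreement rate
    (z ~= 1.64 corresponds to roughly a 90% two-sided interval). Items
    with few votes get pulled down, so they sort below items whose high
    agreement rate is backed by many votes."""
    if total == 0:
        return 0.0
    p = agrees / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

print(round(wilson_lower_bound(3, 4), 2))    # 0.36 -- 75% agreement, only four votes
print(round(wilson_lower_bound(30, 40), 2))  # 0.62 -- 75% agreement, forty votes
```

This cleanly separates the 75%-with-four-votes and 75%-with-forty-votes cases mentioned above, though, as you say, the bound itself isn't an intuitive number to display.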

I have been extremely excited about this feature for a long time now, as part of a vision for a "Better Social Network".

Even more than the object-level idea, I'm excited about the meta-level approach of trying things like this sometimes and seeing how they go (while keeping the experiments inexpensive and not accidentally breaking something important). I am guessing most of the results won't be predicted in advance anyway. Still, this specific feature seems very promising to me.

I'll add: this kind of feature is really complicated and is not something we'll solve in a post; this is what product people are for.

(But, cough cough, CEA are hiring a product manager)

I strongly disagree with the first part (by which I mean I'm not excited), and strongly agree with the second (cheap exploration is good and consequences are hard to foresee).

I wanted to write that I couldn't decide whether to upvote or not, because it fits the narrative nicely, but eventually I did.

lol

you know what would really help? emoji replies!

Imagine all the emojis you'd use to reply to this comment of mine right now!!

Although, if it's not too late, maybe 'two-factor' could use a better name? I suspect many people get confused because they associate it with two-factor authentication.

Thanks for the post! With 80 karma, this is surprisingly (to me) popular! I've been watching LessWrong experiment with multiple very different forms of multi-factor voting, and they now seem to have settled on this one. As you note, there have been bugs, but they have recently fixed some obvious UI issues. (And we really appreciate all their work!) This now seems like an appropriate time for the Forum to try it. We plan on testing it out with some comment-heavy posts, and we'll see how it goes from there.

Is 2-factor voting popular, or did they love my epistemic rigor and rhetorical clarity? :)

Seriously, though, this is exciting and I'm eager to see how it goes. It seems to me to be very much on-brand for the EA forum.


+1 to "post author chooses what the agreement target ('central claim') is"

I don't support having two rating systems. For one, it seems overly complicated and a hindrance to communication (particularly for newcomers).

Second, I don't think agreement and "liking the contribution to the discussion" are that distinguishable to begin with - particularly by a person about their own views. We're biased, political creatures, and trying to contain that will only result in a superficial improvement that masks the bias and politicization that will still exist in all ratings and content.

I agree it's not a panacea, but I could imagine it helping mitigate bias/politicization in a few ways:

  • It prompts people to think about 'liking' and 'agreeing' as two separate questions at all. I don't expect this to totally de-bias either 'liking' or 'agreeing', but I do expect some progress if people are prompted like this.
  • Goodwill and trust is generated when people are upvoted in spite of having an unpopular-on-the-forum view. This can create virtuous cycles, where those people reciprocate and in general there are fewer comment sections that turn into 'one side mass-downvotes the other side, the other side retaliates, etc.'.

Example: Improving EA Forum discourse by 8% would obviously be worth it, even if this is via a "superficial improvement" that doesn't fix the whole problem.

I can't think of many examples where I agreed with a position but didn't want to see it, or wanted to see a position that I disagreed with. I think I've only experienced the latter case when I want to see discussions about the topic. In those cases, I feel like you should weigh the good against the bad when upvoting and choose between the five levels (if you count strong votes and no vote) that the current system provides. Also, if you believe that a topic you want to talk about (and believe others do too) is going to be divisive, you can just write "Let's discuss X" and then reply to it with your opinion.

I read examples in the comments that I disagreed with, and I feel more comfortable counterarguing them all in this comment:

  • Useful information for an argument that people disagrees with: Then how is it useful?
  • Critical posts which you disagree with that you appreciate and want other people to read: Then why do you appreciate them? It seems you like them in part but not fully, I would just not vote them. And why do you want people to read it? Seems like a waste of time.
  • Voting something for quality, novelty or appreciation: I believe that the voting system is better as a system where you vote what you want other people to read or what you enjoy seeing. And I think that we should appreciate each other in other ways or places (like in the comments).
  • Unpopular opinions that people still found enlightening should get marginally more karma: That sounds like opinions that change the minds of some people, but get little karma or even negative points. I don't know how would the people that disagrees with it would downvote it less than other opinions which they disagree with. In other words, I don't know how exactly the "enlightenment" is seen by the ones blind to it lol, or what would "enlightening" mean.
  • And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with: How is that useful? I'm not that familiar with the rationalist community so maybe this is obvious, or maybe I'm misunderstanding. Are you saying that you agree with some arguments (so you update beliefs) but not all of them and you don't change the conclusion? That probably would mean no vote at all from me, and depending the specifics weak upvote or downvote.
  • It prompts people to distinguish between liking and agreeing: Why would you like a contribution to a discussion when you don't agree with the contribution?
  • There would be fewer comment sections that turn into 'one side mass-downvotes the other side, the other side retaliates, etc.': Why would there be a difference with this new axis?

Agree with:

  • Goodwill and trust is generated when people are upvoted in spite of having an unpopular view.

But I believe that the downsides are worse. If you were to encourage people to upvote unpopular views, then they could get even more points than the popular views, no matter how "bad" they are. Also, there could be more bad arguments at the top than good ones. That sounds pretty confusing and annoying, honestly. I think better options are to reply to those comments and upvote good replies, and to neither show points below 0 nor hide those comments.

Also:

  • It sounds to me that to vote in a two vote system would be to vote something and then to think if I agree or disagree with the comment and then to vote again >95% of the time for kinda the same thing (agree after like, disagree after dislike) and then to see the same number repeated or to see a difference in them and wonder about what does it mean and if it exists because people are voting in just one system.
  • Really bad for new people.
  •  There are cases where there isn't anything to agree or disagree with, like information and jokes.
     