
Reasonably often (maybe once or twice a month?) I see fairly highly upvoted posts that I think are basically wrong in something like "how they are reasoning", which I'll call epistemics. In particular, I think these are cases where it is pretty clear that the argument is wrong, and that this determination can be made using only knowledge that the author probably had (so it is more about reasoning correctly given a base of knowledge).

Sometimes I write a comment explaining why. If I reliably did this on all of the posts then you could still rely on karma as an indicator of epistemic soundness, but sadly I don't, because it's actually a fair amount of work and my time has high opportunity cost. So here is your PSA: for any particular high-karma post, knowing nothing else about the post besides that it is high karma, there is a non-trivial probability that I would find significant reasoning issues in that post. You can't rely solely on karma as a strong signal of epistemics.

Clear, strong examples from the EA Forum:

Moderate examples from the EA Forum (either they were lower karma, or only one particular thing was off instead of most of the post, or something else):

Weak examples from the EA Forum that are still some evidence (for these ones it's pretty likely someone would have made the points I made if I hadn't; that's not true for the others):

Clear, strong examples from LessWrong:

Agreed. To some extent it's OK for bad posts to get upvoted. But I think the fact that posting volume is so much higher now means we should be able to trade off some of that volume for greater post quality. This could be by having a review process for posts, or reinstating the minimum upvote requirement before a user is allowed to post. I also think there may be some achievable gains that don't require trading off volume, such as improving the upvote strength algorithm.

Fwiw my view is that forum members shouldn't upvote posts whose reasoning isn't up to standard even if they agree with the conclusion.

I'd assume that forum members don't notice that the reasoning is bad.

As evidence in favor of this view, at least sometimes after I post such a comment, the post's karma starts to go down, suggesting that the comment informed voters about bad reasoning that they hadn't previously noticed. (Possibly this happened in most of the examples above, I wasn't carefully tracking this and don't know of any way to check now.)

I'd assume that forum members don't notice that the reasoning is bad.

Probably yeah, at least in part. Sometimes they may notice it a bit but put insufficient weight on it relative to the fact that they agree with the conclusion. But some may also miss it altogether.

My comment was in response to the claim that "to some extent it's OK for bad posts to get upvoted".

Ah, I interpreted that claim as "it's not a huge priority to prevent bad posts from being upvoted, regardless of how that happens", rather than "it's fine for forum members to upvote posts whose conclusions they agree with even if they see that the reasons are bad".

Yes. But people are sticky, so you need to vest more vetting power in the people who evaluate appropriately. The question is how to do that.

I know this is just a small detail and not what you wrote about, but: much of your comment on the recommender systems post hinged on news articles being uncorrelated with the truth. Do you have data to back that up?

I'm replying here because it's a strong claim that's relevant to many things beyond that specific post.

I have data in the sense that when I read news articles and check how correct they are, they are usually not very correct. (You can have more nuance than this, e.g. facts about what mundane stuff happened in the world tend to be correct.)

I don't have data in the sense that I don't have a convenient list of articles and ways they were wrong such that I could easily persuade someone else of this belief of mine. (Though here's one example of an article that you at least have to read closely if you want to not be misled.)

Also, I could justify ignoring those two particular news articles without this general claim, at least to myself. I did briefly look at them before I wrote that comment; I didn't particularly expect to believe them, but if they had been the rare good kind of news article I would have noticed.

For radicalization, I know specific people who have looked into it and come away unconvinced; Stefan Schubert links to some of this work in a different comment on that post.

The article about social media being addictive is basically just a bunch of quotes from people rather than particular studies / data. It generally seems pretty easy to find people saying things you want, so I don't update much on "such-and-such person said X". I've also once experienced, and many times heard stories of, journalists adversarially quoting people to make it sound like their position was very different from what it actually was, so I usually don't even update on "such-and-such person believes X".

I'm wondering if it'd be good to have something special happen to posts where a comment has more karma than the OP. Like, decrease the font size of the OP and increase the font size of the comment, or display the comment first, or have a red warning light emoji next to the post's title or ...

Or maybe the commenter gets a $1,000 prize whenever that happens.

Good versions of "something special" would also incentivize the public service of pointing out significant flaws in posts by making comments that have a shot at exceeding the OP's karma score.

Obviously "there exists a comment that has higher karma than the OP" is an imperfect proxy of what we're after here, but anecdotally it seems to me this proxy works surprisingly well (though maybe it would stop due to Goodhart issues if we did any of the above) and it has the upside that it can be evaluated automatically.
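The automatic check could look something like the following minimal sketch. This is purely illustrative: the post/comment data structure and field names (`karma`, `comments`, `title`) are assumptions for the example, not any real forum API.

```python
# Sketch of the proxy: flag posts where at least one comment
# has strictly higher karma than the post itself.

def flag_posts(posts):
    """Return titles of posts out-scored by one of their own comments."""
    flagged = []
    for post in posts:
        # Highest-karma comment, or 0 if the post has no comments.
        top_comment_karma = max(
            (c["karma"] for c in post["comments"]), default=0
        )
        if top_comment_karma > post["karma"]:
            flagged.append(post["title"])
    return flagged

# Toy data for illustration only.
posts = [
    {"title": "Post A", "karma": 120,
     "comments": [{"karma": 45}, {"karma": 150}]},
    {"title": "Post B", "karma": 80,
     "comments": [{"karma": 30}]},
]

print(flag_posts(posts))  # -> ['Post A']
```

Anything beyond this (highlighting the comment, adjusting display, awarding a prize) would be UI or policy built on top of the same simple comparison.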

I sometimes see people arguing for people to work in area A, and declaring a conflict of interest that they are personally working on area A.

If they already were working in area A for unrelated reasons, and then they produced these arguments, it seems reasonable to be worried about motivated reasoning.

On the other hand, if because of these arguments they switched to working in area A, this is in some sense a signal of sincerity ("I'm putting my career where my mouth is").

I don't like the norm of declaring your career as a "conflict of interest", because it implies that you are in the former rather than latter category, regardless of which one is actually true. (And the latter is especially common in EA.) However, I don't really have a candidate alternative norm.

I share your feeling towards it... but I also often say that one's "skin in the game" (your latter example) is someone else's "conflict of interest."

I don't think that the listener / reader is usually in a good position to distinguish between your first and your second example; that's enough to justify the practice of disclosing this as a potential "conflict of interest." In addition, by knowing you already work for cause X, I might consider whether your case is affected by some kind of cognitive bias.

I'm not objecting to providing the information (I think that is good), I'm objecting to calling it a "conflict of interest".

I'd be much more keen on something like this (source):

For transparency, note that the reports for the latter three rows are all Open Philanthropy analyses, and I am co-CEO of Open Philanthropy.