I think in general the argument makes sense, but I'd point out a few things:

  • Bad arguments of the fallacy type actually do not take long to reply to. You can simply tell the person you think X is a fallacy because of Y and move on.
  • Bad arguments of the trolling type require you to detect when a person is not interested in the argument itself but in making you angry, etc. Trolling is typically a feature of anonymous communication, although some people enjoy doing it face-to-face. In general, one should avoid feeding the trolls, of course, because doing so achieves nothing other than entertaining (or, on certain platforms, even giving money to) the troll. In person, offer the might-be-troll your best argument and see how they react. If their answer does not reveal reflection, just move on.
  • "Bad arguments" of the sort "people just say X is wrong" typically just reveal a difference in values. It's possible to argue, e.g., about the positive and negative consequences associated with a given thing (e.g., homosexuality, cultural appropriation), but it's not possible to argue the valence of the thing in itself (e.g., whether these things are bad in and of themselves). Sometimes you can argue based on the internal logic of a value system (e.g., "Ok, so you think homosexuality is bad because the Bible says so, but the Bible also says you shouldn't eat pork or seafood, and you do. Why follow it for some things and not others?"), but I find these discussions are usually not worth it unless done for the enjoyment of both parties or between people who will have a long-term close relationship, in which value alignment, or at least value awareness, is important.

In general, I think it's good to practice letting go and just accepting that you can't win every argument or change everyone's mind on any one thing. I'd say cognitive behavioral therapy and meditation might be good suggestions for people who frequently get worked up after an argument and ruminate on it (with associated negative feelings) for hours to days after the fact.

In general I agree (though I already did before reading the arguments) that there's probably hype around AI-based disinformation, and that argument 2, i.e., that we actually don't have much evidence of large changes in behavior or attitudes due to misinformation, is the strongest of the arguments. There is evidence from the truth effect that could be used to argue that a flood of disinformation may be bad anyway; but the causal path is longer (the truth effect leads to slightly increased belief, which then has to slightly influence behavior; probably something we can detect in a population of 8 billion, but nothing with losses as large as war or environmental pollution/destruction). The other arguments are significantly weaker, and I'd note the following:

  1. It's interesting that, in essence, the central claim relies on the idea that there's widespread misinformation about the fact that misinformation does not impact people's attitudes, behavior, etc., that much. [Edit: In general, I'd note that most studies on misinformation prevalence depend on a) the representativeness of the data collected and b) our ability to detect misinformation. As for the first, we don't know whether most studies are representative (although it is easy to suspect they aren't, given their recruitment methods, e.g., MTurk and Prolific), nor whether the experiences of these already-probably-unrepresentative people are representative (studies mostly focus on one platform, with X, formerly Twitter, dominating most research). As for the second, self-reported misinformation exposure and lists of "poor-quality website" URLs are probably very noisy (the latter depending on how exhaustive the URL lists are, and relying on the assumptions that good-quality outlets never share misinformation and that bad-quality outlets never share accurate information; and of course this does not allow the detection of misinformation shared not through a URL link but through users' own words, images, videos, etc.). Alternatives such as human fact-checkers and fact-check databases are also imperfect, as they reflect the biases (e.g., in the selection of what to fact-check in the first place) and gaps in knowledge that human curators naturally have. In sum: we should have a good amount of uncertainty around our estimates of misinformation prevalence.]
  2. The dig at the left with "aligns better with the prevailing sensibility and worldview of the liberal commentariat" is unnecessarily inflammatory, particularly when no evidence of a difference in perspective between left and right is advanced (and everyone remembers how Trump weaponized the term "fake news"; regardless, left-right asymmetries aren't settled with anecdotes but with actual data; or, better yet, instead of focusing on "who does it more," focus on getting everyone to stop doing it).
  3. It sounds odd to me to assume, as in 4., that what people fear is only or even mostly anti-establishment propaganda. In part because, without any evidence, this is just mind-reading; in part because, given modern levels of affective polarization, the most likely outcome is that when one's party is not in power (or has only recently left it), we are more likely to believe that the president (whom we don't like) is using their powers for propaganda, and so are more worried about establishment propaganda, as it then becomes salient (e.g., in the US context: https://edition.cnn.com/2021/01/24/politics/trump-worst-abuses-of-power/index.html | https://www.nytimes.com/2023/06/14/business/media/fox-news-biden-dictator-trump.html).

Yeah, without an individual-differences approach, my opinion is that Julia's idea of a scout mindset is a jangle (https://en.wikipedia.org/wiki/Jingle-jangle_fallacies), as an "accuracy motivation" has been part of the psychology literature since at least the 80s (see, e.g., http://www.craiganderson.org/wp-content/uploads/caa/Classes/~SupplementalReadings/Attribu-Decision-Explanation/90Kunda-motivated-reasoning.pdf, where Kunda makes the case for directional motivation and briefly mentions non-directional motivations, such as accuracy motivations). I didn't have time to look at the conditional reasoning test carefully, but the use of the Wason 2-4-6 task suggests to me that Bastian is correct and that this would probably correlate strongly with AOT / intellectual humility and/or with the cognitive reflection test. To be perfectly honest, I think a good test of accuracy motivation as a stable trait would not be this sort of thing but, rather, a test that involves actually resisting motivated reasoning; much as the cognitive reflection test aims to measure analytical reasoning not by checking whether people know basic arithmetic but by looking at how well they resist intuitive responses. Do note that AOT is sometimes theorized as the degree to which one can resist myside bias, which is the quintessential motivational bias. I'd suggest reading up on it: https://www.tandfonline.com/doi/pdf/10.1080/13546780600780796

Hello Adam! I donated to GiveDirectly last year and would have missed this year's matching campaign if it weren't for this post. I have given almost nothing in comparison with you (a mere 100€), but I would like to say I feel very grateful for being able to double my contribution thanks to you, and that you're an inspiration. I hope I too can contribute more in the future. Thanks!