I think the argument generally makes sense, but I'd point out a few things:
In general, I think it's good to practice letting go and simply accepting that you can't win every argument or change everyone's mind on any one thing. I'd say Cognitive Behavioral Therapy and meditation might be good suggestions for people who frequently get worked up after an argument with others and who ruminate (with associated negative feelings) on the argument for hours to days after the fact.
In general I agree (though I already did before reading the arguments) that there's probably hype around AI-based disinformation, and that argument 2, i.e., that we actually don't have a lot of evidence of large changes in behavior or attitudes due to misinformation, is the strongest of the arguments. There is evidence from the truth effect that could be used to argue that a flood of disinformation may be bad anyway; but the path is longer (the truth effect leads to slightly increased belief, which then has to slightly influence behavior; probably something we could detect in a population of 8 billion, but nothing with losses as large as war or environmental pollution/destruction). The other arguments are significantly weaker, and I'd note the following things:
Yeah, without an individual differences approach, my opinion is that Julia's idea of a scout mindset is a jangle (https://en.wikipedia.org/wiki/Jingle-jangle_fallacies), as an "accuracy motivation" has been part of the psychology literature since at least the 80s (see, e.g., http://www.craiganderson.org/wp-content/uploads/caa/Classes/~SupplementalReadings/Attribu-Decision-Explanation/90Kunda-motivated-reasoning.pdf, where Kunda makes the case for directional motivation and briefly mentions non-directional motivations, such as accuracy motivations). I didn't have time to look at the conditional reasoning test carefully, but the use of the Wason 2-4-6 task suggests to me that Bastian is correct and this would probably be highly correlated with AOT / intellectual humility and/or with the cognitive reflection test. To be perfectly honest, I think a good test of an accuracy motivation as a stable trait would not be this sort of thing but, rather, a test that involves actually resisting motivated reasoning, a bit like how the cognitive reflection test aims to measure analytical reasoning not by seeing whether people know basic arithmetic but by looking at how well they resist intuitive reasoning. Do note that AOT is sometimes theorized to be how much one can resist myside bias, which is the quintessential motivational bias. I'd suggest reading up on it: https://www.tandfonline.com/doi/pdf/10.1080/13546780600780796
Hello Adam! I had donated to GiveDirectly last year but would have missed this year's matching campaign if it weren't for this post. I have given almost nothing in comparison with you (just a mere 100€) but would like to say I feel very grateful for being able to double my contribution thanks to you, and that you're an inspiration. I hope I too can contribute more in the future. Thanks!
Just to comment on the results: In hindsight, these results seem pretty obvious. In the first years of my Bachelor's in Psychology we had this saying: "The best predictor of behavior is past behavior." And from what I know of research on the effectiveness of interventions over time, things such as implicit attitude change (e.g., https://psycnet.apa.org/manuscript/2016-29854-001.pdf) or fake news belief interventions (https://onlinelibrary.wiley.com/doi/full/10.1111/jasp.13049) typically lose a lot of effectiveness within days or weeks, so seeing an intervention not work after 6 months is not surprising at all. But it would have been nice to see the prior expectations of people who think putting money into EAGx makes sense, plus people who do research into behavior change, to know whether this lack of surprise I feel is just me being biased. Regardless, great to see the impact of EAGx being tested!