This is kind of tangential, but anyone who is FODMAP-sensitive would be unable to eat any of Soylent, Huel, or Mealsquares as far as I'm aware.
Relevant blog post I wrote: https://bounded-regret.ghost.io/film-study/
Thanks for writing this! One thing that might help would be more examples of Phase 2 work. For instance, I think that most of my own work is Phase 2 by your definition (see here for a recent round-up), but I am not entirely sure, especially given the claim that very little Phase 2 work is happening. Other work in the "I think this counts, but I'm not sure" category would be that done by Redwood Research, Chris Olah at Anthropic, or Rohin Shah at DeepMind (apologies to anyone I've unintentionally left out).
Another advantage of examples is that they could help highlight what you want to see more of.
I'm teaching a class on forecasting this semester! The notes will all be online: http://www.stat157.com/
It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like "instructions" than "arguments", and Rubi was calling for suppressing arguments on the grounds that they might be believed.
At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.
I don't think "the work got published, so the censorship couldn't have been that bad" really makes sense as a reaction to claims of censorship. You won't see work that doesn't get published, so this is basically a catch-22 (either the work gets published, in which case the argument concludes there wasn't censorship, or it doesn't get published, in which case no one ever hears about it).
Also, most censorship is soft rather than hard, and comes via chilling effects.
(I'm not intending this response to make any further object-level claims about the current situation, just that the quoted argument is not a good argument.)
I also agree with you. I would find it very problematic if anyone were trying to "ensure harmful and wrong ideas are not widely circulated". Ideas should be argued against, not suppressed.
Re: Bayesian thinking helping one to communicate more clearly. I agree that this is a benefit, but I don't think it's the fastest route to clear communication, or the one with the highest marginal value. For instance, when you write:
A lot of expressed beliefs are “fake beliefs”: things people say to express solidarity with some group (“America is the greatest country in the world”), to emphasize some value (“We must do this fairly”), to let the listener hear what they want to hear (“Make America great again”), or simply to sound reasonable (“we will balance costs and benefits”) or wise (“I don’t see this issue as black or white”).
I'm immediately reminded of Orwell's essay "Politics and the English Language". I would generally expect people to learn more about clear, truth-seeking communication from reading Orwell (and other good books on writing) than from being Bayesian. Indeed, I find many Bayesian rationalists to be highly obscurantist in practice, perhaps more so than the average similarly educated person, and I feel that rationalist community norms tend to reward rather than punish this, because many people are drawn to deep but difficult-to-understand truths.
I would say that the value of the rationalist project so far has been in generating important hypotheses, rather than in clear communication around those hypotheses.
I just don't think this is very relevant to whether outreach to debaters is good. A better metric would be the life outcomes of top high-school debaters. I don't have hard statistics on this, but the two very successful debaters I know personally are both now researchers at the top of their respective fields, and certainly well above average in truth-seeking.
I also think the above arguments are common tropes in the "maths vs. fuzzies" culture war, and given EA's current dispositions I suspect we're systematically more likely to hear, and be receptive to, anti-debate talking points than pro-debate ones. (I say this as someone who loved to hate on debate in high school, especially since it was one of math team's main competitors for recruiting smart students. But with hindsight from seeing my classmates' life outcomes, I think most of the arguments I made were overrated.)
Thanks for this thoughtful and excellently written post. I agree with the large majority of what you had to say, especially regarding collective vs. individual epistemics (and, more generally, the importance of good institutions vs. individual behavior), as well as your concerns about insularity, conflicts of interest, and underrating expertise while overrating "value alignment". I have similarly been concerned about these issues for a long time, but especially concerned over the past year.
I am personally fairly disappointed by the extent to which many commenters seem to be dismissing the claims or disagreeing with them in broad strokes, as the claims generally seem true and important to me. I would value the opportunity to convince anyone in a position of authority in EA that these critiques are both correct and critical to address. I don't read this forum often (I was linked to this thread by a friend), but feel free to e-mail me (jacob.steinhardt@gmail.com) if you're in this position and want to chat.
Also, to the anonymous authors: if there is some way I can support you, please feel free to reach out (also via e-mail). I promise to preserve your anonymity.