I posted for the ~first time in the EA forum after the SBF stuff, and was pretty disappointed by the voting patterns: almost all critical posts get highly upvoted (well, taking into account the selection effect where I wouldn't see negative-karma posts), seemingly regardless of how valid or truthseeking or actionable they are. And then the high-karma comments very often just consist of praise for writing up that criticism, or ones that take the criticism for granted and expand on it, while criticism of criticism gets few upvotes.
(Anyway, after observing the voting patterns on this comment thread of mine, I see little reason to engage on this forum anymore. I find the voting patterns on LW healthier.)
IIRC commenters disputed whether / to which degree MIRI's secrecy & infohazards policy was in any way worse than typical NDAs for big companies.
IIRC re: Michael Vassar, the problem was not so much the connection to him, but that several people around Vassar (the author included) had experienced drug-induced psychosis, which made their criticisms and reported experiences suspect. My sense of the post was that it described innocuous facts and then considered them to be bad by analogy to e.g. Leverage.
Re: mental health, I agree that the MIRI world view is likely not good for one's mental health, but I wouldn't consider that a "ground fact" about MIRI, but rather (assuming one buys into that worldview) a problem with the world being the way it is. For instance, it sure would be better for everyone's mental health if AI alignment were universally agreed upon to be trivially easy, but unfortunately that's not the case.
based on what I see to be a fairly superficial reading of karma/comment count.
I read a significant fraction of the comments in that thread when it first appeared (though not all of them). I'm stressing those data points so much because that thread is still getting cited to this day as if it's undisputed, legitimate and broadly community-endorsed criticism, merely because it has positive karma. Hence I think stressing how to interpret the karma score and number of comments is a very important point, not a superficial one.
To their credit, the EA and LW communities love to question and criticize themselves, and to upvote all criticism. Unfortunately, that lends credence to weak or epistemically dubious criticisms far beyond what would be merited.
The on the ground facts about MIRI in it are mostly undisputed from what I can tell
They are absolutely not undisputed, and I don't understand why anyone would think that, given that the post has a staggering 956 comments. And again, there's a reason why the post has much, much lower karma than tons of the comments.
In fact, I had the opposite impression: that the author saw Zoe's legitimate grievances about Leverage and drew spurious parallels to MIRI.
I'm referring to the source referenced in the Need for Better Norms section:
Rather, my goal is to encourage people in the EA community to internalise and better implement some of the core values of good governance and institutional design.
The 12 Principles of Good Democratic Governance encapsulate fundamental values defining a common vision of democratic governance in Europe. Using the 12 Principles as a reference point can help public authorities at any level measure and improve the quality of their governance and enhance service delivery to citizens.
I agree that good governance is important, but I'm bemused that your source for principles of good governance is the Council of Europe. Though it's distinct from the EU, I'm skeptical that a large pan-European political institution is a good source of lessons about good governance. Also, the list is about democratic governance, and hence not particularly relevant to businesses or nonprofits.
More generally, there's a significant tradeoff between doing stuff and having oversight. (See e.g. this Vitalik Buterin post on the bulldozer vs vetocracy political axis.) Many big institutions are very far on the oversight end of the spectrum, and are hence very slow to act in any situation. Conversely, startups are probably too far on the doing-stuff end of the spectrum, but for good reason.
That said, in a lifecycle of institutions, it makes sense for the surviving ones to become more bureaucratic and professionalized over time. Paul Graham:
There is more to setting up a company than incorporating it, of course: insurance, business license, unemployment compensation, various things with the IRS. I'm not even sure what the list is, because we, ah, skipped all that. When we got real funding near the end of 1996, we hired a great CFO, who fixed everything retroactively. It turns out that no one comes and arrests you if you don't do everything you're supposed to when starting a company. And a good thing too, or a lot of startups would never get started. 
 A friend who started a company in Germany told me they do care about the paperwork there, and that there's more of it. Which helps explain why there are not more startups in Germany.
And separately, it's easy to pay lip service to good governance, but hard to put it into practice. For instance, almost all Western democracies use highly suboptimal voting systems.
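To make "highly suboptimal voting systems" concrete, here's a toy sketch (with made-up ballots, not real election data) of a classic failure mode of plurality voting: the first-past-the-post winner can be a candidate whom a majority would reject in every head-to-head matchup.

```python
# Hypothetical ranked ballots: each tuple lists candidates from most to
# least preferred, paired with the number of voters casting that ranking.
ballots = [
    (("A", "B", "C"), 35),
    (("B", "C", "A"), 33),
    (("C", "B", "A"), 32),
]

def plurality_winner(ballots):
    """First-past-the-post: only the top choice on each ballot counts."""
    tally = {}
    for ranking, n in ballots:
        tally[ranking[0]] = tally.get(ranking[0], 0) + n
    return max(tally, key=tally.get)

def condorcet_winner(ballots):
    """The candidate who wins every pairwise head-to-head matchup
    (such a candidate need not exist in general, but does here)."""
    candidates = {c for ranking, _ in ballots for c in ranking}
    for c in candidates:
        if all(
            sum(n for r, n in ballots if r.index(c) < r.index(o))
            > sum(n for r, n in ballots if r.index(o) < r.index(c))
            for o in candidates if o != c
        ):
            return c
    return None

print(plurality_winner(ballots))  # A (35 first-place votes)
print(condorcet_winner(ballots))  # B (beats A 65-35 and C 68-32)
```

Here A wins under plurality despite 65 of 100 voters preferring B to A; B would beat either rival one-on-one, yet most Western electoral systems would never see that.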
Plus in practice a lot of oversight is just CYA, which can incidentally be soul-crushing for the employees who have to implement it. (E.g. consider complaints by doctors about how much time they have to spend on administrative tasks rather than taking care of patients.)
Regarding your examples from the "Weak Norms of Governance" section:
1 - As I understand it, wasn't that a crime committed by an accountant and CFO (see the embezzlement charge here), i.e. by the kind of person you hire for oversight? How does that affect your conclusion?
4 - Surely the main lesson here is "don't update on, and maybe update against, criticism that sits on 75 karma despite 193 votes", especially when tons of comments have several times that karma.
I'm not familiar with the other situations.
This is only very loosely related to your post, but the current situation somewhat reminds me of Scott's Cyclic Theory Of Subcultures. In that model, the sudden loss of resources available to EA might push the movement from the Growth phase into the Involution phase, so there are suddenly new incentives to criticize one another and vie for the remaining resources and status:
During this phase, a talented status-hungry young person who joins the movement is likely to expect status but not get it. The frontier is closed; there’s no virgin territory to go homesteading in. The only source of status is to seize someone else’s - ie to start a fight.
Sometimes these fights are object-level: the movement’s art is ugly, its intellectual arguments are false, its politics are unjust. But along with the object level disagreements, there are always accusations that accurately reflect status-famine, ones like “the leaders of this movement are insular and undemocratic” or “the elites don’t listen to criticism”. These accusations may or may not be true. But during the Growth phase, nobody makes them, even when they are true; during the Involution phase, people always make them, even when they aren’t.
Someone with very novel and interesting criticism might start an entirely new subculture based on their ideas; their complaints might suggest a new research direction with unexplored vistas and plenty of free energy. But more likely, they’ll have more minor criticism, and end up vying for the same pool of resources and subcultural energy, only wanting it to be their pool rather than other people’s. This person is now a counterelite (or as they used to call it, a heresiarch).
For context, here's Matt Yglesias on the state of UK housing policy: