At many points now, I've been asked in private for a critique of EA / EA's history / EA's impact, and I have ad-libbed statements that I feel guilty about because they have not been subjected to EA critique and refutation. I need to write up my take and let you all try to shoot it down.
Before I can or should try to write up that take, I need to fact-check one of my take-central beliefs about how the last couple of decades have gone down. My belief is that the Open Philanthropy Project, EA generally, and Oxford EA particularly, had bad AI timelines and bad ASI ruin conditional probabilities; and that these invalidly arrived-at beliefs were in control of funding, and were explicitly publicly promoted at the expense of saner beliefs.
An exemplar of OpenPhil / Oxford EA reasoning about timelines is that, as late as 2020, their position seemed to center on Ajeya Cotra's "Biological Anchors" estimate, which put the median timeline to AGI roughly 30 years out. Leadership dissent from this viewpoint, as I recall, generally centered on having longer rather than shorter median timelines.
An exemplar of poor positioning on AI ruin is Joe Carlsmith's "Is Power-Seeking AI an Existential Risk?", which enacted a blatant Multiple Stage Fallacy in order to conclude that this risk was ~5%.
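(To illustrate the mechanics of that objection: the report's method is to decompose the catastrophe claim into a chain of premises, assign each a probability, and multiply them through. The sketch below uses made-up stage probabilities and paraphrased premise labels, not Carlsmith's actual figures, purely to show how that kind of decomposition drives the headline number down even when every individual stage is judged more likely than not.)

```python
# Illustrative only: how a multi-stage decomposition compresses a headline estimate.
# The six premises and probabilities below are made up for illustration;
# they are NOT the figures from Carlsmith's report.
stages = {
    "advanced agentic AI gets built this century": 0.65,
    "there are strong incentives to deploy it": 0.80,
    "alignment turns out to be hard": 0.55,
    "misaligned systems seek power at scale": 0.60,
    "power-seeking leads to human disempowerment": 0.55,
    "disempowerment amounts to existential catastrophe": 0.60,
}

p = 1.0
for premise, prob in stages.items():
    p *= prob
    print(f"{premise}: {prob:.2f}  (running product: {p:.3f})")

print(f"\nHeadline estimate: {p:.1%}")
# Every stage above is judged more likely than not, yet the product comes out
# under 6%. That is roughly the pattern the Multiple Stage Fallacy critique
# points at: splitting a claim into enough stages and giving each a middling,
# insufficiently extreme probability all but guarantees a small final number,
# regardless of the underlying truth.
```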
I recall being told verbally, in person, by OpenPhil personnel that Cotra and Carlsmith were representative of the OpenPhil view, and that theirs would be the sort of worldview that controlled MIRI's chances of getting funding from OpenPhil; i.e., we should expect funding decisions to be premised on roughly these views, and should try to address ourselves to those premises if we wanted funding.
In recent personal conversations in which I exposited my current fault analysis of EA, I've heard people object, "But this wasn't an official OpenPhil view! Why, some people inside OpenPhil discussed different views!" I think they are failing to appreciate the extent to which mere tolerance of dissenting discussion is not central, in an organizational-psychology analysis of what a large faction actually does. But also, EAs have consistently reacted with surprised dismay when I presented my view that these bad beliefs were in effective control. They may have better information than I did; I was an outsider and did not much engage with what I then estimated to be a lost cause. I want to know the true facts of OpenPhil's organizational history, whatever they may be.
I therefore throw open to EAs / OpenPhil personnel / the Oxford EAs the question of whether they have strong or weak evidence that any views dissenting from "AI in median >30 years" and "utter AI ruin <10%" (dissenting in the correct directions of shorter timelines and worse ruin chances, and expressed before the ChatGPT moment) were permitted to exercise decision-making power over the flow of substantial amounts of funding; or whether the weight of OpenPhil's reputation and publicity was at any point put behind promoting those dissenting viewpoints (again, in the correct direction and before the ChatGPT moment).
This, to me, is the crux of whether the takes I have been giving in private were fair to OpenPhil. Tolerance of verbal discussion of dissenting views inside OpenPhil is not a crux. EA Forum posts are not a crux, even if the bylines include mid-level OpenPhil employees.
Public statements saying "But I do concede 10% AGI probability by 2036", or "conditional on ASI at all, I do assign substantial probability to this broader class of outcomes that includes having a lot of human uploads around and biological humans thereby being sidelined", are not something I see as exculpatory; they are rather clear instances of what I see as a larger problem for EA and a primary way it did damage.
(E.g., imagine that your steamship is sinking after hitting an iceberg, and you are yelling for all passengers to get to the lifeboats. Just as it seems like a few passengers might be starting to pay some little attention, somebody wearing a much more expensive and serious-looking suit than you can afford stands up and begins declaiming about how their own expert analysis does suggest a 10% chance that the ship takes on enough water to sink as early as next week; and that they think this has a 25% chance of producing a broad class of genuinely attention-worthy harms, like many passengers needing to swim to the ship's next destination.)
I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently. I want to know if insiders have contrary evidence that I, as an outsider, did not know about. If my current take is wrong and unfair, I want to know it; that is not the same as promising to be easy to convince, but I do want to know.
I repeat: You should understand my take to be that of an organizational-psychology cynic who is not per se impressed by the apparent tolerance of dissenting views, people invited to give dissenting talks, dissenters still being invited to parties, et cetera. None of that will surprise me. I do not view it as sufficient to constitute organizational best practices. I will only be surprised by demonstrated past pragmatic power, on the part of views contrary to "AGI median in 30 years or longer" and "utter ruin at 10% or lower", to control the disposition of funding and the public promotion of ideas, before the ChatGPT moment.
(If you doubt my ability to ever concede to evidence about this sort of topic, observe this past case on Twitter where I immediately and without argument conceded that OpenPhil was right and I was wrong, the moment the evidence appeared to be decisive. (The choice of example may seem snarky but is not actually snark; it is not easy for me to find other cases where, according to my own view, clear concrete evidence came out that I was definitely wrong and OpenPhil definitely right; and I did in that case immediately concede.))

There is a surprising amount of normative judgment in here for a fact check. Are you looking just for disagreement with your claim that people held roughly the beliefs you go on to outline (I think you overstate things, but are directionally correct in describing how their beliefs differed from yours), or also for disagreement about whether those were bad beliefs?
For flavour: as I ask that question, I'm particularly (but not only) thinking of the reports you cite. You seem to be casting them as "OP really throwing its weight behind these beliefs", whereas I perceived them more as "earnest attempts by people at OP to figure out what was legit, and to put their reasoning in public to let others engage". I certainly didn't just agree with them at the time, but I thought it was a good step forwards for collective epistemics to be able to have conversations at that level of granularity. Was it confounding that they were working at a big funder? Yeah, kinda -- but that seemed second order compared to it just being great that anyone at all was pushing the conversation forwards in this way, even if there were a bunch of aspects of them I wasn't on board with. I'm not sure if this is the kind of disagreement you're looking for. (Maybe it's just that I was on board with more of them than you were, and so I saw them as flawed-but-helpful rather than unhelpful? Then we get to the general question of what standards "bad" should be judged by, given our lack of access to ground truth.)