'Hold fire on making projections' is the correct read, and I agree with everything else you mention in point 2.
About point 1: I think sharing negative thoughts is absolutely fine and important. I only take issue with airing bold projections when the basic facts of the matter aren't yet clear. I thought you were stating something akin to "xyz are going to happen," but re-reading your initial post, I believe I misjudged.
I am unsure how I feel about takes like this. On the one hand, I want EAs and the EA community to be a supportive bunch, so expressing how you are feeling and receiving productive, helpful comments is great. The SBF fiasco was mentally strenuous for many, so it is understandable that anything seemingly negative for EA elicits some of the same emotions, especially if you deeply care about this band of people genuinely aiming to do the most good they can.
On the other hand, I think such takes could also contribute to something I would call a "negative memetic spiral." In this particular case, several speculative projections are expressed together, and despite the qualifying statement at the beginning, I can't help but feel that several or all of these things will manifest IRL. And once you start believing such forecasts, you might start saying similar things or expressing similar sentiments yourself. In the worst case, the chain of negative sentiment grows rapidly.
It is possible that nothing consequential happens. People's moods during moments of panic are highly volatile, so five years in, maybe no one will even care about this episode. But in the present, such takes become ammunition against the movement/community. (I believe a particular individual may have picked up one such comment from the Forum and posted it online to appeal to their audience and amplify negative sentiment around EA?)
Taking a step back, gathering more information, and thinking independently, I was able to reason myself out of many of your projections. We are two days in, and there is still an acute lack of clarity about what happened. Emmett Shear, the interim CEO of OpenAI, stated that the board's decision wasn't over some safety vs. product disagreement. Several safety-aligned people at OpenAI signed the letter demanding that the board resign, and they seem equally disappointed by recent events, which is further evidence that a safety vs. product disagreement likely didn't lead to Altman's ousting. There has also been somewhat of a shift back to the "center," at least on Twitter, where there are quite a few reasonable, level-headed takes on what happened and on EA. I don't know about the mood in the Bay, though, since I don't live there.
I am unsure if I am expressing my point well, but this is my off-the-cuff take on your off-the-cuff take.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing comments can already be mentally taxing, and I don't want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so I would almost always avoid the LW emoji palette entirely. Maybe a few of the most important reactions could be added instead of all of them? Or maybe there could be a setting that lets people choose between a "condensed" and an "extended" emoji palette? Either way, just my two cents.
The closest would be CEA's communications team, but as you point out: "it’s not desirable to have a big comms. function that speaks for EA and makes the community more formal than it is."
I think it'd be challenging (and not in good taste) for CEA to craft responses on behalf of the entire EA community; it is better for individual EAs to critique articles that they think misrepresent ideas within the movement.
I see the same recycled and often mistaken impressions of EA far too often, so I appreciate you taking the time to do this!
Thank you for sharing your impressions! Some comments and questions:
I don't think professional longtermist organizations operate on this belief or even entertain it.
I wholeheartedly agree with points 2 and 3, but I don't understand point 1.
I don't know much about Benjamin Lay, but casually glancing through his Wikipedia page, it seems that his actions were morally commendable and supererogatory. Is the charge that he could have picked his battles or approached his advocacy more tactfully?
"...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else."
I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read state quite clearly that the facilitator's role is to help guide the conversation, not to opine excessively or to convince participants to believe x over y.
Minor nitpick: "NEOs (objects smaller than asteroids)"
The definition of NEOs here seems wrong. Wouldn't it be more accurate to call them "tiny NEOs"? The current definition makes it sound as if asteroids aren't NEOs, when in fact most NEOs are asteroids (the rest are mostly comets).