I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
Plus some downstream consequences of the above, like the social and political instability that seems likely with massive job loss. In past economic transformations, we've been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and associated status/dignity/pride (and that it isn't coming back) could be fairly dangerous voters and might even be in the majority. I also have concerns about the fair distribution of gains from AI, about a few private companies potentially cornering the market on one of the world's most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.
My own take is that civil, economic, and political society largely has to have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well -- potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as other motivations for pausing / slowing down AI could take on the odor of astroturf.
I say all that as someone who thinks there are compelling reasons, completely independent of AI safety, to pause, or at least slow down, on AI.
I think a related discussion could be had about funders deciding to quit on projects too early, which is likely a much more prevalent issue.
The lack of incentives to write posts criticizing one's former funders for pulling the plug early may be a challenge, though. After all, one may be looking to them for the next project. And writing such a post may not generate the positive community feeling that writing an auto-shutdown postmortem does.
The eschatology question is interesting. I think it can still make sense to work on what amounts practically to x-risk prevention even when expecting humans to be around at the Second Coming of Christ (or some eschatological event in other religions).
Also, one can think that x-risk work is generally effective in mitigating near-x-risk (e.g., a pandemic that "only" kills 99% of us). Particularly given the existence of the Genesis flood narrative, I expect most Christians would accept the possibility of a mass catastrophe that killed billions but less than everyone.
With that being said, if and when having a positive impact on the world and satisfying community members does come apart, we want to keep our focus on the broader mission.
I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community's concerns are primarily self-interested: "trying to optimize for community satisfaction," "just plain helping the community," "make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)," and "don’t optimize for making the community happy" (for EAG admissions).
I don't doubt that y'all get a fair number of seemingly self-interested complaints from dissatisfied community members, of course! But I think modeling the community's concerns here as self-interested would be closer to a strawman than a steelman approach.
On point 4:
I'm pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There's no clear and unbiased way to decide which of those individuals and groups could be the target of "philosophical questions" about the desirability of murdering them and which could not. Unless we're going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. "Would it be ethical to get rid of this meddlesome priest?" should be suspendable or worse (except that the meddlesome priest in question has been dead for over eight hundred years).
And I think drawing the line at "we're not going to allow hypotheticals about murdering discernible people"[1] is better (and poses less risk of viewpoint suppression) than expecting the mods to somehow devise a rule for when that content will be allowed and to apply it consistently. I think the effect of a bright-line no-murder-talk rule on the expression of ideas is modest because (1) posters can get much of the same result by posing non-violent scenarios (e.g., leaving someone to drown in a pond is neither an act of violence nor generally illegal in the United States) and (2) there are other places to have these discussions if the murder content is actually important to the philosophical point.[2]
By "discernable people," I mean those with some sort of salient real-world characteristic as opposed to being 99-100% generic abstractions (especially if in a clearly unrealistic scenario, like the people in the trolley problem).
[2] I am not expressing an opinion about whether there are philosophical points for which murder content actually is important.
A less important but still meaningful win in the US and other places with existing iodization might be extending it to salt in processed food.