I haven't gone through any of this in detail so I'm sure I'm missing quite a bit of the technical specifics and maybe some of the big picture, but here's what I'm seeing. It doesn't seem surprising that a large neural network would perform well on scenarios in the training distribution but generalize poorly. It also does not surprise me that models somehow constrained or incentivised to search for conceptually simpler solutions would discover physical laws that do generalize better. But this just sounds like you've re-discovered Occam's razor. Yes, all science is oriented around searching for laws that have a sort of conceptual simplicity to them, a simplicity that your standard neural network is just not interested in, and that bias toward conceptual simplicity is what allows generalization. So were the systems really designed around the physical laws, or were they simply designed around the scientific method? A human scientist has to be taught Occam's razor in their training too, so perhaps it should not surprise us that the same holds for ML models.
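To make the Occam's razor point concrete, here's a minimal toy sketch (entirely my own illustration with made-up data, not anything from the work being discussed): a flexible model and a simplicity-biased model both fit the training range, but only the simple one recovers the underlying "law" well enough to extrapolate.

```python
import numpy as np

# Data generated by a simple "physical law" y = 3x + 2, observed with noise
# on a narrow training range [0, 1].
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 30)
y_train = 3 * x_train + 2 + rng.normal(0, 0.1, 30)

# A degree-9 polynomial stands in for an unconstrained, expressive model;
# a degree-1 fit stands in for a model biased toward conceptual simplicity.
flexible = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

# Both fit the training range; only the simple one generalizes far outside it.
x_test = np.linspace(5, 10, 100)  # well outside the training distribution
y_test = 3 * x_test + 2
print("flexible extrapolation RMSE:", np.sqrt(np.mean((flexible(x_test) - y_test) ** 2)))
print("simple extrapolation RMSE:  ", np.sqrt(np.mean((simple(x_test) - y_test) ** 2)))
```

Both models have near-zero error on [0, 1]; the flexible one is off by orders of magnitude once you leave that range. That's all "bias toward simplicity enables generalization" means here.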
Let's back up and ask a more basic question: What, exactly, do you mean by "fascism"? What is this thing that you are "anti"? In modern discourse the word seems to function more as a slur than as a term with actual semantic content, and the people I have encountered who self-identify as "anti-fascist" have not come off as serious people to me. So don't use the word as if we all share an understanding of what it refers to. I don't, and unless you define it, I question whether you do either.
I think I disagree. Admittedly I tend to use AI more for talking through ideas and editing than for actual drafting, so maybe something different happens when AI is used for drafting, but to me this feels like a demand that writers tell you whether they used MS Word or Google Docs or Open Office or a typewriter. Why? The use of a tool, whether an AI or a word processor, doesn't make the work any less the work of the human author, nor does it diminish the trust we should place in the claims made. The human author is still responsible for the accuracy of those claims.
The main thing I'm hearing here is a criticism of the choice to handle marketing only after strategy decisions have been made, rather than incorporating marketing into those decisions. My instinct is to view that as a feature, not a bug. EA's strength is its epistemics: we figure out what is good to do based on facts and evidence, not based on how we think it will look. A marketing person is, by definition, an expert in how things will look, or how to make them look good. Having them in the room when strategy decisions are made seems like it would poison those decisions, not strengthen them. What am I missing?
I'm not especially familiar with the history - I came to EA after the term "longtermism" was coined, so that has always been the vocabulary for me. But you seem to be equating an idea's being chronologically old with its already being well studied and explored, the low-hanging fruit already picked. You seem to think that old -> not neglected. That does not follow. I don't know how old the idea of longtermism is, and I don't particularly care; it is certainly older than the word. But it does seem to be almost completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.
Why on earth would you set 2017 as a cutoff? Language changes; there is nothing wrong with a word being coined for a concept and then applied to instances of the concept that predate the word. That is usually how it goes. So I think your exclusion of existential risk is just wrong. The various interventions for existential risks, of which there are many, are the answer to your question.
Merely possible people are not people.
And this, again, is just plain false, at least in the morally relevant senses of these words.
I will admit that my initial statement was imprecise, because I was not attempting to be philosophically rigorous. You seem to be focusing on the word "actual", which was a clumsy word choice on my part, because "actual" is not in the phrase "person-affecting views". Perhaps what I should have said is that Parfit seems to think that possible people are somehow not people with moral interests.
But at the end of the day, I'm not concerned with what academic philosophers think. I'm interested in morality and persuasion, not philosophy. It may be that his practical recommendations are similar to mine, but if his rhetorical choices undermine those recommendations, as I believe they do, that does not make him a friend, much less a godfather of longtermism. If he wasn't capable of thinking through the rhetorical implications of his linguistic choices, then he should not have started commenting on morality at all.
You seem to be making an implicit assumption that longtermism originated in the philosophical literature, and that therefore whoever first put an idea into that literature is the originator of the idea. I call bullshit on that. These are not complicated ideas that first arose among philosophers. These are relatively simple ideas that I'm sure many people had thought of before anyone thought to write them down. One of the things I hate most about philosophers is their tendency to claim dominion over ideas just because they wrote long and pointless tomes about them.
Overall, I think the degree to which intelligences of whatever kind, looking at the same phenomena, will converge on the same concepts is significantly greater than you make out. To take one of your examples, we may not be able to read off the concept of "car" from the laws of physics, and it is a concept an alien civilization might lack. However, an alien anthropologist who chooses to study 21st-century humans will necessarily develop the concept of "car", because that actually is a meaningful cluster among the physical objects humans interact with; that actually is one of the joints of 21st-century human reality.
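As a toy illustration of that "joints of reality" claim (again my own sketch, with synthetic data; nothing here comes from the post): when the data genuinely clusters, independently initialized learners carve it up the same way.

```python
import numpy as np

# Two well-separated blobs stand in for a real "joint" in the data, the way
# "car" is a real joint in 21st-century human reality.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=[0, 0], scale=0.3, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.3, size=(50, 2))
data = np.vstack([blob_a, blob_b])

def kmeans_labels(points, seed, iters=20):
    """Plain 2-means from a random start; different seeds play the role of
    different 'minds' studying the same phenomena. (Blobs are well separated,
    so no cluster goes empty in this toy setting.)"""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), 2, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == k].mean(axis=0) for k in range(2)])
    return labels

a = kmeans_labels(data, seed=2)
b = kmeans_labels(data, seed=3)
# Agreement up to label permutation: both runs find the same two-way split.
agreement = max(np.mean(a == b), np.mean(a != b))
print(f"agreement between independent learners: {agreement:.0%}")  # ~100%
```

The point isn't the algorithm; it's that when the cluster structure is really there, which learner you run, and from where, stops mattering.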
That said, I'm still pretty confused about what any of this implies for alignment. You seem to agree that so long as we train models on the same sorts of data we do now, they are likely to converge on the same concepts. But when do you foresee us doing anything else? Training a frontier model is expensive. Nobody is going to make that investment in a model that isn't expected to be useful to humans, so every model will continue to be trained on human-scale data for the foreseeable future. What change do you imagine in how we build LLMs that might shift them into a different basin of attraction?