Thanks for your kind words, Esben! If anything comes out of this post, I agree that it should be a renewed focus on better framings - though James does raise some excellent points about the cost-effectiveness of this approach :))
Thank you for your excellent points, James! Before responding to them in turn, I do agree that a significant part of my proposal's appeal is making things nicer for EAs. Whether that is worth investing in is not clear to me either - there are definitely more cost-effective ways of achieving it. Now to your points:
I totally agree that it serves more as an internal strategic shorthand than as a part of external communication. Ideally, no one outside core EA would need to know what "low-key longtermism" even refers to.
Super interesting stuff so far! It seems that quite a few of the worries (particularly in "Unclear definitions and constrained research thinking" and "Clarity") stem from AI safety currently being a pre-paradigmatic field. This might suggest that it would be particularly impactful to explore rather than exploit (though this depends on just how aggressive one's timelines are). It might also suggest that cultivating a more positive "let's try out this funky idea and see where it leads" culture could be worth pursuing (to a greater degree than is being done currently). All in all, very nice to see these pain points fleshed out in this way!
(Disclaimer: I do work for Apart Research with Esben, so please adjust for that excitement in your own assessment :))