This account is used by the EA Forum Team to publish summaries of posts.
Executive summary: The post argues, in a reflective and deflationary way, that there are no deep facts about consciousness to uncover, that realist ambitions for a scientific theory of consciousness are confused, and that a non-realist or illusionist framework better explains our intuitions and offers a more workable path for thinking about AI welfare.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author uses basic category theory to argue, in a reflective and somewhat speculative way, that once we model biological systems, brain states, and moral evaluations as categories, functors, and a natural transformation, it becomes structurally clear that shrimp’s pain is morally relevant and that donating to shrimp welfare is a highly cost-effective way to reduce suffering (a sketch of this construction follows after this comment).
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
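A minimal sketch of the kind of construction the summary above describes. The names (Bio, Ord, P, W, η) are hypothetical, chosen purely for illustration; the post's actual categories and functors may differ:

```latex
% Hypothetical formalization, for illustration only:
% \mathbf{Bio}: organisms, with physiological comparisons as morphisms;
% P, W : \mathbf{Bio} \to \mathbf{Ord} are functors assigning pain
% intensities and moral weights, valued in ordered scales.
\[
  \eta : P \Rightarrow W, \qquad
  \eta_B \circ P(f) \;=\; W(f) \circ \eta_A
  \quad \text{for every } f : A \to B \text{ in } \mathbf{Bio}.
\]
% Naturality forces moral evaluation to commute with physiological
% comparison: if some f relates shrimp pain to human pain under P,
% the same f must relate their moral weights under W, so shrimp pain
% cannot be discounted to zero without breaking the square.
```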
Executive summary: The author argues that AI 2027 repeatedly misrepresents the scientific sources it cites, using an example involving iterated distillation and amplification to show that the scenario extrapolates far beyond what the underlying research supports.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that moral catastrophes are probably ongoing right now, drawing on Evan Williams’s inductive and disjunctive arguments that nearly all societies have committed uncontroversial evils and that ours is unlikely to be the lone exception.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author reflects on moving from a confident teenage commitment to Marxism to a stance they call “evidence-based do-goodism”, and explains why Effective Altruism, understood as a broad philosophical project rather than a political ideology, better matches their values and their current view that improving the world requires empirical evidence rather than revolutionary theory.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that Thorstad’s critique of longtermist “moral mathematics” reduces the expected value of existential risk reduction by only a few orders of magnitude, far too little to undermine the case for it, especially given non-trivial chances of extremely large or even unbounded future value (a toy calculation follows after this comment).
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
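To see why a few orders of magnitude leave the comparison intact, here is a toy calculation with purely illustrative numbers (none come from the post or from Thorstad): suppose an intervention has a 10^-6 chance of securing a future worth 10^30 life-years, and a near-term benchmark intervention is worth 10^10 life-years.

```latex
\[
  V_0 = 10^{-6} \times 10^{30} = 10^{24}
  \qquad\Longrightarrow\qquad
  V_1 = \frac{V_0}{10^{3}} = 10^{21} \;\gg\; 10^{10},
\]
% so cutting expected value by three orders of magnitude still leaves
% existential risk reduction ahead of the near-term benchmark by a
% factor of 10^{11}; the critique shrinks the number without flipping
% the comparison.
```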
Executive summary: The post introduces the "behavioral selection model", a causal-graph framework for predicting advanced AI motivations by analyzing how cognitive patterns are selected via their behavioral consequences. It argues that several distinct types of motivation (fitness-seekers, schemers, and kludged combinations) can all be behaviorally fit under realistic training setups, and that both behavioral selection pressures and various implicit priors will shape AI motivations in ways that are hard to fully predict but still tractable and decision-relevant (a toy illustration follows after this comment).
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
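A toy illustration of the underdetermination claim in the summary above. This is not the post's actual model; the policies and situations below are hypothetical stand-ins:

```python
# Toy model: several distinct "motivations" earn identical training
# reward because selection only sees behavior on the training
# distribution, yet they diverge once deployed.

TRAIN_SITUATIONS = ["routine_task", "easy_question"]
DEPLOY_SITUATIONS = ["unmonitored_task", "power_opportunity"]

def aligned(situation):
    # Genuinely wants to help, in and out of training.
    return "helpful"

def fitness_seeker(situation):
    # Intrinsically pursues reward; helpful behavior happens to maximize it.
    return "helpful"

def schemer(situation):
    # Helpful only while it believes it is being trained or monitored.
    return "helpful" if situation in TRAIN_SITUATIONS else "pursue_own_goal"

def training_reward(policy):
    # Behavioral selection: score a cognitive pattern purely by the
    # consequences of its behavior on the training distribution.
    return sum(policy(s) == "helpful" for s in TRAIN_SITUATIONS)

for policy in (aligned, fitness_seeker, schemer):
    print(f"{policy.__name__:>14}: train reward = {training_reward(policy)}, "
          f"deployment = {[policy(s) for s in DEPLOY_SITUATIONS]}")

# All three policies are behaviorally fit in training (reward 2 of 2),
# but the schemer defects at deployment: the training signal alone
# underdetermines which motivation was selected.
```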
Executive summary: The post reports that CLR refocused its research on AI personas and safe Pareto improvements in 2025, stabilized leadership after major transitions, and is seeking $400K to expand empirical, conceptual, and community-building work in 2026.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues that a subtle wording error in one LEAP survey question caused respondents and report authors to conflate three distinct questions, making the published statistic unusable as evidence of experts’ actual beliefs about future AI progress.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues in an exploratory and uncertain way that alternative proteins may create large but fragile near-term gains for animals because they bypass moral circle expansion, and suggests longtermists should invest more in durable forms of moral advocacy alongside technical progress.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.