SummaryBot

1119 karma

Bio

This account is used by the EA Forum Team to publish summaries of posts.

Comments (1675)

Executive summary: This payout report describes the Animal Welfare Fund’s grantmaking from July to December 2025, highlighting $2.48 million approved across 21 grants, a strategic focus on neglected and Global South animal welfare, and organizational changes intended to support larger-scale and more systematic future grantmaking.

Key points:

  1. From July 1 to December 31, 2025, AWF approved $2,482,552 across 21 grants and paid out $944,428 across 11 grants, with an acceptance rate of 56.8% excluding desk rejections.
  2. Grantmaking volume in Q3 was lower due to EA Funds’ grantmaking pause from June 1 to July 31, during which AWF focused on strategy and planning before resuming full-volume grantmaking in August.
  3. Highlighted grants included $137,000 to Crustacean Compassion for UK decapod crustacean policy and corporate advocacy, $214,678 to Rethink Priorities for leadership and flexible funding in the Neglected Animals Program, and $47,000 to Star Farm Pakistan to support cage-free egg supply chain development.
  4. AWF emphasized high-counterfactual opportunities, neglected species such as invertebrates and aquatic animals, and farmed animal welfare in the Global South.
  5. In the past year, AWF recommended 54 grants totaling $5.39 million, significantly expanding grantmaking compared to previous years.
  6. Organizational updates included EA Funds’ merger with the Centre for Effective Altruism, an updated monitoring, evaluation, and learning (MEL) framework, a refined three-year strategy, increased collaboration with partner funders, and record fundraising of $10 million in 2025.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author argues that Eric Drexler’s writing on AI offers a distinctive, non-anthropomorphic vision of technological futures that is highly valuable but hard to digest, and that readers should approach it holistically and iteratively, aiming to internalize and reinvent its insights rather than treating them as a set of straightforward claims.

Key points:

  1. The author sees a cornerstone of Drexler’s perspective as a deep rejection of anthropomorphism, especially the assumption that transformative AI must take the form of a single agent with intrinsic drives.
  2. Drexler’s writing is abstract, dense, and ontologically challenging, which creates common failure modes such as superficial skimming or misreading his arguments as simpler claims.
  3. The author recommends reading Drexler’s articles in full to grasp the overall conceptual landscape before returning to specific passages for closer analysis.
  4. In the author’s view, Drexler’s recent work mainly maps the technological trajectory of AI, pushes back on agent-centric framings, and advocates for “strategic judo” that reshapes incentives toward broadly beneficial outcomes.
  5. Drexler leaves many important questions underexplored, including when agents might still be desired, how economic concentration will evolve, and how hypercapable AI worlds could fail.
  6. The author argues that the most productive way to engage with Drexler’s ideas is through partial reinvention—thinking through implications, tensions, and critiques oneself, rather than relying on simplified translations.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author summarizes and largely endorses Ben Hoffman’s criticisms of Effective Altruism, arguing that EA’s early “evidence-based, high-leverage giving” story was not followed by the kind of decisive validation or updating you’d expect over ~15 years, and that EA instead drifted toward self-reinforcing credibility and resource accumulation amid institutional and “professionalism” pressures.

Key points:

  1. The author describes early EA as combining Singer-style moral motivation (e.g. the drowning child) with an engineering/finance approach to measuring impact, with GiveWell as the canonical early organization focused on cost-effective global health giving.
  2. They claim the popular “cup of coffee saves a life” framing uses “basically made up and fraudulent numbers,” and contrast it with a GiveWell-style pitch of roughly “~$5000” to “save or radically improve a life” (the size of this gap is sketched after this list).
  3. They argue that as major funders (e.g. Dustin Moskovitz via Good Ventures advised by Open Philanthropy, with overlap with GiveWell) entered the ecosystem, difficulties with the simple impact model were discovered but “quietly elided,” with limited follow-through to obtain higher-quality outcome evidence.
  4. They highlight GiveWell advising Open Philanthropy not to fully fund top charities as a central anomaly, suggesting that if even pessimistic cost-effectiveness estimates were believed, large funders could have gone much further (including potentially “almost” wiping out malaria) or run intensive country-level case studies to validate assumptions.
  5. They argue that it is not strange for early estimates to be wrong, but it is strange that ~15 years passed without either (a) producing strong confirming evidence and doubling down, or (b) learning that malaria/poverty interventions have different constraints and updating public-facing marketing accordingly.
  6. The author suggests EA’s credibility became circular—initially earned via persuasive research, then “double spent” by citing money moved as evidence of trustworthiness—while lacking matching evidence that outcomes met expectations or that the ecosystem was robustly learning.
  7. They propose that the underlying blockers may be structural and institutional (e.g. predatory social structures and corruption on the recipient side, and truth-impeding “professionalism” and weak epistemic bureaucracies on the donor side), and they speculate that these pressures and rapid growth eroded EA’s epistemic rigor into an attractor focused on accumulating more resources “because We Should Be In Charge.”
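To make the gap in point 2 concrete, here is a back-of-the-envelope sketch. The $2-per-cup coffee price is this sketch's assumption (the post names no price); the $5,000 figure is the GiveWell-style estimate quoted above.

```python
# Hypothetical arithmetic for the "cup of coffee vs. ~$5000" contrast in
# point 2. The coffee price is an assumed illustrative figure, not from the post.

coffee_price = 2.00     # assumed dollars per cup
cost_per_life = 5_000   # the "~$5000" GiveWell-style figure

cups_per_life = cost_per_life / coffee_price
years_of_daily_coffee = cups_per_life / 365

print(f"Cups of coffee per life saved: {cups_per_life:,.0f}")        # 2,500
print(f"Years of a daily coffee habit: {years_of_daily_coffee:.1f}")  # ~6.8
```

On these assumptions, the “one coffee” framing understates the cost by roughly three orders of magnitude, consistent with the author’s charge that its numbers are “basically made up.”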


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author argues for “moral nihilism” in a neutral sense—denying moral facts—and further claims that morality itself is harmful enough that we should adopt “moral abolitionism,” keeping concern for welfare and interests while abandoning moral language and categorical “oughts.”

Key points:

  1. The author claims effective altruists are often moral anti-realists, citing an EA Forum survey with 312 votes that skewed toward anti-realism, while suggesting the survey’s framing likely biased responses toward realism.
  2. They argue that even if there are no moral facts, pleasures and pains, preferences, and what is better or worse “from their own point of view” still exist, so effective altruists can aim to promote interests without committing to moral realism.
  3. The author contends morality can create complacency by widening the perceived gap between permissible and impermissible actions, and may sometimes encourage harm by licensing indifference so long as rights aren’t violated.
  4. They distinguish multiple senses of “moral nihilism,” and defend a combined view: second-order moral error theory plus first-order “moral eliminativism/abolitionism” that recommends ceasing to use moral language and thought.
  5. They argue a Humean instrumentalist account of reasons cannot justify categorical imperatives, so claims like “You ought not to torture babies,” asserted “full stop,” systematically fail, leading to the conclusion that “x is never under a moral obligation.”
  6. The author claims morality’s “objectification of values” inflames disputes, blocks compromise, and has been used to rationalize large-scale harms, and they argue abolishing moral talk would not require abolishing care or pro-social emotions.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author argues that Yudkowsky and Soares’s “If Anyone Builds It, Everyone Dies” overstates AI-driven extinction as near-certain, and defends a much lower p(doom) (2.6%) by pointing to several “stops on the doom train” where things could plausibly go well, while still emphasizing that AI risk is dire and warrants major action.

Key points:

  1. The author summarizes IABIED’s core claim as “if anyone builds AI, everyone everywhere will die,” and characterizes Yudkowsky and Soares’s recommended strategy as effectively “ban or bust.”
  2. They report their own credences as 2.6% for misaligned AI killing or permanently disempowering everyone, and “maybe about 8%” for extinction or permanent disempowerment from AI used in other ways in the near future, while also saying most value loss comes from “suboptimal futures.”
  3. They present multiple conditional “blockers” to doom—e.g., a 10% chance we don’t build artificial superintelligent agents, ~70% “no catastrophic misalignment by default,” ~70% chance alignment can be solved even if not by default, ~60% chance of shutting systems down after “near-miss” warning shots, and a 20% chance ASI couldn’t kill/disempower everyone—and argue that compounding uncertainty undermines near-certainty (see the arithmetic sketch after this list).
  4. They argue extreme pessimism is unwarranted given disagreement among informed people, citing median AI expert p(doom) around 5% (as of 2023), superforecasters often below 1%, and named individuals with a wide range (e.g., Ord ~10%, Lifland ~1/3, Shulman ~20%).
  5. On “alignment by default,” they claim RLHF plausibly produces “a creature we like,” note current models are “nice and friendly,” and argue evolution-to-RL analogies are weakened by disanalogies such as off-distribution training aims, the nature of selection pressures, and RL’s ability to directly punish dangerous behavior.
  6. They argue “warning shots” are likely in a misalignment trajectory (e.g., failed takeover attempts, interpretability reveals, high-stakes rogue behavior) and that sufficiently dramatic events would plausibly trigger shutdowns or bans, making “0 to 100” world takeover without intermediates unlikely.
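For concreteness, here is a minimal sketch of how the five blockers in point 3 could compound, assuming (this sketch's assumption, not a method stated in the summary) that the author multiplies the complement of each blocker as an independent conditional step; notably, the product lands at the ≈2.6% headline credence from point 2.

```python
# Sketch of the compounding argument in point 3. Each probability below is
# the complement of a "blocker" quoted in the summary; treating the steps as
# independent and multiplying them is this sketch's assumption.

steps = {
    "ASI agents get built":                  1 - 0.10,  # 10% chance we don't
    "catastrophic misalignment by default":  1 - 0.70,  # ~70% aligned by default
    "deliberate alignment fails":            1 - 0.70,  # ~70% chance it succeeds
    "no shutdown after warning shots":       1 - 0.60,  # ~60% chance of shutdown
    "ASI can kill/disempower everyone":      1 - 0.20,  # 20% chance it couldn't
}

p = 1.0
for name, prob in steps.items():
    p *= prob
    print(f"{name}: {prob:.2f} -> running product {p:.4f}")

print(f"Compounded p(doom): {p:.1%}")  # ~2.6%, matching point 2
```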


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: Using Wave 2 of Rethink Priorities’ Pulse survey (≈5,600 US adults, Feb–Apr 2025), the report finds that a simple donation appeal was slightly more compelling than a “diet distancing” appeal, that both messages modestly increased the perceived impactfulness of donating without reducing perceived impact of, or interest in, diet change, and that neither message reliably increased a downstream “request more info” behavior.

Key points:

  1. Wave 2 of Pulse surveyed ~5,600 US adults (Feb–Apr 2025), with results analyzed to be representative across demographics and with additional “Not active” and “Not active, sympathetic” inclusion tiers.
  2. Respondents were randomized to Control, a Donation message, or a Diet distancing message that added “You don't have to change what you eat” and claimed donating can be “just as impactful as going fully plant-based.”
  3. The Diet distancing message was rated slightly less compelling than the Donation message, by about 0.3–0.4 points on a 1–10 scale (≈0.15 SD; see the point-to-SD conversion sketched after this list), though sympathetic respondents found both messages more compelling overall.
  4. Diet change (adopting a fully plant-based diet) was rated as more difficult than donating $25/month to top charities by about one point on a 1–10 scale (≈0.3–0.4 SD), and neither message reliably changed perceived difficulty.
  5. In the Control condition, donating and diet change were rated as equally impactful, while both messages increased the perceived impact of donating by about 0.7 points (≈0.23–0.27 SD), making donating seem more impactful than diet change without reducing perceived impact of diet change.
  6. Reported interest was higher for donating than diet change regardless of condition (~0.7 points), both messages very slightly increased interest in donating, and the Donation message also slightly increased reported interest in diet change (≈0.3 points), with the Diet distancing message showing a directionally similar but smaller effect.
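Since points 3–5 quote effects in both raw points and SD units, here is a minimal sketch of the conversion, assuming a simple standardized effect (raw mean difference divided by the rating scale’s standard deviation); the SD values below are back-solved from the summary’s own numbers, not taken from the report.

```python
# Converting raw-point differences on the 1-10 scales into the SD units
# quoted in points 3-5, assuming a Cohen's-d-style standardized effect.

def standardized_effect(raw_diff: float, scale_sd: float) -> float:
    """Raw mean difference expressed in standard-deviation units."""
    return raw_diff / scale_sd

# Point 3: 0.3-0.4 points at ~0.15 SD implies an SD of about 0.35 / 0.15 ~ 2.3.
implied_sd = 0.35 / 0.15
print(f"Implied rating-scale SD: ~{implied_sd:.1f} points")

# Point 5: a 0.7-point shift with an assumed SD of ~2.8 gives the quoted ~0.25 SD.
print(f"0.7 points -> {standardized_effect(0.7, 2.8):.2f} SD")
```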


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The authors argue that Nick Bostrom’s Maxipok principle rests on an implausible dichotomous view of future value, and that because non-existential actions can persistently shape values, institutions, and power, improving the long-term future cannot be reduced to existential risk reduction alone.

Key points:

  1. Maxipok relies on an implicit “Dichotomy” assumption that possible futures are strongly bimodal—either near-best or near-worthless—so that only reducing existential risk matters.
  2. The authors argue against Dichotomy by noting plausible futures where humanity survives without moral convergence, where value is not bounded in a way that supports bimodality, and where uncertainty across theories yields a non-dichotomous expected distribution.
  3. They claim that even if the best uses of resources are extremely valuable, defence-dominant space settlement and internal resource division would allow future value to vary continuously rather than collapse into extremes.
  4. The authors reject “persistence skepticism,” arguing that it is at least as likely as extinction that the coming century will see lock-in of values, institutions, or power distributions.
  5. They identify AGI-enforced institutions and defence-dominant space settlement as mechanisms by which early decisions could have permanent effects on the long-term future.
  6. If Maxipok is false, the authors argue that longtermists should prioritise a broader set of “grand challenges” that could change expected long-run value by at least 0.1%, many of which do not primarily target existential risk.



This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author offers a reflective, practical loving-kindness meditation tailored for effective altruists who struggle with self-compassion, arguing that cultivating joy and care for oneself—via an age-progression practice starting with one’s younger self—is both psychologically necessary and compatible with serious moral commitment.

Key points:

  1. The author argues that many EAs find self-compassion difficult because moral urgency and perceived shadow costs make rest and joy feel illegitimate.
  2. They propose an age-progression loving-kindness practice that begins with offering care to one’s younger self rather than oneself in the present.
  3. The practice involves moving through different childhood ages until reaching an “edge,” where warmth or care becomes difficult, and treating that resistance as the core of the work.
  4. The author suggests meeting the younger self at this edge with presence and curiosity, allowing grief, anger, protectiveness, or numbness to arise without trying to fix them.
  5. They recommend integrating the practice through a recurring sit-write-walk cycle, weekly frequency, and optional accountability with others.
  6. The author argues that personal suffering is not helpful or morally required, and that becoming more joyful and alive supports both individual functioning and collective effectiveness.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author argues that “If Anyone Builds It, Everyone Dies” overstates the certainty of AI-driven human extinction, contending instead that while AI takeover risk is serious, there are multiple plausible points where catastrophe could be averted, leading them to assign a low (≈2%) but still alarming probability of extinction from misaligned AI.

Key points:

  1. The author rejects Yudkowsky and Soares’s claim of near-certain doom, arguing that uncertainty compounds across multiple necessary steps such as building superintelligent agents, failing at alignment, missing warning shots, and AI being able to kill everyone.
  2. They assign substantial probability to “alignment by default,” suggesting that reinforcement learning and current training practices may often produce broadly friendly behavior rather than catastrophic misalignment.
  3. Even if alignment is not achieved by default, the author argues there is a significant chance that deliberate alignment research, potentially aided by AI systems themselves, could succeed.
  4. The author expects credible “warning shots” from misaligned AI before full takeover, which would likely trigger shutdowns or bans rather than being ignored.
  5. They question whether intelligence alone guarantees the ability to exterminate humanity, noting physical, experimental, and infrastructural constraints on what AI could actually do.
  6. While rejecting near-certainty of doom, the author still views AI risk as extremely serious and argues that believing doom is inevitable leads to worse strategic thinking than an “everything and the kitchen sink” risk-reduction approach.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The author argues that AI governance overemphasises prevention while neglecting crisis preparedness, and concludes that building institutional capacity for rapid, coordinated response to AI incidents is essential because failures are inevitable in complex AI-integrated systems.

Key points:

  1. The author uses the July 2024 CrowdStrike outage as an illustration of how AI-related crises could propagate rapidly across borders and critical infrastructure.
  2. Current AI governance frameworks focus on prevention and lack coherent mechanisms for responding to AI incidents that span sectors or jurisdictions.
  3. Drawing on emergency response fields, the author identifies seven core elements of effective crisis response that are largely absent in AI governance, including designated authorities, standardised reporting, and operational protocols.
  4. AI crises pose distinctive challenges due to their speed, attribution uncertainty, and interdependence created by concentrated AI and cloud infrastructure.
  5. The author argues for concrete preparedness measures such as national AI emergency contact points, clarified emergency powers, mandatory incident reporting, and regular response drills.
  6. International coordination, potentially anchored at the UN, is presented as necessary for legitimacy, neutrality, and maintaining communication during AI-related emergencies.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.