I'm a globally ranked top 20 forecaster. I believe that AI is not a normal technology, and I'm working to help shape AI for global prosperity and human freedom. Previously, I was a data scientist with five years of industry experience.
There are two ways to interpret this claim.
One is to interpret this claim as causal -- "the things that cause AGI to go well for humans also cause AGI to go well for animals".
In general, my concern here is something like "AGI gets aligned primarily based on 2025-era human values by imitation learning and doesn't magically converge on my ideal philosophy". I think what happens to animals after that would be fairly contingent on human moral evolution.
Another is to interpret the claim as evidentiary -- "what happens to animals conditional only on things going well for humans, by any means?" In this sense, we likely do get transformative economic growth and rapid technological advancement, which likely shifts society away from factory farming. Factory farming is especially ill-suited to a society that becomes primarily digital and/or spacefaring. Though I think this is far from guaranteed.
I'm hedging between both these interpretations, which is why I end up somewhere in the middle.
Hi - thanks for this comment. As someone working on export control policy, let me give you my perspective.
Firstly, an important precondition for a cooperative pause is leverage. You don't get China to agree to a mutual pause by first giving away your main strategic advantage. You get them to agree by making the alternative a race they're losing, which is worse than cooperation. Export controls are thus part of what creates the conditions for being able to pause. If you equalize compute access first, China has no reason to agree to a pause, because they'd be in a great position to race.
This is basic negotiation theory. You don't just disarm and hand over your weapons before the arms control treaty; you disarm as part of the treaty.
More critically, export controls are already priced in. The US has maintained semiconductor restrictions on China since October 2022, tightening them in 2023 and again in 2024, before loosening them late last year. The core diplomatic costs of these controls have already been paid. Easing controls now doesn't recoup that trust. China won't say "oh great, all is forgiven". But easing controls does give away the strategic advantage those controls purchased.
I'd also dispute that placing chip export controls on a state makes it a "pariah state". Restricting dual-use technology exports to strategic competitors is completely normal behavior among non-pariah states -- similar to how we restrict F-35 technology, nuclear technology, satellite components, rocket launch components, etc., to many countries.
Hi! I'm a long-time effective altruist (14+ years) and utilitarian/utilitarian-adjacent. This is a sweet and earnest post - you're clearly bright, and I admire your dedication at such a young age. You're right to recognize that burnout isn't utilitarian, but I worry that your "donate everything / camper van" framing is premature and probably wrong.
At age 14/15, the highest-EV move is almost always building optionality, not making binding commitments to extreme frugality. You're right to maximize income, but I think you're framing this too much as "save money" and not enough as "earn more money". Your comparative advantage at 14 is completely unknown. You might become a researcher, policy person, entrepreneur, or earner-to-give - these all have very different optimal life strategies. Your ability to find the best path and execute it well will almost certainly lead to higher impact in the long run than immense self-sacrifice right now. I think moving into an apartment or a house is compatible with being a utilitarian. Peter Singer lives in a house, for example, as did many other historical utilitarians.
Also... "Living alone would cause a huge debt of social interaction, causing me to die earlier" is... technically true but also a somewhat alienating way to think about relationships? Friends aren't just longevity inputs. Be careful not to instrumentalize everything - that path leads to feeling disconnected and often to abandoning the whole framework.
Advice I'd give:
Hi. Thanks for writing this. I find electoral reform to be a genuinely interesting cause area, and I appreciate the effort to apply EA frameworks to it. I have a few concerns with the framing and some factual details:
On neglectedness: The claim that this is "the most neglected intervention in EA" doesn't match the track record. The Center for Election Science has received over $2.4M from Open Philanthropy, $100K from EA Funds, and $40K+ from SFF. 80,000 Hours has a problem profile on voting reform calling it a "potential highest priority area" and did a full podcast episode with Aaron Hamlin. There's also a dedicated EA Forum topic page, and another post appeared today advocating for CES funding. This doesn't mean additional funding isn't warranted, but framing this as EA's "most neglected" area overstates the case.
On the theory of change: The post seems to conflate two distinct interventions: (1) running congressional candidates on electoral reform platforms to "prevent authoritarian consolidation" in 2026, and (2) actually implementing approval voting + top-four primaries long-term. The "$125M protects $400B" framing treats these as equivalent, but they're quite different propositions. The 20% success probability is asserted without justification, and it's unclear what "success" means... winning midterms? Passing state initiatives? Both?
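To make the sensitivity concrete, here's a quick sketch of the expected-value arithmetic implied by that framing, taking the post's figures ($125M spend, $400B protected, 20% success) at face value. The numbers are the post's claims, not mine:

```python
# Expected-value arithmetic implied by the "$125M protects $400B" framing,
# using the post's asserted figures at face value (all amounts in USD).
cost = 125e6             # proposed spend: $125M
value_protected = 400e9  # claimed value at stake: $400B
p_success = 0.20         # asserted (unjustified) success probability

expected_value = p_success * value_protected      # $80B
benefit_cost_ratio = expected_value / cost        # 640:1

print(f"Expected value: ${expected_value / 1e9:.0f}B")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.0f}:1")
```

Note that the headline ratio scales linearly with the asserted 20% figure: at 1% success the ratio is still 32:1, which is exactly why the unsupported probability, and the undefined success condition, carry so much of the argument.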
I'd also find it helpful to understand the mechanism better: how does "50 congressional candidates running on electoral reform" prevent authoritarianism? Most swing voters care about bread-and-butter issues like affordability and healthcare - the claim that approval voting advocacy serves as an effective "organizing principle" would benefit from more evidence or argument.
Some factual corrections: The post advocates for approval voting but cites Alaska as evidence. Alaska uses ranked-choice voting, not approval voting. It's also worth noting that the repeal measure in Alaska barely failed (50.1% to 49.9%) despite being outspent 100:1 - which cuts both ways on tractability. And while the post mentions Fargo's approval voting was banned by the North Dakota legislature, this seems like an important cautionary tale about scaling that deserves more attention.
I appreciate you sharing this, but I'd encourage tightening up the claims and being more precise about which evidence supports which intervention, and what each suggested intervention is actually meant to accomplish.
See also "A Model Estimating the Value of Research Influencing Funders", which comes to a similar conclusion.
No