I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.
I'm also on LessWrong and have a Substack blog.
Does portfolio theory apply better at the individual level than the community level?
I think portfolio theory applies at the individual level if you have personal risk aversion. For example, I care about having personally made a difference, which biases me towards certain individually less risky ideas.
is this "k-level 2" aggregate portfolio a 'better' aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios?
I think it's a tough situation because k=2 includes these unsavory implications Jeff and I discuss. But as I wrote, I think k=2 is just what happens when people think about everyone's donations game-theoretically. If everyone else is thinking in k=2 mode but you're thinking in k=1 mode, you're going to get funged such that your value system's expression in the portfolio could end up being much less than what is "fair". It's a bit like how the Nash equilibrium in the Prisoner's Dilemma is "defect-defect".
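To make the analogy concrete, here's a minimal sketch (with toy payoff numbers I made up, not anything from the donation setting itself) that checks by brute force that "defect-defect" is the unique Nash equilibrium of a Prisoner's Dilemma, i.e. the outcome where neither player can do better by unilaterally switching:

```python
# A toy Prisoner's Dilemma: payoff[(my_move, their_move)] = (my_payoff, their_payoff).
# The numbers are illustrative placeholders; only their ordering matters.

from itertools import product

payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def is_nash(a, b):
    """Neither player can gain by unilaterally switching their move."""
    best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in moves)
    best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in moves)
    return best_a and best_b

equilibria = [(a, b) for a, b in product(moves, moves) if is_nash(a, b)]
print(equilibria)  # [('defect', 'defect')]
```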
At some point what matters is specific projects...?
I agree with this. My post frames the discussion in terms of cause areas for simplicity, and because the lessons generalize to more people, but I think your point is correct.
I just wanted to say I really liked this post and consider it a model example of reasoning transparency!
I think animal welfare as a cause area is important and neglected within EA. Invertebrates have been especially neglected since Open Phil pulled out of the space, so my top choices are the Arthropoda Foundation and Shrimp Welfare Project (SWP).
With high uncertainty, I weakly prefer Arthropoda over SWP on the margin. Time is running short to influence the trajectory of insect farming in its early stages. The quotes for Arthropoda's project costs and overhead seem very reasonable. Also, while SWP's operational costs are covered through 2026, Arthropoda's projects may not happen at all without marginal funding, so donations to Arthropoda feel more urgent to me since they're more existential. But all of this is held loosely and I'm very open to counterarguments.
I think these unsavory implications you enumerate are just a consequence of applying game theory to donations, rather than following specifically from my post's arguments.
For example, if Bob is all-in on avoiding funging and doesn't care about norms like collaboration and transparency, his incentives are exactly as you describe: Give zero information about his value system, and make donations secretly after other funders have shown their hands.
I think you're completely right that those are awful norms, and we shouldn't go all-in on applying game theory to donations. This goes both for avoiding funging and for my post's argument about optimizing "EA's portfolio".
However, just as we can learn important lessons from the concept of funging while discouraging the bad behavior it can incentivize, I still think this post is valuable and includes some nontrivial practical recommendations.
Thanks for this; I agree that "integrity vs impact" is a more precise cleavage point for this conversation than "cause-first vs member-first".
Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?
Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of the way I think, I'd currently prefer that the marginal $1M be given to EA Funds' Animal Welfare Fund rather than used to establish a foundation to investigate and recommend improvements to EA's epistemics.
It seems that I credit the EA community with a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.
For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some vegan advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs, and weren't corrected by other community members. While I'd prefer people say true things over false things, especially when they affect people's health, this just doesn't feel important enough to update on. (I've also personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)
One thing that could change my mind is learning about many more cases to the point that it's clear that there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this which I haven't seen, I'd love to hear about it!
Thanks for the interesting conversation! Some scattered questions/observations:
(I didn't downvote your comment, by the way.)
I feel bad that my comment made you (and a few others, judging by your comment's agreevotes) feel bad.
As JackM points out, that snarky comment wasn't addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, the view that assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like "we have the right to exploit animals because we're stronger than them", or "exploiting animals is the natural order", which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally don't even argue for hierarchicalism because it's just such a dubious view. I wouldn't write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn't really justify his view in his comment thread. I've never read Zvi justify that view anywhere either. I've heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Solely by virtue of our shared species, helping humans may be lexicographically preferred to helping animals, or perhaps humans' preferences should be given an enormous multiplier.
I use the term "overwhelming" because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you'd need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules' argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don't endorse that resolution.)
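To illustrate the kind of arithmetic I mean, here's a minimal BOTEC sketch. Every number in it is an illustrative placeholder rather than a figure from any published analysis; the point is only the structure of the calculation, which solves for the species-based multiplier at which global health would break even:

```python
# A minimal BOTEC sketch. All inputs are hypothetical placeholders: swap in your
# preferred welfare-range and cost-effectiveness estimates.

human_benefit_per_dollar   = 0.01   # hypothetical: human DALY-equivalents per $ (global health)
chicken_benefit_per_dollar = 10.0   # hypothetical: chicken suffering-years averted per $ (campaigns)
welfare_range_chicken      = 0.3    # hypothetical: moral weight of a chicken-year vs a human-year

# Animal-side cost-effectiveness in human-equivalent units, at constant moral weights:
animal_equiv_per_dollar = chicken_benefit_per_dollar * welfare_range_chicken

# How much extra weight must humans get, solely for being human, before
# global health wins on this BOTEC?
breakeven_multiplier = animal_equiv_per_dollar / human_benefit_per_dollar
print(f"Global health wins only with a >{breakeven_multiplier:.0f}x species-based multiplier")
# -> 300x with these placeholder inputs
```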
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There's just no prior for why that would be the case.
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they'd have to be really certain that animals aren't conscious to endorse global health here. Even if there's a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they'd still merit a significant fraction of EA funding. (Probably still more than they're currently receiving.)
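Here's a minimal sketch of that expected-value point. The "100x conditional on consciousness" figure is a placeholder I'm assuming for illustration, not a claim from any particular BOTEC:

```python
# Expected cost-effectiveness of corporate campaigns relative to global health,
# under uncertainty about chicken consciousness. Numbers are illustrative.

p_chickens_conscious = 0.10        # the low-end credence conceded above
relative_ce_if_conscious = 100.0   # hypothetical: campaigns vs global health, given consciousness
relative_ce_if_not = 0.0           # assume campaigns are worthless if chickens aren't conscious

expected_relative_ce = (p_chickens_conscious * relative_ce_if_conscious
                        + (1 - p_chickens_conscious) * relative_ce_if_not)
print(f"Expected cost-effectiveness vs global health: {expected_relative_ce:.0f}x")
# -> 10x: even at 10% credence, campaigns still look competitive in expectation
```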
I think it's fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures as humans', which are affected the same way by drugs, painkillers, and social interaction, and pigs act in all of the same ways humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig for some of that time.
Apart from that purely intuitive prior, while I'm not a consciousness expert at all, the New York Declaration on Animal Consciousness says that "there is strong scientific support for attributions of conscious experience to other mammals and to birds". Rethink Priorities' and Luke Muehlhauser's work for Open Phil corroborate that. So Yud's view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel like Yud's Facebook post needed to clear a very high burden of proof to convince me. Instead, it seems like he just kept explaining what his model (a higher-order theory of consciousness) implies without actually justifying the model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no moral uncertainty, and made no real attempt to justify them. So I didn't find anything about his Facebook post convincing.
To me, the strongest reason to believe that animals don't count at all is that smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven't read anything remotely convincing that justifies that view on the merits. That's why I didn't even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
I don't think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, aversion to worst-case scenarios, and ambiguity aversion.
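To spell out the math behind that example: each gamble has positive expected value on its own, yet the probability of keeping anything after repeatedly accepting it goes to zero. A minimal sketch:

```python
# Each bet: 51% chance of doubling the universe's value, 49% chance of losing everything.
p_win = 0.51

ev_single = p_win * 2 + (1 - p_win) * 0        # = 1.02 > 1, so pure EV says "take it"
print(f"EV of one gamble: {ev_single:.2f}x current value")

for n in (1, 10, 100, 1000):
    p_survive = p_win ** n                      # must win every round to keep anything
    print(f"after {n:>4} gambles: P(universe survives) = {p_survive:.3g}, "
          f"EV = {ev_single**n:.3g}x")
# Survival probability -> 0 even as the expected value explodes, which is why
# many people reject pure EV maximization here.
```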
I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.
So long as small orgs apply to large grantmakers like OP, and so long as one is confident that OP is trying to maximize expected value, I'd actually expect OP's full-time staff to generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.