William McAuliffe

Joined Sep 2021


Senior Research Manager at Rethink Priorities.


Thanks for this passionate post. Because of it I also donated to the White Helmets. For other readers I would suggest also considering Doctors Without Borders, but in general I have a lot of uncertainty about how best to help.

Thanks for your work on this, Tessa! I have some similar follow-up questions:

"Thanks for your support and comment. Unfortunately, it appears as though the environmental permitting regarding this specific farm is being allowed to proceed."

To clarify, do you mean that Nueva Pescanova has in fact received its environmental permit?

"How likely do you think it is that the farm will succeed in creating a commercially viable product, apart from public pressure?  Sounds like there are significant biological and ecological barriers."

I am also interested in ALI's take on this. Nueva Pescanova claims it will be able to raise 3k tonnes of farmed octopus starting in 2023. Has ALI been able to verify that this scale is actually feasible right now?

Finally, is an outright, blanket ban on octopus farming legally possible in Spain or the EU (or even narrowly within the Canary Islands)? Or is a "ban" shorthand for "convince legislators that, in practice, octopus farming won't meet existing minimal environmental and animal welfare standards"? And what existing farmed-animal welfare standards could be invoked, given that octopuses are invertebrates, not vertebrates?

Agreed re: other measures of well-being.

I think the standard approach here would be a two-way fixed effects model with whatever time-varying covariates you can get access to. It makes strong assumptions, though.
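As a minimal illustration of what such a model estimates, here is a sketch of the two-way (unit and time) demeaning on simulated panel data; all variable names and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 200, 10
unit_fe = rng.normal(size=n_units)[:, None]          # time-invariant unit effects
time_fe = rng.normal(size=n_periods)[None, :]        # common period shocks
# Covariate deliberately correlated with the unit effects,
# the confounding that fixed effects are meant to absorb:
x = rng.normal(size=(n_units, n_periods)) + unit_fe
beta_true = 0.5
y = beta_true * x + unit_fe + time_fe + rng.normal(scale=0.5, size=(n_units, n_periods))

def twoway_demean(m):
    """Subtract unit means and period means (adding back the grand mean)."""
    return m - m.mean(axis=1, keepdims=True) - m.mean(axis=0, keepdims=True) + m.mean()

xd, yd = twoway_demean(x), twoway_demean(y)
beta_hat = (xd * yd).sum() / (xd ** 2).sum()         # OLS slope on demeaned data
```

The estimate recovers the true coefficient despite the covariate being correlated with the unit effects, but only under the strong assumptions mentioned above (e.g., no time-varying confounders, homogeneous effects, strict exogeneity).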


The cutting edge here is probably the general cross-lagged panel model, which, in the tutorial below, could not distinguish the long-run effect of national income on national well-being from zero.


I do not know of such evidence. I would note, though, that the Gallup World Poll data suggest that the very down-and-out are the exception to the "most people are happy most of the time" generalization.

"We can’t decisively rule out scale shifts. Obviously, it’s in the nature of something subjective that we can’t objectively check it. As with assessing validity, we have to reason about what’s likely - the fact something is possible doesn’t mean it's likely."

There is an objective way to test for scale shifts across countries and over time: measurement invariance testing (or testing for "differential item functioning," to use the item response theory vocabulary). That said, I agree that "we have to reason about what's likely," since the test makes strong statistical and substantive assumptions.
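As a rough illustration of the logic, here is a simulated differential item functioning check (a simple rest-score regression variant, not a full invariance test); the data, item counts, and cutoffs are all hypothetical:

```python
import numpy as np

# Simulation: 10 items measure one trait; item 0 has a shifted intercept
# (a "scale shift") in group B, even though the trait distribution is
# identical in both groups.
rng = np.random.default_rng(1)
n, k, shift = 5000, 10, 1.0
group = np.repeat([0, 1], n // 2)
theta = rng.normal(size=n)                       # same latent trait in both groups
items = theta[:, None] + rng.normal(scale=0.7, size=(n, k))
items[:, 0] += shift * group                     # DIF: item 0 shifted for group B

def dif_coef(j):
    """Group coefficient from regressing item j on its rest-score and group."""
    rest = items.sum(axis=1) - items[:, j]       # rest-score excludes the studied item
    X = np.column_stack([np.ones(n), rest, group])
    b, *_ = np.linalg.lstsq(X, items[:, j], rcond=None)
    return b[2]                                  # near zero if the item is invariant

biased_coef, clean_coef = dif_coef(0), dif_coef(1)
```

The shifted item shows a large group coefficient while a clean item's stays near zero, though note that contamination of the rest-score by the biased item induces a small spurious coefficient on clean items, one example of the strong assumptions such tests lean on.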

There is evidence that subjective well-being measures violate invariance across countries. I do not know, off the top of my head, of studies examining the invariance of well-being measures across generations, but there is evidence that scale shifts do occur across generations for other traits, such as narcissism.

"The other part of the explanation appeals to hedonic adaptation and the fact that we do get used to lots of things"

Without taking a stance on the broader thesis of this report, I think the evidence for hedonic adaptation is easy to overstate. Latent state-trait models show that changes in circumstances have detectable effects on well-being at least 10 years later, especially for affective measures. Winning the lottery also has long-term effects on well-being (though more so on life satisfaction), contra the Brickman study that played a role in popularizing hedonic adaptation.

More supportive of your thesis is evidence that the relationship between wealth and happiness is mostly driven by stable factors.

"Also in this context, the research by my own organisation, the Happier Lives Institute, finds that cash transfers to the very poor — those on the global poverty line — actually do have a small but significant effect on subjective wellbeing, one that continues over several years (McGuire, Kaiser, Bach-Mortensen, 2022)."

I would suggest also citing the evidence that this result may be an artifact of publication bias.

"More generally, I think what underlies these ideas of using lower salaries as a costly signal of value-alignment is the tacit assumption that value-alignment is a relatively cohesive, unidimensional trait. But I think that assumption isn't quite right - as stated, our factor analyses rather suggested there are two core psychological traits defining positive inclinations to effective altruism (expansive altruism and effectiveness-focus), which aren't that strongly related. (And I wouldn't be surprised if we found further sub-facets if we did more extensive research on this.)"

I agree with the last sentence of this -- there are probably at least as many sub-facets as there are distinct tenets of effective altruism, and only when most or all of them come together in the same person is someone value-aligned. Two facets is too few, and, echoing David, I do not think that the effectiveness-focus and expansive altruism measures are valid measures of actual psychological constructs (though these constructs may nevertheless exist). My view is that these measures should either be used only for prediction or be reconstructed from scratch.

I am less sure about the final part of the following:

"I think it's better for EA recruiters to try to gauge, e.g. inclinations towards cause-neutrality, willingness to overcome motivated reasoning, and other important effective altruist traits, directly, rather than to try to infer them via their willingness to accept a lower salary - since those inferences will typically not have a high degree of accuracy."

This depends, I think, on how difficult it is to ape effective altruism. As effective altruism becomes more popular and more materials become available for figuring out the sorts of things walking-talking EAs say and think, I would speculate that aping effective altruism becomes easier. In that case, if you care about selecting for alignment, a willingness to take on a lower salary could be an important independent source of complementary evidence.

"In an intermediate step, we removed items that were not predictive of relevant EA outcome measures (these outcome measures include overall agreement with the key principles of EA, interest in learning more about EA, willingness to change one's career in order to have more impact)."

"Notably, we found that some items with an obligation framing (e.g., “it would be wrong ...”) yielded high variance and were particularly predictive. Remember that our goal was not to create items that together define effective altruism conceptually or philosophically, but rather to identify items that are predictive of our outcome measures. Thus, our inclusion of these obligation items does not mean that we think that effective altruism should be defined in terms of an obligation to help effectively."

I think prioritizing items with predictive power is questionable from a psychometric perspective. Predictive power should not be the goal of measurement (McIntosh, 2007), and latent indicators should not be preferentially chosen because they optimize prediction (Smits et al., 2018). A valid indicator of a construct does not necessarily have good predictive power and invalid indicators often do. Even though you are not conceptually defining Effective Altruism, the indicators should conceptually match your theoretical beliefs about Expansive Altruism and Effectiveness-Focus. If feelings of obligation are constitutive of those constructs, great, but if not then you might want to toss them even if they have predictive value.  (Though having a single item with an obligation focus would not be a bad idea, as the variance attributable to the obligation framing would be partialed out anyway when you load the item on the factor.)

This procedure risks attributing more predictive power to Expansive Altruism and Effectiveness-Focus than would be found with optimal indicators for these constructs.  Accordingly, measurement validity can decrease, and the latent variables may be endogenous with respect to the outcome variables that you optimized the factor to predict. One potential source of evidence for this would be implausibly large associations between the factors and the outcomes. Other sources of evidence would include the Effectiveness and Expansiveness factors correlating with the disturbances of the outcome variables, and the Effectiveness and Expansiveness factors' disturbances correlating with the common variance of the outcome measures. Using Study 2 data, I found evidence consistent with each of these possibilities by regressing a latent variable composed of the charitable giving tasks on the Expansive and Effectiveness factors (model and modification indices), as well as regressing a latent variable composed of the EA-interest items on the Expansive and Effectiveness factors (model and modification indices).  (I also wrote up a more expository walk-through of these analyses here.)

A stronger way of testing whether predictive power has been overstated would be to examine the predictive validity of measures of similar constructs. The principle of care scale, for instance, could stand in for the Expansive Altruism scale.

If I were Reviewer 2, I would probably also nitpick on these much more minor issues:

"α = 0.80, CFI = .97." 

I would recommend reporting McDonald's omega instead of Cronbach's alpha unless you provide evidence that the factor loadings are all equal or nearly so. Even more emphatically, I would recommend reporting the chi-square test instead of, or in addition to, CFI. No approximate fit index can (or ever could) measure how much discrepancy there is between the fitted model and the true model (Greiff & Heene, 2017); local fit indices and careful judgment are the only real guides, though fallible in their own ways. Note also that the models I linked to above find much worse fit by all metrics, probably because I embedded the factors in a larger model. Testing the fit of a factor in isolation from the other variables is a weak test, because in practice those factors will be entered into models with other variables (Hayduk & Glaser, 2000; e.g., the Expansive Altruism and Effectiveness-Focus factors will often be entered into the same model in future studies).
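To illustrate the alpha-versus-omega point, here is a population-level sketch with hypothetical unequal loadings, showing that alpha understates reliability when the tau-equivalence assumption (equal loadings) is violated:

```python
import numpy as np

# Hypothetical standardized loadings; clearly not tau-equivalent.
loadings = np.array([0.9, 0.7, 0.5, 0.3])
uniq = 1 - loadings ** 2                    # standardized uniquenesses
sigma = np.outer(loadings, loadings)        # model-implied covariance matrix
np.fill_diagonal(sigma, 1.0)                # item variances = loading^2 + uniqueness = 1

k = len(loadings)
total_var = sigma.sum()
# Cronbach's alpha from the implied covariance matrix:
alpha = k / (k - 1) * (1 - np.trace(sigma) / total_var)
# McDonald's omega from the loadings and uniquenesses:
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniq.sum())
```

With these loadings, alpha comes out around .68 against an omega of about .71; the gap grows as the loadings become more unequal, which is why alpha needs the (near-)equal-loadings justification.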

"We believe that expansive altruism is a particular type of altruism. In contrast to other forms of altruism, expansive altruism is driven by a desire to not just help proximate and emotionally salient individuals, but also distant individuals that may be less emotionally salient. A similar type of distinction is made by psychologist Paul Bloom in his book Against Empathy (2017), in which he contrasts empathy-driven altruism with, what he refers to as, “rational compassion”. Our studies support the notion that such a form of rational compassion exists. However, our studies don’t suggest that this sort of compassion entails a lack of empathy. To the contrary, we find that expansive altruism strongly correlates with empathy (i.e., the trait “Empathic Concern”; see below)."

Another good source that I sense is sometimes overlooked is Hoffman. There is also closely related older work on the universal care ethic, such as in The Altruistic Personality.  One thing I am currently unclear about with the Expansive Altruism scale is whether it measures the belief that one ought to care about distant others (which might or might not cause a desire to help) or an actual desire to help (which may or may not be caused by a belief that one ought to help).

"And notably effectiveness-focus and instrumental harm are also psychologically different: they correlated only weakly (r = .34; see below)."

"Only weakly" is probably too strong. This correlation is large for individual differences research (Gignac et al., 2016). 


Huemer's book is what convinced me to go vegan after being a vegetarian for a number of years. I like his approach of putting general normative frameworks to the side. My only complaint is that, by downplaying the probability of invertebrate sentience and the ability to help wild animals, he has made the ethical landscape appear less complex than it really is. I also appreciate his (largely unsuccessful) attempts to engage libertarians on this issue, who often focus arbitrarily on state coercion rather than all coercion.

Chekhov's The Petcheneg is a delight that touches on vegetarianism, farmed animal welfare, wild animal welfare, and moral cluelessness, all just in service of character and plot development...Link: https://www.ibiblio.org/eldritch/ac/petcheneg.htm
