Nice work! Many good hopes in there, but it's hard to compete with "make furries real".
I'm confused. What are you trying to say here? You linked a proposal to prioritize violence against women and girls as an EA cause area (which I assume you don't object to?) and a tweet by some person unknown to me saying that critics of EA hold it to a standard they don't apply to feminism (which probably depends a lot on what kind of critics, and on their political background in particular). What do you expect the readers to learn from this or do about it?
Thanks so much for replying, I learned a lot from your response and its clarity helped me update my thinking.
You're very welcome, I'm glad it was useful!
I would expect these to be exceptions rather than norms (because if, e.g., wanting to have a career were the norm, then over enough time that would tend to become culturally normative, and even while it was becoming a more normative view, the difference with an SWB measure should diminish).
I'm much more pessimistic. The processes that determine what is culturally normative are complicated, there are many examples of norms that discriminate against certain groups or curtail freedoms lasting over time, and if you're optimizing for the near future then "over enough time" is not a satisfactory solution.
I suppose I'm also thinking about the potential difference between specific SWB scales. Something like the SWLS scale or the single-item measures would not be very domain-specific, but scales based around e.g. the Wheel of Life tradition tell you a lot more about different facets of your life (e.g. you can see a high overall score but a low score for job satisfaction), so it seems to me that with the right scales and enough items you can address cultural or other variance even further.
I don't know how those scales work, but (as I wrote in my reply to Joel), I would be much more optimistic about scales that are relative, i.e. that ask you to compare your well-being in situation A to situation B (whether these situations are familiar or hypothetical), rather than absolute (in which case it's not clear what the reference frame is).
What I was unable to articulate well is that your individual preferences are not stable (or I suppose I mean: within a person, rather than across people), i.e. Alice when she has $5 will exchange a different amount of free time for an extra $1 than when Alice has $10.
This is considered a consistent preference in standard (VNM) decision theory. It is entirely consistent that U($6 and X free time) > U($5 and Y free time) but U($11 and X free time) < U($10 and Y free time).
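To make this concrete, here is a small sketch with a made-up utility function (the specific function and numbers are my own illustration, not anything from the discussion above): a single fixed VNM utility function with diminishing marginal utility of money produces exactly this pattern, because the rate at which you trade free time for money depends on current wealth.

```python
import math

# Hypothetical utility function, chosen only for illustration:
# diminishing marginal utility of money, linear utility of free time.
def u(money, free_time):
    return math.log(money) + free_time

X, Y = 1.0, 1.15  # hours of free time; Y > X (values are arbitrary)

# At low wealth, the extra $1 is preferred over the extra free time...
assert u(6, X) > u(5, Y)

# ...but at higher wealth, the same fixed utility function prefers the
# extra free time. No inconsistency: it is one utility function throughout.
assert u(11, X) < u(10, Y)
```

The point is that "Alice trades time for money differently at $5 than at $10" is a statement about marginal rates of substitution, which VNM preferences are free to vary with wealth.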
Thank you for the informative reply!
I think there's a big difference between asking people to rate their present life satisfaction and asking people what would make them more satisfied with their life. The latter is a comparison: either between several options or between future and present, depending on the phrasing of the questions. In a comparison it makes sense people report their relative preferences. On the other hand, the former is in some ill-posed reference frame. So I would be much more optimistic about a variant of WELLBY based on the former than on the latter.
I think the fact that SWB measures differ across cultures is actually a good sign that these measures capture what they are supposed to capture... In fact, I would be more concerned if different people with different views and circumstances did not, as you say, 'differ substantially.'
My claim is not "SWB is empirically different between cultures, therefore SWB is bad". My claim is: I suspect that cultural factors cause people to choose different numbers for reasons orthogonal to what they actually want. For example, maybe Alice wants to be a career woman instead of her current role as a housewife (and would make choices to this effect if she had an opportunity), but she reports high life satisfaction because she feels that is expected of her (and it's not like reporting a low number would help her). Or, maybe people in Fooland consistently report higher life satisfaction than people in Baristan (because they have lower expectations of how life should be), but nobody from Baristan wants to move to Fooland and everyone from Fooland wants to move to Baristan if they can (because life is actually better in Baristan).
I think these differences, attributable to culture or individual variance, are not likely to be of concern for what I would imagine would be the more common ways WELLBYs could be used. Most cost-effectiveness analyses rely on RCTs or comparable designs with pre- and post-measures.
I agree that directly comparing "pre" to "post" SWB might work okay for many interventions, because the intervention doesn't affect the confounding factors, as long as you're comparing different interventions applied to similar populations. I would still rely more on asking people directly how much this intervention helped them / how much their life improved over this period (as opposed to comparing numbers reported at different points in time). And, we should still be vigilant about situations in which the confounders cannot be ignored (e.g. interventions that cause cultural shifts). And, there might be a non-linear relationship between SWB and decision-utility, which should somehow be accounted for if we are averaging these numbers.
In my reading, there's a long body of research suggesting these are stable, yet in practice your 'revealed' preference at $5 is likely to be different than at $10.
I'm guessing you are not talking about things like how much free time you would exchange for an additional $1? Because that's consistent with constant preferences. So, Alice has $5 and Bob has $10, they are asked to choose between X and Y, and they have predictably different preferences, despite the fact that post-X-Alice has the same wealth (and other circumstances) as post-X-Bob, and the same for Y? And this despite somehow controlling for confounders that are correlated both with the causes of Alice's and Bob's wealth and with their preferences?
I imagine such things can happen, in which case I would try to add hindsight judgements and judgements of people who experienced different circumstances into the mix. I expect that as people become more informed and experienced they roughly converge to some stable set of preferences, and the tradeoffs that don't converge are not really important. If I'm wrong and they are important, then we need to use the revealed preferences of people in those particular circumstances (which, yes, might include SWB, might also include other parameters).
Even under optimistic assumptions about SWB, this seems less noisy. Under pessimistic assumptions, I can imagine e.g. people implicitly interpreting the question as comparing their life to their neighbors' (who were also affected by the intervention) or comparing their life now to their life in the past (which was still after the intervention), in which case SWB has no signal at all.
I don't know much about supplements/bednets, but AFAIU there are some economies-of-scale issues which make it easier for e.g. AMF to supply bednets compared with individuals buying bednets for themselves.
As to how to predict "decision utility when well informed", one method I can think of is to look at people who have been selected for being well-informed while being similar to the target recipients in other respects.
But, I don't at all claim that I know how to do it right, or even that life satisfaction polls are useless. I'm just saying that I would feel better about research grounded in (what I see as) more solid starting assumptions, which might lead to using life satisfaction polls or to something else entirely (or a combination of both).
Suppose I'm the intended recipient of a philanthropic intervention by an organization called MaxGood. They are considering two possible interventions: A and B. If MaxGood choose according to "decision utility" then the result is equivalent to letting me choose, assuming that I am well-informed about the consequences. In particular, if it was in my power to decide according to what measure they choose their intervention, I would definitely choose decision-utility. Indeed, making MaxGood choose according to decision-utility is guaranteed to be the best choice according to decision-utility, assuming MaxGood are at least as well informed about things as I am, and by definition I'm making my choices according to decision-utility.
On the other hand, letting MaxGood choose according to my answer on a poll is... Well, if I knew how the poll is used when answering it, I could use it to achieve the same effect. But in practice, this is not the context in which people answer those polls (even if they know the poll is used for philanthropy, this philanthropy usually doesn't target them personally, and even if it did individual answers would have tiny influence). Therefore, the result might be what I actually want or it might be e.g. choosing an intervention which will influence society in a direction that makes putting higher numbers culturally expected or will lower the baseline expectations w.r.t. which I'm implicitly calculating this number.
Another issue with polls is, how do we know the answer is utility rather than some monotonic function of utility? The difference is important if we need to compute expectations. But this is the least of the problems IMO.
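A toy sketch of why a monotonic transform matters for expectations (the transform and the numbers are invented for illustration): if reports are a concave function of utility, averaging reports can rank a safe intervention above a risky one even when the risky one has strictly higher expected utility.

```python
import math

# Invented for illustration: true decision-utility U is reported through a
# concave "satisfaction scale" f(U) = sqrt(U). The poll only sees f(U).
def report(u):
    return math.sqrt(u)

def mean(xs):
    return sum(xs) / len(xs)

# Intervention A: everyone ends up at utility 4 (a safe bet).
# Intervention B: half end up at 0, half at 9 (risky, higher expected utility).
utility_a = [4, 4]
utility_b = [0, 9]

# B is better in expected utility (4.0 < 4.5)...
assert mean(utility_a) < mean(utility_b)

# ...but the averaged poll reports rank A higher (2.0 > 1.5),
# because the concave transform penalizes the spread.
assert mean([report(u) for u in utility_a]) > mean([report(u) for u in utility_b])
```

So even a perfectly honest monotonic reporting scale preserves each individual's ordering of outcomes, yet can reverse which intervention looks better once we average across people or across uncertainty.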
Now, in reality it is not in the recipient's power to decide on that measure. Hence MaxGood are free to decide in some other way. But, if your philanthropy is explicitly going against what the recipient would choose for themself, well... From my perspective (as Vanessa this time), this is not even altruism anymore. This is imposing your own preferences on other people.
A similar situation arises in voting, and I indeed believe this causes people to vote in ways other than optimizing the governance of the country (specifically, vote according to tribal signalling considerations instead).
Although in practice, many interventions have limited predictable influence on these kinds of factors, which might mean that poll-based measures are usually fine. It might still be difficult to see the signal through the noise in this measure. And, we need to be vigilant about interventions that don't fall into this class.
It is ofc absolutely fine if e.g. MaxGood are using a poll-based measure because they believe, with rational justification, that in practice this is the best way to maximize the recipient's decision-utility.
I'm ignoring animals in this entire analysis, but this doesn't matter much since the poll methodology is inapplicable to animals anyway.
I am skeptical of using answers to questions such as "how satisfied are you with your life?" as a measure of human preferences. I suspect that the meaning of the answer might differ substantially between people in different cultures and/or be normalized w.r.t. some complicated implicit baseline, such as what a person thinks they should "expect" or "deserve". I would be more optimistic about measurements based on revealed preferences, i.e. what people actually choose given several options when they are well-informed, or what people think of their past choices in hindsight (or at least what they say they would choose in hypothetical situations, but this is less reliable).
[I'm assuming that something like preference utilitarianism is a reasonable model of our goal here, I do realize some people might disagree but didn't want to dive into those weeds just yet.]
(I only skimmed the article, so my apologies if this was addressed somewhere and I missed it.)
My spouse and I are both heavily involved with EA, but we nevertheless have significant differences in our philosophies. My spouse's world view is pretty much a central example of EA: impartiality, utilitarianism et cetera. On the other hand, I assign far greater weight to helping people who are close to me compared to helping random strangers. Importantly, we know that we have value differences, we accept it, and we are consciously working towards solutions that are aimed to benefit both of our value systems, with some fair balance between the two. This is also reflected in our marriage vows.
I think that the critical thing is that your SO accepts that:
If your SO cannot concede that much, it's a problem IMO. A healthy relationship is built on a commitment to each other, not on a commitment to some abstract philosophy. Philosophies enter it only inasmuch as they are important to each of you.
That said, I also accept considerations of the form "help X (at considerable cost) if they would have used similar reasoning to decide whether to help you if the roles were reversed".
By Scott Garrabrant et al:
By John Wentworth