
CasparKaiser

94 karma · Joined Nov 2020 · Working (0-5 years) · sites.google.com/view/casparkaiser

Bio

Assistant Professor at Tilburg University.

Research Affiliate at the Oxford Wellbeing Research Centre and the Institute for New Economic Thinking.

Trustee at Happier Lives Institute.

Comments (4)

Hi geoffrey!

Yes, you are right. 

All of the methods we are currently thinking of require that, for all respondents i and j, the top response threshold for person i must be at least as large as the bottom response threshold for person j.

However, with the vignettes, I believe that this is in part testable. 
Suppose that for a given vignette no person selected the top response category and no person selected the bottom response category. Additionally, suppose that the assumptions in section 4.1.1 of the report hold (i.e., that people perceive vignettes similarly, and use the same scale for their own wellbeing as for the vignettes). In that case, the vignette's wellbeing level lies strictly between every respondent's bottom and top thresholds, so all respondents' scales must have at least some overlap with each other.

We have not checked this, though I imagine that it would show overlap of scales. Would this kind of test convince you?
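To make this concrete, here is a minimal sketch of what such a check could look like, assuming a long-format dataset with hypothetical columns `vignette_id` and `response` on a 0-10 scale; the actual survey data and column names will of course differ.

```python
import pandas as pd

def vignette_overlap_check(df: pd.DataFrame, scale_min: int = 0, scale_max: int = 10) -> pd.DataFrame:
    """For each vignette, check whether anyone used the top or bottom category.

    If, for some vignette, nobody chose scale_min and nobody chose scale_max,
    then (under the assumptions in section 4.1.1) that vignette's wellbeing
    level lies inside every respondent's scale, so all scales overlap.
    """
    grouped = df.groupby("vignette_id")["response"]
    summary = pd.DataFrame({
        "anyone_at_bottom": grouped.min() == scale_min,
        "anyone_at_top": grouped.max() == scale_max,
    })
    summary["implies_overlap"] = ~(summary["anyone_at_bottom"] | summary["anyone_at_top"])
    return summary

# Hypothetical usage:
# df = pd.read_csv("vignette_responses.csv")  # columns: respondent_id, vignette_id, response
# print(vignette_overlap_check(df))
```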

As an aside, in section 4.6.1 we show that almost all respondents choose either “The most/least satisfied that any human could possibly be” or “The most/least satisfied that you personally think you could become” as the endpoints of the scale. Since the latter set of endpoints is contained within the former, this evidence also seems to suggest that scales overlap.

Hey!

Isn't the variance 0? Since mean(x)/mean(y) is a number and not a distribution?

No, I don't think that's correct. I take it that with "mean(x)" and "mean(y)" you mean the sample averages of x and y. In that case, these means will have variances equal to Var(x)/n_x and Var(y)/n_y, where n_x and n_y are the sample sizes. Consequently, the ratio of mean(x) and mean(y) will also have a variance. See here and here.
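For intuition, here is a small sketch (not from our report, and with made-up data) showing that the ratio of two sample means has a nonzero sampling variance, using two standard tools: a delta-method approximation and a bootstrap.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=500)   # hypothetical sample for x
y = rng.normal(3.0, 1.5, size=500)   # hypothetical sample for y

mx, my = x.mean(), y.mean()
var_mx = x.var(ddof=1) / len(x)      # variance of the sample mean of x
var_my = y.var(ddof=1) / len(y)      # variance of the sample mean of y

# Delta-method approximation for independent samples:
# Var(mean(x)/mean(y)) ≈ (mx/my)^2 * (var_mx/mx^2 + var_my/my^2)
delta_var = (mx / my) ** 2 * (var_mx / mx**2 + var_my / my**2)

# Bootstrap: resample each group, recompute the ratio, and look at its spread.
ratios = [
    rng.choice(x, size=len(x)).mean() / rng.choice(y, size=len(y)).mean()
    for _ in range(2000)
]

print(f"delta-method variance of the ratio: {delta_var:.5f}")
print(f"bootstrap variance of the ratio:    {np.var(ratios, ddof=1):.5f}")
```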

That's a great report! Three sets of questions:

1) We sometimes distinguish between "experienced utility" and "decision utility" and we know that the two sometimes diverge. Do you know of experiments that tried to explain discrepancies between choice behaviour and reported happiness with affective forecasting errors? Less ambitiously, how much work is there showing that the presence of these biases predicts choice?

2) If a large part of the discrepancies between choice and experienced wellbeing is driven by affective forecasting errors, I should be extremely motivated to become better at affective forecasting. How can I become better at affective forecasting?

3) It seems like the "future anhedonia" bias and the "intensity" bias go in opposite directions. When is each more likely to be operating?

(Disclosure: I’m the author of the second linked paper, board member of HLI, and a collaborator on some of its research.)

Hi Michael! 

In my paper on scale use, I generally find that people who become more satisfied tend to also become more stringent in the way they report their satisfaction (i.e., for a given satisfaction level, they report a lower number). As a consequence, effects tend to be underestimated. 

If effects are underestimated by the same amount across different variables/treatments, scale norming is not an issue (apart from costing us statistical power). However, in the context of this post, if (say) the change in reporting behaviour is stronger for cash-transfers than for psychotherapy, then cash-transfers will seem relatively less cost-effective than psychotherapy.
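A toy numerical sketch of that worry (the numbers below are made up purely for illustration): even if two interventions have identical true effects, a larger shift in reporting stringency for one of them makes its reported effect, and hence its apparent cost-effectiveness, look smaller.

```python
# Hypothetical illustration: identical true wellbeing gains, but reporting
# becomes more stringent after cash transfers than after psychotherapy.
true_gain = {"cash_transfer": 1.0, "psychotherapy": 1.0}
# Drop in the reported number for a fixed true wellbeing level:
norming_shift = {"cash_transfer": 0.5, "psychotherapy": 0.1}

for treatment in true_gain:
    reported = true_gain[treatment] - norming_shift[treatment]
    print(f"{treatment}: true gain = {true_gain[treatment]:.1f}, "
          f"reported gain = {reported:.1f}")

# Both interventions are equally effective, but the cash transfer's reported
# gain (0.5) is only about half of psychotherapy's (0.9), so it looks much
# less cost-effective on the reported scale.
```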

To assess whether this is indeed a problem, we’d either need data on so-called vignettes (link), or people’s assessment of their past wellbeing. Unfortunately, as far as I know, this data does not currently exist. 

That being said, in my paper (which is based on a sample from the UK), I find that accounting for changes in scale use does not, compared to the other included variables, result in statistically significantly larger associations between income and satisfaction.