
Ordinal numbers are ranked. In contrast, cardinal numbers are equidistant. So you can do mathematical operations on cardinal numbers, but not ordinal, because 15th place isn’t 5 times better (or worse) than 3rd place. 

The ordinal critique of effective altruism centres on interpersonal comparisons. It claims that a dollar, or an intervention that increases the subjective well-being of a donation recipient, cannot be said to generate more utility for the recipient than it would in your pocket, because utility is ordinal, not cardinal.

But isn’t it obvious that a starving person in the developing world gets more utility from a hotdog than a bodybuilder in the developed world does? Perhaps, but that is still only an ordinal comparison. Asking how many hotdog-fed bodybuilders would amount to the same utility as one starving person getting a hot dog is a mathematical operation: it requires cardinal utility, and under an ordinal theory of value it is nonsensical. You can substitute charities for persons here.
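Written out explicitly (my notation, not the post's), the equivalence question asks for the N that solves:

```latex
N \cdot \bigl(u(\text{bodybuilder fed}) - u(\text{bodybuilder unfed})\bigr)
  = u(\text{starving person fed}) - u(\text{starving person unfed})
```

Solving for N requires utility differences to be meaningful, comparable magnitudes; an ordinal scale supplies only the ranking, not the differences.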

The case against the ordinal argument

In 1947, the game theorists von Neumann and Morgenstern published their expected utility theory, showing that an individual's ordinal rankings of risky options, provided they satisfy certain axioms, can be represented by a cardinal utility function anyway.
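As a concrete illustration (a minimal sketch of the standard-gamble construction; the function name and the example agent below are hypothetical, not from the original post): fix the worst outcome at utility 0 and the best at 1, and the utility of any intermediate outcome is the probability p at which the agent is indifferent between that outcome for certain and a p-chance of the best outcome.

```python
def vnm_utility(indifference_probability: float) -> float:
    """Standard-gamble elicitation: if an agent is indifferent between
    outcome B for certain and a lottery giving the best outcome with
    probability p (worst otherwise), then u(B) = p on a 0-1 scale."""
    assert 0.0 <= indifference_probability <= 1.0
    return indifference_probability

# Hypothetical agent: indifferent between "$50 for sure" and a 70% chance
# of $100 (otherwise $0). Calibrate u($0) = 0 and u($100) = 1 by convention.
u_worst, u_best = 0.0, 1.0
u_fifty = vnm_utility(0.70)   # recovered cardinal value: 0.70
print(u_worst, u_fifty, u_best)
```

The recovered scale is unique only up to a positive affine transformation a*u + b with a > 0, so ratios of utilities remain meaningless; only ratios of utility differences carry information.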

Steelmanning the ordinal critique

The 'Allais paradox' reveals that people often behave in ways inconsistent with the von Neumann-Morgenstern axioms, so cardinal numbers can't be assigned to their ordinal preferences.
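To make the inconsistency concrete, here is a small sketch (my own illustration, using the classic Allais payoffs in millions of dollars): a brute-force search shows that no cardinal utility assignment rationalises the commonly observed pair of choices.

```python
def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * u[outcome] for p, outcome in lottery)

# The classic Allais gambles (outcomes in $ millions).
g1a = [(1.00, 1)]                         # $1M for certain
g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]   # commonly dispreferred to g1a
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]              # commonly preferred to g2a

# Normalise u(0) = 0 and u(5) = 1, then sweep u(1) looking for any value
# consistent with both observed preferences (1A over 1B, 2B over 2A).
for i in range(1, 100):
    u = {0: 0.0, 1: i / 100, 5: 1.0}
    if (expected_utility(g1a, u) > expected_utility(g1b, u)
            and expected_utility(g2b, u) > expected_utility(g2a, u)):
        print("consistent u(1M) =", u[1])
        break
else:
    print("no utility assignment rationalises both choices")
```

The search always falls through to the else branch: preferring 1A forces u(1M) > 10/11, while preferring 2B forces u(1M) < 10/11.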

Temperature on thermometers can be viewed as cardinal because molecules are doing their thing and there is an absolute zero to which we can calibrate the scale. Can we say the same for any particular measure of utility used by the effective altruism movement? How bad is torture? How bad is death? How valuable is a human life compared with that of an animal, a friend, an effective altruist, or a person with a disability?
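The distinction in play is between an interval scale (Celsius: equal differences are meaningful, but the zero point is arbitrary) and a ratio scale (Kelvin: there is a true zero, so ratios mean something). A minimal sketch of why ratio claims break on interval scales:

```python
def c_to_k(celsius: float) -> float:
    """Convert Celsius (interval scale) to Kelvin (ratio scale)."""
    return celsius + 273.15

# "20 C is twice as hot as 10 C" is not a meaningful claim:
print(20 / 10)                    # 2.0 on the Celsius scale...
print(c_to_k(20) / c_to_k(10))    # ...but about 1.035 on the Kelvin scale
```

The question for effective altruism is whether any welfare measure has a non-arbitrary zero to play the role absolute zero plays for temperature.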

Comments

"Cardinal" and "Ordinal" denote a certain extremely crude way in economics in which different utility functions can still be compared in certain cases. They gesture at a very important issue in EA which everybody who thinks about it encounters: that different people (/different philosophies) have different ideas of the good, which correspond to different utility functions.

But the two terms come from math and are essentially only useful in theoretical arguments. In practical applications they are extremely weak, to the point of being essentially meaningless in any specific case - using them in a real-world analysis is like trying to do engineering by assuming every object is either a cube or a sphere. The real world is much more detailed, and a key assumption of people who do EA-style work, which turns out to be correct in most instances, is that most people's models of utility overlap significantly, so that if you do an altruistic action that you think has high utility, most other people will agree with your assessment to some degree, or at least not assign it negative utility.

Of course this breaks in certain situations, especially around zero-sum games like war. I think it would be interesting to analyze how this can break, but I recommend using less jargony terms than ordinal/cardinal (which I think aren't very useful here) and more concrete examples (which can be speculative, but not too far removed from the real world or from the context of altruistic interventions).

The terms come from economics (they were introduced by Pareto, who pioneered the field of microeconomics...)

Edit: I've only now learnt that these are also terms from economics, about which I don't know much; since they're adopted from the mathematical ones, I'm going to assume my comment still has some useful content and leave it up.

Only referring to the first paragraph, because mathematically it's very wrong:

Ordinal numbers are ranked. In contrast, cardinal numbers are equidistant.

Equidistant in what way? How are ordinal numbers not equidistant in the same way? And how is that a contrast to being ranked? Distance and order are unrelated.

So you can do mathematical operations on cardinal numbers, but not ordinal

Sure you can:

https://en.wikipedia.org/wiki/Ordinal_arithmetic
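(For instance, a standard fact from set theory, added here for illustration: ordinal addition is well defined but not commutative.)

```latex
1 + \omega = \omega \quad\text{but}\quad \omega + 1 > \omega
```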

Yes, this is the conventional usage in economics, not its abstraction in mathematics. The crux of my argument is that welfare values are not directly observable.

FWIW, I found Chapter 1 of John Roemer's Theories of Distributive Justice a helpful introduction to some of these scale/measurability issues as they relate to social choice.

I think this is just an equivocation of "utility." Utility in the ethical sense is not identical to the "utility" of von Neumann Morgenstern utility functions.

I don't think so - this is mainstream usage of the term in welfare economics.

I'm confused — welfare economics seems premised on the view that interpersonal comparisons of utility are possible. In any case, ethics =/= economics; comparisons of charity effectiveness aren't assessing interpersonal "utility" in the sense of VNM preferences, they're concerned with "utility" in the sense of e.g. hedonic states, life satisfaction, so-called objective lists, and so on.

To quote John C. Harsanyi: 'From most branches of economics the concept of cardinal utility has been eliminated as redundant, since ordinal utility has been found to suffice for doing the job. Cardinal utility has been kept only in welfare economics to support the demand for a more equal income distribution.'

I would recommend further reading on the ordinal revolution that followed the marginal revolution. The restriction of mainstream economics to ordinal rather than cardinal utility was not arbitrary, and the relative effectiveness of economics compared to ethics should be weighed if one is to be chosen over the other in the context of effective altruism.

In any case, I can see how frosty a reception my ideas have had here, not just in this post and its comments, and I don't feel this is fertile ground for new ideas outside the echo chamber. I don't expect to return, but I believe I get an email if someone sends me a private message, so if anyone wants to reach me, feel free to message me - thanks muchly.
