Ordinal numbers only tell you rank order. Cardinal numbers, in contrast, carry magnitude: the intervals between them are meaningful. That is why you can do arithmetic on cardinal numbers but not on ordinal ones; 15th place isn't 5 times better (or worse) than 3rd place.
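A toy sketch of the distinction, using made-up race results (the names and numbers are purely illustrative):

```python
# Illustrative sketch (made-up numbers): ranks vs. cardinal measurements.
# Finishing positions are ordinal: they only tell us who beat whom.
positions = {"Ann": 3, "Bob": 15}

# Finishing times (in minutes) are cardinal: differences and ratios are meaningful.
times = {"Ann": 42.0, "Bob": 44.5}

# Dividing ranks is arithmetic on labels and carries no information about
# how much better Ann's run was -- 3rd place is not "5x better" than 15th.
print(positions["Bob"] / positions["Ann"])   # 5.0, but meaningless

# With cardinal data the gap is a real quantity: Bob was 2.5 minutes slower.
print(times["Bob"] - times["Ann"])           # 2.5
```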
The ordinal critique of effective altruism centres on interpersonal comparisons. It claims that a dollar given to a donation recipient, or an intervention that raises the recipient's subjective well-being, cannot be said to produce more utility than that dollar staying in your pocket, because utility is ordinal, not cardinal.
But isn't it obvious that a starving person in the developing world gets more utility from a hotdog than a bodybuilder in the developed world does? Perhaps, but that can still be an ordinal comparison. Asking how many bodybuilders fed with hotdogs would be equivalent in utility to one starving person getting a hotdog is a mathematical operation, a cardinal one, and on an ordinal theory of value it is nonsensical. You can substitute charities for persons here.
The case against the ordinal argument
In 1947, the game theorists von Neumann and Morgenstern published their expected utility theory, arguing that an individual's ordinal rankings of risky options, provided those rankings satisfy certain axioms, behave as if they were generated by a cardinal utility function anyway.
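A minimal sketch of how this works, using the standard "probability equivalent" construction; the outcomes and the indifference probability below are assumptions chosen purely for illustration:

```python
# Sketch of the von Neumann-Morgenstern construction (made-up numbers).
# Fix a worst and a best outcome and anchor their utilities at 0 and 1.
u = {"nothing": 0.0, "full meal": 1.0}

# For an intermediate outcome, ask: at what probability p are you indifferent
# between getting it for sure and a lottery giving the best outcome with
# probability p and the worst with probability 1 - p? Under the VNM axioms,
# that indifference probability *is* the outcome's cardinal utility on the
# 0-1 scale, even though only ordinal choices were ever elicited.
indifference_p = {"hotdog": 0.7}   # assumed answer, purely illustrative
u["hotdog"] = indifference_p["hotdog"]

# The resulting numbers now support expected-value arithmetic: a 50/50 gamble
# between nothing and a full meal is worth 0.5, which this agent ranks below
# a guaranteed hotdog (0.7).
lottery_value = 0.5 * u["nothing"] + 0.5 * u["full meal"]
print(lottery_value < u["hotdog"])   # True
```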
Steelmanning the ordinal critique
The "Allais paradox" reveals that people often behave in ways inconsistent with the von Neumann-Morgenstern axioms, so a cardinal utility function cannot, in general, be recovered from their ordinal preferences.
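A small sketch of the paradox, using the standard prize amounts (these figures are not from the text above, just the textbook version of the experiment). It checks numerically that no expected-utility function reproduces the choices most people actually make:

```python
# Sketch of the Allais paradox (standard prize amounts, purely illustrative).
# Gamble 1A: $1M for sure.        Gamble 1B: 10% $5M, 89% $1M, 1% $0.
# Gamble 2A: 11% $1M, 89% $0.     Gamble 2B: 10% $5M, 90% $0.
# Most people pick 1A over 1B and 2B over 2A. Fix u($0) = 0 and u($5M) = 1
# and search over candidate values of u($1M): no value rationalises both choices.

def prefers_1A(u1m):   # expected utility of 1A exceeds that of 1B
    return u1m > 0.10 * 1.0 + 0.89 * u1m + 0.01 * 0.0

def prefers_2B(u1m):   # expected utility of 2B exceeds that of 2A
    return 0.10 * 1.0 + 0.90 * 0.0 > 0.11 * u1m + 0.89 * 0.0

consistent = [p / 1000 for p in range(1001)
              if prefers_1A(p / 1000) and prefers_2B(p / 1000)]
print(consistent)   # [] -- no utility assignment fits the common pattern of choices
```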
Temperature on a thermometer can be viewed as cardinal because it tracks the motion of molecules, and there is an absolute zero against which the scale can be calibrated. Can we say the same of any particular measure of utility used by the effective altruism movement? How bad is torture? How bad is death? How valuable is a human life compared with that of an animal, or a friend, or an effective altruist, or a person with a disability?
"Cardinal" and "Ordinal" denote a certain extremely crude way in economics in which different utility functions can still be compared in certain cases. They gesture at a very important issue in EA which everybody who thinks about it encounters: that different people (/different philosophies) have different ideas of the good, which correspond to different utility functions.
But the two terms come from mathematics and are essentially only useful in theoretical arguments. In practical applications they are so weak as to be essentially meaningless in any specific case; using them in a real-world analysis is like trying to do engineering by assuming every object is either a cube or a sphere. The real world is much more detailed, and a key assumption of people who do EA-style work, which turns out to be correct in most instances, is that most people's models of utility overlap significantly: if you do an altruistic action that you think has high utility, most other people will agree with your assessment to some degree, or at least not assign it negative utility.
Of course this breaks down in certain situations, especially around zero-sum games like war. I think it would be interesting to analyze how it can break down, but I recommend using terms less jargony than ordinal/cardinal (which I don't think are very useful here) and more concrete examples (which can be speculative, but not too far removed from the real world or from the context of altruistic interventions).
The terminology comes from economics (the ordinal/cardinal distinction was introduced by Pareto, who pioneered the field of microeconomics...).