I am an economist working at Banco de España (the Spanish central bank). I am 45 years old and have recently finished my PhD thesis (see my ORCID page: https://orcid.org/0000-0002-1623-0957).
Risk management, banking regulation, energy and commodities, mechanism design.
What about life expectancy by income bucket in the US? How objective is that relation?
It looks like income matters within the US, but then it does not seem to matter across countries…
The US is an extremely diverse society, with extreme outcomes. You have a Bulgaria and a Denmark in each city; we have them in different countries. In fact, the positive micro-level relation between health and income should be more relevant than aggregate comparisons, which can be heavily affected by ecological fallacies.
Of course, there is a lot of reverse causality in the income-welfare relation; people and countries that are richer are often also better off in extra-economic terms.
But you cannot separate material and non-material prosperity. It is the loop of activity and personal virtue that allows people to become affluent, and income provides the resources for a fulfilling life.
What were the social and moral consequences of stagnating socialism in the USSR? Demoralization and collapse.
Macro growth can be disputed, because it is removed from personal experience, but parents always try to put their children on the path of (micro) growth…
Objective measures of subjective welfare?
The problem with macro GDP skepticism is that it implies micro personal income skepticism, and that is totally implausible (among other consequences of personal income skepticism, imagine how irrelevant inequality becomes for the USA, where per capita incomes above 20,000 are so prevalent).
Those who tell me that more income will not make me happier are telling me that I do not know how to use additional freedom. It is a very Marxist position: as (Groucho) Marx said: “Who are you going to believe, me, or your lying eyes?”
Regarding Spain, of course, many British people retire here: you get the sun and the sunny people, and the structural unemployment levels (or the relative lack of tech and other "high end" jobs) do not affect you.
Of course not. There can be substantial externalities and zero-sum games related to AI, as to any other technology. AI is like the "internal combustion engine". But "income" is a direct input for human welfare.
You can consider that externalities are not properly priced, and that natural resources are too cheap because their owners have excessively high discount rates. Then you need to tax both things to limit resource use and externality production.
Still, after you have put these "extra-market" constraints on the economy, you want maximum GDP (which is, more or less, the "aggregate" budget constraint). More degrees of freedom are inescapably "good", and that is what growth gives us.
When you are against GDP, you are against people.
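The externality-pricing logic above can be sketched with a textbook Pigouvian example. The functional forms and numbers here are my own illustrative assumptions, not from the text: a price-taking firm with private marginal cost q, output price p, and a constant external damage d per unit. A tax equal to the marginal damage makes the private optimum coincide with the social one, after which "more output" is unambiguously better.

```python
# Minimal Pigouvian-tax sketch (hypothetical numbers).
# Firm: max p*q - q^2/2        -> FOC: p = q
# Planner: max p*q - q^2/2 - d*q -> FOC: p = q + d
# With a tax t = d, the firm's FOC becomes p = q + t, matching the planner.

p, d = 10.0, 4.0   # output price and marginal external damage (assumed)

private_q = p        # untaxed firm over-produces: q = 10
social_q = p - d     # socially optimal quantity:  q = 6
taxed_q = p - d      # firm facing tax t = d picks the social optimum: q = 6

print(private_q, social_q, taxed_q)
```

Once the tax is in place, maximizing measured output is no longer in tension with limiting the externality, which is the point of the comment above.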
Recently I wrote this piece that may be of interest to you:
Veganism is too demanding (see retention rates), and even unnecessary (are ethical eggs truly impossible? Some degree of cruelty is unavoidable in dairy production, but the "net positive" dairy cow is probably attainable).
But lots of people would pay a substantial premium for more ethical animal products. Unfortunately, certifications are not very trustworthy.
In my view, the natural way forward is the creation of a parallel market of "high quality" and (more) ethical animal products. This is also in the interest of the industry, just as renewable electricity production is in the interest of electrical utilities.
Currently, farms face cutthroat competition for a stagnant or decreasing share of GDP. But "ethical and organic" implies real growth prospects for the sector. Consumers can pay for it, and ethical concerns will fuel the whole process. Still, nobody is learning from the enormous success of "decarbonization", when utilities understood that renewables were the goose that lays the golden eggs.
I include links to my two old posts arguing for continuing AI development:
First, I argued that AI is a necessary (almost irreplaceable) tool to deal with the other existential risks (mainly nuclear war):
Then, that currently AI risk is simply "too low to be measured", and we need to be closer to AGI to develop realistic alignment work:
Regarding research, I would say that each research line has some probability of success and some cost, and an optimal research portfolio would probably include roughly four times more expenditure for type 2 diabetes, but not zero expenditure for type 1. Choice under uncertainty implies some natural preference for diversity.
For deployment (suppose no uncertainty), you spend money in strict order of "welfare" or "additional life years" per dollar spent, so you would probably pay for an expensive cancer treatment for a young person, while the same money would not be spent on an old one.
So, yes, reality is diverse, and that creates apparent diversity in expenditure even for a ruthless utilitarian with no intrinsic preference for diversity.
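The deployment rule described above can be sketched as a greedy allocation: rank options by welfare per dollar and fund them in strict order until the budget runs out. The names and numbers below are hypothetical, chosen only to mirror the cancer example in the comment.

```python
# Illustrative sketch (hypothetical numbers): under certainty, a ruthless
# utilitarian funds interventions in strict order of welfare per dollar.

def allocate(budget, interventions):
    """Greedy allocation by descending welfare-per-dollar ratio.

    interventions: list of (name, cost, welfare) tuples.
    Returns a dict mapping funded names to amounts spent.
    """
    ranked = sorted(interventions, key=lambda x: x[2] / x[1], reverse=True)
    spent = {}
    for name, cost, welfare in ranked:
        if budget <= 0:
            break
        amount = min(cost, budget)
        spent[name] = amount
        budget -= amount
    return spent

# Hypothetical example: the same treatment buys more life-years for a
# young patient than for an old one, so it is funded first.
options = [
    ("cancer_young", 100_000, 40),  # 40 additional life-years
    ("cancer_old",   100_000, 5),   # 5 additional life-years
    ("diabetes_t2",   50_000, 10),
]
print(allocate(150_000, options))
```

With a 150,000 budget, the young patient's treatment and the diabetes program are funded, and the old patient's identical treatment is not: the "diversity" in spending falls out of the diversity of the cases, not of the objective.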
There is no trade-off: social stabilization and international pacification are the main tools to reduce existential risk, which in my view mainly comes from nuclear war.
https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of
All these EROI issues are far easier to follow when you use the inverse of EROI (the energy sector's self-consumption share).
https://www.bde.es/f/webbde/SES/Secciones/Publicaciones/PublicacionesSeriadas/DocumentosTrabajo/12/Fich/dt1217e.pdf
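The inversion suggested above is simple arithmetic, but it changes how the numbers read. A minimal sketch, with illustrative EROI values of my own choosing (not taken from the linked paper):

```python
# If EROI = (energy delivered) / (energy invested), then its inverse is
# the share of gross energy output that the energy sector consumes on
# itself ("energy self-consumption").

def self_consumption(eroi: float) -> float:
    """Fraction of gross energy output used to produce the energy itself."""
    return 1.0 / eroi

# Illustrative values: a fall from EROI 20 to EROI 10 sounds like "half
# as good", but in self-consumption terms it is a move from 5% to 10% of
# gross output, i.e. a 5-point "tax" on the rest of the economy.
for eroi in (20, 10, 5):
    print(f"EROI {eroi:>2} -> self-consumption {self_consumption(eroi):.0%}")
```

Reading the inverse as a cost share makes the macro consequences of declining EROI additive and directly comparable across energy sources.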