Married 35-year-old Texan with 5 young children. USMA '09, US Army w/ 1 Afghanistan deployment, licensed Realtor, middle manager setting rents at an institutional landlord.
I identify as a Republican, rationalist, fusionist conservative, Austrian, classical liberal, Evangelical Christian, and neoconservative.
I meet the broad definition of an effective altruist (do the max good), but am more precisely an EA skeptic, particularly wary of EA's technocratic inclinations and overconfidence in empiricism. I'm here for good-faith persuasion as to why I'm wrong!
Favorite concept is Auftragstaktik. Favorite thinker is Hayek.
I want to have my views challenged.
I have real estate industry experience including pricing (forecasting) with iBuyers, institutional investors, real estate technology, and MLS institutions.
My half-baked theory is that there will always be jobs short of radical abundance, and that once we reach radical abundance, jobs won't be necessary anyway.
If AI automated all knowledge work WITHOUT delivering radical abundance, then there would still be jobs delivering goods/services that AI is, by definition, not delivering.
And if so, we have nothing to fear.
I'm loath to use this, but let's use QALYs and assume, as I believe, that a QALY can never be less than 0 (i.e., that it is never better to die than to live).
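To make that concrete, here's the standard QALY arithmetic under my assumption (the numbers and symbols are purely illustrative):

$$\text{QALYs} = \sum_t q_t \,\Delta t, \qquad q_t \in [0, 1]$$

So 10 years lived at a quality weight of 0.7 counts as $10 \times 0.7 = 7$ QALYs, and because I'm assuming $q_t$ never drops below 0, no state of being alive ever scores worse than death's 0.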
There is nothing worse than death. There are no benefits unless that death unlocks life.
I don't think the (likely nonexistent) positive effects of "generation replacement" will mean literally fewer deaths, and certainly not on a scale to justify discounting the deaths of entire generations of individuals.
I don't think "personal beliefs" should be included in an "all known factors" analysis of how we invest our resources. Should I value Muslim lives less because they may disagree with me on gay rights? Or capital punishment? Why not, in your framework?
I also don't think there's a "but" after "all lives are equal". That can be true AND we have to make judgment calls about how we invest our resources. My external action is not a reflection of your intrinsic worth as a human but merely my actions given constraints. Women and children may be first on the lifeboat, but that does not mean they are intrinsically worth more morally than men. I think it's a subtle but extremely important distinction, lest we get to the kind of reasoning that permits explicitly morally elevating some subgroups over others.
I do agree that there is private sector incentive for anti-aging, but I think that's true of a lot of EA initiatives. I'm personally unsure why diverting funds away from Really Important Stuff is wise just because RIS happens to be profitable. I could perhaps make the case that it's even MORE important to invest there, if you're inclined to be skeptical of the profit motive (though I'm not, so I'm not inclined).
I think I understand and that makes sense to me.
If I understand what you're saying correctly, this is another reason I don't identify as EA.
You're basically saying people dying is advantageous because their influence is replaced by people you deem to have superior virtues?
It's not obvious to me that "replacement" generations have superior values to those they replace merely on account of being younger/newer, etc.
But even accepting that's the case, how is discounting someone's life because they have the wrong opinions not morally demented?
I don't understand the positive duty to procreate, which seems to be an accepted premise here.
Morality is an adverb, not an adjective.
Is a room of 100 people 100x more "moral" than a room with 1 person? What's wrong with calling that a morally neutral state? (I'm not totaling up happiness or net pleasure or any of that weird stuff.)
Only when we're forced into a trolley problem, where we face an actual decision (e.g., kill 1 person or kill 100), does the number of people have significance.
This may fall under "general medical stuff," but I've always been surprised how little EA seems to care about aging and human longevity, especially given how fond this community is of measuring "quality-adjusted life years".
Progress here could solve depopulation problems among the other obvious benefits.
I wonder if mass economic literacy is less important than a few elites and institutions who have leverage over a nation's economic decision-making. Are there structural reforms that would be more impactful than merely "electing better"?
Not sure what kind of reform would drive a policy like cap and trade (but I also think there are economically cogent arguments against cap and trade).
If our interests are qualitatively the same, it doesn't matter to what extent we weigh their interests. We achieve them by pursuing our own: health, wealth, happiness.
Perhaps if you feel otherwise, it's because you feel we are mortgaging our future for the present? I don't think that is generally true. Climate change, national debt, insolvent pensions: all are tractable problems we're either a) solving now or b) sufficiently incentivized to solve.
The best chapter in Superforecasting was Chapter 10, "The Leader's Dilemma" (although I'm biased as a former military officer in that it confirmed many of my priors). I feel like its concepts are the most relevant to the practical implementation of effective forecasting, yet I don't see a lot of talk about it in the EA community.
I believe marginal utility simply means that automation will reduce the cost of many things to negligible levels, freeing our resources to spend on other domains that are, by definition, not automated and still labor-intensive.
Once there is no such job, we'll have, also by definition, achieved radical abundance, at which point being jobless doesn't matter.
Wouldn't a UBI then artificially prop up the current economy to the detriment of achieving radical abundance? Because it would be paid for via a tax of some kind on these "so abundant it's free" goods, which would keep them from becoming... so abundant they're free, no?
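A toy illustration of the wedge I'm worried about (made-up numbers, just to show the mechanism):

$$p = c + \tau, \qquad c \to 0 \;\Rightarrow\; p \to \tau$$

If a good's marginal cost $c$ falls from \$5 to \$0.05 but a per-unit tax $\tau$ of \$1 is levied to fund the UBI, the price $p$ bottoms out around \$1.05, so the good never actually becomes "so abundant it's free."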
Of all the things that concern me about AGI, losing my job is by far the least of my worries.