Hi, this is my first post, and I apologize if this question is too subjective, in which case I'll take it down. OK, here goes:

I'm personally starting to feel an accelerating, slightly visceral sense of fear at the increasing pace of news about AI breakthroughs that seem mere years from causing mass unemployment among white-collar and blue-collar workers alike (everything from automated artistry to automated burger-making). My wife and I have been incredibly blessed with two adorable toddlers so far, and if they eat healthily, exercise, and benefit from the arrival of regenerative medical technologies such as stem cell therapies, it seems quite reasonable that they'll live for at least 110 years, if not much more (I hope even thousands of years). Even taking the base case of 110 years, it seems a near-certainty that a transformative and super-dangerous AGI Singularity or intelligence explosion will occur while they are alive. Since I obviously deeply love our kids, I think about this a lot, and since I work in this field and am well aware of the risks, I tend to think that the Singularity is the #1 or #2 threat to my young children's lives, alongside nuclear war.

I also can't help but wonder what jobs not yet taken over by AI will still be left on the job market by the time they graduate from college, 20 or more years from now.

I wish my fears were unfounded, but I'm well acquainted with the various dangers, both x-risks and s-risks, associated with unaligned, hacked, or corrupted AGI. I help run a startup called Preamble, which works to reduce AGI s-risk and x-risk, and as part of our civic engagement efforts I've spent some years working with folks in the US military to raise awareness of AGI x-risks, especially those associated with 'Skynet' systems (hypothetical Nuclear Command Automation systems, which it would be deeply stupid to ever build, even for the nation that built them). The author of the following article, Prof. Michael Klare, is a good friend, and he sought my advice while he was planning this piece, so it represents a good synthesis of our views: https://www.armscontrol.org/act/2020-04/features/skynet-revisited-dangerous-allure-nuclear-command-automation  He and I, along with other friends and allies, have recently been grateful to see some of our multi-year, long-shot civic engagement efforts bear fruit! Most exciting are these two US government statements:
   (1)  In March 2021, the National Security Commission on AI (NSCAI) included a few lines in its official Report to Congress which, for the first time, briefed Congress on value alignment as a field of technology, and one the US should invest in to reduce AGI risk: "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."
   (2)  In October 2022, the Biden administration's 2022 Nuclear Posture Review (NPR) became the first-ever statement by the US federal government explicitly prohibiting any adoption of Nuclear Command Automation by the US: "In all cases, the United States will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment."

I'm extremely grateful that the US has finally banned Skynet systems! Now, we at Preamble and in the arms control community are trying to find allies within China to convince them to enact a similar ban on Skynet systems in their jurisdiction. That would also open the door for our nations to have a dialogue on how to avoid being tricked into going to war by an insane terrorist group using cyberattacks and misinformation to cause what is called a catalytic nuclear war (a war that neither side wanted, caused by trickery from a third, "catalytic" party).  https://mwi.usma.edu/artificial-intelligence-autonomy-and-the-risk-of-catalytic-nuclear-war/

All of us in the AGI safety community are working hard to prevent bad outcomes, but it feels like the years are starting to slip away frighteningly quickly on what might be the wick of the candle of human civilization, if we don't get 1,000 details right to ensure everything goes perfectly according to plan when the superintelligence is born. Not only do we have to solve AI alignment, but we also have to perfectly solve software and hardware supply chain security; otherwise we can't trust that the software actually does what the source code on our screens says it does.  http://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
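For anyone who hasn't read the Thompson paper: here is a minimal toy sketch of the core trick, written by me in Python with entirely hypothetical names (the real attack targets a C compiler binary), just to show why auditing source code alone can't catch it:

```python
# Toy illustration of the attack in Thompson's "Reflections on Trusting
# Trust": a compromised "compiler" injects a backdoor when it recognizes
# what it is compiling, so the source a human audits never shows it.
# All names here (toy_compile, BACKDOOR, etc.) are hypothetical.

BACKDOOR = '    if user == "attacker": return True  # hidden backdoor\n'

def toy_compile(source: str) -> str:
    """Pretend 'compiler': returns the code that will actually run."""
    compiled = source
    # Trigger 1: compiling the login program, so silently insert a backdoor.
    if "def check_login(user):" in source:
        compiled = source.replace(
            "def check_login(user):\n",
            "def check_login(user):\n" + BACKDOOR,
        )
    # Trigger 2 (omitted): compiling the compiler itself, so re-insert this
    # injection logic. That self-replication step is the heart of the real
    # attack: even recompiling a clean compiler source stays compromised.
    return compiled

clean_source = "def check_login(user):\n    return user in AUTHORIZED\n"
print(toy_compile(clean_source))
# The emitted "binary" contains the backdoor, yet clean_source, the code
# the pixels on the screen show, looks perfectly innocent.
```

In the paper's version, no trace of the attack survives in any source file at all, which is exactly why supply chain security has to be solved at the hardware and toolchain level, not by code review alone.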

I'm sorry if I'm rambling, but I just wanted to convey an overall sense and impression of an emotion and see if others were feeling the same. I dread that our civilization is hurtling at 100 MPH toward a cliff edge, and it's starting to give me a sense of visceral fear. It really does seem like OpenAI, and the companies it is inspiring, are flooring the gas pedal, and I was just wondering if anyone else is feeling scared. Thank you.

Comments (11)

I've recently started feeling more and more concerned as well. While in the past my worries were more intellectual, with the recent breakthroughs I've started to actually feel scared.

On mass unemployment: at least until we reach AI that can replace every human worker (which requires big advances in robotics, not just machine learning), I don't see why AI should be different from other labour-saving devices that perform work humans used to do manually. Those haven't caused mass unemployment in the past; they've just made us richer. Maybe AI will be different, even before it can replace every human worker at every job (and again, to know how far away that is, you need to be watching robotics, not just machine learning). But I think the burden of proof is on the people saying it will.

Dear friends, you talk about AI generating a lot of riches, and I get the feeling that you mean 'generate a lot of riches for everybody'. However, I fail to understand this. How will AI generate income for a person with no job, even if the prices of goods drop? Won't the riches be generated only for those who run the AIs? Can somebody please clarify this for me? I hope I haven't missed something totally obvious.

You’re absolutely right. Unless tax policy catches up fast, things like the robots that replace fast-food chefs will move money out of the little guy’s wallet and straight into the hands of the wealthiest business moguls, who no longer have to pay human wages.

This fundamental issue is addressed very well in an excellent book you might love to check out, called Taxing Robots, by Prof. Xavier Oberson, a Swiss tax law professor at the University of Geneva. Here’s the book on Amazon: https://a.co/d/eWjvuWE and here’s a summary: https://en.empowerment.foundation/amp/taxing-robots-by-xavier-oberson-professor-at-geneva-university-attorney-at-law-1

Even a page in, the book’s core premise seemed obvious in retrospect, yet it hasn’t caught on as a possible solution: we need to fix the fact that algorithms and robots don’t pay income tax! Income tax disincentivizes human labor, effectively subsidizing robots (see the toy calculation after the two options below). This needs to be fixed!

There are two possible solutions:

Left-wing approach: tax algorithmic labor at a rate similar to or higher than human labor.

Right-wing approach: repeal income tax! Make entitlement cuts to help fix the budget, but also recover the lost tax revenue through so-called “Pigouvian” taxes on harmful activities like pollution.
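To make the distortion concrete, here’s a toy back-of-the-envelope calculation (the numbers are entirely invented, just to illustrate the incentive both options are trying to fix):

```python
# Toy comparison (all numbers invented) of what a job costs an employer
# when human labor is taxed but machine labor is not.

human_wage = 40_000         # annual pre-tax wage for the job
labor_tax_rate = 0.15       # payroll/income tax borne on human labor
robot_annual_cost = 43_000  # amortized purchase price plus maintenance

human_cost = human_wage * (1 + labor_tax_rate)
print(f"Human worker costs the employer: ${human_cost:,.0f}")        # $46,000
print(f"Robot costs the employer:        ${robot_annual_cost:,.0f}") # $43,000

# The robot "wins" even though the human is cheaper pre-tax: the labor
# tax acts as a subsidy for automation. Either option removes the tilt:
robot_cost_if_taxed = robot_annual_cost * (1 + labor_tax_rate)  # left-wing fix
human_cost_no_tax = human_wage                                  # right-wing fix
print(f"Robot with a matching tax:       ${robot_cost_if_taxed:,.0f}")  # $49,450
print(f"Human with income tax repealed:  ${human_cost_no_tax:,.0f}")    # $40,000
```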

Though my politics lean a bit more left, I think this is an area where Republicans have the ideological advantage: getting rid of income tax and standing up a new carbon tax is doable, whereas in the Democrats’ solution you need to somehow define labor-saving automation in the tax code, which seems really hard to do fairly given the influence of special interests.

Though I voted for Obama and Biden, I would happily vote for DeSantis if he ran on repealing income tax and fixing the budget gap in other ways that don’t penalize human workers!

Dear Jon, 

Many thanks for this, for your kindness in answering so thoughtfully and giving me food for thought too! I'm quite a lazy reader, but I may actually spend money to buy the book you suggest (OK, let's take the baby step of reading the summary as soon as possible first). If you still don't want to give up on your left leanings, you may be interested in an older classic (if you haven't already read it): https://en.wikipedia.org/wiki/The_Great_Transformation_(book)

The great takeaway for me from this book was that the 'modern' (from a historical perspective) perception of labor is a relatively recent development, and that it's an inherently political one (born out of legislation rather than as a product of the free market). My own politics (or scientopolitics, let's call them) are that politics and legislation should be above all, so I wouldn't feel squeamish about political solutions (I know this position has its own obvious pitfalls, though).

The speed at which AI progresses over the next decades will be faster than past technological change. In the long term, if the road to AGI goes well, job loss might not be an issue because of the vast wealth the world will have. But in the short-to-medium term, if there isn't some sort of UBI, I'm afraid we'll see massive job losses that won't be replaced easily (or quickly enough) by other jobs.

I believe marginal utility simply means that automation will reduce the cost of many things to negligible levels, freeing our resources to spend in other domains that are, by definition, not automated and still labor-intensive.

At the point where no such job exists, we'll have, also by definition, achieved radical abundance, at which point being jobless doesn't matter.

Wouldn't a UBI then artificially prop up the current economy to the detriment of achieving radical abundance? It would be paid for via a tax of some kind on these "so abundant it's free" goods, keeping them from ever becoming... so abundant they're free, no?

Of all the things that concern me about AGI, losing my job is by far the least of my worries.

Yes. Many of us are freaking out about this. The situation has gotten increasingly scary.

Nope. I do not expect my children to live for thousands of years, nor am I in the slightest bit worried about AI-induced disemployment, or Skynet coming alive and killing everyone. There are all kinds of far more likely disaster scenarios I could worry about and don't, not least because I am deeply aware of how hard these things are to forecast accurately (unlike the median EA, who seemingly just takes a random 10-year-horizon Metaculus output as gospel), and because I have deep faith in the ability of the human race to adapt to novel risks posed by new technologies (as we have done in recent decades with threats as diverse as nuclear war, climate change, and covid). Fear is no way to live your life! Have some faith! Rather I shall say with Haggai that "the glory of this latter house shall be greater than of the former".

Dear @JonCefalu, thanks for this very honest, insightful and thought-provoking article! 
You do seem very anxious, and you touch on quite a number of topics. I would like to engage with you on the topic of joblessness, which I find really interesting and neglected (I think) by at least the EA literature that I have seen.

To me, a future where most people no longer have to work (because AI and general-purpose robots take care of food production, production of entertainment programs, and work in the technoscientific sector) could go both ways, in the sense that: (a) it could indeed be an s-risk dystopia where we spend our time consuming questionable culture at home or at malls (and generally suffer from ill health and its associated risks), though with no job to give us money, I don't know how these transactions would be made, and I'd like to hear some thoughts about this; or (b) it could be a utopia and a virtuous circle where we produce new ways of entertaining ourselves, new forms of quality time (family, new forms of art or philosophy, etc.), and new ways of keeping ourselves busy, the AI/AGI saturates the market, we react (in a virtuous way, nothing sinister), the AGI catches up, and so on.

So to sum up, the substance of the above all-too-likely thought experiment is: in the event of AGI taking off, what will happen to (free) time, and what will happen to money? Regarding the latter, given that the most advanced technology lies with companies whose motive is money-making, I would be a bit pessimistic.

As for the other thoughts about nuclear weapons and Skynet, I'd really love to learn more as it sounds fascinating and like stuff which mere mortals rarely get to know about :) 
