Still feeling a bit disillusioned after pursuing academic research up to the postdoctoral level, and having spent some time teaching languages and working at a democracy NGO, I feel that I haven't yet found a way to do good for the world while sustaining my wife and myself at the same time.
I feel that I have something to contribute to the various never-ending conversations about global problems, and I am seeking a position from which I can do so.
I can offer others what I hope will be food for thought, and hopefully also contribute practically to discussions of global problems.
This is quite shocking: the absence of an answer and the laughter in the video, as well as the -23 votes here and the lack of any substantial discussion.
On the conspiracy-theory front, it may be that he doesn't want to create panic. Or the threat may not be there, despite what the main players/experts (the Musks and Zuckerbergs of the world) believe.
I think we should take seriously the first possibility: that the key political player thinks the threat is real (and thus agrees with the players/experts) and knows his stuff, but simply doesn't want to reveal much to the public. What do you think?
Dear friends,
I won't hide it: I was kindly asked by a friend to take a look at this thread. I have to admit that I was surprised and taken aback by the fact that the discussion focused not on whether this would restore dignity and give independence and a new lease on life to those not so well off, for whatever reason, or on the reduction of inequality (after all, from what I hear, the US is one of the most unequal societies in the developed world), but instead gave me the impression of concerning itself too much with minutiae. From the evidence and history, as this article points out: https://en.wikipedia.org/wiki/Universal_basic_income , the idea is a) not new at all, with quite a venerable and 'universal' history (from Julius Caesar's Rome to Ahmadinejad's Iran), and b) one that has worked well in various settings (admittedly not everywhere).
So, with all due respect, I would kindly ask you to see the forest rather than the trees: in other words, consider whether UBI can help alleviate poverty and reduce inequality. My take is that it can, by empowering people through guaranteed money; if I remember correctly, some UBI experiments saw a surge in entrepreneurship from formerly disempowered sections of the population.
As for numbers (I think EA likes numbers): if a person with an annual income of 5,000 receives 1,000 of annual help, that represents a 20% increase in their income. If a person earns 1,000,000 annually, then 1,000 of help is merely (if I'm doing my sums right) a 0.1% addition. The difference, however, may be that the first person feeds their whole family milk and bread for the year whilst the second buys their third Rolex watch. So everybody's happy.
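The arithmetic above can be sketched in a few lines (the incomes and the 1,000 transfer are the illustrative figures from this comment, not real data):

```python
def relative_boost(income: float, transfer: float) -> float:
    """Return a flat transfer expressed as a percentage of annual income."""
    return 100 * transfer / income

# The two illustrative cases from the comment:
print(relative_boost(5_000, 1_000))      # 20.0 -> a 20% income boost
print(relative_boost(1_000_000, 1_000))  # 0.1  -> a 0.1% income boost
```

The point the numbers make is simply that a flat transfer is worth proportionally far more to a low earner than to a high one.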
Apologies in advance if this sounds a bit crude and not logical enough, I'm just feeling a bit sentimental today,
Haris
Dear Jon,
Many thanks for this, for your kindness in answering so thoughtfully and giving me food for thought too! I'm quite a lazy reader, but I may actually spend money to buy the book you suggest (ok, let's take the baby step of reading the summary as soon as possible first). If you still don't want to give up on your left leanings, you may be interested in an older classic (if you haven't already read it): https://en.wikipedia.org/wiki/The_Great_Transformation_(book)
The great takeaway for me from that book was that the 'modern' (from a historical perspective) conception of labor is a relatively recent development, and moreover an inherently political one (born of legislation rather than as a product of the free market). My own politics (or scientopolitics, let's call them) hold that politics and legislation should come first, so I wouldn't feel squeamish about political solutions (I know this position has its own obvious pitfalls, though).
Dear friends, you talk about AI generating a lot of riches, and I get the feeling that you mean 'generate a lot of riches for everybody'; however, I fail to understand this. How will AI generate income for a person with no job, even if the prices of goods drop? Won't the riches be generated only for those who run the AIs? Can somebody please clarify this for me? I hope I haven't missed something totally obvious.
Dear @JonCefalu, thanks for this very honest, insightful and thought-provoking article!
You do seem very anxious, and you touch on quite a number of topics. I would like to engage with you on the topic of joblessness, which I find really interesting and (I think) neglected, at least by the EA literature I have seen.
To me, a future where most people no longer have to work (because AI and general-purpose robots take care of food production, the production of entertainment programs, and work in the technoscientific sector) could go both ways: a) it could indeed be an s-risk dystopia in which we spend our time consuming questionable culture at home or at malls (and generally suffer from ill health and its associated risks), though with no jobs to give us money I don't know how those transactions would be made, and I'd like to hear some thoughts on this; or b) it could be a utopia and a virtuous circle in which we produce new ways of entertaining ourselves, of producing quality time (family, new forms of art or philosophy, etc.), or of keeping ourselves busy; the AI/AGI saturates the market, we react (in a virtuous way, nothing sinister), the AGI catches up, and so on.
So, to sum up, the substance of the above all-too-likely thought experiment is: in the event of AGI taking off, what will happen to (free) time, and what will happen to money? Regarding the latter, given that the most advanced technology lies with companies whose motive is money-making, I would be a bit pessimistic.
As for the other thoughts about nuclear weapons and Skynet, I'd really love to learn more, as it sounds fascinating, like the kind of thing mere mortals rarely get to know about :)
Flagging a potential problem for longtermism and the possibility of expanding human civilisation to other planets: what will the people eat there? Can we just assume that technoscience will give us the answer? Or is that too quick and optimistic a question? Can one imagine a situation where humanity goes extinct because the Earth finally becomes uninhabitable, and on the first new planet we step onto the technology either fails or the settlers miss the window of opportunity to develop their food supply? I'm sure there must be some such examples in the history of settlers reaching new worlds; I don't know whether anybody is working on this in the context of longtermism, though.
Just some food for thought hopefully
https://www.theguardian.com/environment/2023/jan/07/holy-grail-wheat-gene-discovery-could-feed-our-overheated-world
https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics
What happens if in the future we discover that all life on Earth (especially plants) is sentient, while at the same time a) there are a lot more humans on the planet waiting to be fed and b) synthetic foods/proteins are deemed dangerous to human health?
Do we go back to eating plants and animals again? Do we farm them? Do we continue pursuing food technologies despite the past failures?
Hey hello,
Thanks, let's digest stuff a bit in the next few days and see how it goes. Thanks for the offer, same goes for me, at the moment I've got time! :)
Best Wishes,
Haris
Hey hello,
Wow, that sounds really interesting, the lobster evidence! Though if you ask most people, they'll probably say that humans are 'something more' than just animals, whether as God's images or simply as rational beings (the implication being that other beings are less rational, or not rational at all).
Best Wishes,
Haris
Dear friend @titotal
Many, many thanks for your measured response, as well as for the link to your article, which is very, very enlightening to me. I think I agree with your assessment that the transition to AGI, or something close to it, will not take place overnight, and that it may even never arrive; or at least that there won't be the kind of AGI existential threat that many prominent commentators, even in this community, assume.
However, as you may see from my own (ok, admittedly a bit polemical) linked post (though, from what I see now, I haven't managed to turn it into a hyperlink), I'm a bit worried about us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised when we fail at it or find something better at it, rather than naming that thing as something different from intelligence. A sort of negative performativity in action there.
So, in summary: ok, nailing responses to linguistic prompts is fine, good, excellent even, but let's not reduce what we humans believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much better. I believe intelligence also entails emotions, artistic behaviour, cooking behaviour, empathy, and other behaviour not reducible to 'responding to prompts'.
Best Wishes
Apologies if I was waffling a bit above, I'd be delighted to hear your thoughts!
Haris
PS: The edit is just changing the link to the article into a hyperlink :)