
Mahdi Complex

54 karma · Joined Oct 2021

Comments (9)

I have come to see the terms 'religion' and 'ideology' as unhelpful in these discussions. It might be better to taboo these words and start talking in terms of 'motivating world-views' instead.

Thanks for writing this. A similar observation led me to write this post.

Reality has no requirement to conform to your heuristics of what is ‘normal’, but I think that we could use some more outside-view work on just how bizarre and unsettling this world-view is, even if it is true.

I believe the end goal isn't a world ruled by a benevolent global elite that owns all the robots. The goal isn't to create a 'techno-leviathan' for people to ride. The goal is to find a benevolent God in mind design space, one we would be happy to cede sovereignty to. That, I think, is what AI alignment is about.

(A related discussion on LW.)

Either way, I think we're going to need some serious 'first principles' work at the intersection of AI alignment and political philosophy. "What is the nature of a just political and economic order when humans are economically useless and authority lies with a superhuman AI?" "What institution would even have the legitimacy to ask this question, let alone answer it?"

I have read the book and the book review. They provide some great descriptive insights into what's going on, but I'm more interested in a historical perspective: where did what might be viewed as "consensus reality", and the prevailing sense of the correct order of things, come from?

Hi Tomer, I really appreciate the kind words! I think the piece turned out a little strange because I was trying to do too many things at once. I was trying to frame EA and the Singularity in terms that would lead religious people to take it seriously, while also making the largely atheistic EA crowd more appreciative of some compatibilist religious ideas.

I just published a post that expounds a bit on some of the ideas I mention in this piece.

I think that AGI might require us to dig a little deeper when it comes to governance and political philosophy. The framing of transformative AI as just another technology, like electricity or nuclear power, with few major implications for global governance and society seems wrong to me. And getting a proper understanding of the ideas that currently govern us and that motivate most people seems really underrated in EA.

We can’t afford to wait for a “Long Reflection”.

Alternatively, the "Long Reflection" has already begun, it's just not very evenly distributed. And humanity has a lot of things to hash out.

The question I'm currently trying to answer is: how did we get to the point where the main actor concerning itself with humanity's survival, the plight of those most in need, technological utopianism, and humanity's destiny in the cosmos is a small, eclectic network of academics, young professionals, and misfits? It's not governments, it's not international organizations, it's not religious institutions. It's a group of non-profits, primarily funded by a handful of eccentric billionaires. Am I the only one who thinks this is crazy and really calls for an explanation? How did the world come to be this way? Why was EA necessary in the first place?

I get the impression that almost all philosophical work happening within EA is very "first principles" and problem-oriented, and does not seem to engage at all with the history, ideas, and assumptions that underpin the institutions that are actually "in charge" today.

I’d like to know if anyone has opinions or resources to share on this matter.

My own theory is that the answer lies in the philosophical underpinnings of classical liberalism. My ideas are a bit weird and potentially controversial. Who within EA would be a good person for me to reach out to and get feedback from?