geoffreymiller

Comments

"Long-Termism" vs. "Existential Risk"

I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.

However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating, galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'

In other words, we in EA need longtermism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.

But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views out there in the public.  My hunch as a psych professor is that there are lots of people who might respond better to longtermist positive visions than to X risk alarmism. It's an empirical question how common that is, but I think it's worth investigating.

Also, a significant % of humanity is already tacitly longtermist in the sense of believing in an infinite religious afterlife, and trying to act accordingly. Every Christian who takes their theology seriously & literally (i.e. believes in heaven and hell), and who prioritizes Christian righteousness over the 'temptations of this transient life', is doing longtermist thinking about the fate of their soul, and the souls of their loved ones.  They take Pascal's wager seriously; they live it every day. To such people, X risks aren't necessarily that frightening personally, because they already believe that 99.9999+% of sentient experience will come in the afterlife. Reaching the afterlife sooner rather than later might not matter much, given their way of thinking.

However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future -- who would all have save-able souls -- could vastly exceed the current number of Christians. So, more souls for heaven; the more the merrier. Anybody who takes a longtermist view of their individual soul might find it easier to take a longtermist view of the collective human future.

I understand that most EAs are atheists or agnostics, and will find such arguments bizarre. But if we don't take the views of religious people seriously, as part of the cultural landscape we're living in, we're not going to succeed in our public outreach, and we're going to alienate a lot of potential donors, politicians, and media influencers.

There's a particular danger that overemphasizing the more exotic transhumanist visions of the future will alienate religious and political traditionalists. For many Christians, Muslims, and conservatives, a post-human, post-singularity, AI-dominated future would not sound worth saving. Without any humane connection to their human social world as it is, they might prefer a swift nuclear Armageddon followed by heavenly bliss to a godless, soulless machine world stretching ahead for billions of years.

EAs tend to score very highly on Openness to Experience. We love science fiction. We like to think about post-human futures being potentially much better than human futures. But if that becomes our dominant narrative, we will alienate the vast majority of currently living humans, who score much lower on Openness.

If we push the longtermist narrative to the general public, we'd better make the long-term future sound familiar enough to be worth fighting for.

The transformative potential of cryptocurrencies

This post raises good points. I think crypto is a very neglected cause area with enormous upside potential, especially for developing countries. There's much, much more to the crypto industry than just Bitcoin as a 'store of value', or crypto trading as a way to make money.

There are tens of thousands of smart people working on blockchain technologies and protocols that could offer a huge range of EA-adjacent use cases, such as:

  • much faster, cheaper remittances 
  • protection of savings against hyperinflation by irresponsible central banks
  • secure economic identity that allows poor people to get loans, buy insurance, receive gov't vouchers, prove educational credentials & work histories, etc
  • voting systems that are more secure, inclusive, hard-to-hack, and easy to validate
  • secure property rights & land records in areas where governments are often overthrown, and lands are confiscated
  • access to reliable, validated, uncensorable data through oracle apps -- e.g. weather data that can support crop insurance for poor farmers; inflation statistics that can't be biased by government economists
  • social networks that can build in consensus mechanisms for quality control, without centralized censorship
  • smart contracts for royalty payments that allow creators to receive a share of any increase in the value of their unique artworks (see the sketch just below)
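
To make that last use case concrete, here's a minimal, purely illustrative sketch of the royalty logic in Python -- not any chain's real contract API; the 10% rate and all names are hypothetical, and real versions would run as on-chain smart contracts:

```python
# Illustrative resale-royalty logic (hypothetical; real versions live in
# on-chain smart contracts, e.g. in the spirit of Ethereum's EIP-2981).

ROYALTY_RATE = 0.10  # creator's share of any appreciation (assumed rate)

def settle_resale(sale_price: float, last_price: float,
                  creator: str, seller: str) -> dict[str, float]:
    """Split a resale so the creator automatically receives a share of the
    increase in value since the last sale, with no intermediary."""
    appreciation = max(0.0, sale_price - last_price)
    royalty = appreciation * ROYALTY_RATE
    return {creator: royalty, seller: sale_price - royalty}

# Example: an artwork last sold for $1,000 resells for $10,000;
# the creator automatically receives 10% of the $9,000 gain.
print(settle_resale(10_000.0, 1_000.0, creator="artist", seller="collector"))
```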

Projects that could support these use cases include Ethereum, Cardano, Chainlink, Algorand, Polkadot, and many others. Many use Proof of Stake consensus protocols, which consume far less energy than the Proof of Work mining that secures Bitcoin.
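
For intuition on that energy difference, here's a toy sketch -- no real protocol, and the difficulty, stakes, and names are all invented -- contrasting PoW's brute-force hashing race with PoS's single stake-weighted draw:

```python
# Toy contrast (not any real protocol) between Proof of Work and Proof of Stake.
import hashlib
import random

def proof_of_work(block: bytes, difficulty: int = 4) -> int:
    """Grind nonces until the block hash has `difficulty` leading zeros --
    this brute-force race, run by every miner at once, is where PoW's energy goes."""
    nonce = 0
    while not hashlib.sha256(block + str(nonce).encode()).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

def proof_of_stake(stakes: dict[str, float]) -> str:
    """Pick the next validator with probability proportional to stake --
    one cheap random draw instead of a hashing race."""
    validators, weights = zip(*stakes.items())
    return random.choices(validators, weights=weights, k=1)[0]

print(proof_of_work(b"block"))  # typically tens of thousands of hashes
print(proof_of_stake({"alice": 3_000.0, "bob": 1_000.0, "carol": 500.0}))
```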

Also, there's a lot of overlap between EA and crypto in terms of culture, personalities, and values. Apart from the 'toxic Bitcoin maximalists', most people in the crypto industry pride themselves on their rationality, openness to evidence, longtermism, global outlook, optimism, and skepticism about virtue signaling.

X-risks of SETI and METI?

Thanks to everybody for your helpful links! I've shared your suggestions with the journalist, who is grateful. :)

Cognitive and emotional barriers to EA's growth

Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!

Open Thread #38

Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum: https://heterodoxacademy.org/resources/viewpoint-diversity-experience/

Open Thread #38

Cool idea. Although I think domain-specific board games might be more intuitive and vivid for most people -- e.g. a set on X-risks (one game on CRISPR-engineered pandemics, one on an AGI arms race), one on deworming, one on charity evaluation with strategic conflict among evaluators, charities, and donors, a modified 'Game of Life' based on 80,000 Hours principles, etc.

Nothing Wrong With AI Weapons

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the legitimacy of the LAWs' chain of command, and how that legitimacy is secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1

Suppose a state has an armada/horde/flock of formidable LAWs that can potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office, but they don't want to leave, and they still have the LAWs' 'launch codes', what keeps them from using LAWs to subvert democracy? In a standard human-soldier/secret-service-agent scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator. They would literally escort him/her out of the White House. In the LAWs scenario, the soldiers and agents would be helpless against the local LAWs controlled by the head of state. The robot army would escort the secret service agents out of the White House instead, leaving the would-be dictator in power.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem -- in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they use some kind of blockchain voting system to decide what the LAWs do and whom they obey. But this then opens the way to majoritarian mob rule, with LAWs forcibly displacing or genociding targeted minorities -- unless the LAWs embody some 'human/constitutional rights interrupts' that prevent such bullying.
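
As a toy illustration of that collective-override idea -- a minimal sketch under assumed rules, with a hypothetical 2/3 veto quorum and plain vote-counting standing in for real threshold cryptography:

```python
# Minimal sketch of a citizen-override quorum over LAWs orders.
# All names and the 2/3 supermajority threshold are hypothetical; a real
# scheme would use threshold signatures rather than plain vote-counting,
# plus the 'rights interrupts' mentioned above to protect minorities.

VETO_QUORUM = 2 / 3  # citizen supermajority needed to block an order (assumed)

def order_stands(executive_signed: bool,
                 vetoes: set[str],
                 citizens: set[str]) -> bool:
    """An executive order is executed unless a supermajority of
    registered citizens cryptographically vetoes it."""
    if not executive_signed:
        return False  # no launch codes, no order
    veto_share = len(vetoes & citizens) / len(citizens)
    return veto_share < VETO_QUORUM

citizens = {"ann", "bob", "cho", "dev", "eve", "fay"}
vetoes = {"ann", "bob", "cho", "dev", "eve"}  # 5 of 6 citizens object
print(order_stands(True, vetoes, citizens))   # False: the order is blocked
```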

Any suggestions on how to solve this 'chain of command' problem?
