Wiki Contributions


The transformative potential of cryptocurrencies

This post raises good points. I think crypto is a very neglected cause area with enormous upside potential, especially for developing countries. There's much, much more to the crypto industry than just Bitcoin as a 'store of value', or crypto trading as a way to make money.

There are tens of thousands of smart people working on blockchain technologies and protocols that could offer a huge range of EA-adjacent use cases, such as:

  • much faster, cheaper remittances
  • protection of savings against hyperinflation caused by irresponsible central banks
  • secure economic identity that allows poor people to get loans, buy insurance, receive government vouchers, prove educational credentials and work histories, etc.
  • voting systems that are more secure, inclusive, hard to hack, and easy to validate
  • secure property rights and land records in areas where governments are often overthrown and land is confiscated
  • access to reliable, validated, uncensorable data through oracle apps -- e.g. weather data that can support crop insurance for poor farmers, or inflation statistics that can't be biased by government economists
  • social networks that build in consensus mechanisms for quality control, without centralized censorship
  • smart contracts for royalty payments that allow creators to receive a share of any increase in the value of their unique artworks

Projects that could support these use cases include Ethereum, Cardano, Chainlink, Algorand, Polkadot, and many others. Many use Proof of Stake consensus protocols, which have low energy consumption, rather than Proof of Work, which (as with Bitcoin) is energy-intensive.
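To see where Proof of Work's energy cost comes from, it helps to look at what the 'work' actually is: a brute-force race to find a hash with a rare property, where every failed guess burns electricity. Here's a purely illustrative toy sketch (not any real chain's algorithm; the data string and difficulty are made up):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force search for a nonce whose SHA-256 digest of
    (block_data + nonce) starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # every failed guess is electricity spent on nothing

# Each extra zero digit multiplies the expected number of guesses by 16,
# which is why real PoW networks at high difficulty consume so much energy.
nonce = mine("example block", 4)
```

Proof of Stake replaces this guessing race with validator selection weighted by staked coins, so no comparable brute-force computation is needed.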

Also, there's a lot of overlap between EA and crypto in terms of culture, personalities, and values. Apart from the 'toxic bitcoin maximalists', most people in the crypto industry pride themselves on their rationality, openness to evidence, long-termism, global outlook, optimism, and skepticism about virtue signaling. 

X-risks of SETI and METI?

Thanks to everybody for your helpful links! I've shared your suggestions with the journalist, who is grateful. :)

Cognitive and emotional barriers to EA's growth

Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!

Open Thread #38

Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum:

Open Thread #38

Cool idea. Although I think domain-specific board games might be more intuitive and vivid for most people -- e.g. a set on X-risks (one on CRISPR-engineered pandemics, one on an AGI arms race), one on deworming, one on charity evaluation with strategic conflict between evaluators, charities, and donors, a modified 'Game of Life' based on 80,000 Hours principles, etc.

Nothing Wrong With AI Weapons

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the legitimacy of the LAWs' chain of command, and how that legitimacy is secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force.

Suppose a state has an armada/horde/flock of formidable LAWs that could potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office but doesn't want to leave, and still holds the LAWs' 'launch codes', what keeps them from using the LAWs to subvert democracy? In the standard human-soldier/secret-service scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator; they would literally escort him or her out of the White House. In the LAWs scenario, those soldiers and agents would be helpless against the local LAWs controlled by the head of state. Instead, the robot army would escort the secret service agents out of the White House until they accepted the new dictator.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem -- in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they use some kind of blockchain voting system to decide what the LAWs do and whom they obey. But this opens the way to majoritarian mob rule, with LAWs forcibly displacing or genociding targeted minorities -- unless the LAWs embody some 'human/constitutional rights interrupts' that prevent such bullying.
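One toy way to model the 'no single launch-code holder' idea is k-of-n authorization: a command executes only if a supermajority of registered key-holders validly sign it. The sketch below uses HMACs as stand-in signatures purely for illustration (a real system would need public-key threshold signatures, Sybil-resistant identity, and the rights-interrupt problem would remain untouched; all names and the 2/3 quorum are hypothetical):

```python
import hashlib
import hmac
from typing import Mapping

def sign(key: bytes, command: str) -> bytes:
    """Toy 'signature': HMAC of the command under a citizen's secret key."""
    return hmac.new(key, command.encode(), hashlib.sha256).digest()

def authorized(command: str,
               signatures: Mapping[str, bytes],
               keys: Mapping[str, bytes],
               quorum: float = 2 / 3) -> bool:
    """Approve a command only if a supermajority of registered key-holders
    have validly signed it -- no single 'launch code' holder suffices."""
    valid = sum(
        1 for citizen, sig in signatures.items()
        if citizen in keys
        and hmac.compare_digest(sig, sign(keys[citizen], command))
    )
    return valid / len(keys) >= quorum

# Five registered citizens; four of them sign the command.
keys = {f"citizen{i}": bytes([i]) * 32 for i in range(5)}
cmd = "stand down"
sigs = {c: sign(k, cmd) for c, k in list(keys.items())[:4]}
print(authorized(cmd, sigs, keys))  # True: 4/5 meets the 2/3 quorum
```

Note that this only redistributes the authorization problem; it says nothing about how the quorum threshold is chosen, or how minorities are protected from a hostile supermajority.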

Any suggestions on how to solve this 'chain of command' problem?

How should we assess very uncertain and non-testable stuff?

In academic research, government and foundation grants are often awarded using criteria similar to ITN, except:

1) 'importance' is usually taken as short-term importance to the research field, and/or to one country's current human inhabitants (especially registered voters),

2) 'tractability' is interpreted as potential to yield several journal publications, rather than potential to solve real-world problems,

3) 'neglectedness' is interpreted as addressing a problem that's already been considered in only 5-20 previous journal papers, rather than one that's totally off the radar.

I would love to see academia in general adopt a more EA perspective on how to allocate scarce resources -- not just when addressing problems of human & animal welfare and X-risk, but in addressing any problem.
