geoffreymiller


X-risks of SETI and METI?

Thanks to everybody for your helpful links! I've shared your suggestions with the journalist, who is grateful. :)

Cognitive and emotional barriers to EA's growth

Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!

Open Thread #38

Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum: https://heterodoxacademy.org/resources/viewpoint-diversity-experience/

Open Thread #38

Cool idea, although I think domain-specific board games might be more intuitive and vivid for most people -- e.g. a set on X-risks (one on CRISPR-engineered pandemics, one on an AGI arms race), one on deworming, one on charity evaluation with strategic conflict among evaluators, charities, and donors, a modified 'Game of Life' based on 80,000 Hours principles, etc.

Nothing Wrong With AI Weapons

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the legitimacy of the LAWs' chain of command, and how that legitimacy is secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1

Suppose a state has an armada/horde/flock of formidable LAWs that can potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office, but doesn't want to leave and still holds the LAWs' 'launch codes', what keeps them from using the LAWs to subvert democracy? In the standard human-soldier/Secret Service scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending a would-be dictator; they would literally escort him or her out of the White House. In the LAWs scenario, those soldiers and agents would be helpless against LAWs controlled by the head of state -- the robot army would escort the Secret Service agents out of the White House instead, until they accepted the new dictator.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem -- in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they do some kind of blockchain voting system about what the LAWs do and whom they obey. This then opens the way to majoritarian mob rule, with LAWs forcibly displacing or genociding targeted minorities -- unless the LAWs embody some 'human/constitutional rights interrupts' that prevent such bullying.
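To make the idea concrete, here's a minimal Python sketch of what such a collective 'launch code' gate might look like. Everything in it is hypothetical (the CommandGate class, the 60% quorum, the protected-target list), and it uses per-citizen HMAC keys held by a single verifier purely as a stand-in -- a real system would need genuine threshold cryptography (e.g. threshold signatures) so that no central party holds everyone's keys:

```python
# Toy sketch: LAWs accept a command only if a supermajority of citizens
# co-sign it, and never against targets on a 'rights interrupt' list.
# HMACs stand in for real signatures; all names/parameters are invented.
import hmac
import hashlib
import secrets

class CommandGate:
    def __init__(self, citizen_keys, quorum_fraction=0.6, protected_targets=frozenset()):
        self.keys = citizen_keys            # citizen_id -> secret key (bytes)
        self.quorum = quorum_fraction       # fraction of citizens required
        self.protected = protected_targets  # the 'constitutional rights interrupt'

    def _valid(self, citizen_id, command, signature):
        # Verify one citizen's co-signature on the command string.
        key = self.keys.get(citizen_id)
        if key is None:
            return False
        expected = hmac.new(key, command.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    def authorize(self, command, target, votes):
        # Rights interrupt: no quorum, however large, can authorize force
        # against a protected target.
        if target in self.protected:
            return False
        valid = sum(self._valid(cid, sig_cmd := command, sig) if True else 0
                    for cid, sig in votes.items())
        return valid / len(self.keys) >= self.quorum

# Usage: three citizens, two of whom co-sign a command.
keys = {cid: secrets.token_bytes(32) for cid in ("alice", "bob", "carol")}
gate = CommandGate(keys, quorum_fraction=0.6,
                   protected_targets={"minority_district"})
cmd = "stand_down"
votes = {cid: hmac.new(keys[cid], cmd.encode(), hashlib.sha256).hexdigest()
         for cid in ("alice", "bob")}
print(gate.authorize(cmd, target="border_patrol", votes=votes))      # True (2/3 >= 0.6)
print(gate.authorize(cmd, target="minority_district", votes=votes))  # False: interrupt fires
```

Note that the 'rights interrupt' here is just a hard-coded deny-list, which pushes the problem back a level: someone still has to decide what goes on the list, which is exactly the anti-bullying constraint the paragraph above is gesturing at.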

Any suggestions on how to solve this 'chain of command' problem?

How should we assess very uncertain and non-testable stuff?

In academic research, government and foundation grants are often awarded using criteria similar to ITN (importance, tractability, neglectedness), except:

1) 'importance' is usually taken as short-term importance to the research field, and/or to one country's current human inhabitants (especially registered voters),

2) 'tractability' is interpreted as potential to yield several journal publications, rather than potential to solve real-world problems,

3) 'neglectedness' is interpreted as addressing a problem that's already been considered in only 5-20 previous journal papers, rather than one that's totally off the radar.

I would love to see academia in general adopt a more EA perspective on how to allocate scarce resources -- not just when addressing problems of human & animal welfare and X-risk, but in addressing any problem.
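For concreteness, here's a toy sketch (all scores invented) of how the same proposal can rank very differently under the academic reading of ITN versus the EA reading, using the standard multiplicative heuristic:

```python
# Toy ITN comparison; the 0-10 scores below are made up for illustration.
def itn_score(importance, tractability, neglectedness):
    # Standard ITN heuristic: expected impact scales with the product.
    return importance * tractability * neglectedness

# An academic panel's reading of the same proposal: short-term importance
# to the field, publishability, and 'only 5-20 prior papers on it'.
academic = itn_score(importance=7, tractability=9, neglectedness=6)

# An EA reading: long-term welfare impact, real-world problem-solving,
# and whether the problem is genuinely off the radar.
ea = itn_score(importance=3, tractability=4, neglectedness=2)

print(academic, ea)  # 378 vs 24: same work, very different priority
```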

Psychedelics could bring many benefits, but the EA community needs to be careful not to become associated with flaky New Age beliefs. I think we can best do this by being very specific about how psychedelics could help with certain kinds of 'intention setting', e.g.:

1) expanding the moral circle: promoting empathy, and turning abstract recognition of other beings' sentience into a more gut-level connection to their suffering;

2) career re-sets: helping people step back from their daily routines and aspirations to consider alternative careers, lifestyles, and communities (e.g. 80k hours applications);

3) far-future goal setting: getting more motivated to reduce X-risk by envisioning far-future possibilities more vividly, as in Bostrom's 'Letter from Utopia';

4) recalibrating utility ceilings: becoming more familiar with states of extreme elation and contentment can remind EAs that we're fighting for trillions of future beings to be able to experience those states whenever they want.
