I study computer science and information science at Cornell University. I am a public interest technologist interested in using CS to address pressing social problems. (she/her)
(Status: unsure) Preserving democracy in the United States is more valuable insofar as the world perceives the U.S. as the "leader" or "guarantor" of the liberal world order, particularly global democracy. But I don't think this outweighs the importance of democracy in the rest of the world, especially large democracies like India.
I think EAs' comparative advantage in promoting democracy in our own countries is the more important factor here.
I agree. Just as the EA movement has been pushing against the bias towards philanthropy in rich countries, so we should also try to resist the urge to pay attention only to political crises in rich countries like the United States.
NYC is adopting ranked-choice voting for the 2021 City Council election. One challenge, though, will be explaining the new voting system to voters.
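To make the mechanics concrete, here is a minimal sketch of instant-runoff tallying, the counting method typically used for single-winner ranked-choice elections (the function name and the toy ballots are illustrative, not any official implementation):

```python
from collections import Counter

def instant_runoff(ballots):
    """Pick a winner from ranked ballots by instant runoff.

    Each ballot is a list of candidates ranked from most to least
    preferred. Until some candidate holds a strict majority of the
    active ballots, the candidate with the fewest first-choice votes
    is eliminated, and their ballots transfer to each voter's next
    surviving choice.
    """
    eliminated = set()
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice not in eliminated:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total:  # strict majority wins
            return leader
        # Otherwise eliminate the last-place candidate and recount.
        eliminated.add(min(tallies, key=tallies.get))

ballots = [
    ["A", "B", "C"],
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["B", "C", "A"],
    ["C", "B", "A"],
]
print(instant_runoff(ballots))  # "B": C is eliminated, and C's ballot transfers to B
```

The transfer step is exactly the part voters find unintuitive: here no candidate starts with a majority (A and B each have 2 of 5 first-choice votes), but once C is eliminated, C's voter's second choice pushes B past the 50% threshold.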
I agree - I'm especially worried that focusing too much on longtermism, relative to other schools of EA thought, will make us seem out of touch with the rest of humanity. I would support conducting a public opinion poll to learn about people's moral beliefs, particularly how important and practical they consider focusing on the long-term future to be. I hypothesize that people who support ideas such as sustainability will be more sympathetic to longtermism.
I think I started out with r/EffectiveAltruism and checking out effective altruism websites. Then, someone wrote a post on the subreddit encouraging people to post on the EA Forum because that's where the action is. So now I'm mostly involved in the forum, but also some Facebook groups (although I try not to use FB often) and Discord.
I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.
How this worldview applies to AI: Artificial intelligence systems have embedded values because they are inherently goal-directed, and the goals we put into them may align with one or more human values. Also, because they are autonomous, AI systems have more agency than most technologies. But AI systems are still a product of society, and their effects depend on their own values and capabilities as well as economic, social, environmental, and legal conditions in society.
Because of this constructivist view, I'm moderately optimistic about AI despite some high-stakes risks. Most technologies are net-positive for humanity; this isn't surprising, because technologies are chosen for their ability to meet human needs. But no technology can solve all of humanity's problems.
I've previously expressed skepticism about AI completely automating human labor. I think it's very likely that current trends in automation will continue, at least until AGI is developed. But I'm skeptical that all humans will always have a comparative advantage, let alone a comparative advantage in labor. Thus, I see a few ways that widespread automation could go wrong:
I think economics will determine which human tasks are automated and which are still performed by humans.
The embedded values thesis is sometimes considered a form of "soft determinism" since it posits that technologies have their own effects on society based on their embedded values. However, I think it's compatible with social constructivism because a technology's embedded values are imparted to it by people.
I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.
Also, I earlier had the idea for a YouTube channel in the style of many educational channels. The zanier, TikTok-style video content could complement it.
This reminds me of the Planet Money TikTok!
Thank you for sharing this! I'm sympathetic to the YIMBY movement and appreciate your piece's comparative perspective.