evelynciara

I study computer science and information science at Cornell University. I am a public interest technologist interested in using CS to address pressing social problems. (she/her)

Comments

Election scenarios

(Status: unsure) Preserving democracy in the United States is more valuable insofar as the world perceives the U.S. as the "leader" or "guarantor" of the liberal world order, and of global democracy in particular. But I don't think this outweighs the importance of democracy in the rest of the world, especially large democracies like India.

I think EAs' comparative advantage in promoting democracy in our own countries is the more important factor here.

Election scenarios

I agree. Just as the EA movement has pushed back against the bias toward philanthropy in rich countries, we should also resist the urge to pay attention only to political crises in rich countries like the United States.

evelynciara's Shortform

NYC is adopting ranked-choice voting for the 2021 City Council election. One challenge will be explaining the new voting system, though.
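For intuition, here's a minimal sketch (with made-up ballots) of the instant-runoff tabulation behind ranked-choice voting; NYC's actual rules add details like the five-ranking limit and formal tie-breaking procedures:

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the instant-runoff winner of ranked ballots.

    Each ballot is a list of candidates, most preferred first.
    """
    ballots = [list(b) for b in ballots]
    while True:
        # Each ballot counts toward its top-ranked remaining candidate.
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):  # majority reached
            return leader
        # No majority: eliminate the last-place candidate and
        # transfer those ballots to their next choices.
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# A leads on first choices, but once C is eliminated,
# C's ballots transfer to B, who wins with a majority.
ballots = [["A", "B"]] * 4 + [["B", "A"]] * 3 + [["C", "B"]] * 2
print(instant_runoff(ballots))  # B
```

The example also shows why the system takes explaining: the first-choice leader can lose once transfers are counted.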

Thomas Kwa's Shortform

I agree - I'm especially worried that focusing too much on longtermism will make us seem out of touch with the rest of humanity, relative to other schools of EA thought. I would support conducting a public opinion poll to learn about people's moral beliefs - in particular, how important and how practical they consider a focus on the long-term future to be. I hypothesize that people who support ideas such as sustainability will be more sympathetic to longtermism.

How have you become more (or less) engaged with EA in the last year?

I think I started out on r/EffectiveAltruism and by checking out effective altruism websites. Then someone wrote a post on the subreddit encouraging people to post on the EA Forum because that's where the action is. So now I'm mostly active on the Forum, along with some Facebook groups (although I try not to use FB often) and Discord.

evelynciara's Shortform

Social constructivism and AI

I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and the values, institutions, and other technologies within that society. For example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.

How this worldview applies to AI: Artificial intelligence systems have embedded values because they are inherently goal-directed, and the goals we put into them may align with one or more human values.[1] And because they are autonomous, AI systems have more agency than most technologies. But AI systems are still a product of society, and their effects depend on their own values and capabilities as well as economic, social, environmental, and legal conditions in society.

Because of this constructivist view, I'm moderately optimistic about AI despite some high-stakes risks. Most technologies are net-positive for humanity; this isn't surprising, because technologies are chosen for their ability to meet human needs. But no technology can solve all of humanity's problems.

I've previously expressed skepticism about AI completely automating human labor. I think it's very likely that current trends in automation will continue, at least until AGI is developed. But I'm skeptical that all humans will always have a comparative advantage, let alone a comparative advantage in labor. Thus, I see a few ways that widespread automation could go wrong:

  • AI stops short of automating everything, but instead of augmenting human productivity, displaces workers into low-productivity jobs - or worse, economic roles other than labor. This scenario would create massive income inequality between those who own AI-powered firms and those who don't.
  • AI takes over most tasks essential to governing society, causing humans to be alienated from the process of running their own society (human enfeeblement). Society drifts off course from where humans want it to go.

I think economics will determine which human tasks are automated and which are still performed by humans.
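As a toy illustration of that economic logic (with numbers invented for the example): even when an AI system is absolutely better at every task, opportunity costs can still assign some tasks to humans - the question raised above is whether that holds in general, and at what wage.

```python
# Toy numbers: output per hour on two tasks.
ai = {"code": 100.0, "care": 10.0}   # absolutely better at both
human = {"code": 2.0, "care": 5.0}

# Opportunity cost of one unit of "care", measured in forgone "code".
ai_cost = ai["code"] / ai["care"]           # 10.0
human_cost = human["code"] / human["care"]  # 0.4

# The human's opportunity cost for care work is 25x lower, so standard
# trade logic assigns care work to the human and coding to the AI,
# despite the AI's absolute advantage at both tasks.
assert human_cost < ai_cost
print(f"AI: {ai_cost}, human: {human_cost}")
```

Whether such an assignment survives in practice - and pays enough to live on - is exactly what the scenarios above put in doubt.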


  1. The embedded values thesis is sometimes considered a form of "soft determinism" since it posits that technologies have their own effects on society based on their embedded values. However, I think it's compatible with social constructivism because a technology's embedded values are imparted to it by people. ↩︎

evelynciara's Shortform

I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.

  • We need to explicitly distinguish between "AI existential safety" and "AI safety" writ large. Saying "AI safety" without qualification is confusing for both people who focus on near-term AI safety problems and those who focus on AI existential safety problems; it creates a bait-and-switch for both groups.
  • Although existential risk can refer to any event that permanently and drastically reduces humanity's potential for future development (paraphrasing Bostrom 2013), ARCHES only deals with the risk of human extinction, because extinction is easier to reason about and because it's not clear which non-extinction outcomes count as existential events.
  • ARCHES frames AI alignment in terms of delegation from m ≥ 1 human stakeholders (such as individuals or organizations) to n ≥ 1 AI systems. Most alignment literature to date focuses on the single-single setting (one principal, one agent), but such settings in the real world are likely to evolve into multi-principal, multi-agent settings. Computer scientists interested in AI existential safety should pay more attention to the multi-multi setting relative to the single-single one for the following reasons:
    • There are commercial incentives to develop AI systems that are aligned with respect to the single-single setting, but not to make sure they won't break down in the multi-multi setting. A group of AI systems that are each "aligned" in the single-single sense may still precipitate human extinction if the systems are not designed to interact well (a toy illustration follows this list).
    • Single-single delegation solutions feed into AI capabilities, so focusing only on single-single delegation may increase existential risk.
    • What alignment means in the multi-multi setting is more ambiguous because the presence of multiple stakeholders engenders heterogeneous preferences. However, predicting whether humanity goes extinct in the multi-multi setting is easier than predicting whether a group of AI systems will "optimally" satisfy a group's preferences.
  • Critch and Krueger coin the term "prepotent AI" for an AI system that is powerful enough to transform Earth's environment at least as much as humans have, and whose changes humans cannot effectively stop or reverse. Importantly, a prepotent AI need not be an artificial general intelligence.
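To make the single-single vs. multi-multi point concrete, here's a toy model of my own (not from ARCHES): two AI systems, each faithfully maximizing harvests for its own principal, deplete a shared resource that either one alone would have used sustainably.

```python
def simulate(n_agents, harvest_rate=0.08, steps=50, stock=100.0):
    """Shared resource: each agent takes a fixed fraction per step,
    then the remaining stock regenerates by 10%."""
    for _ in range(steps):
        stock *= 1.0 - n_agents * harvest_rate  # all agents harvest
        stock *= 1.10                           # regeneration
    return stock

# One "aligned" agent is sustainable; two identical agents,
# each aligned to its own principal, collapse the resource.
print(round(simulate(n_agents=1)))  # ~182: stock grows
print(round(simulate(n_agents=2)))  # ~2: stock collapses
```

Each agent passes a single-single alignment test in isolation; the failure lives entirely in their interaction, which is the gap the multi-multi framing targets.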

Some thoughts on EA outreach to high schoolers

Also, I had an idea earlier for a YouTube channel in the style of existing educational channels. The zanier, TikTok-style video content could complement it.

Foreign Affairs Piece on Land Use Reform

Thank you for sharing this! I'm sympathetic to the YIMBY movement and appreciate your piece's comparative perspective.
