Topic Contributions

Comments

Milan Griffes on EA blindspots

Some back-and-forth on this between Eliezer & me in this thread.

Milan Griffes on EA blindspots

Compare the number of steps required for an agent to initiate the launch of existing missiles to the number of steps required for an agent to build & use a missile-launching infrastructure de novo.

Milan Griffes on EA blindspots

Here's Ben Hoffman on burnout & building community institutions: Humans need places

Milan Griffes on EA blindspots

This is the Ben Hoffman essay I had in mind: Against responsibility

(I'm more confused about his EA is self-recommending.)

Milan Griffes on EA blindspots

This orientation resonates with me too fwiw. 

Milan Griffes on EA blindspots

Existing nuclear weapon infrastructure, especially ICBMs, could be manipulated by a powerful AI to further its goals (which may well be orthogonal to our goals).

The Future Fund’s Project Ideas Competition

Researching valence for AI alignment

Artificial Intelligence, Values and Reflective Processes

In psychology, valence refers to the attractiveness, neutrality, or aversiveness of subjective experience. Improving our understanding of valence and its principal components could have large implications for how we approach AI alignment. For example, determining the extent to which valence is an intrinsic property of reality could provide computer-legible targets to align AI towards. This could be investigated experimentally: the relationships among experiences, their neural correlates, and subjective reports could be mapped across a large sample of subjects and cultural contexts.

The Future Fund’s Project Ideas Competition

Nuclear arms reduction to lower AI risk

Artificial Intelligence and Great Power Relations

In addition to being an existential risk in their own right, the continued existence of large numbers of launch-ready nuclear weapons also bears on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think the dynamics of this risk, and the policy responses available to mitigate it, are under-researched and would benefit from further investigation.
