
aviv

76 karma · Joined Jun 2020
aviv.me

Posts (1)

Comments (14)

Assuming misaligned AI is a risk, is technical AI alignment enough, or do you need joint AI/Societal alignment?

My work has involved trying to support risk awareness and coordination similar to what has been suggested for AI alignment. For example, for mitigating harms around synthetic media / “deepfakes” (now rebranded as generative AI), this worked for a few years with all the major orgs and most relevant research groups on board.

But then new orgs jumped in to fill the capability gap (e.g. EleutherAI, Stability AI, etc.)! Due to demand, and for potentially good reasons: the capabilities that can harm people can also help people. The ultimate result is the proliferation/access/democratization of AI capabilities in the face of the risks.

Question 1) What would stop the same thing from happening for technical AI alignment?[1]

I’m currently skeptical that this sort of coordination is possible without addressing deeper societal incentives (AKA reward functions; e.g. around profit/power/attention maximization, self-dealing, etc.) and the related multi-principal-agent challenges. This joint AI/societal alignment, or holistic alignment, would seem to be a prerequisite to the actual implementation of technical alignment.[2]
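To make the multi-principal-agent worry concrete, here is a toy sketch (all reward functions, weights, and numbers are hypothetical, chosen only to illustrate the structure): an agent optimizing a weighted blend of principals' reward functions will, when profit/attention dominate the blend, tend to pick policies that score poorly on the public-good axis; rebalancing those weights is the "societal alignment" move.

```python
# Toy sketch of a multi-principal-agent problem (illustrative only;
# all reward functions and weights are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Candidate policies the agent can choose among (e.g. content-ranking rules).
n_policies = 1000
# Each principal scores each policy differently.
profit = rng.normal(size=n_policies)                                 # platform's reward
attention = 0.8 * profit + rng.normal(scale=0.6, size=n_policies)    # correlated with profit
public_good = -0.5 * attention + rng.normal(size=n_policies)         # partly opposed

def best_policy(weights):
    """Agent picks the policy maximizing a weighted sum of principals' rewards."""
    w_profit, w_attention, w_public = weights
    total = w_profit * profit + w_attention * attention + w_public * public_good
    return int(np.argmax(total))

# When profit/attention dominate the effective 'societal reward function',
# the chosen policy tends to score poorly on the public-good axis.
i = best_policy((1.0, 1.0, 0.1))
print(f"profit={profit[i]:+.2f} attention={attention[i]:+.2f} public_good={public_good[i]:+.2f}")

i = best_policy((0.3, 0.3, 1.0))  # rebalanced incentives ('societal alignment')
print(f"profit={profit[i]:+.2f} attention={attention[i]:+.2f} public_good={public_good[i]:+.2f}")
```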

Question 2) Am I missing something here? If one assumes that misaligned AI is a threat worth resourcing, what is the likelihood of succeeding at AI alignment long-term without also succeeding at 'societal alignment'?

  1. ^

    This is assuming you can even get the major players on board, which isn't true for e.g. misaligned recommender systems that I've also worked on (on the societal side).

  2. ^

    This would also be generally good for the world! E.g. to address externalities, political dysfunction, corruption, etc.

Thanks for sharing! I'm curious if any of these readings were most helpful around forming "theories of change toward achieving a limited form of global governance in specific key domains where it might be most important" or "viable mechanisms for partial global governance in those domains."

As someone exploring alternative ways to govern near-global powerful technology organizations that interact closely with nation-states and fund a significant proportion of AI research, this is what I would be most curious about (and which also seems particularly relevant re. x-risks). In the linked doc, I focus on sortition-based systems as one potential approach, but there are additional routes (e.g. ML-augmented) that I am also exploring using ~this framework, and I'd be interested in any I have not considered.

Would "Governance Experiments and Scaling" be a better name?

I'm curious if this is essentially taking on what I allude to here: https://aviv.medium.com/building-wise-systems-combining-competence-alignment-and-robustness-a9ed872468d3

@Sean_o_h, just seeing this now when searching for my name on the forum (actually to find a talk I did for an EA community!). Thanks for the shoutout.

For context, while I've not been super active community-wise, and I don't find identities, EA or otherwise, particularly useful to my work, I definitely fit all the EA definitions as outlined by CEA, use ITN, etc.

I just added a comment above which aims to provide a potential answer to this question—that you can use "approaches like those I describe here (end of the article; building on this which uses mini-publics)".  This may not directly get you something to measure, but it may be able to elicit the values needed for defining an objective function.

You provide the example of this very low bar:

“I guarantee 100% that people would rather have the recommendations given by Netflix and YouTube than a uniform random distribution. So in that basic sense, I think we are already aligned.”

The goal here would be to scope out what a much higher bar might look like. 
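To see why that bar is so low, here's a back-of-envelope sketch (all numbers hypothetical):

```python
# Back-of-envelope for why 'better than uniform random' is a very low bar
# (all numbers hypothetical): with a large catalog, random recommendations
# almost never hit the small fraction of items a user would enjoy.
catalog_size = 10_000
liked_items = 50   # items this user would actually enjoy
slate = 10         # recommendations shown

p_hit_random = 1 - (1 - liked_items / catalog_size) ** slate
print(f"P(random slate contains something liked) ~= {p_hit_random:.2f}")  # ~0.05

# Even a mediocre personalized ranker that gets a few of 10 slots right
# dominates this, so 'beats uniform random' tells us almost nothing.
```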

I'm curious whether approaches like those I describe here (end of the article; building on this, which uses mini-publics) for determining rec system policy would help address the concerns of your first 3 bullets. I should probably do a write-up or modification specifically for the EA audience (this one is for a policy audience), but it ideally gets some of the point across re. how to do "deliberative retrospective judgment" in a way that is more likely to avoid problematic outcomes (I will also be publishing an expanded version with much more sourcing).
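As a minimal sketch of what "deliberative retrospective judgment" could feed into computationally (the episodes, scores, and aggregation rule below are all hypothetical; a real design needs careful sampling, deliberation quality controls, appeals processes, etc.):

```python
# Minimal sketch: turning deliberative retrospective judgments from a
# mini-public into labels for a recommender objective. Hypothetical data
# and aggregation rule, illustrative only.
from statistics import median

# Each inner list: one panelist's post-deliberation scores (-2..+2) for
# whether a given recommendation episode was, on reflection, good for them.
panel_scores = {
    "episode_a": [2, 1, 2, 1, 2],
    "episode_b": [-1, -2, 0, -1, -1],
    "episode_c": [1, -1, 0, 1, 0],
}

def deliberative_label(scores):
    """Aggregate panel judgments; the median resists a few extreme raters."""
    return median(scores)

labels = {ep: deliberative_label(s) for ep, s in panel_scores.items()}
print(labels)  # these labels could then supervise or reward a ranking model
```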

Art, writing, and other creative outputs are incredibly valuable for societal change and movement building. I know for my work, I'm currently looking for graphic designers who can convey complex ideas in glanceable, simple images to skeptical stakeholders. Well-produced animated videos would be even better.

I think there is a huge amount of opportunity, and part of the challenge is that EA, in my experience, has some potential cultural gaps, given its seemingly technocratic focus.

It may be helpful to grow a cross-national/ethnic overarching identity around "wisdom and doing good". EA does this a bit, but is heavily constrained to the technocratic. While that is a useful subcomponent of that broader identity, it can push away people who share or aspire to the underlying ideals of (1) "doing good as a core goal of existence" and (2) "being wise about how one chooses to do good"—but who don't share the disposition or culture of most EAs. Even the name itself can be a turnoff—it sounds intellectual and elitist.

Having a named identity which is broader than EA, but which contains it, could be helpful for connecting across neurodiverse divides in daily work, and could be incredibly valuable as a cross-cutting cleavage across national/ethnic/etc. divides in conflict environments, if it can encompass a broad enough population over time.

I'm not sure what that name might be in English, or if it makes more sense to just expand the meaning of EA, but it may be worth thinking about this, and consciously growing a movement around it alongside aligned movements that perhaps get at other "lenses of wisdom" focused on best utilizing/growing resources for broad positive impact.

Global Mini-public on AI Policy and Cooperation

Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations

We'd like to fund an organization to institutionalize regular (e.g. yearly) global mini-publics to create recommendations on AI policy and cooperation, ideally in partnership with key academic journals (and potentially the UN, major corporations, research institutions, etc.). Somewhat analogous to globalca.org, which focuses on gene editing (https://www.science.org/doi/10.1126/science.abb5931), and globalassembly.org, which focuses on climate (those are essentially pilots, heavily limited by funding).
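For concreteness, the core selection mechanism behind such mini-publics is sortition via stratified random sampling. A minimal sketch, with hypothetical strata and quotas (real lotteries, such as globalassembly.org's, stratify on many more axes than region):

```python
# Sketch of sortition via stratified random sampling for a global mini-public.
# Strata, quotas, and the volunteer pool are hypothetical; real designs also
# stratify on age, gender, attitudes, urban/rural location, etc.
import random

random.seed(42)

# Volunteer pool: (person_id, region).
pool = [(i, random.choice(["Africa", "Americas", "Asia", "Europe", "Oceania"]))
        for i in range(10_000)]

# Quotas roughly proportional to population share (illustrative numbers).
quotas = {"Africa": 18, "Americas": 13, "Asia": 59, "Europe": 9, "Oceania": 1}

assembly = []
for region, k in quotas.items():
    candidates = [p for p in pool if p[1] == region]
    assembly.extend(random.sample(candidates, k))  # uniform draw within stratum

print(len(assembly), "members selected")
```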


Platform Democracy Institutions

Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations

Facebook/Meta, YouTube/Google, and other platforms make incredibly impactful decisions about the communications of billions. Better choices can significantly impact geopolitics, pandemic response, the incentives on politicians and journalists, etc. Right now, those decisions are primarily in the hands of corporate CEOs—and heavily influenced by pressure from partisan and authoritarian governments aiming to entrench their own power. There is an alternative: platform democracy. In the past decade, a new suite of democratic processes has been shown to be surprisingly effective at navigating challenging and controversial issues, from nuclear power policy in South Korea to abortion in Ireland.

Such processes have been tested around the world, overcome the pitfalls of elections and referendums, and can work at platform scale. They enable the creation of independent ‘people’s mandates’ for platform policies—something invaluable for the impacted populations, for well-meaning governments that are unable to act on speech, and even for the platforms themselves (in many cases, at least, they don't want to decide these things, since deciding opens them up to more government retaliation). We have a rapidly closing policy window to test and deploy platform democracy and give it real power and teeth. We'd like to see new organizations advocate for, test, measure, certify, and scale platform democracy processes. We are especially excited about exploring how these approaches can be used beyond platform policies, for governance of the AI systems created and deployed by powerful corporations.

(Note: This is not as crazy as it sounds; several platforms you have heard of are dedicating significant resources to actively exploring this, but they need neutral 3rd-party orgs to work with; relevant non-profits are very interested but are stretched too thin to do much. The primary approaches I am referring to here are mini-publics and systems like Polis.)

More detail at platformdemocracy.com (not an org; just a working paper right now)
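For readers unfamiliar with Polis: the core idea is to cluster a participants-by-statements vote matrix to surface opinion groups, divisive statements, and common ground. A rough sketch of that idea on synthetic votes (this is not Polis's actual implementation; function choices and data are illustrative assumptions):

```python
# Rough sketch of the core idea behind Polis-style opinion mapping (not the
# actual Polis implementation): project a participants-by-statements vote
# matrix to 2D and cluster to surface opinion groups. Votes are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# 200 participants x 30 statements; votes in {-1 disagree, 0 pass, +1 agree}.
# Two synthetic opinion groups with opposite leanings on most statements.
lean = np.where(rng.random(200) < 0.5, 1, -1)[:, None]       # group leaning
votes = np.clip(lean * rng.choice([0, 1, 1], size=(200, 30))
                + rng.integers(-1, 2, size=(200, 30)), -1, 1)

coords = PCA(n_components=2).fit_transform(votes)             # 2D opinion map
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)  # opinion groups

# Statements where groups diverge most are candidates for deliberation;
# statements with broad agreement across groups indicate common ground.
gap = np.abs(votes[groups == 0].mean(0) - votes[groups == 1].mean(0))
print("most divisive statement:", int(np.argmax(gap)))
print("most consensus statement:", int(np.argmin(gap)))
```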
