All of aviv's Comments + Replies

Assuming misaligned AI is a risk, is technical AI alignment enough, or do you need joint AI/Societal alignment?

My work has involved trying to support risk awareness and coordination similar to what has been suggested for AI alignment, for example around mitigating harms from synthetic media / "deepfakes" (now rebranded as generative AI). That worked for a few years with all the major orgs and most relevant research groups.

But then new orgs jumped in to fill the capability gap! (e.g. Eleuther, Stability, etc.)
Due to demand and for potentially g... (read more)

Thanks for sharing! I'm curious whether any of these readings were most helpful for forming "theories of change toward achieving a limited form of global governance in specific key domains where it might be most important" or "viable mechanisms for partial global governance in those domains."

As someone exploring alternative ways to govern near-global powerful technology organizations that interact closely with nation-states and fund a significant proportion of AI research, this is what I would be most curious about (and which also seems e.g. particularly rel... (read more)

Would "Governance Experiments and Scaling" be a better name?

I'm curious if this is essentially taking on what I allude to here: https://aviv.medium.com/building-wise-systems-combining-competence-alignment-and-robustness-a9ed872468d3

RoboTeddy (2y):
Yep, I think almost entirely overlapping! Re: the name, I like "Governance Experiments and Scaling", but I just asked around and some other people said they liked "Governance Design & Formation" better 🤷 I don't have any strong feelings about the names.

@Sean_o_h, just seeing this now while searching for my name on the forum (actually to find a talk I did for an EA community)! Thanks for the shoutout.

For context, while I've not been super active community-wise, and I don't find identities, EA or otherwise, particularly useful to my work, I definitely fit all the EA definitions as outlined by CEA, use ITN, etc.

I just added a comment above which aims to provide a potential answer to this question—that you can use "approaches like those I describe here (end of the article; building on this which uses mini-publics)".  This may not directly get you something to measure, but it may be able to elicit the values needed for defining an objective function.

You provide the example of this very low bar:

I guarantee 100% that people would rather have the recommendations given by Netflix and YouTube than a uniform random distribution. So in that basic sense, I think we ar

... (read more)

I'm curious if approaches like those I describe here (end of the article; building on this, which uses mini-publics) for determining rec system policy help address the concerns of your first 3 bullets. I should probably do a write-up or modification specifically for the EA audience (this is for a policy audience), but it ideally gets some of the point across regarding how to do "deliberative retrospective judgment" in a way that is more likely to avoid problematic outcomes (I will also be publishing an expanded version with much more sourcing).

Rohin Shah (2y):
These approaches could help! I don't have strong reason to believe that they will, nor do I have strong reason to believe that they won't, and I also don't have strong reason to believe that the existing system is particularly problematic. I am just generally very uncertain and am mostly saying that other people should also be uncertain (or should explain why they are more confident).

Re: deliberative retrospective judgments as a solution: I assume you are going to be predicting what the deliberative retrospective judgment is in most cases (otherwise it would be far too expensive); it is unclear how easy it will be to do these sorts of predictions. Bullet points 1 and 2 were possibilities where the prediction was hard; I didn't see on a quick skim why you think they wouldn't happen. I agree "bridging divides" probably avoids bullet point 3, but I could easily tell different just-so stories where "bridging divides" is a bad choice (e.g. current affairs / news / politics almost always leads to divides, and so is no longer recommended; the population becomes extremely ignorant as a result, worsening political dynamics).

Art, writing, and other creative outputs are incredibly valuable for societal change and movement building. For my own work, I'm currently looking for graphic designers who can convey complex ideas in glanceable, simple images to skeptical stakeholders. Well-produced animated videos would be even better.

I think there is a huge amount of opportunity here, and part of the challenge is that EA, in my experience, has some potential cultural gaps given its seemingly technocratic focus.

Zach Roush (2y):
I feel the same way! Technology and data are so essential for being effective, but we give something up when we ONLY use data and tech to solve problems. But I see this as an opportunity as well, to forge a new kind of artistic culture around saving the world.

It may be helpful to grow a cross-national/ethnic overarching identity around "wisdom and doing good". EA does this a bit, but is heavily constrained to the technocratic. While that is a useful subcomponent of the broader identity, it can push away people who share or aspire to the underlying ideals of (1) "Doing good as a core goal of existence" and (2) "Being wise about how one chooses to do good", but who don't share the disposition or culture of most EAs. Even the name itself can be a turnoff; it sounds intellectual and elitist. ... (read more)

Global Mini-public on AI Policy and Cooperation

Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations

We'd like to fund an organization to institutionalize regular (e.g. yearly) global mini-publics to create recommendations on AI policy and cooperation, ideally in partnership with key academic journals (and potentially the UN, major corporations, research institutions, etc.). Somewhat analogous to globalca.org, which focuses on gene editing (https://www.science.org/doi/10.1126/science.ab... (read more)


Platform Democracy Institutions

Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations

Facebook/Meta, YouTube/Google, and other platforms make incredibly impactful decisions about the communications of billions. Better choices can significantly impact geopolitics, pandemic response, the incentives on politicians and journalists, etc. Right now, those decisions are primarily in the hands of corporate CEOs, and heavily influenced by pressure from partisan and authoritarian governments ai... (read more)

Operations and Execution Support for Impact

Empowering Exceptional People, Effective Altruism

The skill of running operations for building and growing a non-profit organization is often very different from doing the "core work" of that org. Figuring out operational details can suck energy away from the core work, leaving many promising people deciding not to start new orgs even when doing so is appropriate and necessary for scaling impact. We'd like to see an organization that could provide a sort of recruiting and matchmaking service which identifies promis... (read more)


Bridging-based Ranking for Recommender Systems

Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Great Power Relations

Recommender systems are used by platforms like FB/Meta, YouTube/Google, Twitter, TikTok, etc. to direct the attention of billions of people every day. Due to a combination of psychological, sociological, organizational, and other factors, these systems are currently most likely to reward content producers with attention if they stoke division (e.g. outgroup animosity). Because attention is a currency th... (read more)
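The pitch is truncated above, but a minimal sketch may help convey the core idea: rank items not by predicted engagement alone, but blended with a "bridging" signal that rewards content approved across groups that usually disagree and penalizes content whose appeal is concentrated in one group. Everything below (the `Item` fields, per-group approval predictions, and the `bridge_weight` knob) is a hypothetical illustration of one way this could work, not the proposal's actual mechanism.

```python
# Hypothetical sketch of bridging-based ranking. Instead of ranking purely by
# predicted engagement, boost items whose predicted approval is high *across*
# audience groups and penalize divisive ones. All names and weights are
# illustrative assumptions, not the actual proposal.

from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    predicted_engagement: float       # e.g. predicted click/watch probability
    group_approval: dict[str, float]  # predicted approval per group, in [0, 1]


def bridging_score(item: Item) -> float:
    """High when all groups approve; low when approval is divisive."""
    approvals = list(item.group_approval.values())
    lowest = min(approvals)            # the least-approving group dominates
    spread = max(approvals) - lowest   # divisiveness across groups
    return lowest - spread


def rank(items: list[Item], bridge_weight: float = 0.5) -> list[Item]:
    """Blend engagement with the bridging signal; bridge_weight is a policy knob."""
    return sorted(
        items,
        key=lambda it: (1 - bridge_weight) * it.predicted_engagement
                       + bridge_weight * bridging_score(it),
        reverse=True,
    )


items = [
    Item("outrage-clip", 0.9, {"group_a": 0.9, "group_b": 0.1}),
    Item("shared-concern", 0.6, {"group_a": 0.7, "group_b": 0.7}),
]
for it in rank(items):
    print(it.item_id)  # "shared-concern" ranks first despite lower engagement
```

Under this toy scoring, the high-engagement but divisive clip loses to the broadly approved one; in practice the per-group approval predictions, not the blending arithmetic, would be the hard part.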

Answer by aviv (Dec 10, 2020):

I was informed of this thread by someone in the EA community who suggested I help. I have deep subject matter expertise in this domain (depending on how you count, I've been working in it full-time for 5 years, and toward it for 10+ years). 

The reason I started working on this could be characterized as resulting from my beliefs on the "threat multiplier" impact that broken information ecosystems have on catastrophic risk.

A few caveats about all this though: 

  1. Most of the public dialogue around these issues is very simplistic and reductionist (which
... (read more)
Jan-Willem (3y):
Thanks for this! I've sent you an email. Especially regarding caveat #2, I believe you can help with relatively little time and resources.

Avoiding Infocalypse: How a decline in epistemic competence makes catastrophic risks inevitable — and what EAs can do about it

This would be a shortened and modified version of a talk I gave at Cambridge University, at the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk. The general-public version of many of the ideas can be found in this TEDx talk that I gave in 2018 (ignore the title, not my choice).

Part 1: Framing the underlying problem

Describe what is meant by epistemic competence (the ability or ... (read more)