
Remmelt

545 karma · Joined Feb 2017

Bio

Program Coordinator of AI Safety Camp.

Sequences (3)

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments (133)

Topic Contributions (3)

amicus briefs from AI alignment, development, or governance organizations, arguing that AI developers should face liability for errors in or misuse of their products.

Sounds like a robustly useful way to create awareness of the product-liability issues of buggy spaghetti code.

Actually, there are many plaintiffs I’m in touch with (especially those representing visual artists, writers, and data workers) who need funds to pay for legal advice and to start class-action lawsuits (given that they would have to pay court fees if a case is unsuccessful).

Remmelt
4mo

A friend in AI Governance just shared this post with me.

I was blunt in my response, which I will share below:

~ ~ ~

Two cruxes for this post:

  1. Is aligning AGI to be long-term safe even slightly possible – practically, given default trends in scaled AI training and deployment and the complexity of the problem (see Yudkowsky’s list of AGI lethalities), or theoretically, given strict controllability limits (Yampolskiy) and uncontrollable substrate-needs convergence (Landry)?

If pre-aligning AGI not to cause a mass extinction is clearly not even slightly possible, then IMO splitting hairs about “access to good data that might help with alignment” is counterproductive.

  2. Is a “richer technological world” worth the extent to which corporations will automate away our ability to make our own choices (starting with our own data), worth the increasing destabilisation of society, and worth the toxic environmental effects of automating technological growth?

These are essentially rhetorical questions, but they cover the points I would put to someone who proposes desisting from collaborating with other groups that notice related harms and risks of corporations scaling AI.

To be honest, the reasoning in this post seems rather motivated without examination of underlying premises.

These sentences in particular:

“A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren't. Therefore, it's critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well. Passing subpar regulations now — the type of regulations not explicitly designed to provide favorable differential technological progress — might lock us into bad regime.”

It assumes AGI is inevitable, and therefore we should be picky about how we constrain developments towards AGI.

It also implicitly assumes that continued corporate scaling of AI counts as positive “progress” – at least toward the kind of world the authors imagine would result, and want to live in.

The tone also comes across as uncharitable. It reads as if the authors are talking down to others whom they have not spent time carefully listening to, taking the perspective of, and paraphrasing the reasoning back to (at least, nothing in the post is written about or from such attempts).

Frankly, we cannot let motivated techno-utopian arguments hold us back from taking collective action against exponentially increasing harms and risks (in both their scale and their local impacts). We need to work with other groups to gain traction.

~ ~ ~

Remmelt
5mo

Unfortunately, perhaps due to the prior actions of others in your same social group, a deceptive frame of interpretation is more likely to be encountered first, effectively 'inoculating' everyone else in the group against an unbiased receipt of any further information.



Written in 2015. Still relevant.

Remmelt
5mo

Suppose the Illusion of Truth effect and the Ambiguity Effect are each biasing how researchers in AI Safety evaluate one of the options below.

If you had to choose, which bias would more likely apply to which option?

  • A:  Aligning AGI to be safe over the long term is possible in principle.
  • B:  Long-term safe AGI is impossible fundamentally.
Remmelt
5mo

Also thinking of doing an explanatory talk about this!

Yesterday, I roughly sketched out the "stepping stones" I could talk through to explain the arguments.

Remmelt
5mo

Thank you too for responding here, Anthony. It feels tough trying to explain this stuff to people around me, so it helps to have someone point out what is actually needed to make constructive conversations work here.

Remmelt
5mo

the proper 'bar' for new ideas: consideration of the details, and refutation of those details. If refutation cannot be done by them, then they have no defense against your arguments!


Yes, and to be clear: we have very much been working on writing up those details in ways that are hopefully more understandable to AI Safety researchers.

But we are really not working in a context of “neutral” evaluation here, which is why we're not rushing to put those details out onto the Alignment/LW/EA Forum (though many details can already be found across posts on Forrest's blog).

Remmelt
5mo

Yes, agreed with the substance of your points (I try to be more diplomatic about this, but it roughly lines up with my impressions).

 

If the objective is to persuade this community to pay attention to your work, then even if in some platonic sense their bar is 'too high' is neither here nor there: you still have to meet it else they will keep ignoring you.

Rather than helping encourage reasonable evaluations in the community (no isolated demands for rigour when judging formal reasoning that long-term safe AGI is impossible, compared with intuitions that AGI safety is possible in principle), this says that a possibly unreasonable status quo is not going to change, so people should just adjust to it if they want to make any headway.

The issue here is that the inferential distance is already large as it is, and in most one-on-ones I don't get further than discussing basic premises before my interlocutor side-tracks or cuts off the conversation. I was naive 11 months ago to believe that many people would actually dig into the reasoning steps with us if we found a way to translate them into something nearer to Alignment Forum speak, easier to comprehend and follow step by step.

In practice, I do think it's correct that we need to work with the community as it is. It's on us to find ways to encourage people to reflect on their premises and to detail and discuss the formal reasoning from there.

Remmelt
5mo

Ah, thanks. There seems to be a bug in how the EA Forum forwards links to the MFLB blog.

If you copy-paste the link, it works: https://mflb.com/ai_alignment_1/af_proof_irrationality_psr.html
