
Remmelt

587 karma · Joined Feb 2017

Bio

Program Coordinator of AI Safety Camp.

Sequences
3

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments
165

Topic Contributions
3

I hadn’t made the connection between the GMO protests and AI protests.

This reads as a well-researched piece.

The analysis makes sense to me – with the exception of treating efforts to restrict facial recognition, the Kill Cloud, etc, as orthogonal. I would also focus more on preventing increasing AI harms and the consolidation of Big Tech power, which most AI-concerned communities agree on.

The problem here is that doing insufficient safety R&D at AI labs enables the labs to market themselves as seriously caring about safety, and thus to present their ML products as good for release.

You need to consider that, especially since you work at an AI lab.

I can see how the “for sure” makes it look overconfident.

I suggest reading the linked post. It addresses most of your questions.

As to your idea of having some artificial super-intelligent singleton lead to some kind of alignment between or technological maturity of both planetary cultures, if that’s what you meant, please see here: https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Great, we agree there then. 

Questions this raises:

  1. How much can supporting other communities to restrict data laundering, worker exploitation, unsafe uses, pollutive compute, etc, slow or restrict AI development? (eg. restrict data laundering by supporting lawsuits against unlawful TDM in the EU and state attorney actions against copyright violations in the US)
  2. How much should we work to support other communities to restrict AI development in different areas ("outside game") vs. working with AI companies to slow down development or develop "differentially" ("inside game")?
  3. How much are we supporting those other communities now?

AI Safety’s old approach of building relationships with AI labs has enabled labs to further scale the training and commercialisation of AI models.

as we need both pressure to take action and the ability to direct it in a productive way.

As far as I can see, little actual pressure has been put on these labs by folks in this community. I don’t think saying stuff while gently protesting counts as pressure.

That’s not what animal welfare organisations do when applying pressure. They seriously point out the deficiencies of the companies involved. They draft binding commitments for companies to wean themselves off harmful systems. And they ratchet up public pressure for more stringent demands alongside the internal conversations, so that it makes sense for company leaders to make more changes.

My sense is that AI Safety people are often not comfortable confronting companies, and/or hold somewhat naive notions of what it takes to push for reforms on the margin.

If AI Safety funders could not even stomach the notion of supporting another community (creatives) to ensure existing laws are not broken, then they cannot rely on themselves acting to ensure future laws are not broken by the AI companies.

A common reaction in this community to any proposed campaign that pushes for actually restricting the companies is that the leaders might no longer see us as being nice to them and no longer want to work with us. This implies that we perceive the company leaders as having the power in this relationship, and that we don’t want to cross them lest they drop us.

Companies whose start-up we supported have been actively eroding the chance of safe future AI for years now. And we’re going to let them continue, because we want to “maintain” this relationship with them.

From a negotiation stance, this will not work out. We are not building the leverage needed for company leaders to actually consider stopping scaling. They will pay lip service to “extinction risks” and then bulldoze over our wishes that they slow down.

The default is that the AI companies are going to scale on, and successfully reach the deployment of very harmful and long-term dangerous systems integrated into our economy.

What do you want to do? Further follow the old approach of trying to make AI labs more safety conscious (with some pause advocacy thrown in)?

Two thoughts:

  • I lack the ML expertise to judge this paper, but my sense is that it means you can create a pretty good working chatbot trained on a bunch of licensed textbooks.

  • Having said that, I don’t see how a neural network could generate the variety of seemingly fitting responses that ChatGPT does for various contexts (eg. news, social situations) without neural weights being adjusted to represent patterns found in those contexts.

I expect many communities would agree on working to restrict Big Tech's use of AI to consolidate power. List of quotes from different communities here.

my own thoughts have become a lot more pessimistic over the last ~year or so

Just read through your thoughts, and responded.

I appreciate your honesty here, and the way you stay willing to be open to new opinions, even when things are looking this pessimistic.

FAccT attendees are mostly a distinct group from the AI ethics researchers who come from, or are actively assisting, marginalised communities (rather than working with eg. fairness and bias abstractions).
