stecas

Joined Mar 2019

Comments

Thanks for the comment. I hope you find the content interesting.

"I'm not sure I understand what the argument is."

The most important points we want to argue with this post are that (1) if a system is itself made to be safe, but it is copycatted and open-sourced, then the safety measures were not effective; (2) it is bad when developers like OpenAI publish incomplete or overly convenient analyses of the risks of what they develop that, for example, ignore copycatting; and (3) the points from "What do we want?" and "What should we do?"

"...we are not convinced of any fundamental social benefits that film provides aside from, admittedly, being entertaining..."

Yes, entertainment has value, but I don't think that entertainment from text-to-image models is or will be commensurate with film. I could also very easily list a lot of non-entertainment uses of film involving things like education, communication, etc. And I think someone in 1910 could easily have thought of these as well. What comparable uses would you predict for text-to-image diffusion models?

"So, what justifies this casual dismissal of the entertainment value of text-to-image models?"

We don't dismiss it. We argue that the entertainment value is unlikely to outweigh the harms.

Your primary conclusion is that "the AI research community should curtail work on risky capabilities".

I wouldn't say this is our primary conclusion; see my first response above. Also, I don't think this point is obvious: Sam Altman, Demis Hassabis, and many others strongly disagree with it.

"The problem is coordinating our behavior. If OpenAI decides not to work on it, someone else will"

We disagree that the counterfactual to OpenAI not working on projects like DALL-E 2 or GPT-4 would be similar to the status quo. We discussed this in the paragraph that says "...On one hand, AI generators for offensive content were probably always inevitable. However..."

"...Government bans? Do you realize how hard it would be to prevent text-to-image models from being created and shared..."

Yes. We do not advocate for government bans. My answer to this is essentially what we wrote in the "The role of AI governance" section, and I don't have much to add beyond what we already wrote; I recommend rereading that section. In short, there are regulatory tools that can be used. For example, the FTC may have a considerable amount of power in some cases.

"Are we talking about 100 people a year being victimized? That would indeed be sad, but compared to potential human extinction from AI, probably not as big of a deal."

Where did the number 100 come from? In the post, we cite an article about a 2019 study that found ~15,000 deepfakes online, back when image and video generation were much less developed than today. And in the future, deepfakes may become much more widespread because of easy-to-use open-source tools based on Stable Diffusion.

Another really important point, I think, is that we argue in the post that trying to avoid dynamics involving racing toward TAI, copycatting, and open-sourcing of models will LESSEN X-risk. You wrote your comment as if we are trying to argue that preventing sex crimes is more important than X-risk. We don't say this. I recommend rereading the "But even if one does not view the risks specific to text-to-image models as a major concern..." paragraph and the "The role of AI researchers" section.

Finally, and I want to put a star on this point: we all should care a lot about sex crime, and I'm sure you do. Writing off problems like this by comparing them to X-risk (1) isn't valid in this case, because we argue for improving the dev ecosystem to address both of these problems; (2) should be approached with great care and good data if it needs to be done; and (3) is the kind of thing that leads to a lot of negativity and bad press about EA.

I think this is probably even more true of your comments on entertainment value and whether it might outweigh the harms of deepfake sex crimes. First, I'm highly skeptical that we will find uses for text-to-image models that are so widely usable and entertaining that they would be commensurate with the harms of diffusion-deepfake sex crime. But even if we could be confident that entertainment would hypothetically outweigh sex crimes on pure utilitarian grounds, in the real world, with real politics and EA critics, I do not think this position would be tenable. It could serve to undermine support for EA and end up being very negative if it became widespread.

Thanks for the comment. I think that simple interfaces for SD like this are not particularly worrisome. But I think that now (1) inpainting/outpainting, (2) DreamBooth (see this SFW example), (3) GUIs that make it easy to use these, and (4) future advancements in diffusion models (remember that DALL-E 2 was only released in April of this year) are the main causes for concern.

In 2016, the world champion Go player Lee Sedol played five games against an AI system named AlphaGo. In game two, AlphaGo's move 37 proved instrumental toward beating Sedol [1]. Nobody predicted or understood the move. Later, in game four, it blundered with move 79, which led to a loss against Sedol [2]. Nobody predicted or understood that move either. AlphaGo ultimately won 4 of the 5 games and provided a concrete example of how humans are not as smart as smart gets. This illustrates a key reason to invest in making AI safer and more trustworthy. The limits of intelligence are unknown unknowns, and advanced AI may be one of the most transformative developments in human history. We hope that next-generation AI systems will be well aligned with our values and that they will make brilliant and useful decisions like move 37. But misaligned values or failures like move 79 will pose hazards and undermine trust unless they can be avoided. It would have been really nice to have had a prescient research community in the 1920s dedicated to making sure that nuclear technology went well, or one in the 1970s for the internet. For the same reason, we shouldn't miss our chance to invest in research toward safer and more trustworthy AI today.
 

[1] https://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/

[2] https://web.archive.org/web/20161116082508/https:/gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/

Thanks for the observation. The idea was definitely not to say that promoting disasters is a pragmatic course of action, but rather that disasters which inspire us to prevent future risks can be good. I hope that the first line of the post clears up any potential confusion.

I'm pretty averse to making major changes to a post, but for the sake of preventing possible future confusion, I opted to change 'Inspiring' to 'Inspirational' in the title.

[Update: in response to some additional feedback, another update was made. See the first line of the post.]

When I mentioned the classic trolley problem, that was not to say that it's analogous. The analogous trolley problem would be a trolley barreling down a track that splits in two and rejoins. On the current course of a trolley there are a number of people drawn from distribution X who will stop the trolley if hit. But if the trolley is diverted to the other side of the fork, it will hit a number of people drawn from distribution Y. The question to ask would be: "What type of difference between X and Y would cause you to not pull the lever and instead work on finding other levers to pull?" Even a Kantian ought to agree that not pulling the lever is good if the mean of Y is greater than the mean of X.

Thanks, cwbakerlee, for the comment. Maybe this is due in part to how much more time I've been spending on LessWrong recently than on the EA Forum, but I have been surprised by the characterization of this post as one that seems dismissive of the severity of some disasters. This isn't what I was hoping for. My mindset in writing it was one of optimism. It was inspired directly by this post, plus another conversation I had with a friend about how, if self-driving cars turn out to be riddled with failures, it could lend much more credibility to AI safety work.

I didn't intend for this to be a long post, but if I wrote it again, I'd have a section on "reasons this may not be the case." Still, I would not soften the message that endurable disasters may be overwhelmingly net positive in expectation. I disagree that non-utilitarian moral systems would generally dismiss the main point of this post. I think that rejecting the idea that disasters can be net good if they prevent bigger future disasters would be pretty extreme even by commonsense standards. This post does not suggest that these disasters should be caused on purpose. To anyone who points out the inhumanity of sanctioning a constructive disaster, it can easily be pointed out that trading a small number of deaths for a large one is even more inhumane. I wouldn't agree with making the goal of a post like this to appeal to viewpoints that are this myopic. Even considering moral uncertainty, I would discount this viewpoint almost entirely, similarly to how I would discount the idea that pulling the lever in the trolley problem is wrong. To the extent that the inspiring disaster thesis is right (on its own terms), it's an important consideration. And if so, I don't find it very tenable that it should be too taboo to write a post about.

About COVID-19 in particular, I am not an expert, but I would probably ascribe a fairly low prior to the possibility that increased risks from ineffective containment of novel pathogens in labs would outweigh reduced risks from other adaptations regarding prevention, epidemiology, isolation, medical supply chains, and vaccine development. I am aware of speculation that the current outbreak was the result of a leak from a laboratory in Wuhan, but my understanding is that this is not seen as very substantiated. Empirically, over the past few decades, it seems that far more deaths have been due to poor handling of "naturally" occurring disease outbreaks than to pathogens escaping a lab.

I think it's pretty safe to say that

"It seems uncooperative with the rest of humanity...'let people suffer so they'll learn their lesson.'"

is a strawperson for this post. This post argues that the ability of certain disasters to spark future preventative work should be a factor in cause prioritization. If that argument cannot be properly made by discussing examples or running a simulation, then I do not know how it could be made. I would be interested in how you would recommend discussing this if this post were not the right way to do so.

Thanks for the comment. I think that 2 and 4 are good points, and 1 and 3 are great ones. In particular, I think that one of the more important factors that the toy model doesn't capture is the nonindependence of the arrival of disasters. Disasters that are large enough are likely to have a destabilizing effect that breeds other disasters. An example might be that WWII was in large part a direct consequence of WWI and a global depression. I agree that all of these should also be part of a more complete model.
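To illustrate what I mean by nonindependence, here is a minimal sketch. To be clear, this is not the toy model from the post, and the function name and every parameter value are illustrative assumptions on my part. It just contrasts independent arrivals with "self-exciting" arrivals, where each disaster temporarily raises the probability of the next one:

```python
import random

def simulate(years=200, base_rate=0.02, excitation=0.10, decay=0.5, seed=0):
    """Count disasters over `years`. Each disaster adds `excitation` to the
    following years' disaster probability, and that excess decays by `decay`
    per year. All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    extra = 0.0   # excess risk left over from past disasters
    count = 0
    for _ in range(years):
        p = min(1.0, base_rate + extra)
        if rng.random() < p:
            count += 1
            extra += excitation  # a destabilizing disaster breeds further risk
        extra *= decay           # destabilization fades over time
    return count

print(simulate(excitation=0.0))   # independent arrivals
print(simulate(excitation=0.10))  # arrivals cluster after destabilizing events
```

With excitation set to zero, arrivals are independent; with a positive value, one disaster raises the near-term probability of another, which is the WWI-to-WWII kind of dynamic I have in mind.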
