We agree, for sure, that cost/benefit ought to be better articulated when deploying these models (see the Cost-Benefit Analysis part of the What Do We Want section). The problem here is really the culture of blindly releasing and open-sourcing models like this with a Go Fast and Break Things mentality, without at least making a case for what the benefits and harms are, and without appealing to any existing standard when making these decisions.

Again, it's possible (though not our position) that the specifics of DALLE-2 don't bother you as much, but the current culture around such models and their deployment certainly seems like an unambiguously alarming development.

The text-to-image models for education and communication you mention seem like a great idea! Moreover, I think it's entirely consistent with what we've put forth here, since you could probably fine-tune on graphics contained in papers related to the task at hand. The issue is really that people incur unnecessary amounts of risk by, say, building an automatic Distill-er using every image on the internet, when training on a smaller, task-relevant corpus would probably suffice and would vastly reduce the possible risk from a model originally intended for Distill-ing papers. The fundamental position we advance is that better protocols are needed before we start mass-deploying these models, not that NO version of these models / technologies could ever be beneficial.
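To make the "smaller corpus" point concrete, here is a minimal sketch of what restricting the training data could look like in practice. The manifest format, file layout, and source names (`arxiv_figures`, `distill_graphics`, `load_caption_pairs`) are hypothetical placeholders for illustration, not any particular pipeline:

```python
# Sketch only: restrict a text-to-image training corpus to figures from papers,
# rather than training on an unfiltered web-scale image dump.
from pathlib import Path

ALLOWED_SOURCES = {"arxiv_figures", "distill_graphics"}  # narrow, task-relevant sources

def load_caption_pairs(manifest_path: str):
    """Yield (image_path, caption, source) records from a simple TSV manifest."""
    for line in Path(manifest_path).read_text().splitlines():
        image_path, caption, source = line.split("\t")
        yield image_path, caption, source

def build_restricted_corpus(manifest_path: str):
    """Keep only records from approved, domain-specific sources."""
    return [
        (img, cap)
        for img, cap, source in load_caption_pairs(manifest_path)
        if source in ALLOWED_SOURCES
    ]

# The resulting corpus would then be used to fine-tune an existing image model,
# keeping its scope close to the intended Distill-ing use case.
```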

> When it comes to gene editing, our society decides to regulate its application but is very open that developing the underlying technology is valuable.

Here, I would also refer to the third principle proposed in the "What Do We Want" section (on cost-benefit evaluation): I think there should at least be more work done to try to anticipate and mitigate harms from these general technologies. For example, what is the rough likelihood of an extremely good versus an extremely bad outcome from deploying model X? If I add modification Y, does this change?
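For what it's worth, even a back-of-the-envelope version of this estimate is informative. The sketch below shows the shape of the calculation; every probability and utility in it is a made-up placeholder, not a real estimate of any model:

```python
# Crude expected-utility comparison for "model X as-is" vs. "model X with mitigation Y".
# All numbers are illustrative placeholders.

def expected_value(p_good: float, u_good: float, p_bad: float, u_bad: float) -> float:
    """Expected utility of deployment, ignoring middling outcomes."""
    return p_good * u_good + p_bad * u_bad

baseline        = expected_value(p_good=0.30, u_good=+100, p_bad=0.05, u_bad=-10_000)
with_mitigation = expected_value(p_good=0.28, u_good=+100, p_bad=0.01, u_bad=-10_000)

print(f"baseline:        {baseline:.1f}")   # -470.0
print(f"with mitigation: {with_mitigation:.1f}")  # -72.0
# Even a small reduction in the probability of the extreme downside can dominate
# the comparison, which is exactly why "does modification Y change this?" matters.
```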

I don't think our views are actually inconsistent here: if society scopes down the allowed usage of a general technology to comply with a set of regulatory standards that are deemed safe, that would work for me. 

My personal view on the danger is really that there isn't enough technical work to mitigate misuse of these models, or even to enforce compliance in a good way. We really need technical work on that, and only then can we start effectively asking the regulation question. Until then, we might want to simply delay the release of super-powerful successors to this kind of technology until we can give better performance guarantees for systems deployed this publicly.
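As one illustration of where such technical work could live, here is a sketch of deployment-side gating of queries before they reach the model. The names (`run_model`, `handle_request`) and the regex policy are stand-ins, not any real provider's API; a real system would use learned classifiers, human review, and auditing rather than a keyword list:

```python
# Sketch only: gate user queries at the API boundary before the model ever runs.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b(non-consensual|harassment)\b", re.IGNORECASE),
    # In practice this would be a learned content classifier plus human review,
    # not a regex list; this only shows where enforcement can sit.
]

def is_allowed(query: str) -> bool:
    return not any(pattern.search(query) for pattern in BLOCKED_PATTERNS)

def run_model(query: str) -> str:
    return f"<generated output for: {query!r}>"  # placeholder for the actual model call

def handle_request(query: str, user_id: str) -> str:
    if not is_allowed(query):
        # Record refusals for auditing and rate-limiting of repeat offenders.
        print(f"refused query from {user_id}")
        return "Request refused under content policy."
    return run_model(query)
```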

I think the core takeaway, at least from my end, is that this post elucidates a model, and tells a more concrete story, for how proliferation of technologies with a certain structure and API (e.g., general-purpose query-based ML models) can occur, and why they are dangerous. Most importantly, this means that even if you don't buy the harms of DALLE-2 itself (which, we have established, you should, in particular for its potential successors), this pattern of origination -> copycatting -> distribution -> misuse is a typical path for the release of technologies like this. If you buy that a dangerous capability could ever be produced by an AI model deployable with an API of the form query -> behaviour (e.g. powerful automatic video generation from prompts, powerful face-editing tools given a video, or an agent with arbitrary internet access controlled via user queries), this line of reasoning could apply and be useful. This informs a few things:

  1. Technologies, once proliferated, are like a Pandora's Box (or indeed, a slippery slope), so the very coordination / regulatory problem you speak of is most easily solved at the level of origination. This is a useful insight now, while many of the most dangerous AIs to be developed are yet to be originated.
  2. The potential harms of these technologies come from their unbounded scope, i.e. from their generality of function, from the lack of restriction on user access, or from parameter counts so large that their behaviour is inherently hard to reason about. All of these things make such models particularly amenable to misuse. So this post, in my mind, also takes a view on the source of capabilities risk from these models: it lies in their generality and open scope. This can inform which kinds of models / training techniques are more dangerous: e.g. those whose scope is widest, where the most failures can happen because the right behaviour is most nebulously defined.

In general, I would urge you to consider the paragraph below (in particular point (3)), since the argument there speaks to what seems to be the bulk of your criticism.

> Overall, the slippery slope from the carefully-guarded DALLE-2 to the fully-open-source Stable Diffusion took less than 5 months. On one hand, AI generators for offensive content were probably always inevitable. However, (1) not this soon. Delays in advancements like these increase the chances that regulation and safety work won’t be so badly outpaced by capabilities. (2) Not necessarily in a way that was enabled by companies like OpenAI and StabilityAI, who made ineffective efforts to avoid harms yet claim to have clean hands while profiting greatly off these models. And (3) other similar issues with more powerful models and higher stakes might be more avoidable in the future. What will happen if and when video generators, GPT-N, advanced generalist agents, or other potentially very impactful systems are released and copycatted?

In other words, it's maybe not so much about DALLE-2 itself as about the extrapolation of this pattern to models like it, and about ways to deal with that before a model posing existential risk is brought into existence (and by that point, if the data is in on that, we're probably dead already).

Thanks for reading, and for the comment. I hope this clarifies the utility of this article for you.