Karthik Tadepalli

Economics PhD @ UC Berkeley
916 karma · Joined Apr 2021 · karthiktadepalli.com

Comments (165)

I think this makes a lot of sense for algorithmic regulation of human expression, but I still don't see the link to algorithmic expression itself. In particular I agree that we can't perfectly measure the violence of a speech act, but the consequences of incorrectly classifying something as violent seem way less severe for a language model than for a platform of humans.

It's hard to stop this argument from heading down the Dead Children Currency route. I think your heuristic that we should try to balance convenience with not being wasteful is right, and the optimizing heuristic that we should only spend on things that are more effective than giving that money away is wrong. It feels wrong in the same way it would feel wrong to say "we should only spend time on an activity if that activity is highly effective or would increase our productivity in EA work". EA is a community, for better or worse, and I think it's bad for communities to create norms that harm community members' well-being. I think a counterfactual norm of comparing every spending decision to the potential impact of donating that money would be terrible for the well-being of EAs, especially very scrupulous EAs. "Effective Altruism in the Garden of Ends" talks beautifully about the dark side of bringing such a demanding framework into everyday decisions.

That obviously does not mean all forms of EA spending are good, or even that most of them are. It's a false dichotomy to say the only options are to spend on useless luxuries or to obsess over Dead Children Currency. But it does suggest that we should take a more heuristic approach to feeling out when spending is too much. Yes, we shouldn't spend on Ubers just because EA is footing the bill; take the BART in most cases. But if it's late at night and you don't want to be on the BART, don't force yourself into a scary situation because you're scared of wasting money that could save lives.

The most obvious (and generic) objection is that censorship is bad.

This strikes me as a weird argument because it isn't object-level at all. There's nothing in this section about why censoring model outputs to be diverse, avoid slurs, and not target individuals or produce violent speech is actually a bad idea. There was a Twitter thread about using GPT-3 prompt injections to make a remote-work bot issue violent threats against people. That was pretty convincing evidence to me that there is too much scope for abuse without some front-end modifications.

If you have an object-level, non-generic argument for why this form of censorship is bad, I would love to hear it.

This will create the illusion of greater safety than actually exists, and (imo) is practically begging for something to go wrong.

If true, this would be the most convincing objection to me. But I don't think this is actually how public perception works. Who is really out there thinking Stable Diffusion is safe, yet would be convinced it's a problem if they saw it generate a violent image? Most people who celebrate Stable Diffusion or GPT-3 know these models could be used for bad ends; they just think the good ends are more important or the bad ends are fixable. I just don't see how a front-end tweak really convinces people who would otherwise have been skeptical. I think it's much more realistic that people see this as transparently a band-aid solution, and they just vary in how much they care about the underlying issue.

I also think there's a distinction between a model being "not aligned" and being misaligned. Insofar as a model is spitting out objectionable outputs, it certainly doesn't meet the gold standard of aligned AI. But I also struggle to see how it is actually concretely misaligned. In fact, one of the biggest worries of AI safety is AIs being able to circumvent restrictions placed on them by the modeller. So an AI that is easily muzzled by front-end tweaks is not likely to be the biggest cause for concern.

Calling content censorship "AI safety" (or even "bias reduction") severely damages the reputation of actual, existential AI safety advocates.

This is very unconvincing. The AI safety vs. AI ethics conflict is long-standing, goes way beyond any particular piece of front-end censorship, and is unlikely to be affected by these individual issues. If your broader point is that calling AI ethics "AI safety" is bad, then yes. But I don't think the cited tweets are really evidence that AI safety is widely viewed as synonymous with AI ethics. Timnit Gebru has far more followers than any of these tweets will ever reach, and is quite vocal about criticizing AI safety people. The contribution of front-end censorship to this debate is probably quite overstated.

I am not super clear on the delineation between DNT pesticides and suicide-risk pesticides and their relative importance so I'll defer to you.

That's fair, but I don't think horses pre-1900 were treated in terrible ways. In particular, the incentives for treating farm animals are VERY different from the incentives for treating service animals, whose usefulness depends on their continued health and quality of life.

Domestication isn't the same as exploitation, as wild animal welfare advocates will attest. Dogs, cats, and horses probably live better lives than all other animals.

How do you square that with the success of the Center for Pesticide Suicide Prevention in advocating for some pesticides to be banned in dozens of countries? Even if the CPSP wasn't instrumental in all of these cases, it doesn't seem to have been destroyed by the food and farming lobbies.

Exciting contest! I'd encourage the creation of a tag for this contest, to help people collect and read through entries that are posted on the Forum.

is caring about the future really enough to meaningfully equate movements with vastly different ideas about how to improve the world?

Given that longtermism is literally defined as a focus on improving the long-term future, I think yes? You can come up with many vastly different ways to improve the long-term future, but we should think of the category as "all movements to improve the long-term future" and not "all movements to improve the long-term future focusing on AI risk, bio risk, and value lock-in".

I think this comment on another post about the polycrisis is pretty good and captures why I'm skeptical of the polycrisis as a concept. But I'm very suspicious of people downvoting a post that is not actually a substantive claim about the polycrisis, but rather an invitation to a collaboration (which can't possibly be negative, and could definitely be positive).
