Fai

Contractor RA to Peter Singer, Princeton

Comments

The Future Might Not Be So Great.

Maybe another typo: "Bostrom argues that if humanizes could colonize the Virgo supercluster" — should that be "humanity" or "humans"?

Transcript of a talk on The non-identity problem by Derek Parfit at EAGxOxford 2016

Thank you so much! I used this in my research just last week. I can now revise this more easily!

Steering AI to care for animals, and soon

I and a few other people are discussing how to start some new charities at the intersection of animals and longtermism, which includes AI. So maybe that's what we need in EA before we can talk about where to donate to help steer AI to better care for animals.

Steering AI to care for animals, and soon

Hi Cate, thank you for having the courage to express potentially controversial claims; I upvoted (though not strongly) for this reason.

I am not a computer or AI scientist. But my guess is that you are probably right, if by "predictable" we mean "predictable to humans only". For example, in a paper (not yet published) Peter Singer and I argue that self-driving cars should identify animals that might be in their path and dodge them. But we are aware that the costs of detection and computation will rise, and that the AI will have more constraints in its optimization problem. As a result, the cars might be more expensive, and they might be willing to sacrifice some human welfare, such as by causing discomfort or fright to passengers while braking hard for a rat crossing the road.
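To make that trade-off concrete, here is a minimal, purely illustrative sketch (not from our paper; all names and numbers are made up) of how an animal-welfare term might enter a planner's cost function:

```python
from dataclasses import dataclass

# Toy planner cost: trade passenger comfort against expected animal harm.
# Everything here is hypothetical; it only illustrates the shape of the problem.

@dataclass
class Trajectory:
    max_decel: float        # peak braking deceleration (m/s^2)
    p_animal_strike: float  # estimated probability of hitting the detected animal

def trajectory_cost(traj: Trajectory, w_comfort: float = 1.0, w_animal: float = 0.0) -> float:
    """Lower is better; w_animal = 0 recovers a purely human-centric planner."""
    return w_comfort * traj.max_decel ** 2 + w_animal * traj.p_animal_strike

# Two candidate responses to a rat crossing ahead:
hard_brake = Trajectory(max_decel=8.0, p_animal_strike=0.05)
maintain_speed = Trajectory(max_decel=1.0, p_animal_strike=0.90)

for w in (0.0, 100.0):
    best = min((hard_brake, maintain_speed), key=lambda t: trajectory_cost(t, w_animal=w))
    print(f"w_animal={w}: choose {'hard_brake' if best is hard_brake else 'maintain_speed'}")
```

With the welfare weight at zero, the planner never brakes for the rat; raise it enough and hard braking becomes the preferred plan, which is exactly the passenger-comfort cost I mention above.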

But maybe this is not a reason to worry. If, just as most of the stakes and wellbeing lie in the future, most of the stakes and wellbeing lie with nonhuman animals, maybe that's a bullet we need to bite. We (longtermists) probably wouldn't say we worry that an AI that cares about the whole future would be a lot less predictable with respect to the welfare of current people; we are more likely to say this is how it should be.

Another reason not to over-worry is that human economics will probably constrain this from happening to a large extent. Using the self-driving car example again: if some companies' cars care about animals and others' don't, the cars that don't will, other things being equal, be cheaper and safer for humans. So unless we somehow convince all car producers to take care of animals, we probably won't have the "problem" (which, for me, is the actual problem). The point probably goes beyond economics: politics, culture, and human psychology likely all have similar effects. My sense is that as long as humans are in control of the development of AI, AI is more likely to be too human-centric than not human-centric enough.

Steering AI to care for animals, and soon

So if an AI being aligned means that it cares about animals to the extent humans do, it could still be unaligned with respect to the animals' own values to the extent humans are mistaken about them (which we most certainly are).

I very much agree with this. This will actually be one of the topics I will research in the next 12 months, with Peter Singer.

Megaprojects for animals

I really like this idea. In addition to financial support, maybe EA should formally take a stance on this?

Steering AI to care for animals, and soon

But you are introducing a regress here. Already, EAs care about animal welfare and consider AI important.

But I think it's much more like: some EAs care about animal welfare, some EAs care about AI, and fewer care about both. More importantly, of the relatively few people who care about both AI and animals, quite few care about them in a connected way.

Thus, I doubt that any AI safety agreements would omit non-human animals.

I actually doubt any AI safety agreement would explicitly include non-human animals. If you look at the public AI principles/statements/agreements from NGOs, universities, governments, and corporations, only the one from the Université de Montréal mentions "all sentient beings". From my experience reading and discussing with the EA longtermist/AI community, AI safety principles published by EAs might be more likely to include all sentient beings than the world average, but I think it's still more unlikely than likely that EA AI safety principles will explicitly include animals.

Further, AI will probably consider non-human sentience, if it is sentient.

I would like to hear your argument for why you think so. It seems to me that humans didn't come to care about other sentient beings simply by being sentient ourselves.

Also, what about the scenario where there will be no sentient AI?

Also, you are assuming an erroneous dynamic. Animal welfare is important for AI safety not only because it enables it to acquire diametrically different impact but also since it provides a connection to the agriculture industry, a strategic sector in all nations. 

I actually think that you might be assuming an erroneous dynamic. You might be connecting AI to the agricultural sector because you think AI might affect farmed animals there, which I agree will be the case (my main research focus is AI's impacts on farmed animals). But AI won't just affect the lives of farmed animals; it will affect pretty much all animals: farmed animals, wild animals, animals used in experiments, companion animals, and human animals. For me, the core reason animal welfare is important for AI is similar to why human welfare is important for AI: all sentient beings matter.

Steering AI to care for animals, and soon

A project called Evolving Language was also hiring an ML researcher to "push the boundaries of unsupervised and minimally supervised learning problems defined on animal vocalizations and on human language data".

There's also DeepSqueak, which studies rat vocalizations using deep learning. But their motive seems to be to do better, and more, animal testing (not suggesting this is necessarily net bad).
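For readers curious what "unsupervised learning on animal vocalizations" can look like in practice, here is a minimal sketch (my own illustration, not Evolving Language's or DeepSqueak's actual pipeline; the directory of audio clips is hypothetical): embed each recorded call as an MFCC feature vector, then cluster the calls without any labels.

```python
import glob

import librosa
import numpy as np
from sklearn.cluster import KMeans

features = []
for path in glob.glob("calls/*.wav"):     # hypothetical directory of call recordings
    y, sr = librosa.load(path, sr=None)   # load one clip at its native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    features.append(mfcc.mean(axis=1))    # one fixed-length vector per clip

# Group calls into clusters with no labels at all.
labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(features))
```

Clusters found this way may correspond to distinct call types, which researchers can then inspect and interpret.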

Steering AI to care for animals, and soon

Ah yes! I think copy and paste probably didn't work at that time, or my brain didn't! I fixed it.

Steering AI to care for animals, and soon

I am so glad to see people interested in this topic! What do you think of my ideas on AI for animals written here?

And I don't think we have to wait for full AGI to do something for wild animals with AI. For example, it seems to me that with image recognition and autopilot, an AI drone could identify wild animals that have absolutely no chance of surviving (fatally injured, or about to be engulfed by a forest fire), and then euthanize them to shorten their suffering.
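A minimal sketch of the gating logic I have in mind (entirely hypothetical; the vision model is stubbed out, and the condition labels and threshold are made up) would only permit intervention when the model is extremely confident the animal is beyond help:

```python
CONDITIONS_BEYOND_HELP = {"fatally_injured", "in_path_of_wildfire"}
CONFIDENCE_THRESHOLD = 0.99  # err heavily toward non-intervention

def classify_condition(image) -> tuple[str, float]:
    """Stand-in for a trained image classifier returning (label, confidence)."""
    return "healthy", 0.0  # placeholder output

def should_euthanize(image) -> bool:
    # Intervene only when the classifier is extremely confident the animal
    # has no chance of survival; otherwise leave it alone.
    label, confidence = classify_condition(image)
    return label in CONDITIONS_BEYOND_HELP and confidence >= CONFIDENCE_THRESHOLD
```

The design choice that matters here is the asymmetric threshold: a false positive (euthanizing a survivable animal) is far worse than a false negative, so the bar for intervening should be very high.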
