
Adrià Moret

69 karma · Joined · Pursuing an undergraduate degree
philpeople.org/profiles/adria-r-moret

Bio

I am a 20-year-old independent philosophy researcher interested in Global Priorities, Animal Ethics, the Ethics of Digital Minds, Phenomenal Consciousness, the A(S)I value alignment problem, Longtermism, and S-risks. I am also a philosophy undergraduate at the University of Barcelona. For my publications, see: https://philpeople.org/profiles/adria-r-moret

Comments
6

Perfect! 

It's more or less similar. I do not focus that much on the moral dubiousness of "happy servants". Instead, I try to show that standard alignment methods, or preventing near-future AIs with moral patienthood from taking the actions they are trying to take, cause net harm to the AIs according to desire satisfactionism, hedonism, and objective list theories.

It's great to see this topic being discussed. I am currently writing the first (albeit already significantly developed) draft of an academic paper on this. I argue that there is a conflict between AI safety and AI welfare concerns. This is basically because, to reduce catastrophic risk, AI safety recommends implementing various kinds of control measures on near-future AI systems, measures which are (in expectation) net-harmful to AI systems with moral patienthood according to the three major theories of well-being. I also discuss what we should do in light of this conflict. If anyone is interested in reading or commenting on the draft when it is finished, send me a message or an e-mail (adriarodriguezmoret@gmail.com).

Thanks for this! I would be curious to know what you think about the apparent tension between allocating resources to Global Health & Development (or even prioritizing it over Animal Welfare) and rejecting speciesism, given the meat-eater problem.