Rafael Ruiz

PhD in Philosophy @ London School of Economics
382 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · Working (0-5 years) · London, UK
www.rafaelruizdelira.com/

Bio

Participation: 3

PhD Student in Philosophy at the London School of Economics, researching Moral Progress and the causes that drive it. 

Previously, I did an MA in Philosophy at King's College London and an MA in Political Philosophy at Pompeu Fabra University (Spain). More information about my research is available at my personal website: https://www.rafaelruizdelira.com/

From time to time, I write on my blog: https://themoralcircle.substack.com/ 

You might also know me from EA Twitter. :)

Comments: 37

Hi Fin, sorry I'm a bit late with my question; I was rereading parts of the Better Futures series. First of all, I have to say it's one of my favorite article series I've ever read, and I'll be citing it in my own work going forward. The easygoing-versus-fussy distinction in particular is something I'm finding really interesting to dig into. :) Would love to discuss it in more detail at some point.

I wanted to push on the metaphor of sailing to an island, which appears at the start of No Easy Eutopia, but my question will take a bit of preamble (sorry!).

I find myself preferring a slightly different picture. Rather than thinking of eutopia as an island we're navigating to, I tend to think of society as the ship itself, drifting through a sea of value over time (a topography of better and worse regions we're already moving through). Societal change feels to me more like a search through uncharted moral territories than an expedition to a specific destination. On that picture, the priority seems more likely to be "how do we improve the ship, so that society reliably moves toward better regions of the sea?"

A couple of clarifications. First, I grant fussiness: I agree that most plausible axiologies locate near-best futures in a very narrow region (I lean towards total hedonistic utilitarianism, myself). Second, I'm not a quietist: in my own work I'm defending what I call moral niche construction, a fairly interventionist view on which we should actively reshape institutions, technologies, and even our own moral psychology (through things like AI moral decision-makers or bioenhancement) to push society toward better regions. So the disagreement isn't really about ambition, either.

Where I want to press is the following. On the ship-improvement picture, I can grant openly that we will probably never reach eutopia. We end up in a high-value region of the sea (a local optimum), much better than where we are now, plausibly very good in absolute terms, but not the narrow island.

That sounds like a concession, but on rereading Convergence and Compromise, it looks to me like the target-pursuit picture probably doesn't reach the island either: you mention how WAM-convergence is unlikely, partial convergence plus trade faces serious obstacles, value-destroying threats can eat most of the value... So the comparison isn't "guaranteed eutopia versus probably-not-eutopia", since you yourself seem pretty pessimistic. It's two orientations that both probably miss the island, where one delivers reliable improvements to our current region of the sea along the way, and the other keeps optimizing toward a target it probably won't hit. And, well, if you miss the moon, you don't really land among the stars... you drift in empty space and die, haha.

(There are similar points in Jerry Gaus's The Tyranny of the Ideal, and in recent debates between ideal theory and non-ideal theory in moral and political philosophy.)

So, finally, my question is: given that target-pursuit probably doesn't reach eutopia either, on the series' own analysis, why is the practical orientation toward the narrow target rather than toward improving our current region of the sea (e.g. pursuing local optima that are very high in value, plausibly easy to reach, and resilient)? What's the case for target-pursuit as a practical orientation, once we factor in that we will probably fail? Is it a case akin to fanaticism, where, if we do land on the island, the payoff would be huge?
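(To make the fanaticism worry concrete, with purely illustrative numbers of my own rather than anything from the series: if target-pursuit reaches the island with probability 0.001 and the island is worth 10^9 in arbitrary value units, its expected value is about 10^6, which still swamps a near-certain local optimum worth, say, 10^3. That structure, where a tiny probability of an enormous payoff dominates the calculation, is what makes me wonder whether the case is fanatical.)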

(Apologies in advance if this is addressed somewhere in the series, my memory context window isn't large enough to hold the whole essay series at once!)

For what it's worth, here's some bibliography in case anyone is interested in researching (moral) intuitions in philosophy.

An excerpt from my MA thesis:

"There are several possible characterizations of what intuitions are precisely supposed to be. Exceptionalists (e.g. Sosa, Ludwig) argue that intuitions are analytic or conceptual truths, a priori, and/or dealing with conceptual competence. Particularists (e.g. Bealer, Huemer, Schwitzgebel, Kagan) argue that intuitions have a distinct phenomenology, such as being snap judgments that are not consciously inferred from any other belief, or are a sui generis faculty. Minimalists (e.g. Machery, Lewis) argue that intuitions are not different from the application of concepts in ordinary life. (Machery, 2017, Ch. 2)"

I borrowed this terminology from Chapter 2 of Edouard Machery's book, Philosophy Within Its Proper Bounds (2017).

Rafael Ruiz
50% agree

How much confidence would you give to the statement: "Whiteleg shrimps are sentient"?

Justification for my vote: High credence, given that I also believe many insects of similar size are sentient. But my credence is highly sensitive to further evidence, since we lack a lot of empirical information and we don't know much about the correct philosophy of mind either.

See the paper "Can insects feel pain? A review of the neural and behavioural evidence" by Gibbons et al. for the Birch framework applied to insects. (Feel free to skip to the conclusion if you just want the quick takeaway.)

Thank you for recording the talks! I couldn't attend but will be watching them.

Rafael Ruiz
50% agree

Morality is Objective

(Vote Explanation) Morality is objective in the sense that, under strong conditions of ideal deliberation (where everyone affected is exposed to all relevant non-moral facts and can freely exchange reasons and arguments), we would often converge on the same basic moral conclusions. This kind of agreement under ideal conditions gives morality its objectivity, without needing to appeal to abstract, mind-independent moral facts. This constructivist position avoids the metaphysical and epistemological problems of robust moral realism, while still grounding moral claims in terms of justification.

(Although their views are not exactly the same, I take this view to be aligned with the metaethical views of philosophers Christine Korsgaard, Sharon Street, Philip Kitcher, and Jürgen Habermas. https://plato.stanford.edu/entries/constructivism-metaethics/ )

RE: "I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?"

Perhaps it has to do with the level of ambition. Let's talk about a particular value to narrow down the discussion. Some people see "caring for all sentient beings" as an extension of empathy. Others see it as a logical extension of a principle of impartiality or equality for all. I think I am more in this second camp. My caring about invertebrate welfare, for example, isn't driven by any particular empathy towards invertebrates; most people find bugs a bit icky, particularly under a magnifying glass, which turns off their empathy.

Rather, they are suffering sentient beings, which means that the same arguments for why we should care about people (and their wellbeing/interests/preferences) also apply to these invertebrates. And caring about, say, invertebrate welfare, requires a use of reason towards impartiality that might sometimes make you de-prioritize friends and family.

Secondly, I also have a strong curiosity about understanding the universe, society, and so on, which makes me feel like I'm wasting my time in social situations with friends and family when the conversation topics are a bit trivial.

As I repeat a bit throughout the post, I realize I might be a bit of a psychological outlier here, but I hope people can also see why this perspective might be appealing. Most people are compartmentalizing their views on AI existential risk to a degree that I'm not sure makes sense.

To answer the two questions: For me as a philosopher, I think this is where I can have the greatest impact, compared to writing technical stuff on very niche subjects, which would probably not matter much. Think about how the majority of the impact of Peter Singer, Will MacAskill, Toby Ord, Richard Chappell, or Bentham's Bulldog has been a mix of new ideas and public advocacy for them. I could say something similar about other types of intellectuals like Eliezer Yudkowsky, Nick Bostrom, or Anders Sandberg.

I think polymathy is also where the comparative advantage often lies for a philosopher. In my own case, I'm not so good at technical topics that I would greatly excel at a niche thing such as population ethics. I can, however, draw from other fields and learn how particular moral intuitions might be unreliable, for example. And what might feel like advocating for a relatively small change in moral beliefs (e.g. what we do about insect suffering, or the potential suffering of digital minds) could change future societies greatly.

Yet I don't disregard specializing in one thing. I'm currently working on my PhD, which is a very specialized project.

And I would give very different advice if I were working on AI safety directly. If that were the case, digging deep into a topic to become a world expert or make a breakthrough might be the best way to go.

"Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?"

These questions really depend on whether you think that humans can "turn things around" in terms of creating net positive welfare for other sentient beings, rather than net negative. Currently, we create massive amounts of suffering through factory farming and environmental destruction. Depending on how you weigh those things, it might lead to the conclusion that humans are currently net negative for the world. So a lot turns on whether you think the future of humanity would be deeply egoistic and harmful, or whether you think we can improve substantially. There are some key considerations you might want to look into in the post The Future Might Not Be So Great by Jacy Reese Anthis: https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/the-future-might-not-be-so-great

"Why is it reasonable to assume that humans must treat potentially lower sentient AIs or lower sentient organic lifeforms more kindly than sentient ASIs that have exterminated humans?"

I'm not sure I fully understand this paragraph, but let me reply to the best of my abilities from what I gathered.

I haven't really touched on ASIs in my post at all. And, of course, no ASIs have killed any humans yet, since we don't have ASIs yet. They might also help us flourish, if we manage to align them.

I'm not saying we must treat less-sentient AIs more kindly. If anything, it's the opposite! The more sentient a being is, the more moral worth it has, since it will have stronger experiences of pleasure and pain. I think we should promote the welfare of beings in ways that are proportional to their capacities for welfare. But it might turn out, as an empirical matter, that we should prioritize promoting the welfare of simpler beings over more complex ones, because they are easier and cheaper to copy, reproduce, and help. There might also be more sentience, and thus more moral worth, per unit of energy spent on them.

"Yes, such ASIs extinguish humans by definition, but humans have clearly extinguished a very large number of other beings, including some human subspecies as well."

We have already driven many other species to extinction through environmental destruction and climate change. I think this is morally bad and wrong, since it is possible (in the case of invertebrates) to probable (in the case of vertebrates) that these animals were sentient.

I tend to think in terms of individuals rather than species. By which I mean: imagine you faced a moral dilemma in which you had to either fully exterminate a species by killing its last 100 members, or kill 100,000 individuals of a very similar species without making it extinct. I tend to think of harm in terms of the individuals killed or their thwarted potential. In such a scenario, we might prefer that some species become extinct, since what we care about is promoting overall welfare. (Though second-order effects on biodiversity make these things very hard to predict.)

I hope that clarifies some things a little. Sorry if I misunderstood your points in that last paragraph.

Re: Advocacy, I do recommend policy and advocacy too! I guess I haven't seen too many good sources on the topic just yet. Though I just remembered two: Animal Ethics https://www.animal-ethics.org/strategic-considerations-for-effective-wild-animal-suffering-work/ and some blog posts by Sentience Institute https://www.sentienceinstitute.org/research

I will add them at the end of the post.

I guess I slightly worry that these topics might still seem too fringe, too niche, or too weird outside of circles that have some degree of affinity with EA or with weird ideas in moral philosophy. But I believe the Overton window will shift inside some circles (some animal welfare organizations, AI researchers, some AI policymakers), so we might want to target them rather than spreading these somewhat weird and fringe ideas to all of society. Then they can push for policy.

Re: Geoffrey Hinton, I think he might subscribe to a view broadly held by Daniel Dennett (although I'm not sure Dennett would agree with the interpretation of his ideas). In the simplest terms, it might boil down to a version of functionalism where, since the inputs and outputs are similar to a human's, it is assumed that the "black box" in the middle is also conscious.

I think that sort of view assumes substrate-independence of mental states. It leads to slightly weird conclusions such as the China Brain (https://en.wikipedia.org/wiki/China_brain), where people arranged in a particular way, performing the same functions as neurons in a brain, would make the nation of China a conscious entity.

Besides that, we might also want to distinguish consciousness and sentience. We might get cases with phenomenal consciousness (basically, an AI with subjective experiences, and also thoughts and beliefs, possibly even desires) but no valenced states of pleasure and pain. While they come together in biological beings, these might come apart in AIs.

Re: Lack of funding for digital sentience, I was also a bit saddened by that news. Though Caleb Parikh did seem excited about funding digital sentience research. https://forum.effectivealtruism.org/posts/LrxLa9jfaNcEzqex3/calebp-s-shortform?commentId=JwMiAgJxWrKjX52Qt
