All of Rafael Ruiz's Comments + Replies

Thank you for recording the talks! I couldn't attend, but I will be watching them.

Rafael Ruiz
50% agree

Morality is Objective

(Vote Explanation) Morality is objective in the sense that, under strong conditions of ideal deliberation (where everyone affected is exposed to all relevant non-moral facts and can freely exchange reasons and arguments) we would often converge on the same basic moral conclusions. This kind of agreement under ideal conditions gives morality its objectivity, without needing to appeal to abstract and mind-independent moral facts. This constructivist position avoids the metaphysical and epistemological problems of robust moral realism, wh... (read more)

RE: "I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?"

Perhaps it has to do with the level of ambition. Let's talk about a particular value to narrow down the discussion. Some people see "caring for all sentient beings" as an extension of empathy. Some others see it as a logical extension of a principle of impartiality or equality for all. I think I am more in this second camp. I don't care about invertebrate welfare, for example, because I am particularly empathetic towa... (read more)

To answer the two questions: For me as a philosopher, I think this is where I can have the greatest impact, compared to writing technical work on very niche subjects, which probably wouldn't matter much. Consider how the majority of the impact from Peter Singer, Will MacAskill, Toby Ord, Richard Chappell, or Bentham's Bulldog has been a mix of new ideas and public advocacy for them. I could say something similar about other types of intellectuals like Eliezer Yudkowsky, Nick Bostrom, or Anders Sandberg.

I think polymathy is also where the comparative advantage ofte... (read more)

"Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?"

These questions really depend on whether you think that humans can "turn things around" in terms of creating net positive welfare to other sentient beings, rather than net negative. Currently, we create massive amounts of suffering through factory farming and environmental destruction. Depending how you weigh those ... (read more)

Duckruck
I'm not talking about the positive or negative sign of the net contribution of humans, but rather the expectation that the sign of the net contribution produced by sentient ASI should be similar to that of humans. Coupled with the premise that ASI alone is more likely to do a better job of full-scale cosmic colonization, faster and better than humans, this means that either sentient ASI should destroy humans to avoid astronomical waste, or that humans should be destroyed prior to the creation of sentient ASI or cosmic colonization, to avoid further destruction of the Earth and the rest of the universe by humans. This means that humans being (properly) destroyed is not a bad thing, but instead is more likely to be better than humans existing and continuing. Alternatively, ASI could be created with the purpose of maximizing perpetually happy sentient low-level AI/artificial life rather than paperclip manufacturing. In that case, humans would either have to accept that they are part of this system or be destroyed, as their existence is not conducive to maximizing average or overall hedonism. This is probably the best way to maximize the hedonics of sentient life in the universe, i.e. utility monster maximizers rather than paperclip maximizers. I am not misunderstanding what you are saying, but pointing out that these marvelous trains of thought experiments may lead to even more counterintuitive conclusions.

Re: Advocacy, I do recommend policy and advocacy too! I guess I haven't seen too many good sources on the topic just yet. Though I just remembered two: Animal Ethics https://www.animal-ethics.org/strategic-considerations-for-effective-wild-animal-suffering-work/ and some blog posts by Sentience Institute https://www.sentienceinstitute.org/research

I will add them at the end of the post.

I guess I slightly worry that these topics might still seem too fringe, too niche, or too weird outside of circles that have some degree of affinity with EA or weird ideas in... (read more)

SiebeRozendal
Thanks! Tbh, I think the Overton window isn't so important. AI is changing fast, and somebody needs to push the Overton window. Hinton says LLMs are conscious and still gets taken seriously. I would really like to see policy work on this soon!

Thanks a lot for the links, I will give them a read and get back to you!

Regarding the "Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind." part, it was a mistake because I was thinking of current AI systems. I will delete the % credence, since I have so much uncertainty that any theory or argument that I find compelling (for the substrate-dependence or substrate-independence of sentience) would change my credence substantially.

I really loved the event! Organizing it right after EA Global was probably a good idea to get attendees from outside of the UK.

At the same time, being right after EA Global without a break prevented me from attending the retreat part. 6 days in a row full of intense networking was a bit too much, both physically and mentally, so I only ended up attending the first day.

But thanks a lot for organizing, I got a lot of value from it in terms of new cutting edge research ideas.

Constance Li
Glad you enjoyed it and sad you weren't able to attend the retreat. Tbh, I was also quite tired after EAG and skipped out on some after-conference events, which was quite suboptimal. Next year, I'm thinking about doing it before EAG and giving folks 1-2 days of rest before EAG starts.
Hive
Hi Rafael! Glad you were able to attend the first day. And we appreciate the feedback, thank you! You aren't the first to mention the post-EAG overwhelm; we'll be taking this into consideration for future conferences.

Even my grocery shopping list? 😳 That's a bit embarrassing but I hope fellow EAs can help me optimize it for impact

Climate change is going pretty well, I've heard carbon emissions are up!

Also, humans are carbon-based creatures so having more carbon around seems plausibly good 😊

Are we using the old 12-sign astrological chart, or the updated one with Ophiuchus as the 13th astrological sign?

Ivan Burduk
For now, the standard 12 signs are the easiest to implement. However, we will follow up with an Ophiuchus update in due course!

Fair! I agree with that, at least up to this point in time.

But I think there could come a time when we have picked most of the "social low-hanging fruit" (cases like the abolition of slavery, universal suffrage, universal education), so there's not much easy social progress left to do. At that point, comparatively, investing in the "moral philosophy low-hanging fruit" will look more worthwhile.

Some important cases of philosophical moral problems that might have great axiological moral importance, at least under consequentialism/utilitarianism could ... (read more)

Same! Seems like a fascinating, although complicated topic. You might enjoy Oded Galor's "The Journey of Humanity", if you haven't read it. :)

JackM
Hey, did you ever look into the Moral Consequences of Economic Growth book?

Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. Viewed through the lens of the moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument ("There are no morally relevant differences between Amy and Bob, so we should treat them equally").

In contrast, I think the fact that we are starting to develop cases like population ethics, infinite ethics, complicated variations of thought experimen... (read more)

JoshuaBlake
You need a step beyond this though. Not just that we are coming up with harder moral problems, but that solving those problems is important to future moral progress. Perhaps a structure as simple as the one that has worked historically will prove just as useful in the future, or, as you point out has happened in the past, wider societal changes (not progress in moral philosophy as an academic discipline) are the major driver. In either case, all this complex moral philosophy is not the important factor for practical moral progress across society.

Outside of Marxism and continental philosophy (particularly the Frankfurt School and some Foucault), I think this idea has lost a lot of grip! It has actually become a minority view among current academic philosophers, particularly in the anglosphere, and many aren't even aware of it.

However, I think it's a very useful idea that should make us look at our social arrangements (institutions, beliefs, morality...) with some level of initial suspicion. Luckily, some similar arguments (often called "debunking arguments" or "genealogical arguments") are starting to gain traction within philosophy again. 

I hadn't! Thanks for bringing this to my attention, I will take a look in the coming months.

JackM
Please do! I'm fascinated by the idea that we can accelerate moral progress by focusing on economic growth.

Good! I think I mostly agree with this and I should probably flag it somewhere in the main post. 

I do agree with you, and I think it also reflects a central point of the later parts of my thesis, where I talk about empirical rather than philosophical ideas: that technologies (from shipbuilding, to the industrial revolution, to factory farming, to future AI) are more of a factor in moral progress or regress than ideologies. So many moral philosophers might have the wrong focus.

(Although many of those things I would call "social... (read more)

Arturo Macias
Well, I hope philosophers are aware of how much ideas are a superstructure of the productive forces and social relations! I am far from being a Marxist, but I suppose this is a commonplace in modern Western historiography...

I agree with you this is very important, and I'd like to see more work on it. Sadly I don't have much concrete to say on this topic. The following is my opinion as a layman on AI:

I've found Toby Ord's framework here https://www.youtube.com/watch?v=jb7BoXYTWYI to be useful for thinking about these issues. I guess I'm an advocate for differential progress, like Ord. That is, prioritizing safety advancements relative to technical advancements. Not stopping work on AI capabilities, but right now shifting the current balance from capabilities work to safety wor... (read more)

Daniel_Friedrich
Sounds reasonable! I think the empirical side of the question "Will society be better equipped to set AI values in 2123?" is more lacking. For this purpose, I think "better equipped" can be nicely operationalized in a very value-uncertain way as "making decisions based on more reflection & evidence and higher-order considerations". This kind of exploration may include issues like:

1. Populism. Has it significantly decreased the amount of rationality that goes into government decision-making, in favor of following incentives & intuitions? And what will be faster: new manipulative technologies, or the rate at which new generations become immune to them?
2. Demographics. Given that fundamentalists tend to have more children, should we expect there will be more of them in 2123?
3. Cultural evolution. Is Ian Morris or Christopher Brown more right, i.e. should we expect that as we get richer, we'll be less prone to decide based on what gives us more power, and in turn attain values better calibrated with the most honest interpretation of reality?

Hi Jonas! Henrich's 2020 book is very ambitious, but I thought it was really interesting. It has lots of insights from various disciplines, attempting to explain why Europe became the dominant superpower from the middle ages (starting to take off around the 13th century) to modernity.

Regarding AI, I think it's currently beyond the scope of this project. Although I mention AI at some points regarding the future of progress, I don't develop anything in-depth. So sadly I don't have any new insights regarding AI alignment.

I do think theories of cultural evolut... (read more)

Hi Ulrik! I'm definitely aware of this issue, and it's a very ugly side of this debate, which is why some people might have moved away from the topic in the past.

The dangers of using moral progress to justify colonialism and imperialism will be one key point in my next post, and it's also a brief section in the first chapter of my thesis. It's definitely worth cautioning against imposing progress on other cultures. And political intervention is much more complicated than "my culture is more progressed, so we should enforce it upon the rest". It deals with ... (read more)

Benevolent_Rain
Excellent, I am happy you are on top of this. I will have a think and see if I can come up with some. Anthropology might have things to offer. One that comes to mind is a book I think called "The Gift" or something similar - perhaps not directly relevant. And as I said, I am more versed in Tibetan Buddhism, with a lot of focus on cultivating compassion and on ~empowering oneself to release all beings (not just humans) from suffering forever - that is a pretty wide moral circle!

Hi Scott, glad I could motivate you to get Buchanan and Powell. It's a great book! It might feel a bit long if you're not a philosopher, but it's definitely a solid, standout read with many insights on this topic.

On The Blank Slate and Moral Uncertainty, sure, let me expand my reviews with the following:

I think those two books are really good with regard to their subject matter. They're both general overviews of their respective fields. Moral Uncertainty is much more technical, but basically required reading if you're gettin... (read more)

Thanks a lot for the further reading recommendations, I will take a look!

Thanks for your comments!

Regarding (1), I'll get in touch with you if I have a specific question.

(2) I'll rewrite my characterization of Robert Wright's work. I think his main line of argument is that cultural evolutionary processes lead to bigger networks of cooperation, which foster positive sum games, which in turn foster further cooperation in a positive feedback loop. (Though certainly not everything fosters further growth or cooperation, conspicuous consumption being one exception)

(3) Could you say more? Do you mean differences between people's perso... (read more)

I hope you find Kitcher's book a worthwhile read! I always learn something from him.

(Typo fixed!)

Thanks for the support, Fin! I definitely agree with you, and I hope this way people can get the most bang for their buck and save research time. This topic is very time-inefficient to research, just because it's so broad and interdisciplinary, and there was no clear initial indication of what's good and what's not. So I think reading from either the "TL;DR / Recommended Reading Order", or some of the "Five Star" or "Four Star" books, or the "Worthwhile Articles" should be more than enough for the interest of EAs. The rest are more for completeness' sake... (read more)

Interesting introduction! I have a couple of first impressions that I'd like to share:

  1. The beginning of the article seems strange to me. This is the first time I have seen "Effective Altruism" defined as "a project". To me, "a project" seems to have the connotation of something happening within a specific organization, rather than an idea, question, ideology, philosophy, or social movement. I think Effective Altruism is not a project. Rather, it contains hundreds or thousands of projects. I think there might be a better concept to encompass the idea.
  2. I find
... (read more)
Mike Pool
I came back to this post specifically to ask the same question as your point 1: why was "project" used? I'd love to hear the reasoning!
Jeroen Willems🔸
Yeah, I'd like to understand point 1 better too. Why 'project' rather than 'movement' or 'community'? I assume a lot of thought was put into it, so I'm curious to know what the explanation is! Personally, point two makes sense to me. "What does EA do?" is a question most outsiders are interested in, and I like that the explanations come with the EA reasoning behind them, so it doesn't look like EA is specifically about the mentioned issue.

Just in case some people don't know them, some useful material I've found related to introducing EA to newcomers is the following:

It's not exactly what you're asking for, but I thought it would be good to mention them. That way more people can know about them and we can also avoid repeating efforts. :)

Evan_Gaensbauer
No, this is great. Thanks for sharing. Resources like these can be used by pulling existing representations of crucial information and bringing them together as a sequential, coherent whole. I don't want to create any unnecessary redundancy. At the same time, though, information like this should be concentrated in a single spot online, or at least presented in a way that meets the needs of those who are finding EA inaccessible.

I'm aware that in the last year or two the Centre for Effective Altruism (CEA) and others have been trying to redress this by creating new resources, like the resource centre and different programs/courses. Given that those resources are designed to address the exact problem that EA is not sufficiently accessible, those are the resources I'm most concerned this effort could cause confusion about.

I'll seek feedback throughout the effort, including from particular organizations as pertinent for different subjects, and failing that, I'm guessing some others will recommend suggestions or correct mistakes along the way. It can be done in a gradual, semi-coordinated way. I figure if it's done well enough, different groups might want to integrate parts of the sequence into existing resource portals off of the EA Forum too.

This post is great! I had the idea for a similar post, but you put it better than I could.

I hope more diverse messaging attracts a variety of people to EA and makes them more engaged overall.

Thanks for sharing! I might start reading from your most recommended. :)

Thanks for posting this! I'll be sharing some of these graphs with people new to EA

I share many of your worries, but I think that luckily they have solutions! Here is what I've learned from my own experience in the past couple of years.

Regarding financial stability, I think it's wise to save in order to have the runway to sustain yourself for several months without income.

Regarding burnout, often my advice to others in this situation is to "try to give 80% effort", because attempting to give 100% effort leads to burnout in just a few weeks or months. 

If you want to maximize positive impact in the world, it has to be sustainable. Thi... (read more)

Anon115
No doubt this is great advice even if you work outside of EA. Still, I have the feeling that similar jobs outside EA are more stable, but maybe I'm wrong, as I don't have much information about how stable the jobs/companies in EA are. That is also a great piece of advice that I will have to work to assimilate. And that's a great idea! I'm going to start doing it and see how it works for me :)

Thanks for the source. I had never heard about this organization before.

It's precisely the "ad hoc and informal" nature of the current system that I criticize in the main post. I wish there were a website maintained by CEA or a similar organization filling this role, similar to the EA Groups Resource Centre.

Thanks for sharing! I had no idea these resources existed. (I think most people don't know about them either.)

Just two points:

-By a very rough estimate, I think the Wiki is missing something like 70% of EA organizations, particularly the smaller ones. Seems like there's a lot of work left to be done adding them!

-How do we join the EA Operations Slack?