Morality is Objective
(Vote Explanation) Morality is objective in the sense that, under strong conditions of ideal deliberation (where everyone affected is exposed to all relevant non-moral facts and can freely exchange reasons and arguments), we would often converge on the same basic moral conclusions. This kind of agreement under ideal conditions gives morality its objectivity, without needing to appeal to abstract and mind-independent moral facts. This constructivist position avoids the metaphysical and epistemological problems of robust moral realism, wh...
RE: "I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?"
Perhaps it has to do with the level of ambition. Let's talk about a particular value to narrow down the discussion. Some people see "caring for all sentient beings" as an extension of empathy. Some others see it as a logical extension of a principle of impartiality or equality for all. I think I am more in this second camp. I don't care about invertebrate welfare, for example, because I am particularly empathetic towa...
To answer the two questions: For me as a philosopher, I think this is where I can have the greatest impact, compared to writing technical stuff on very niche subjects, which probably wouldn't matter much. Think of how the majority of the impact of Peter Singer, Will MacAskill, Toby Ord, Richard Chappell, or Bentham's Bulldog has been a mix of new ideas and public advocacy for them. I could say something similar about other types of intellectuals like Eliezer Yudkowsky, Nick Bostrom, or Anders Sandberg.
I think polymathy is also where the comparative advantage ofte...
"Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?"
These questions really depend on whether you think that humans can "turn things around" in terms of creating net positive, rather than net negative, welfare for other sentient beings. Currently, we create massive amounts of suffering through factory farming and environmental destruction. Depending on how you weigh those ...
Re: Advocacy, I do recommend policy and advocacy too! I guess I haven't seen too many good sources on the topic just yet. Though I just remembered two: Animal Ethics https://www.animal-ethics.org/strategic-considerations-for-effective-wild-animal-suffering-work/ and some blog posts by Sentience Institute https://www.sentienceinstitute.org/research
I will add them at the end of the post.
I guess I slightly worry that these topics might still seem too fringe, too niche, or too weird outside of circles that have some degree of affinity with EA or weird ideas in...
Thanks a lot for the links, I will give them a read and get back to you!
Regarding the "Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind." part, it was a mistake because I was thinking of current AI systems. I will delete the % credence since I have so much uncertainty that any theory or argument that I find compelling (for the substrate-dependence or substate-independence of sentience) would change my credence substantially.
I really loved the event! Organizing it right after EA Global was probably a good idea to get attendees from outside of the UK.
At the same time, being right after EA Global without a break prevented me from attending the retreat part. Six days in a row of intense networking was a bit too much, both physically and mentally, so I only ended up attending the first day.
But thanks a lot for organizing, I got a lot of value from it in terms of new cutting-edge research ideas.
Fair! I agree with that, at least up to this point in time.
But I think there could come a time when we have picked most of the "social low-hanging fruit" (cases like the abolition of slavery, universal suffrage, universal education), so there isn't much easy social progress left to make. At that point, at least comparatively, investing in the "moral philosophy low-hanging fruit" will look more worthwhile.
Some important cases of philosophical moral problems that might have great axiological importance, at least under consequentialism/utilitarianism, could ...
Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. Looked at under the lens of the moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument ("There are no morally relevant differences between Amy and Bob, so we should treat them equally").
In contrast, I think the fact that we are starting to develop cases like population ethics, infinite ethics, complicated variations of thought experimen...
Outside of Marxism and continental philosophy (particularly the Frankfurt School and some Foucault), I think this idea has lost a lot of its grip! It has actually become a minority view, and awareness of it is limited, among current academic philosophers, particularly in the anglosphere.
However, I think it's a very useful idea that should make us look at our social arrangements (institutions, beliefs, morality...) with some level of initial suspicion. Luckily, some similar arguments (often called "debunking arguments" or "genealogical arguments") are starting to gain traction within philosophy again.
Good! I think I mostly agree with this and I should probably flag it somewhere in the main post.
I do agree with you, and I think it also illustrates a central point of the later parts of my thesis, where I will talk about empirical rather than philosophical ideas: that technologies (from shipbuilding, to the industrial revolution, to factory farming, to future AI) are more of a factor in moral progress or regress than ideologies. So many moral philosophers might have the wrong focus.
(Although many of those things I would call "social...
I agree with you that this is very important, and I'd like to see more work on it. Sadly, I don't have much that is concrete to say on this topic. The following is my opinion as a layman on AI:
I've found Toby Ord's framework here https://www.youtube.com/watch?v=jb7BoXYTWYI to be useful for thinking about these issues. I guess I'm an advocate for differential progress, like Ord. That is, prioritizing advancements in safety relative to advancements in capabilities: not stopping work on AI capabilities, but, right now, shifting the current balance from capabilities work to safety wor...
Hi Jonas! Henrich's 2020 book is very ambitious, but I thought it was really interesting. It has lots of insights from various disciplines, attempting to explain why Europe became the dominant superpower from the Middle Ages (starting to take off around the 13th century) to modernity.
Regarding AI, I think it's currently beyond the scope of this project. Although I mention AI at some points regarding the future of progress, I don't develop anything in-depth. So sadly I don't have any new insights regarding AI alignment.
I do think theories of cultural evolut...
Hi Ulrik! I'm definitely aware of this issue, and it's a very ugly side of this debate, which is why some people might have moved away from the topic in the past.
The dangers of using moral progress to justify colonialism and imperialism will be one key point in my next post, and it's also a brief section in the first chapter of my thesis. It's definitely worth cautioning against imposing progress on other cultures. And political intervention is much more complicated than "my culture is more progressed, so we should enforce it upon the rest". It deals with ...
Hi Scott, glad I could motivate you to get Buchanan and Powell. It's a great book! It might feel a bit long if you're not a philosopher, but it's definitely a solid, standout read with many insights on this topic.
On The Blank Slate and Moral Uncertainty, sure, let me add the following to my reviews:
I think those two books are really good with regard to their subject matter. They're both general overviews of their respective fields. Moral Uncertainty is much more technical, but it's basically required reading if you're gettin...
Thanks for your comments!
Regarding (1), I'll get in touch with you if I have a specific question.
(2) I'll rewrite my characterization of Robert Wright's work. I think his main line of argument is that cultural evolutionary processes lead to bigger networks of cooperation, which foster positive sum games, which in turn foster further cooperation in a positive feedback loop. (Though certainly not everything fosters further growth or cooperation, conspicuous consumption being one exception)
(3) Could you say more? Do you mean differences between people's perso...
Thanks for the support, Fin! I definitely agree with you, and I hope this way people can get the most bang for their buck and save research time. This topic is very time-inefficient to get into, simply because it's very broad and interdisciplinary, and there was no clear initial indication of what's good and what's not. So I think reading from either the "TL;DR / Recommended Reading Order", or some of the "Five Star" or "Four Star" books, or the "Worthwhile Articles" should be more than enough for the interests of most EAs. The rest are more for completeness' sake...
Interesting introduction! I have a couple of first impressions that I'd like to share:
Just in case some people don't know them, some useful material I've found related to introducing EA to newcomers is the following:
It's not exactly what you're asking for, but I thought it would be good to mention them. That way more people can know about them and we can also avoid repeating efforts. :)
I share many of your worries, but I think that luckily they have solutions! Here is what I've learned from my own experience in the past couple of years.
Regarding financial stability, I think it's wise to save in order to have the runway to sustain yourself for several months without income.
Regarding burnout, often my advice to others in this situation is to "try to give 80% effort", because attempting to give 100% effort leads to burnout in just a few weeks or months.
If you want to maximize positive impact in the world, it has to be sustainable. Thi...
Thanks for the source. I had never heard about this organization before.
Precisely the "ad hoc and informal" nature of the current system is what I criticize in the main post. I wish that there was a website maintained by CEA or a similar organization filling this role, similar to the EA Groups Resource Centre.
Thanks for sharing! I had no idea these resources existed. (I think most people don't know about them either)
Just two points:
-By a very rough estimate, I think the Wiki is missing like 70% of EA organizations, particularly the smaller ones. Seems like there's a lot of work left to be done adding them!
-How do we join the EA Operations Slack?
Thank you for recording the talks! I couldn't attend, but I will be watching them.