I'm a student of moral sciences at the University of Ghent. I've also started an EA group in Ghent.
Say a genie were to give you the choice between:
1) Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way or 2) Not creating it.
In addition, both the genie’s and your memories of this event are immediately erased once you make the choice, so no one knows about this world and you cannot derive happiness from the memory.
Would you choose option one or two?
I would choose option one, because I prefer a more beautiful universe to an uglier one (even if no one experiences it). This forms an argument against classic utilitarianism.
Classic utilitarianism says that I'm wrong. The choice doesn't create any happiness, only beauty. This means that, according to classic utilitarianism, I should have no preference between the two options.
There are several moral theories that do allow you to prefer option one. One of these is preference utilitarianism, which holds that it's okay to have preferences that don't bottom out in happiness. For this reason, I find preference utilitarianism more persuasive than classic utilitarianism.
A possible counterargument would be that the new world isn't really beautiful since no one experiences it. Here we have a disagreement over whether beauty needs to be experienced to even exist.
A third way of looking at this thought experiment is through the lens of value uncertainty. Through this lens, it makes sense to pick option one. Even if you have a thousand times more credence in the theory that happiness is the arbiter of value, the fact that no happiness is created either way leaves the decision entirely to your tiny credence that beauty might be the arbiter of value. Value uncertainty therefore suggests you take the first option, just in case.
Hey Lumpy, good post and I support the project.
You might be interested to know that these things have already been discussed on this platform. I made a post about making a crowdaction website, and gabcoh also wrote a post on this topic. On Less Wrong there is even a whole sequence about this idea. I don't have much programming skill, but I might be able to help in other ways (graphic design, mechanism design). I can also put you in contact with other people who are working on this project, like Ron (co-creator of CollAction), DonyChristie (a programmer who already built a prototype website), and gabcoh.
Hey Aaron, great post. Maybe this isn't the best place to ask this, but should posts like these be tagged with the 80,000 Hours tag? We've discussed the tagging system in my recent post, but I'm still not sure when certain tags should be used. When I look at the 80,000 Hours tag, almost all posts are from the 80,000 Hours account, but the two bottom ones aren't. So should this tag be used exclusively by the 80,000 Hours account, or should it be used whenever people talk about 80,000 Hours in general? And is mentioning them or their research in a post enough, or should the post be about 80,000 Hours before you can tag it?
Of course, I'm just using 80,000 Hours as an example. What I'm really asking is whether we should create a tag guideline, something like the New Tag Guidelines from Less Wrong. (I would be willing to write it if you want.)
Also, on a totally unrelated note: congratulations on your satisfying karma-score milestone:
Oh wow, that's fantastic! I now feel like the tone of this post is way too harsh. Seeing that most of my points are already being addressed by the mod team makes me think I should have reached out to you before posting this. I'll make it up to you by winning that upcoming tagging event :) Thank you, mod team, for your continuing work on this amazing site!
Hey Derek, once again a great post! You might not know this, but the EA Forum allows you to collect a series of posts into an ordered sequence. If you go to https://forum.effectivealtruism.org/sequences you can see all the other sequences. When you click on the button on the right:
you can create your own sequence. Once you click on it, you will be asked to write a short introduction and add a banner and card image. I haven't written any sequences, but as you can see, I have designed most of them. I'm sure you can write a good introduction, but if you want a snazzy card image and banner, I've made some for you:
Here's how it will look among the other sequence cards:
Hope you like them.
I don't think we need a separate tag for this. Especially since there aren't any posts about J-PAL.
Thank you! The cropping in Photoshop only takes five minutes at most, so it isn't a big deal. All of the images are made with Creative Commons images, except for "moral anti-realism" (which I took from Lukas' own page, so I assume he has the rights) and "rwas library", which I found on a bunch of websites with no indication of its status (if it does get a copyright strike, I'll photoshop a similar-looking image).
Btw, could you add an "EA Forum (meta)" tag to this post? I can't add tags at the moment.
I love that sequence, but it's specifically about motivation and how to cultivate it. An "Introduction to EA" sequence would ideally focus on introducing some of the key concepts and organizations. Something like Doing Good Better, but with a little more focus on the movement.
No problem! As a non-native English speaker, I found this an extremely difficult post to write, which is why I leaned so heavily on images. If you (or anyone) have any suggestions for how I could reword this post to make it clearer, please let me know.
EDIT: I've changed the term "standard utilitarianism" to "moment utilitarianism"; I hope this clears up some of the confusion.
EDIT 2: I've realized that other moral theories (outside of consequentialism) might have other ways of solving this. It has become a big project, which I call "timespan ethics", since some theories reject the possibility of different timelines. I'm planning to make this project my master's thesis.
I think so too, because you can't really talk about ethics without a timeframe. I wasn't trying to argue that people don't use timeframes, but rather that people automatically use total timeline utilitarianism without realizing that other options are even possible. That's what I was trying to get at when I said:
Usually, when people talk about different types of utilitarianism, they automatically presuppose "total timeline utilitarianism". In fact, the current debate between total and average utilitarianism is actually a debate between "total total utilitarianism" and "total average utilitarianism".