Max Görlitz

Pursuing other degree/diploma
437 · Munich, Germany · Joined Dec 2020

Bio

Organizer at EA Munich and third-year medical student. Focused on biosecurity and looking to try out research soon. I also did some high-talent EA outreach in Germany.

Sometimes I write about meditation. Here is my Substack: https://glozematrix.substack.com/

(last updated in July 2022)

How others can help me

How I can help others

  • EA Munich is the second biggest EA group in Germany after Berlin. It is also unusual in that more group members are professionals than students. Message me if you'd like to chat about community building.
  • Other than that, I have some expertise in medicine, meditation & well-being, and effective learning techniques.

Comments (34)

Cross-posting this from my blog because the philosophical issues around egoism and altruism will be of interest to some people here.

 

Until I was ~16, I believed that there was no altruism and that everything anybody does is done for purely egoistic reasons. As far as I can remember, I grew confident of this after hearing an important childhood mentor talk about it. I had also read about the ideas of Max Stirner and had a vague understanding of his notion of egoism.

I can't remember what made me change my mind exactly, but pondering thought experiments similar to this one from Nate Soares played a significant role:

Imagine you live alone in the woods, having forsaken civilization when the Unethical Psychologist Authoritarians came to power a few years back.

Your only companion is your dog, twelve years old, who you raised from a puppy. (If you have a pet or have ever had a pet, use them instead.)

You're aware of the fact that humans have figured out how to do some pretty impressive perception modification (which is part of what allowed the Unethical Psychologist Authoritarians to come to power).

One day, a psychologist comes to you and offers you a deal. They'd like to take your dog out back and shoot it. If you let them do so, they'll clean things up, erase your memories of this conversation, and then alter your perceptions such that you perceive exactly what you would have if they hadn't shot your dog. (Don't worry; they'll also have people track you and alter the perceptions of anyone else who would see the dog, so that they also see the dog, so that you won't seem crazy. And they'll remove that fact from your mind, so you don't worry about being tracked.)

In return, they'll give you a dollar.

I noticed that taking the dollar and getting the dog killed would be the self-interested and egoistically rational choice, but I cared about the dog and didn't want him to die!

Furthermore, psychological egoism bugs me because its proponents often need to come up with overly complicated explanations for seemingly altruistic behaviours. My belief in the importance of Occam's razor led me to conclude that often the simplest explanation of certain behaviours is that they are "just altruistic."

For example, say a soldier throws himself on a grenade to prevent his friends from being killed. Psychological egoists need to come up with some convoluted reason for why he did this out of pure self-interest, such as that he acted to escape the guilt of surviving or to be remembered as a hero.

Even though I have come to believe that “real” altruistic behaviour exists, I still believe that pure altruism is rare and that the vast majority of my actions are taken for egoistic reasons. The reasoning behind this often boils down to somewhat cynical explanations à la The Elephant in the Brain. For example, I engage a lot with effective altruism, but my motivation to do so can also largely be explained by egoistic reasons.

While pondering this whole issue of egoism, I wondered whether I could come up with a self-measure of how egoistic I am. Two potential proxies came to mind:

 

Proxy 1: Two buttons 

Consider the following thought experiment:

You are given the option of pressing one of two buttons. Press button A and you die instantly. Press button B and a random person in the world dies instantly. You have to press one of them. Which one do you choose?

Let's take it up a notch:

Imagine that when you press button B, instead of one random person dying, two random people die. We can increase this to any positive number n. At what number n of people dying at the press of button B would you press button A instead?

Now consider a different version:

You are given a gun and the choice to take one of two actions. You can either shoot yourself in the head, or shoot in the head a random person from somewhere in the world who is suddenly teleported to stand in front of you. You have to take one of those two actions. Which one do you choose?

Let's take it up a notch again:

You are given a gun and the choice to take one of two actions. You can either shoot yourself in the head, or shoot in the head n random people from somewhere in the world who are suddenly teleported to stand in front of you. At what number n of people would you prefer to shoot yourself?

In a consequentialist framework, both versions result in the same outcome. Notice that they feel very different, though, at least to me.

 

Proxy 2: Hedonium

Imagine being offered the opportunity to spend the rest of your life in hedonium. By hedonium, I mean being in a state of perfect bliss for every second of the rest of your life. Variations of this thought experiment and related concepts are the experience machine, orgasmium, and wireheading. Let's say we take the valence of the best moment of your life, increase its pleasure by a few orders of magnitude, and enable you to stay in that state of boundless joy until you cease existing in ~100 years.

The only downside is that while all this pleasure is filling your mind, your body is floating around in a tank of nutrient fluid or something. You can’t take any more actions in the world. You can’t impact it in any way. Opting into hedonium could therefore be considered egoistic because while you are experiencing pleasure beyond words, you wouldn’t be able to help other people live better lives. 

How likely would you be to accept the offer? Consider how tempting it feels: is your gut reaction to accept or decline it outright, or does the decision seem more complicated?

 

I hypothesise that giving a larger n in the first proxy and a higher probability of opting into hedonium roughly correlate with being more egoistic. I deliberately withhold my own reactions to these thought experiments so as not to bias anyone, but I invite readers to leave their responses in the comments.

I was referring to the one Sarah pointed out :)

 

Thanks for replying! 

Yes, that is the one! Thanks, Sarah. I wasn't able to find it. 

I remember there was a forum post with a list of selfish/egoistic reasons to be into effective altruism. I can't find it right now; can anyone point me to it?

It contains things like:

  1. EA gives me meaning in my life
  2. EA provides me with an ingroup where I can have a high social status
  3. I get to be friends with smart people
  4. I can feel superior to most other people because I know that I am doing something to improve the world

etc. 

I strongly disagree with the sentiment that 1-1s are dehumanizing; I have found most of my 1-1s to be friendly, warm, and fun, even though they were mostly confined to 30 minutes.

Something else I can recommend is taking a walk around the block with someone instead of sitting at a table facing each other. This makes the conversation feel more casual and less formal.

Overall, I understand your point that 1-1s can seem too business-like, but my impression is that a few tricks can make them more fun.

Regarding your example of meeting the best people in the strangest places, I also make sure to add randomness to my conferences: https://forum.effectivealtruism.org/posts/ixdejJKnonBmaiF4T/add-randomness-to-your-ea-conferences

In two instances, people DM'd me about things I had posted on the forum, which led to meeting them in person at EAG. These connections probably wouldn't have happened without the prior conversation on the forum, and I expect to reconnect with these people in the future. I would thus consider them quite valuable.

I think I read this somewhere but can't remember who to attribute the idea to: maybe we need something like an EA safety net, similar to insurance. Knowing that you will have enough money to take care of your family even if you fail at your ambitious project would take away at least some of the anxiety of not succeeding. It would also help in case of burnout (which we should prevent in the first place!).

DIY decentralized nucleic acid observatory

Biorisk and Recovery from Catastrophes

As part of the larger effort of building an early detection system for novel pathogens, a smaller self-sustaining version is needed for remote locations. The ideal early-detection network would have surveillance stations not only in the world's largest hubs and airports but also in as many medium-sized ones as possible. This requires a ready-made, small, and transportable product that allows metagenomic surveillance of wastewater or air ventilation. One solution would be to design a workflow that combines the easily scalable and portable technology of nanopore sequencing with a protocol for extracting nucleic acids from wastewater. Sharing instructions on how to build and use this method could lead to a "do it yourself" (DIY), decentralized version of a nucleic acid observatory. Instead of staffing a whole lab at a central location, one or two staff members at each key location could sequence samples directly and transmit only the data to the larger surveillance effort.


 

One thing that came to mind is that over the last year, the EA community seems to have paid more attention to the progress studies (PS) movement. It probably takes an approach to global health and poverty that is more focused on economic growth.
