This is a special post for quick takes by Max Görlitz. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Do you know of any research institutes that provide no-strings-attached, multi-year funding and are committed to open science? I'm looking for examples of metascience in practice: organizations that experiment with new ways of doing and funding science.

Institutions like Bell Labs or the Princeton Institute for Advanced Study are the historical reference class.

Based on this article in The Atlantic, I am aware of these three and am looking for similar examples:

  • Arc Institute — A new institution for curiosity-driven biomedical science and technology
  • New Science is a 501c3 research nonprofit building the 21st century institutions of basic science
  • Arcadia Science — A new ecosystem for scientific progress

I'm not sure whether, for example, Arcadia fits the bill of no-strings-attached funding, given their research agenda on non-model organisms, but it's definitely a new science org. Something like the Hypothesis Fund (https://www.hypothesisfund.org/) might also fit.
I'd recommend having a look at the Overedge Catalogue (https://arbesman.net/overedge/)

The Overedge catalog looks extremely interesting. Thanks!

An idea about AI in medicine:

Fecal microbiota transplants have been gaining prominence as we discover that many diseases seem to be linked to the gut microbiome. There seems to be therapeutic potential for certain gut infections, inflammatory bowel disease, obesity, metabolic syndrome, and functional gastrointestinal disorders.

What if you could use new high-throughput sequencing techniques like Nanopore to figure out what kinds of bacteria constitute the microbiome of tens of thousands of people?

Combine this with tons of computational power and the latest machine learning algorithms to find relationships between certain illnesses and symptoms and those people's microbiome. Maybe this would allow finding the perfect fecal transplant donor to reverse the relevant symptoms.
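
To make this a bit more concrete, here is a minimal sketch of the kind of analysis I'm imagining, in Python. The data, column names, and model choice are entirely made up for illustration; a real study would need proper sequence processing, normalization, confounder control, and much more careful statistics than an off-the-shelf classifier.

```python
# Hypothetical sketch: relate microbiome composition to a disease label.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_people, n_taxa = 2_000, 200  # e.g. relative abundances of 200 bacterial taxa

# Stand-in for Nanopore-derived abundance profiles, one row per person.
abundances = pd.DataFrame(
    rng.dirichlet(np.ones(n_taxa), size=n_people),
    columns=[f"taxon_{i}" for i in range(n_taxa)],
)
has_condition = rng.integers(0, 2, size=n_people)  # e.g. IBD diagnosis, yes/no

# How well does microbiome composition predict the condition?
model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, abundances, has_condition, cv=5).mean())

# Taxa the model leans on most: candidate markers when screening donors.
model.fit(abundances, has_condition)
print(pd.Series(model.feature_importances_, index=abundances.columns).nlargest(10))
```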

Hey there, I was just wondering if there are any fellow EAs who are using SuperMemo?

Spaced Repetition Systems (SRS) have generally become more popular over the last few years. As a medical student, I have personally read tons of stuff about effective learning and how to win the fight against forgetting. I have probably spent hundreds of hours with Anki, which might be more familiar to people.

For a few months now, I've been trying out SuperMemo (https://supermemopedia.com/wiki/SuperMemo). Especially with the feature of Incremental Reading, I feel like many people in the EA community could benefit from using it.

Main features:

  • allows you to remember facts with the best Spaced Repetition Algorithm I know of (see the sketch after this list for a flavour of how such scheduling works)
  • read hundreds of articles/papers "at the same time" by making incremental progress on each one
  • have all your reading sorted by priority and connected in a knowledge tree structure
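
For anyone curious what such an algorithm actually does: the scheduler in current SuperMemo versions is far more sophisticated, but the old, publicly documented SM-2 algorithm gives the basic flavour of how review intervals grow. A minimal sketch in Python (an illustration of the general idea, not what modern SuperMemo actually runs):

```python
# Rough sketch of SM-2, an early public SuperMemo scheduling algorithm.
from dataclasses import dataclass

@dataclass
class Card:
    easiness: float = 2.5   # how "easy" the item is; never drops below 1.3
    repetitions: int = 0    # consecutive successful reviews
    interval: int = 0       # days until the next review

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0 (blackout) to 5 (perfect recall)."""
    if quality >= 3:  # successful recall: the interval grows
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.easiness)
        card.repetitions += 1
    else:             # lapse: start over with short intervals
        card.repetitions = 0
        card.interval = 1
    # Easiness drifts up or down depending on how hard the recall felt.
    card.easiness = max(1.3, card.easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

card = Card()
for grade in [5, 4, 5, 3]:
    card = review(card, grade)
    print(card.interval, round(card.easiness, 2))  # intervals: 1, 6, 16, 43 days
```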

Furthermore, it can potentially be used for things like boosting creativity and incremental writing.

The biggest hurdle to overcome when starting SuperMemo is the learning curve. For a beginner, it can be a bit unintuitive and many people initially dislike the UI.

Luckily, the community seems to be growing pretty well right now and there is more and more high quality information online (see https://supermemo.wiki/learn/#/). There are even people who can help you get started and offer 1on1 teaching.

I'm just wondering if any of you have heard about SuperMemo or maybe have personal experience?

If anyone is interested, I'm happy to teach any EAs who want to learn how to use it; feel free to DM me (I'm the one who taught Max). On your own, I think it takes at least a week or two to get the hang of it, whereas with maybe ~2 hours of teaching you can get started immediately afterwards.

Whenever I hear about a new memory tool, I struggle to figure out what I would do with it. I try to keep knowledge "on paper" as much as possible, rather than relying on memory. The skills I need for my work are ones I use almost every day already, so I feel like I'm not forgetting anything essential.

What have you personally found useful to practice remembering using SuperMemo?

I think looking for things that you want to memorize is the wrong approach: it's better to think of it as a tool for things you want to learn and be able to use long-term.

The SRS part probably gives you the impression that most of your time is meant to be spent memorizing things, but the incremental reading component shifts most of the use of SuperMemo toward learning, with memorization/maintenance of knowledge coming only once it's in a suitably refined form.

For me personally, there have been lots of things that I've found useful to learn, though it's hard to point at them too specifically, since it's not easy to tell what I only remember because of SuperMemo. The things that likely made the biggest difference for me were:

-learning about REBT (mainly the idea of "musts"; I'm sure other therapies would have been useful too, this is just the one I happened upon)

-Replacing Guilt by Nate Soares

-supermemo.guru, from the creator of SuperMemo. There are lots of useful writings there, making it hard to point to a single most useful one, but my favourite is probably his piece on the Pleasure of Learning

-discovering Bloom's 2 Sigma Problem from an entirely random article that I likely would not have gotten around to without SuperMemo

There are lots of other things I have in SuperMemo that I think aren't directly applicable now but are likely to serve as the foundation for bigger ideas (the more things you have in your head, the more connections you can make, thus increasing creativity). I don't think it's a good idea to assume that more knowledge is an implicit good (since actual effort needs to be made, through formulation, for things to be applicable in real life), but I do think long-term knowledge can have a multiplicative effect, and it only takes one good idea sparked by two things in memory to be worth the rest of the time you spend on the system.

 

Beyond that, I think it's just insanely valuable having a system where I can chuck in whatever I find interesting and then not have to spend brainpower thinking about how to manage having 100 tabs. It completely fixes that.

 

(I am writing this at midnight so sorry if it is not entirely coherent)

Hi Aaron, great question.
Let's get the obvious out of the way: for people who are still at university, in study-intensive subjects, it's a great advantage to use a Spaced Repetition System like SuperMemo. To me, it feels empowering not to have to worry about forgetting. It's a common experience to be frustrated after studying so much for an exam, only to forget most of it afterwards. This doesn't happen to me anymore, because I know the algorithm will take care of it, and as long as I do my daily repetitions, my knowledge will get transferred into long-term memory.

Another obvious use case is learning languages. An SRS can help you learn a language much faster, and this seems to be the most common usage.

One not-so-obvious advantage concerns creativity/innovation. In my understanding, creativity has a lot to do with connecting ideas from different fields, ones you wouldn't initially notice as being related to each other. Imagine you study two different domains, e.g. biology and economics. Actively remembering important information from both might result in two at-first-glance disparate ideas appearing in your mind in close succession. Making the connection between them is what leads to creativity. This is less likely to happen if you store your information mostly externally, e.g. in Evernote.

To answer your question more directly: so far, I have found it most useful for studying medicine at university and for learning French/Spanish.

Hi! I use SuperMemo! If you want to schedule a call feel free: calendly.com/spaceclottey/dinner30

I remember there was this forum post with a list of selfish/egoistic reasons to be into effective altruism. I can't find it right now, can anyone point me to it?

It contains things like:

  1. EA gives me meaning in my life
  2. EA provides me with an ingroup where I can have a high social status
  3. I get to be friends with smart people
  4. I can feel superior to most other people because I know that I am doing something to improve the world

etc. 

Yes, that is the one! Thanks, Sarah. I wasn't able to find it. 

Is it this one?

https://forum.effectivealtruism.org/posts/bXP7mtkK6WRS4QMFv/are-bad-people-really-unwelcome-in-ea

I was referring to the one Sarah pointed out :)

 

Thanks for replying! 

Cross-posting this from my blog because the philosophical issues around egoism and altruism will be of interest to some people here.

 

Until I was ~16, I used to believe that there was no altruism and that everything anybody does is always for purely egoistical reasons. As far as I can remember, I grew confident of this after hearing an important childhood mentor talk about it. I had also read about the ideas of Max Stirner and had a vague understanding of his notion of egoism.

I can't remember what made me change my mind exactly, but pondering thought experiments similar to this one from Nate Soares played a significant role:

Imagine you live alone in the woods, having forsaken civilization when the Unethical Psychologist Authoritarians came to power a few years back.

Your only companion is your dog, twelve years old, who you raised from a puppy. (If you have a pet or have ever had a pet, use them instead.)

You're aware of the fact that humans have figured out how to do some pretty impressive perception modification (which is part of what allowed the Unethical Psychologist Authoritarians to come to power).

One day, a psychologist comes to you and offers you a deal. They'd like to take your dog out back and shoot it. If you let them do so, they'll clean things up, erase your memories of this conversation, and then alter your perceptions such that you perceive exactly what you would have if they hadn't shot your dog. (Don't worry; they'll also have people track you and alter the perceptions of anyone else who would see the dog, so that they also see the dog, so that you won't seem crazy. And they'll remove that fact from your mind, so you don't worry about being tracked.)

In return, they'll give you a dollar.

I noticed that taking the dollar and getting the dog killed would be the self-interested and egoistically rational choice, but I cared about the dog and didn't want him to die!

Furthermore, psychological egoism bugs me because its proponents often need to come up with overly complicated explanations for seemingly altruistic behaviours. My belief in the importance of Occam's razor led me to conclude that often the simplest explanation of certain behaviours is that they are "just altruistic."

For example, say a soldier throws himself on a grenade to prevent his friends from being killed. Psychological egoists need to come up with some convoluted reason for why he did this out of pure self-interest.

Even though I have come to believe that “real” altruistic behaviour exists, I still believe that pure altruism is rare and that the vast majority of my actions are taken for egoistic reasons. The reasoning behind this often boils down to somewhat cynical explanations à la The Elephant in the Brain. For example, I engage a lot with effective altruism, but my motivation to do so can also largely be explained by egoistic reasons.

While pondering this whole issue of egoism, I wondered whether I could come up with a self-measure of how egoistic I am. Two potential proxies came to mind:

 

Proxy 1: Two buttons 

Consider the following thought experiment:

You are given the option of pressing one of two buttons. Press button A and you die instantly. Press button B and a random person in the world dies instantly. You have to press one of them. Which one do you choose?

Let's take it up a notch:

Imagine that when you press button B, instead of one random person dying, two random people die. We can increase this to any positive number n. At what number n of people dying at the press of button B would you press button A instead?

Now consider a different version:

You are given a gun and the choice between two actions. You can either shoot yourself in the head, or shoot a random person from somewhere in the world, suddenly teleported to stand in front of you, in the head. You have to take one of those two actions. Which one do you choose?

Let's take it up a notch again:

You are given a gun and the choice between two actions. You can either shoot yourself in the head, or shoot n random people from somewhere in the world, suddenly teleported to stand in front of you, in the head. At what number n of people would you prefer to shoot yourself?

In a consequentialist framework, both decisions result in the same outcome. Notice that they feel very different, though. At least they do so for me.

 

Proxy 2: Hedonium

Imagine being offered the opportunity of spending the rest of your life in hedonium. What I mean by hedonium is being in a state of perfect bliss for every second of the rest of your life. Variations of this thought experiment and related concepts are the experience machine, orgasmium, and wireheading. Let's say we take the valence of the best moment of your life, increase its pleasure by a few orders of magnitude, and enable you to stay in that state of boundless joy until you cease existing in ~100 years.

The only downside is that while all this pleasure is filling your mind, your body is floating around in a tank of nutrient fluid or something. You can’t take any more actions in the world. You can’t impact it in any way. Opting into hedonium could therefore be considered egoistic because while you are experiencing pleasure beyond words, you wouldn’t be able to help other people live better lives. 

How likely would you be to accept the offer? Consider how tempting it feels, whether your gut reaction is to accept or decline it outright or whether it seems more complicated. 

 

I hypothesise that a larger value of n in the first proxy and a higher probability of opting into hedonium roughly correlate with being more egoistic. I am explicitly choosing not to give my own reactions to these thought experiments, so as not to bias anyone, but I invite readers to leave their responses in the comments.

Does anybody have examples of pre-registered self experiments or thoughts about the usefulness of self experiments?

I am only aware of Alexey Guzey's experiment on sleep.

Recently I have become quite interested in improving my running with heart rate training and read a lot about the related physiology. I would like to try out Zone 2 Training by using heart rate as a proxy and experiment with this in a structured way. 
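
As a sketch of how I might pin down the target intensity before pre-registering anything: one common heuristic defines Zone 2 as roughly 60-70% of heart rate reserve (the Karvonen method). The exact boundaries and the numbers below are assumptions for illustration; physiologists define Zone 2 in several different ways, ideally via lactate testing.

```python
def zone2_range(max_hr: int, resting_hr: int,
                low: float = 0.60, high: float = 0.70) -> tuple[int, int]:
    """Zone 2 heart-rate bounds (bpm) via the Karvonen heart-rate-reserve heuristic."""
    reserve = max_hr - resting_hr
    return (round(resting_hr + low * reserve), round(resting_hr + high * reserve))

# Made-up example numbers: max HR 195 bpm, resting HR 55 bpm.
print(zone2_range(max_hr=195, resting_hr=55))  # -> (139, 153)
```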

The blogger gwern has many posts on self-experiments on his site (gwern.net).

Cool! I knew gwern but wasn't aware of his experiments, thank you.
