Rainbow Affect

16 karma · Joined · Pursuing an undergraduate degree


I am a psychology undergraduate student from Moldova. I wish to help with these causes: AI safety, global poverty, the decline of future energy and resource extraction, building effective altruism, information security and privacy, factory farming, pollution, and maybe others as well.


Thanks a lot for writing this thoughtful comment. (I hope you won't be unreasonably downvoted here.)

I wonder how much top AI labs use these techniques. Do they know about them? What if they know about them but don't use them for some reason?

So far this sounds good!

Mr. Monroe, that sounds like a terrible experience to go through. I'm sorry you went through that.

(So, there are software techniques that can allow us to control AI? Sounds like good news! I'm not a software engineer, just someone who's vaguely interested in AIs and neuroscience, but I'm curious as to what those techniques are.)

Sounds good, especially when you want to write blog posts for worldbuilders who don't have a lot of specialised knowledge.

Here is an example I wrote about city agriculture after an energy decline. Maybe I could use this strategy to write worldbuilding blog posts?

Food and water

People couldn't trade food over long distances. So they ate food grown nearby. Everyone grew crops and took care of them. They grew food forests in parks and boulevards. These forests had fruit and nut trees, shrubs, veggies, herbs.

People pruned weeds. They kept calendars for sowing and harvesting. They caught rain in containers. They winnowed the grains and stored them indoors. They made manure. They kept chickens and other birds inside. They grew coppiced trees in their parks and boulevards. And they cut their branches.

People used stationary bicycles to power electric stoves. Others used solar stoves. Yet others used camping stoves. They cooked fresh or nonperishable food. Meat was rare.

People socialized while cooking. They also cooked in community kitchens. These kitchens were in apartment buildings or restaurants. Kids played, friends talked, neighbors banqueted.

People moved water from rivers through channels. Or they took it by hand. They purified the water at home. They flushed waste down the toilet. Or they made manure. They sold manure to villagers. People used toilet waste in fish ponds. Algae multiplied and fish ate the algae. People then fished them.

By the way, this was grade-5-level text, so no wonder it's that simple.

People also say, "Explain this to an 11-year-old and you'll understand it." It seems like this Hemingway app is useful for the Feynman technique.
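For what it's worth, the grade levels that Hemingway-style apps report are usually variants of the Flesch-Kincaid grade formula, which only looks at sentence length and word length. Here's a minimal Python sketch of that formula (the syllable counter is a rough vowel-group heuristic I made up for illustration, not what any particular app actually uses):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (incl. y).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = "People pruned weeds. They kept calendars. They caught rain in containers."
print(round(flesch_kincaid_grade(sample), 1))  # prints 4.1
```

Short sentences with short words score low on this formula, which is why the city-agriculture text above comes out around grade 4 or 5.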


I think that people should break down their goals, no matter how easy they seem, into smaller and easier steps, especially if they feel lazy. Laziness appears when we feel like we need to do tasks that seem unnecessary to us, even when we know they're necessary. One reason they appear unnecessary is how difficult they are to achieve. Why exercise for 30 minutes per day if things are "fine" without that? So one way to deal with this is to take whatever goal you have and break it down into many easy steps. For example, imagine that you want to write the theoretical part of your thesis. You could start by writing down the topic, then the questions you might want to research, then the key uncertainties you have about those questions; then you search for papers to clarify those uncertainties, and so on, one immediate step at a time, until you finish your thesis. If a step seems difficult, break it down even more. That's why I think breaking your goals into smaller and easier steps might help when you feel lazy.

Anyways, thanks for your quick take!

This was a wonderful series of posts. I'm glad I read them!

I'm not an expert in economics or these other fields of study, so I'm sorry if I get anything wrong. That said, the French report on peak oil you cited forecasts that oil production will start to decline around 2026 (https://theshiftproject.org/wp-content/uploads/2021/05/The-Future-of-Oil-Supply_Shift-Project_May-2021_SUMMARY.pdf). And you said that this decline could lead to various negative outcomes, like supply chain disruptions and widespread famine. It seems like our societies are short on time.

What do you think needs to happen for this kind of famine to be less likely, or for supply chain disruptions to be less damaging? How can we figure that out? As you said, people will need ways to provide themselves with food, water, shelter, medicine, sanitation, and maybe other necessities without much transportation. What are some tractable ways to ensure that, if you know any?

Anyways, thanks for the resources you cited!

Thanks for commenting!

In other words, there seem to be values that are more related to executive functions (i.e., self-control) than to affective states that feel good or bad? That seems like a plausible possibility.

There is a personality scale called the ANPS (Affective Neuroscience Personality Scales) whose subscales were correlated with the Big Five personality traits. Researchers found that conscientiousness wasn't correlated with any of the six affects of the ANPS, while each of the other Big Five traits was correlated with at least one ANPS trait. So conscientiousness seems related to what you're talking about (values that don't come from affects). But at the same time, there is research on how prone conscientious people are to experiencing guilt. It found that conscientiousness was positively correlated with guilt-proneness.

So it seems that guilt is an experience of responsibility that differs in some way from the affective states Panksepp talked about. And it's related to conscientiousness, which could in turn be related to the ethical and philosophical values you talked about and to executive functions.

Hm, I wonder if AIs should have something akin to guilt. That may lead to AI sentience, or it may not.

Bibliography

Fayard, J. V., et al. (2012). Uncovering the affective core of conscientiousness: the role of self-conscious emotions. Journal of Personality. https://pubmed.ncbi.nlm.nih.gov/21241309/

Barrett, F. S., et al. (2013). A brief form of the Affective Neuroscience Personality Scales. Psychological Assessment. https://pubmed.ncbi.nlm.nih.gov/23647046/

Edit: I must say, I'm embarrassed by how much these comments of mine run on "This makes intuitive sense!" logic instead of rigorous reviews of scientific studies. I'm embarrassed by how low their epistemic status is. But I'm glad that at least some EAs found this idea interesting.

Oh my goodness, thanks for your comment!

Panksepp did talk about the importance of learning and cognition for human affects. For example, pure RAGE is a negative emotion that drives us to aggressively defend ourselves against no one in particular. Anger is learned RAGE: we are angry at something or someone in particular. And then there are various resentments and hatreds that are more subdued and subtle, and which we harbor with our thoughts. Something similar goes for the other six basic emotions.

Unfortunately, it seems like we don't know that much about how affects work. If I understand you correctly, you said that some of our values have little to no connection to our basic affects (be they emotional or otherwise). I thought that all our values are affective, because values tell us what is good or bad and affects also tell us what is good or bad (i.e., both have valence), and because affects seem to "come" from older brain regions than the values we think and talk about. So I thought that we first have affects (e.g., pain is bad for me and for the people I care about) and then we think about those affects so much that we start to have values (e.g., suffering is bad for anyone who has it). But I could be wrong. Maybe affects and values aren't always good or bad, and the difference between them may lie in more than how cognitively elaborated they are. I'd like to know more about what you meant by "value learning at the fundamental level".

Thank you a lot for these kind words, Mr. JP Addison!

I would like some EAs reading this comment to give some feedback on an idea I had regarding AI alignment that I'm really unsure about, if they're willing to. The idea goes like this:

  1. We want to make AIs that have human values.
  2. Human values ultimately come from our basic emotions and affects. These affects come from brain regions older than the neocortex. (See Jaak Panksepp's work and affective neuroscience for more on that.)
  3. So, if we want AIs with human values, we want AIs that at least have emotions and affects or something resembling them.
  4. We could, in principle, make such AIs by developing neural networks that work similarly to our brains, especially regarding those regions that are older than the neocortex.

If you think this idea is ridiculous and doesn't make any sense, please say so, even in a short comment.

Wonderful. I liked these suggestions, especially the triangulating genius and the expert observation ones. I need to use these strategies.

Hi, I'm an undergraduate psychology student from Moldova. I found effective altruism while searching for a list of serious global problems for some fiction I was trying to write.

Now I'm trying to learn more about affective neuroscience and brain simulation in the hope that this information could help with AI alignment and safety.

Anyways, good luck with whatever you're working on.
