MaxRa

Hi, I'm Max :)

  • background in cognitive science & biology
  • currently German group organizer in Darmstadt, previously in Osnabrück & Hannover
  • most worried about AI going badly for technical & coordination reasons
  • vegan for the animals
  • forecasts at Metaculus: https://www.metaculus.com/accounts/profile/110500/
  • currently exploring longtermist research roles; currently a longtermist fellow at Rethink Priorities

Comments

Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies

Nice, super interesting. Some very scattered thoughts:

  1. Scale shift seems significant to me.
    1. It would be really surprising if increased health, material comfort, and increased leisure didn't all lead to increased well-being, right?
    2. A theme in some 20th century history podcasts I listened to: It's pretty astonishing how a new generation fully blanks out the horrors that happened only a few decades ago. Kinda points to your pet theory of people having a different reference class for the "least happy a person could realistically be".
    3. Also anecdotally, a few people I know from LMIC grew up watching a lot of US movies and shows (there's probably some selection bias here as those people ended up living in Western countries), which plausibly affects what type of life seems normal or adequate to them?
  2. People don't only value well-being (or they underrate it?) and use their wealth for other things
    1. As you said, relative wealth seems like a big factor.
      1. I somewhat buy the story that (given basic needs like sufficient diet and safety are met) relative wealth evolutionarily determined a lot about e.g. who you were able to mate with. Robin Hanson's main thesis in The Elephant in the Brain also points in this direction: a lot of our motivation is driven by signalling that we're better companions than others.
    2. Financial safety seems another desire that probably can swallow up a ton of money.
  3. People also do not seem super skilled at using their wealth to increase their well-being (yet, growth mindset!)
    1. I somewhat buy the Buddhist story that human psychology is in large part driven by somewhat futile attempts to avoid unpleasantness. E.g. I kinda sympathize with classic criticisms of consumerism: that it doesn't bring lasting joy, that getting shiny new things is kinda a nice but short-lasting rush, etc.
    2. Relatedly, I have the vague impression that it's only somewhat recently that wellbeing has been given a much more central position among educated & wealthier people? For example I imagine this group of people spends more time in meditation retreats today than 30 years ago?
      1. Do you, or anybody, happen to know whether there are longitudinal surveys that ask "What do you most value in life?". Maybe then one could see what people use their wealth for?

It's kinda obvious, but I wanted to point out anyway that many of your suggestions for increasing well-being also seem to require significant levels of wealth to pull off:

In some sense, this is the story we all seem to accept: that we do need resources, but only up to a point, and after that point we're just showing off. Hence, we should focus on how society is organised, as opposed to how wealthy it is.

More concretely, in his 2021 book, An Economist’s Lessons on Happiness, Easterlin suggests that job security, a comprehensive welfare state, getting citizens to be healthy, and encouraging long-term relationships would increase average wellbeing. All of those seem fairly plausible to me. [...]

We should also take mental health and palliative care more seriously […] We could also consider improved air quality, reduced noise, more green and blue space (blue spaces being water), and getting people to commute smaller distances

EA Dedicates

For me, thinking of relationships and hobbies in an instrumental way takes away from how much joy and energy and meaning etc. I get from them. So in practice I expect most "EA dedicates" should instrumentally just live the life of a "non-dedicate", i.e. value their relationships with their parents, siblings, partners and friends for their own sake.

Other things make this distinction messy:

  • How strongly various psychological needs are expressed in an individual will have strong effects on what their most sustainable "EA dedicate" life looks like. For example:
    • the need for meaning,
    • the need for feeling connected to others, for feeling love,
    • the need for fun.
  • How strongly you wish to start a family is probably also not under your control.
  • Your stamina, e.g. I'd be surprised if I were ever able to productively work 80 hours a week for more than one week, so I'll probably never look like I'm sacrificing too much.
  • Plausibly somewhat innate character traits like risk-aversion, agreeableness, openness to experience, and neuroticism will have a strong effect on what lifestyles you can sustainably live, or even just explore, without draining a lot of energy.
  • Plausibly how financially independent you are has a lot of psychological effects that affect how much of an "EA dedicate" you can look like. E.g. I've heard that Maslow's hierarchy of needs is very disputed, but it also seems true that helping others is very commonly given less weight by our motivational systems than making sure that we are personally safe, etc.

There is probably a distinction where some EAs would or wouldn't push the button that turns them into an omniscient utility maximizer who would always just take the action that is doing the most good. I would push this button because the lives and the suffering and the beauty that are at stake are so much more important than me and my other values. But in practice I think I will probably never need the distinction between EA dedicates and non-dedicates.

Disruptive climate protests in the UK didn’t lead to a loss of public support for climate policies

Thanks, interesting topic and glad you looked into this! (Just read the summary and skimmed the rest.) My spontaneous reaction to the results was that measuring only days after the protest might be a little too soon to observe a backlash?

How accurate are Open Phil's predictions?

Thanks for sharing, super interesting!

The organization-wide Brier score (measuring both calibration and resolution) is .217, which is somewhat better than chance (.250). This requires careful interpretation, but in short we think that our reasonably good Brier score is mostly driven by good calibration, while resolution has more room for improvement (but this may not be worth the effort). [more]

Another explanation for the low resolution, besides the limited time you spend on the forecasts, might be that you chose questions that you are most uncertain about (i.e. that you are around 50% certain about resolving positively), right?

This is something I noticed when making my own forecasts. To remove this bias I sometimes use a die to choose the number for questions like:

By Jan 1, 2018, the grantee will have staff working in at least [insert random number from a reasonable range] European countries
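
A minimal sketch of why that selection effect caps resolution, using the standard Murphy decomposition of the Brier score (my own illustrative numbers, not Open Phil's data): a forecaster who only takes on ~50/50 questions can be perfectly calibrated while showing essentially zero resolution.

```python
import numpy as np

def brier_decomposition(forecasts, outcomes, bin_edges=np.linspace(0, 1, 11)):
    """Murphy decomposition: Brier = reliability - resolution + uncertainty."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    brier = np.mean((forecasts - outcomes) ** 2)
    base_rate = outcomes.mean()
    uncertainty = base_rate * (1 - base_rate)
    reliability = resolution = 0.0
    bins = np.digitize(forecasts, bin_edges)
    for b in np.unique(bins):
        in_bin = bins == b
        weight = in_bin.mean()
        f_bar, o_bar = forecasts[in_bin].mean(), outcomes[in_bin].mean()
        reliability += weight * (f_bar - o_bar) ** 2     # calibration term (lower is better)
        resolution += weight * (o_bar - base_rate) ** 2  # resolution term (higher is better)
    return brier, reliability, resolution, uncertainty

# Hypothetical forecaster who only takes on questions they're ~50/50 on:
rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=1000)
print(brier_decomposition(np.full(1000, 0.5), outcomes))
# Brier is exactly 0.25, reliability ~0 (well calibrated), resolution 0
```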

Breaking Up Elite Colleges

I suppose all your points would be satisfied as long as the breaking up of colleges happens in a way that seems pretty reasonable to me, e.g. by not forcing the new colleges to stay small and non-elite? I understood the main benefit of this to be removing the current, possibly suboptimal, college administrations and replacing them with better management that avoids current problems.

What We Owe the Past

I had a somewhat related random stream of thoughts the other day regarding the possibility of bringing past people back to life to allow them to live the life they would like.

While I'm fairly convinced of hedonistic utilitarianism, I found the idea of "righting past wrongs" very appealing. For example, allowing a person who died prematurely to live out the fulfilled life they would wish for themselves would feel very morally good to me.

That idea made me wonder if it makes sense to distinguish between persons who were born and persons who could have existed but didn't, as it seemed somewhat arbitrary to distinguish based on random fluctuations that led to the existence of one kind of person over the other. So the stream of thought ended with "Might as well spend some infinitely small fraction of our cosmic endowment on instantiating all possible kinds of beings and allowing them to live the life they most desire." :D

Types of information hazards

Thanks for sharing the summary, I wasn’t aware of many of these. 

Bibliography of EA writings about fields and movements of interest to EA

Amnesty International seems like another case that would be worth understanding better:

  • cosmopolitan, secular, broad and somewhat abstract principles
  • strong presence as university groups (at least in Germany)
  • 10 million "supporters" according to the Wikipedia article
  • sobering reports of "toxic culture" in the main offices (bullying, sexism & racism) despite what I assume to be well-meaning people

AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk - Request for Participation [Linkpost]

Nice, thinking more about possible AI risk scenarios seems super important to me, thanks for working on this!

I'm super unfamiliar with your methodology; do you have a good example of this process being applied to a similar situation (sorry if I didn't spot this in the text)?

EA, Psychology & AI Safety Research

Thanks for sharing this list, a bunch of great people! I have a background in cognitive science and am interested in exploring the strategy of understanding human intelligence for designing aligned AIs.

Some quotes on the intersection from Paul Christiano that I read a couple of months ago:

From The easy goal inference problem is still hard:

The possible extra oomph of Inverse Reinforcement Learning comes from an explicit model of the human’s mistakes or bounded rationality. It’s what specifies what the AI should do differently in order to be “smarter,” what parts of the human’s policy it should throw out. So it implicitly specifies which of the human behaviors the AI should keep. The error model isn’t an afterthought — it’s the main affair.

and

It’s not clear to me whether or exactly how progress in AI will make this problem [of finding any reasonable representation of any reasonable approximation to what that human wants] easier. I can certainly see how enough progress in cognitive science might yield an answer, but it seems much more likely that it will instead tell us “Your question wasn’t well defined.” What do we do then?
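
As a toy illustration (my own sketch, not from Christiano's post) of how an "explicit model of the human's mistakes" enters IRL: under a Boltzmann-rationality assumption, a single parameter beta plays the role of the error model, and it determines how strongly the observed behaviour counts as evidence about the underlying reward.

```python
import numpy as np

def boltzmann_action_probs(rewards, beta):
    """P(action) for a noisily rational human; beta is the assumed error model."""
    logits = beta * np.asarray(rewards, dtype=float)
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def log_likelihood(candidate_reward, observed_actions, beta):
    """How well a candidate reward explains the observed choices, given beta."""
    probs = boltzmann_action_probs(candidate_reward, beta)
    return float(sum(np.log(probs[a]) for a in observed_actions))

# A human who mostly picks action 0 but occasionally picks action 2.
observed = [0, 0, 0, 2, 0]
reward_a = [1.0, 0.0, 0.0]   # hypothesis: "action 0 is best"
reward_b = [0.5, 0.0, 0.8]   # hypothesis: "action 2 is best, 0 is decent"
for beta in (0.5, 5.0):      # assume a very noisy vs. a near-optimal human
    print(beta, log_likelihood(reward_a, observed, beta),
                log_likelihood(reward_b, observed, beta))
```

With a small beta the occasional "slip" to action 2 is largely written off as noise; with a large beta it is treated as deliberate, which shifts how strongly the same data favour one reward hypothesis over the other. That, as I read the quote, is the sense in which the error model is "the main affair".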

From Clarifying “AI alignment”:

“What [the human operator] H wants” is even more problematic [...]. Clarifying what this expression means, and how to operationalize it in a way that could be used to inform an AI’s behavior, is part of the alignment problem. Without additional clarity on this concept, we may not be able to build an AI that tries to do what H wants it to do.
