2670 · Joined Mar 2017



Hi, I'm Max :)

  • background in cognitive science & biology
  • most worried about AI going badly for technical & coordination reasons
  • vegan for the animals
  • forecasts at Metaculus
  • currently exploring AI governance roles; currently a research contractor on Rethink Priorities' AI Governance and Strategy team


Topic Contributions

I'd have guessed it's not outrage or indignation but instead 1) sympathy for how frustrating it must be to deal with Torres and dishonest criticism in general, and 2) gratitude for pushing back against it.

Thanks for sharing your thoughts on this; your points on PR in particular updated me a bit towards taking PR more seriously.

One piece of pushback on your overall message is that I think there are kinds of communication other than cold or hot takes (which I understand as more or less refined assessments of the situation and its implications). One can:

  • share what one is currently doing about the situation,
  • share details that help others figure out what can be learned from this,
    • (this might sometimes require some bravery and could expose oneself to legal risk, but I'd guess that for many people who can share useful info it wouldn't)
  • share your feelings about the crisis.

Overall, I feel that contributing to the broader truth-seeking process is a generally cooperative and laudable move, and that it's often relatively easy to share relatively robust assessments that one is unlikely to have to walk back, such as those I've seen from MacAskill, Wiblin, and Sam Harris.

For example, I really appreciated Oliver Habryka reflecting publicly on his potential role in this situation, namely not communicating his impression of SBF widely enough. I expect that Habryka giving this "take" and the associated background info will not prove wrong-headed in 3 months, and it didn't seem driven by hot emotions or an overreaction to me.

Do you have something specific in mind where you were too hasty, or do you in hindsight generally think you should've contributed less to trying to figure things out while they were happening? My first gut reaction is that the time you and others invested here was likely worth it. Two things that come to mind:

  1. Some things cannot wait 2 weeks: things sometimes develop very quickly, and one can prevent damage by understanding them sooner, even if many of those efforts turn out to be dead ends and wrong calls.
    1. Especially when it seems likely to be such a defining moment in the history of EA.
  2. Related to 1, EA is under a lot of scrutiny right now, and cooperating with broader truth-finding processes and clearly saying that we do not condone immoral behavior for the sake of the greater good seems very important to me, and worth doing well rather than poorly.
    1. I saw that people are (I think correctly) worried about legal risk, which contributes to people close to SBF not wanting to speak up much yet. But I'd guess that people who don't face considerable risks are currently being very helpful by contributing to truth-seeking and emphasizing the values of the EA community to outsiders.

Thanks a lot for joining the discussion and sharing these observations; that's super valuable info and imo extremely damning if true. Do you happen to have some sources I could check that corroborate what you've written here?

Yeah, I think it's a good point that stretching the replication process over time seems kind of arbitrary, and that making the replica's existence contemporaneous with your own weakens the intuition that it is "you" who gets to live the life you wished for.

At the same time, my personal intuitions (which often differ from those of other reasonable people :D) actually aren't weakened much by the thought of a replicated copy of myself living at the same time. E.g. if I now think about a 1:1 copy of me living a fulfilled life with his wife and children in a "parallel universe", I feel more deeply happy about this than when I imagine the same scenario for friends or strangers.

Similarly, I think it would help to right past wrongs if, in the future, the past person's desired state of the world came to pass. But I still don't see how it is any better for that person, or somehow corrected further, if some replica of their self experiences it.

I think where my intuitions diverge is that I expect many people to have a lot of self-directed preferences that I regard as ethically on the same footing as non-self-directed preferences. It seems you're mostly considering states of the world like ensuring the survival and flourishing of one's loved ones, justice being done for crimes against humanity, or an evil government being overthrown and replaced by a democracy. But I'd guess this class of preferences is not so distinct from people wanting a future state of the world that includes themselves being happy, with a loving partner and family, friends, and a community that holds them in high regard. And that's why I feel like a past person would feel at least a little redeemed if they knew that at some future time they would see themselves living the fulfilled life that their past self wished they could've enjoyed.

Thanks for the thoughts and pointers. I hadn't considered this anthropics connection, interesting thought.

Thanks for the pushback, it clarified my thinking further.

if I cloned you absolutely perfectly now, and then said, I'm going to torture you for the rest of your life, but don't worry, your clone will be experiencing equal and opposite pleasures, would you think this is good (or evens out)?

I think this thought experiment introduces more complexities than the scenario in the post avoids, e.g. having to weigh suffering against happiness. In the original scenario, the torture/suboptimal life already would have happened to me, and now the question is whether it's morally better to have a future filled with tons of happy, fulfilled lives vs. one where one of those lives is lived by somebody who is basically me. And my intuition is that I'd feel much better knowing that what "I" am, my hopes, dreams, basic drives, etc., will be fulfilled at some point in the future, despite having been first instantiated in a world where those hopes were tragically crushed.

So my intuition here probably comes more from a preference-utilitarian perspective: I want the preferences of specific minds to be fulfilled, and this would be somewhat possible through a close future version of oneself with almost identical preferences, hopes, desires, affections, etc.

Nice, that’s good to hear. :)

Hey Eli, just stumbled upon the post. Sorry that you had to go through bad times. Hope you got the chance to take at least a week off and that things are looking only up since then and from here on. <3 Was really nice to see you again in DC, btw.
