smountjoy

155 · Joined May 2020

Comments (19)

My (possibly wrong) understanding of what Eliezer is saying:

FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).

The failure in FTX was that they did not implement the right internal controls—not that the relationship was "hidden from investors and other stakeholders."

If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.

Great idea!

Jump on a Zoom Call once a week with a carefully chosen peer for 1:1s and a group of 5-8 like-minded EAs with the same goal

Is this a group program, or one-on-one, or some of each? Is the "carefully chosen peer" matched with you for all 4–8 weeks?

What type or granularity of goal are you referring to?

Oops, thank you! Not sure what I was thinking. Fixed now.

Overall agreed, except that I'm not sure the idea of patient longtermism does anything to defend longtermism against Aron's criticism? By my reading of Aron's post, the assumptions there are that people in the future will have a lot of wealth to deal with problems of their time, compared to what we have now—which would make investing resources for the future (patient longtermism) less effective than spending them right away.

I think your point is broadly valid, Aron: if we knew that the future would get richer and more altruistically-minded as you describe, then we would want to focus most of our resources on helping people in the present.

But if we're even a little unsure—say, there's just a 1% chance that the future is not rich and altruistic—then we might still have very strong reason to put our resources toward making the future better: because the future is (in expectation) so big, anything at all we can do to influence it could be very important.
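To make that concrete with purely illustrative numbers (they're mine, not Aron's): suppose there's a p = 0.01 chance the future is not rich and altruistic, the value we could protect in that branch is V = 1,000,000 (in whatever units you like), and spending the same resources on the present is worth c = 100. Then the expected value of the future-focused option is p × V = 0.01 × 1,000,000 = 10,000, still a hundred times larger than c, and the gap only widens the bigger you think the future is.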

And to me it seems pretty clear that the chance of a bad future is quite a bit more than 1%, which further strengthens the case.

Wow, I'm glad I noticed Vegan Nutrition among the winners. Many thanks to Elizabeth for writing, and I hope it will eventually appear as a post. A few months ago I spent some time looking around the forum for exactly this and gave up—in hindsight, I should've been asking why it didn't exist!

I'm starting to think there's no possible question for which Will can't come up with an answer that's true, useful, and crowd-pleasing. We're lucky to have him!

If it does not serve any useful purpose, then why focus on longtermism?

I think you're right that we can make a good case for increased spending on nuclear safety, pandemic preparedness, and AI safety without appeal to longtermism. But here's one useful purpose of longtermism: only the longtermist arguments suggest that those causes are overwhelmingly important; and because of the longtermist arguments, many talented people are working zealously to solve those issues—people who would otherwise be working on other things.

Obviously this doesn't address your concern that longtermism is incorrect; it's merely a reason why, if longtermism is correct, it's a useful thing to talk about.

Agreed. The first big barrier to putting self-modification into practice is "how do you do it"; the second big barrier is "how do you prove to others that you've done it." I'm not sure why the authors don't discuss these two issues more.

  • On how to actually self-modify/self-deceive, all they say is that it might involve "leaning into and/or refraining from over-riding common-sense moral intuitions". But that doesn't explain how to make the change irrevocably (which is the crucial step).
  • On how to demonstrate self-modification to others, they mention a "society of peers where one's internal motivations are somewhat transparent to others." I agree that our motivations are in general somewhat transparent—but are they transparent in this particular case, the case of differentiating between a deontologist and a consequentialist-leaning-into-common-sense-morality-in-order-to-be-more-trustworthy?

    Maybe so. For instance, maybe the deontologist naturally reacts to side-constraint violations with strong emotion, believing that they are intrinsically bad—but the consequentialist naturally reacts with less emotion, believing that the violation is neither good nor bad intrinsically, but instrumentally bad through [long chain of reasoning]. And maybe the emotional response is hard to fake.

    So when someone lies to you, if you get angry—rather than exhibiting calculated disapproval—maybe that's weak evidence that you have an intrinsic aversion to lying.