Thanks! FWIW, I completely agree with your framing. In my head the question was about debate ("did FTX look sketchy enough that we should've seen big debates about it on the forum") and I should've made that explicit. Sounds like the majority answer so far is yes, it did look that bad. My impression is also the same as yours that those debates did not happen.
My (possibly wrong) understanding of what Eliezer is saying:
FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).
The failure in FTX was that they did not implement the right internal controls—not that the relationship was "hidden from investors and other stakeholders."
If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.
Great idea!
Jump on a Zoom Call once a week with a carefully chosen peer for 1:1s and a group of 5-8 like-minded EAs with the same goal
Is this a group program, or one-on-one, or some of each? Is the "carefully chosen peer" matched with you for all 4–8 weeks?
What type or granularity of goal are you referring to?
Overall agreed, except that I'm not sure the idea of patient longtermism does anything to defend longtermism against Aron's criticism? By my reading, Aron's post assumes that people in the future will have much more wealth than we have now to deal with the problems of their time—which would make investing resources for the future (patient longtermism) less effective than spending them right away.
I think your point is broadly valid, Aron: if we knew that the future would get richer and more altruistically-minded as you describe,...
Wow, I'm glad I noticed Vegan Nutrition among the winners. Many thanks to Elizabeth for writing, and I hope it will eventually appear as a post. A few months ago I spent some time looking around the forum for exactly this and gave up—in hindsight, I should've been asking why it didn't exist!
I'm starting to think there's no possible question for which Will can't come up with an answer that's true, useful, and crowd-pleasing. We're lucky to have him!
You might be interested in these posts by Nate Soares:
They explore how we should act given that some things "cannot be known ahead of time, not even approximated."
If it does not serve any useful purpose, then why focus on longtermism?
I think you're right that we can make a good case for increased spending on nuclear safety, pandemic preparedness, and AI safety without appeal to longtermism. But here's one useful purpose of longtermism: only the longtermist arguments suggest that those causes are overwhelmingly important; and because of those arguments, many talented people are working zealously to solve those issues—people who would otherwise be working on other things.
Obviously this doesn't address your concern that longtermism is incorrect; it's merely a reason why, if longtermism is correct, it's a useful thing to talk about.
Agreed. The first big barrier to putting self-modification into practice is "how do you do it"; the second big barrier is "how do you prove to others that you've done it." I'm not sure why the authors don't discuss these two issues more.
Thanks for writing! It sounds like part of your pitch is that there are some types of therapy which are much more effective than the types in common use. Scott's book review of all therapy books makes me pretty pessimistic about that. If you've read that post, do you have any thoughts?
Hi Sarah! I broadly agree with the post, but I do think there's a marginal value argument against becoming a doctor that doesn't apply to working at EA orgs. Namely:
Suppose I'm roughly as good at being a doctor as the next-doctor-up. My choosing to become a doctor brings about situation A over situation B:
Situation A: I'm a doctor, next-doctor-up goes to their backup plan
Situation B: next-doctor-up is a doctor, I go to my backup plan
Since we're equally good doctors, the only difference is in whose backup plan is better—so I should prefer situation B, in which my backup plan (rather than theirs) gets carried out, whenever I think my backup plan is the better one.
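To make the comparison concrete, here's a minimal sketch in symbols (my notation, not from Sarah's post): write $V_{\mathrm{doc}}(\cdot)$ for someone's impact as a doctor and $V_{\mathrm{bak}}(\cdot)$ for their impact on their backup plan.

$$
\begin{aligned}
V(A) &= V_{\mathrm{doc}}(\text{me}) + V_{\mathrm{bak}}(\text{next}) \\
V(B) &= V_{\mathrm{doc}}(\text{next}) + V_{\mathrm{bak}}(\text{me}) \\
V(B) - V(A) &\approx V_{\mathrm{bak}}(\text{me}) - V_{\mathrm{bak}}(\text{next})
\end{aligned}
$$

The doctor terms cancel (by assumption, $V_{\mathrm{doc}}(\text{me}) \approx V_{\mathrm{doc}}(\text{next})$), so the choice reduces entirely to whose backup plan has more impact.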
I had the opposite takeaway from the podcast. Ajeya and Rob definitely don't come to a confident conclusion. Near the end of the segment, Ajeya says, referring explicitly to the simulation argument but also, I think, to anthropics generally:
I would definitely be interested in funding people who want to think about this. I think it is really deeply neglected. It might be the most neglected global prioritisation question relative to its importance. There’s at least two people thinking about AI timelines, but zero people [thinking about simulation/anthropics], basically. Except for Paul in his spare time, I guess.
When I first read it, I assumed that "meaningful, lasting change" meant "all the kinds of changes we want," rather than "any particular change." Maybe that's what the authors intended. But on rereading I think your interpretation is more correct.
Congrats! I don't know you but I'm very happy for you!
The networking was hard for me, and I often felt thrown off or wired up after my networking calls. It took me a long time to send each email.
I'm impressed you were able to persist in your job search while feeling this way. Did you have a particularly strong motivation toward your long-term goal, or were there other strategies you used to overcome these mental blockers?
Just broaden your conception of the team to the whole EA community, and stop worrying about how much of the “credit” is yours.
To me, this is the crux. If you can flip that switch, problem (practically) solved—you can take on huge amounts of personal risk, safe in the knowledge that the community as a whole is diversified.
Easier said than done, though: by and large, humans aren't wired that way. If there's a psychological hurdle tougher than the idea that you should give away everything you have, it's the idea that you should give away everything you have...
This was helpful to me (knowing nothing about climate policy) for ideas about how to break down TSM's "transformative change" into more tractable parts. I guess I'd been treating "transformative change" and what Dan said about "fundamental uncertainty" as something like semantic stopsigns.
One thing I'm confused about:
...Indeed, insofar as mass mobilization and climate grassroots activism are strongly tied to the Democratic party and making Democrats more ambitious on climate, it seems likely that the value of this advocacy has decreased due to the rel...
Thanks for that clarification—maybe the $1m/year figure is distracting. I only mentioned it as an illustration of this point:
The post argues that the kind of talent valuable for direct work is rare. Insofar as that's true, the conclusion ("prefer direct work") only applies to people with rare talent.
Thanks, Mark! I've been struggling to figure out what career goals I myself should pursue, so I appreciated this post.
Those considering EtG as their primary career path might want to consider direct work instead
I think this advice is missing a very important qualification: if you are a highly talented person, you might want to consider direct work. As the post mentions, highly talented people are rare—for example, you might be highly talented if you could plausibly earn upwards of $1m/year.
Regularly talented people are in general poor substitutes for highly talented people...
I think this advice is missing a very important qualification: if you are a highly talented person, you might want to consider direct work. As the post mentions, highly talented people are rare—for example, you might be highly talented if you could plausibly earn upwards of $1m/year.
I expect this isn't what you're actually implying, but I'm a bit worried this could be misread as saying that most people who are sufficiently talented in the relevant sense to work at an EA org are capable of earning $1m/year elsewhere, and that if you can't, then you probably...
Oops, thank you! I thought I had selected linkpost, but maybe I unselected without noticing. Fixed!