I am a big fan of 80k and have found talking to 80k advisors helpful. But this program feels reminiscent of the excesses of pre-FTX-implosion EA, in that it offers people a lot of money to do something that is not very hard and (in my view) of questionable value. Maybe I’m underestimating the efficacy of 80k’s filtering process, how much these conversations will shift the career paths of the referred parties, how well people will use the career grants, or something else. I’m sure a lot of thought went into doing this, so I’d be curious to see the BOTEC that led to these career grants.
Some feedback on this episode: The part of the interview I listened to was really cool and interesting, but the episode is also 3 hours and 48 minutes long, and it’s pretty hard for me to commit that much attention/time to an episode outside my area. I know long episodes are kind of 80k’s thing, but for episodes of this length, it might be worth separately releasing a ~60-90 minute highlights version. (I also felt that even in the portion I listened to, there could’ve been edits—e.g., the question about the number of juvenile insects that went unanswered.) Overall, though, really fantastic episode—thanks for doing this interview!
Yeah, to be clear, I think inappropriate interpersonal behavior can absolutely warrant banning people from attending events, and this whole situation has given me more respect for how CEA strikes this balance with respect to EAGs.
I was mainly responding to the point that "we might come up with ideas that let each side get more of what they want at a smaller cost to what the other side wants," by suggesting that, at a minimum, the organizers could've done things that would've involved ~no costs.
I apologize if I did not characterize the fears correctly.
I think you didn't. My fear isn't, first and foremost, about some theoretical future backsliding, creating safe spaces, or protecting reputations (although given the TESCREAL discourse, I think these are issues). My fear is:
I am bolstered by the fact that Manifest is not Rationalism and Rationalism is not EA. But I am frustrated that articulating the above position is seen as even remotely in the realm of "pushing society in a direction that leads to things like... the thought police from 1984." This strikes me as uncharitable pearl-clutching, given that organizers have an easy, non-speech-infringing way of reducing the likelihood that their events elicit and incite racism: not listing Hanania, who wasn't even a speaker, as a special guest on their website, while still allowing him to attend if he so chooses.
One feature I think it'd be nice for the Forum to have is something that shows you the correlation between your agree votes and your karma votes. I don't think there is some objectively correct correlation between these two things, but it seems likely that it should fall between, say, .2 and .6 (probably depending on the kind of comments you tend to read/vote on), and it might be nice for users to be able to know and track this.
Making this visible to individual users (and, potentially, to anyone who clicks on their profile) would provide at least a weak incentive to avoid reflexively downvoting comments that one disagrees with, something that happens a lot, and that I also find myself doing more than I'd like.
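To gesture at what this could look like under the hood, here's a minimal sketch in Python. It just takes the Pearson correlation over a user's paired votes; the signed-integer vote encoding and the vote_correlation helper are hypothetical, since the Forum doesn't actually expose vote data in this form.

```python
from statistics import correlation  # stdlib, Python 3.10+

def vote_correlation(agree_votes: list[int], karma_votes: list[int]) -> float:
    """Pearson correlation between a user's agree votes and karma votes.

    Each list holds one signed value per comment the user cast both
    votes on (e.g., +1/-1, or +2/-2 for strong votes), paired by comment.
    Hypothetical encoding: not how the Forum actually stores votes.
    """
    return correlation(agree_votes, karma_votes)

# Toy example: a user who karma-downvoted one comment they agreed with.
agree = [+1, -1, -1, +1, -1, +1]
karma = [+1, -1, -1, -1, -1, +1]
print(round(vote_correlation(agree, karma), 2))  # 0.71
```

On this metric, someone who reflexively downvotes whatever they disagree with would score near 1, while the .2-.6 range above corresponds to karma votes that only loosely track agreement.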
The fact that “racists” is in scare quotes in the title of this post (“Why so many ‘racists’ at Manifest?”) when there have been multiple first-hand accounts of people experiencing/overhearing racist exchanges strikes me as wrongly dismissive, since I can only interpret the quotation marks as implying that there weren’t very many racists. (Perhaps relevantly, I have never overheard this kind of exchange at any conference I have ever attended, so the fact that multiple people are reporting these exchanges makes Manifest a big outlier in this regard, in my view.)
Nothing in the post seems to refute that the reported exchanges occurred among attendees; it argues only that the organizers didn’t go out of their way to invite controversial/racist speakers or to incite these exchanges. In other words, I think everything in the post is compatible with there having been “so many” racists at Manifest, but the quotation marks in the title seem to imply otherwise.
This isn’t so much a stylistic critique as a substantive one: I think the title implies that not a lot of racist stuff went down, which feels importantly different from acknowledging that it did while, say, disputing that the organizers caused it or suggesting that Hanania’s presence justified it.
I don't agree with @Barry Cotter's comment or think that it's an accurate interpretation of my comment (but didn't downvote).
I think EA is both a truth-seeking project and a good-doing project. These goals could theoretically be in tension, and I can envision hard cases where EAs would have to choose between them. Importantly, I don't think that's going on here, for much the same reasons as were articulated by @Ben Millwood in his thoughtful comment. In general, I don't think the rationalists have a monopoly on truth-seeking, nor do I think their recent practices are conducive to it.
More speculatively, my sense is that epistemic norms within EA may—at least in some ways—now be better than those within rationalism, for the following reason: I worry that some rationalists have been so alienated by wokeness (which many see as anathema to the project of truth-seeking) that they have leaned pretty hard into being controversial/edgy, as evidenced by, e.g., their platforming of speakers who endorse scientific racism. Doing this has major epistemic downsides—for instance, a much broader swath of the population isn't going to bother engaging with you—and I have seen limited evidence that rationalists take these downsides sufficiently seriously.
I think it would be phenomenally shortsighted for EA to prioritize its relationship with rationalists over its relationship with EA-sympathetic folks who are put off by scientific racists, given that the latter include many of the policymakers, academics, and professionals most capable of actualizing EA ideas. Most of these people aren't going to risk working with/being associated with EA if EA is broadly seen as racist. Figuring out how to create a healthy (and publicly recognized) distance between EAs and rationalists seems much easier said than done, though.
Think about how precious the life of a young child is—concretely picture a small child coughing up blood and lying in bed with a fever of 105. We—the effective altruists—are the ones doing something about that.
The vast majority of people trying to keep kids from dying of malaria are not effective altruists.
Thanks; this is helpful, and I appreciate your candor. I’m not questioning whether 80k’s advising overall is valuable, and am thus willing to grant stuff like “most of the shifts people make as a result of 80k advising are +EV”. My reservations mainly pertain to the following:
I get that it’s easy to be critical of (1) post hoc, but I think we should subject the general model of “give EAs a lot of money to do things that are easy and that have very uncertain, difficult-to-quantify value” to a high degree of scrutiny, because (as best I can tell, based on a small n) this model: (a) hasn’t tended to work that well, (b) is self-serving, and (c) often seems to be held to a lower evidentiary standard than other kinds of interventions EAs fund. (A countervailing piece of evidence is that OP does this for hiring referrals, and they presumably do have good evidence re: efficacy, although the benefits there also seem much clearer for the reasons you mention.)
Regarding (2), my worry is that the people who get referred through this program will be importantly different from the general population of people who receive 80k career advising. I suspect highly engaged EAs will have already applied for or received 80k advising; conversely, people who are not familiar enough with EA to have previously heard of 80k advising—a low bar, given that many people learn about EA via 80k—probably won’t have successful applications. Thus, my model of the median successful referral is “someone who has heard of 80k but not previously opted to pursue 80k advising.”

Which brings me to (3): because these people have not previously opted into a free service, I suspect they’re less likely to benefit from it. In other words, I suspect that people referred through this program will be less likely (or less able) to make changes as a result of their advising meetings. (Or at least this was the conclusion I came to in deciding whom to send my referral links to.)
Regarding (4), I haven’t seen evidence to support the claim that “very engaged and agentic EAs… will use $5,000 very well to advance their careers and create good down the line,” and while this seems prima facie plausible, I don’t think that is the standard of evidence we should apply to this—or any—intervention. (This is a less important point, because if this program generated tons of great referrals, it wouldn’t really matter how the $50k was spent.)