kl

83 karma · Joined May 2019

Comments: 9

kl · 1y

Here is how you can get the definitive answer to this question for your particular case.

1. Make your own initial best guess about what the best discipline for you to study is, backed up by as much research as you can do. Make sure you read through some of the material from the biorisk syllabi Ben Stewart linked, study who employs biorisk researchers and what their qualifications are, and pay particular attention to the details of the careers of biorisk researchers you personally admire.

2a. Make a post to the EA Forum called "If you want to research biorisk, study X," where X is the degree course you have provisionally concluded is best for you to become a biorisk researcher. Present your arguments for why degree X is better than degrees Y, Z, etc., based on your research. Your post doesn't need to be lengthy. You have now made a public, controversial, and certainly oversimplified claim with which some people will disagree, and they will reply to your post with well-reasoned arguments about why you are wrong. This will expose gaps and flaws in your thinking and give you much more information to make your decision. It may also connect you with new conversation partners you can ask for additional help and advice.

2b. Alternatively, contact as many leaders in biorisk research as you can and interview them about what course of study they think is best, using your research from step 1 to identify whom to interview and to formulate good questions that will elucidate what you most care about in making your decision. Then, write up what they tell you in a post to the forum: "20 biorisk leaders on how to get into biorisk research" or something like that. You could append a note or comment with what you personally concluded for yourself based on doing the interviews. People will reply to your post with additional information.

Good luck with your decision!

kl · 1y

Would you be up for making a few concrete proposals, illustrated with some example cases, for how to factor in the optics of a contemplated action?

kl · 1y

I'm so happy to see this here! Hugh Thompson is one of my favorite heroes of history.

What I especially love about his story is that there's nothing remarkable in his public biography besides his actions on the one day he's known for. Everyone can aspire to Thompson's model of courage. We won't all achieve perfection or reinvent ourselves in the image of a Borlaug, but we should all try to cultivate moral discernment and bravery.

I love the thought of Hugh Thompson as exemplifying some cherished EA principles.

"Why did you decide to try to save those villagers, Hugh?"

"I employed the importance/neglectedness/tractability framework! The villagers' lives were important, nobody else was helping them, and I saw that I might be able to help myself."

EA doesn't place much emphasis on person-to-person opportunities for doing good, but I think that applying its concepts outside their typical contexts can help us remember that career choices and funding allocations aren't the only decision points at which we can profoundly impact others' lives for the better.

Answer by kl · Dec 10, 2022

Thanks for posting this. I appreciate the question very much, but I don't think it's the right approach to postulate the existence of a single correct community point of view on FTX's business practices, and try to figure out what that single view would be and whether the community had the good fortune to hold it in the absence of discussion. Even if EAs had the so-called correct view, epistemic "luck" is not epistemic health. A culture of robust debate and inquiry is what matters in the long run.

In my opinion, the important things to ask are: (1) did FTX's business practices deserve debate within the EA community a year ago, (2) to what extent did the EA community debate them, and (3) did the range and health of debate in the EA community meet the standards the community wants to uphold?

It's easy to argue that FTX was publicly engaged in morally wrong behavior a year ago (some people have done so in this thread), given that, among other potential points of criticism, FTX's strategy for growth involved trying to persuade as many sports fans as possible to gamble on speculative investments. Condemnation of this behavior is also easy to find in the media and among non-EAs. I don't intend to express a personal view here about the morality of FTX's business, but if there is a commonly held view outside EA that something is morally wrong, and the EA community wants to profit from that thing, I think debate is merited about whether the EA community should profit from it.

Was there that debate? Correct me if I'm wrong, but I'm not aware that there was any degree of debate within the EA public sphere (forum discussions, conference talks, meetup talks, EA org blog posts, etc.) about the morality of FTX's business or whether EAs should take FTX money. There was of course an extremely strong disincentive to debate, because, doing good aside, many EAs had a lot to gain personally from FTX money.

I think that the lack of debate here reflects a weakness in the EA public sphere, which EAs should try to address. To be clear, I don't intend to positively claim here that EAs or EA organizations should not have taken FTX money, but rather that a debate about it was merited by the degree of public criticism attached to FTX before its collapse.

(In other words, to respond somewhat more directly to your question, it could be reasonable to believe both that FTX's business practices as publicly known a year ago were morally acceptable, and that it is deeply troubling that EA did not debate them.)

kl · 1y

I like your recommendations, and I wish that they were norms in EA. A couple questions:

(1) Two of your recommendations focus on asking EAs to do a better job of holding bad actors accountable. Succeeding at holding others accountable takes both emotional intelligence and courage. Some EAs might want to hold bad actors accountable, but fail to recognize bad behavior. Other EAs might want to hold bad actors accountable but freeze in the moment, whether due to stress, uncertainty about how to take action, or fear of consequences. There's a military saying that goes something like: "Under pressure, you don't rise to the occasion, you sink to the level of your training." Would it increase the rate at which EAs hold each other accountable for bad behavior if EAs were "trained" in what bad behavior looks like in the EA community and in scripts or procedures for how to respond, or do you think that approach would not be a fit here?

(2) How would you phrase your recommendations if they were specifically directed to EA leadership rather than to the community at large?

kl · 1y

Thank you for posting this. I was so sad to see the recent post you linked removed from the forum by its author, and as depressing as the subject matter of your post is, it cheers me up that someone else is speaking up eloquently and forcefully. Your voice and experience are important to EA's success, and I hope that you will keep talking and pushing for change.

kl · 4y

Thanks for this post!

You write: "The upside of jargon is that it can efficiently convey a precise and sometimes complex idea. The downside is that jargon will be unfamiliar to most people."

Jargon has another important upside: its use is a marker of in-group belonging. So, especially IRL, employing jargon might be psychologically or socially useful for people who are not immediately perceived as belonging in EA, or who feel uncertain about whether they are perceived as belonging.

You also suggest: "Therefore, when first using a particular piece of jargon in a conversation, post, or whatever, it will often be valuable to provide a brief explanation of what it means, and/or a link to a good source on the topic. This helps people understand what you’re saying, introduces them to a (presumably) useful concept and perhaps body of work, and may make them feel more welcomed and less disorientated or excluded."

Because jargon is a marker of in-group belonging, I fear that giving an unprompted explanation could be alienating to someone who infers that the jargon is being explained to them because they're perceived as not belonging. (E.g., "I know what existential risk is! Would this person feel the need to explain this to me if I were white/male/younger?") In some circumstances, explaining jargon unprompted will be appreciated and inclusionary, but I think it's a judgment call.

kl · 5y

I love the idea of gathering this information. But would EA orgs be able to answer the salary questions accurately? I particularly wonder about the question comparing salaries at the org to those at for-profit companies. If the org isn't paying for compensation data (as many for-profit companies do), it may not really be in a good position to make that comparison. Its employees, especially those who have always worked in nonprofits, may not even know how much they could be making. Perhaps the org could cobble together a guess via Glassdoor, but the limitations of the data there would make a meaningful comparison difficult, not to mention time-consuming.

For orgs willing to share, it would be better to get the granular salary data itself (ideally, correlated with experience and education).

kl · 5y

I think "competitors" for key EA orgs, your point #2, are key here. No matter how smart and committed you are, without competitors there is less pressure on you to correct your faults and become the best version of yourself.

Competitors for key EA orgs will also be well-positioned (in some cases, perhaps best positioned) to engage in dialogue with the orgs they compete with, improving them and likely also the EA "public sphere."

I don't think an independent auditor that works across EA orgs and mainly focuses on logic would be as high a value-add as competitors for specific orgs. The auditor is not going to be enough of a domain expert to competently evaluate the work of many different orgs. But I think it's worth thinking more about. I'd be curious whether you or anyone else has more ideas about the specifics of that.