All of kl's Comments + Replies

kl
1y

Here is how you can get the definitive answer to this question for your particular case.

1. Make your own initial best guess about what the best discipline for you to study is, backed up by as much research as you can do. Make sure you read through some stuff from the biorisk syllabi Ben Stewart linked, study who employs biorisk researchers and what their qualifications are, and pay particular attention to the details of the careers of biorisk researchers you personally admire.

2a. Make a post to the EA Forum called "If you want to research biorisk, study X,... (read more)

Eduardo
1y
I'll do that, thank you!
kl
1y

Would you be up for making a few concrete proposals for how to factor in the optics of a contemplated action with some example cases?

freedomandutility
1y
Some illegal stuff (e.g., financial fraud for earning-to-give, or bribing politicians to prioritise EA cause areas) seems positive EV before considering optics and negative EV after considering optics. (I'm purely focusing on the effects of optics on EV here. Obviously, EV shouldn't be the only consideration when making decisions, and we should avoid doing illegal stuff even when it maximises EV, because we should follow certain deontological constraints.)

You could just break optics down into a set of smaller factors, as with any Fermi estimate: the number of people who would hear about *thing*, the proportion who would think badly of EA because of it, the proportion of those who would counterfactually have become engaged with EA, etc.
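That breakdown can be sketched as a one-line Fermi estimate. This is only an illustration of the shape of the calculation; the factor names and all numbers are made-up placeholders, not figures from the comment:

```python
# Rough Fermi-style sketch of the optics cost of a contemplated action,
# following the breakdown above. All inputs are illustrative placeholders.

def optics_cost(people_who_hear, p_think_badly_of_ea,
                p_counterfactual_engagement, value_per_engaged_person):
    """Expected impact lost: people who hear about the thing, come to think
    badly of EA because of it, and would otherwise have become engaged."""
    return (people_who_hear * p_think_badly_of_ea
            * p_counterfactual_engagement * value_per_engaged_person)

# E.g. 1,000,000 people hear about it, 10% think badly of EA as a result,
# 1% of those would counterfactually have engaged, each worth ~$10,000 of impact:
print(f"${optics_cost(1_000_000, 0.10, 0.01, 10_000):,.0f}")
```

Each factor could of course be a distribution rather than a point estimate; the point is just that "optics" decomposes into multipliable pieces like any other Fermi estimate.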
kl
1y

I'm so happy to see this here! Hugh Thompson is one of my favorite heroes of history.

What I especially love about his story is that there's nothing remarkable in his public biography besides his actions on the one day he's known for. Everyone can aspire to Thompson's model of courage. We won't all achieve perfection or reinvent ourselves in the image of a Borlaug, but we should all try to cultivate moral discernment and bravery.

I love the thought of Hugh Thompson as exemplifying some cherished EA principles.

"Why did you decide to try to save those villager... (read more)

Answer by kl · Dec 10, 2022

Thanks for posting this. I appreciate the question very much, but I don't think it's the right approach to postulate the existence of a single correct community point of view on FTX's business practices, and try to figure out what that single view would be and whether the community had the good fortune to hold it in the absence of discussion. Even if EAs had the so-called correct view, epistemic "luck" is not epistemic health. A culture of robust debate and inquiry is what matters in the long run.

In my opinion, the important things to ask are: (1) did FTX'... (read more)

smountjoy
1y
Thanks! FWIW, I completely agree with your framing. In my head the question was about debate ("did FTX look sketchy enough that we should've seen big debates about it on the forum") and I should've made that explicit. Sounds like the majority answer so far is yes, it did look that bad. My impression is also the same as yours that those debates did not happen.

And (4) -- would people have felt comfortable questioning the morality of FTX's known business, EA's reliance on FTX-derived funds, and certain leaders' endorsements of SBF without fear of ostracism or adverse effects on their careers? From a standpoint of practical psychology, I think the answer is probably not, and we need to have the discussion about which geese we are willing to accept golden eggs from before we are offered the eggs. Once they start laying many eggs, the psychological incentives not to ask questions -- and to ignore those who do -- are just too strong.

kl
1y

I like your recommendations, and I wish that they were norms in EA. A couple questions:

(1) Two of your recommendations focus on asking EAs to do a better job of holding bad actors accountable. Succeeding at holding others accountable takes both emotional intelligence and courage. Some EAs might want to hold bad actors accountable, but fail to recognize bad behavior. Other EAs might want to hold bad actors accountable but freeze in the moment, whether due to stress, uncertainty about how to take action, or fear of consequences. There's a military saying tha... (read more)

Maya D
1y
These are both very important questions. For (1), I think it honestly depends on the circumstance. For example, the same way that volunteers are often trained before EAGs and EAGxs, I could see participants receiving something (as part of the behavior guidelines) outlining scenarios and describing why each was an example of appropriate or inappropriate behavior. However, I think it would be extremely difficult to "train" all members of the EA community, as people are involved in many different capacities.

For (2), I think that, despite all situations involving interpersonal harm and conflict being unique and complex, it could be useful to have more transparency in some areas. I don't mean naming specific individuals and discussing all the details of each case; I mean something more like "X action is unacceptable and will result in Y consequence if found to be true."

Another note: my suggestions were aimed at EA community members because I truly believe that, often, people simply do not understand how their actions and words make others feel. I hope that by raising awareness of this, people will be motivated to change themselves without necessitating external conflict (although I understand that's not always the case).
kl
1y

Thank you for posting this. I was so sad to see that the recent post you linked to was removed from the forum by its author, and as depressing as the subject matter of your post is, it cheers me up that someone else is speaking up eloquently and forcefully. Your voice and experience are important to EA's success, and I hope that you will keep talking and pushing for change.

Maya D
1y
Thank you! Glad it resonated. 
kl
4y

Thanks for this post!

The upside of jargon is that it can efficiently convey a precise and sometimes complex idea. The downside is that jargon will be unfamiliar to most people.

Jargon has another important upside: its use is a marker of in-group belonging. So, especially IRL, employing jargon might be psychologically or socially useful for people who are not immediately perceived as belonging in EA, or feel uncertain whether they are being perceived as belonging or not.

Therefore, when first using a particular piece of jargon in a conversation, post, o

... (read more)
MichaelA
4y
Yes, I think these are all valid points. So my suggestion would indeed be to often provide a brief explanation and/or a link, rather than to always do that. I do think I've sometimes seen people explain jargon unnecessarily in a way that's a bit awkward and presumptuous, and perhaps sometimes been that person myself.

In my articles for the EA Forum, I often include just links rather than explanations, as that gives readers the choice to get an explanation if they wish. And in person, I guess I'd say that it's worth:

* entertaining both the hypothesis that using jargon without explanation would make someone feel confused/excluded, and the hypothesis that explaining jargon would make the person feel they're perceived as more of a "newcomer" than they really are
* then trying to do whatever seems best based on the various clues and cues
* with the options available including more than just "assume they know the jargon" and "assume they don't and therefore do a full-minute spiel on it"; there are also options like giving a very brief explanation that feels natural, or asking if they've come across that term

One last thing I'd say is that I think the fact jargon is used as a marker of belonging is also another reason to sometimes use jargon-free statements or explain the jargon, to avoid making people who don't know the jargon feel excluded. (I guess I intended that point to be implicit in saying that explanations and/or hyperlinks of jargon "may make [people] feel more welcomed and less disorientated or excluded".)
kl
5y

I love the idea of gathering this information. But would EA orgs be able to answer the salary questions accurately? I particularly wonder about the question comparing salaries at the org to for-profit companies. If the org isn't paying for compensation data (as many for-profit companies do), they may not really be in a good position to make that comparison. Their employees, especially those who have always worked in nonprofits, may not even know how much they could be making. Perhaps the org could cobble together a guess via Glassdoor, but limitations of t

... (read more)
Jon_Behar
5y
I think some EA organizations will have a good sense of how their pay stacks up, while others won’t have a good reference. One of the benefits of starting to collect info is that these latter organizations will be able to make more informed decisions. Granular salary data would be terrific as you note, but I'm a bit concerned over how time consuming that could be for organizations to provide. It’ll also be important to supplement any data from EA employers with survey data from EA job seekers too; I doubt we’ll get a clear picture from one source alone.
kl
5y

I think "competitors" for key EA orgs, your point #2, are the crux here. No matter how smart and committed you are, without competitors there is less pressure on you to correct your faults and become the best version of yourself.

Competitors for key EA orgs will also be well-positioned (in some cases, perhaps in the best possible position) to dialogue with the orgs they compete with, improving them and likely also the EA "public sphere."

I don't think an independent auditor that works across EA orgs and mainly focuses on logic would be as high a value-add as comp

... (read more)
Raemon
5y
I basically agree with this. I have a bunch of thoughts about healthy competition in the EA sphere I've been struggling to write up.