

Hi Leopold,

Thank you for the thoughtful comment! I appreciate that my experience has informed your decision-making, but in the end it’s just my experience, so take it with a grain of salt. I also appreciate your caution; I would say that I’m also a pretty cautious person (especially for an EA; I personally think we sometimes need a little more of that).

I will say that big and risky projects aren’t necessarily a bad thing; they’re just big and risky. So if you’ve carefully considered the risks, acknowledged that you’re committing to a big project that might not pay off, and have some contingency plans, then I think it’s fine to proceed. I just think that sometimes we get caught up in the vision and end up Goodharting toward bigger and more visionary projects rather than genuinely more effective ones (my failure mode in Spring 2023).

Best, Kenneth

This kind of reminds me of a psychological construct called the Militant Extremist Mindset. Roughly, the mindset is composed of three loosely related factors: proviolence, vile world, and utopianism. The idea is that elevated levels on all three factors are most predictive of fanaticism. I think (total) utilitarianism/strong moral realism/lack of uncertainty/visions of hedonium-filled futures fall into the utopianism category. I think EA is pretty pervaded by vile-world thinking, including reminders about how bad the world is/could be and cynicism about human nature. Perhaps what holds most EAs back at this point is a lack of proviolence, i.e., a lack of willingness to use violent means or cause great harm to others; I think this can be roughly summed up as “not being highly callous/malevolent”.

I think it’s important to reduce extremes of utopianism and vile-world thinking in EA, which I feel are concerningly abundant here. Perhaps it is impossible or undesirable to eliminate them completely. But what might be most important is something that seems fairly obvious: try to screen out people who are capable of willfully causing massive harm (i.e., callous/malevolent individuals).

Based on some research I’ve done, the distribution of malevolence is highly right-skewed, so screening for malevolence probably affects the fewest individuals while still being highly effective. It also seems that callousness and a willingness to harm others for instrumental gain are associated with abnormalities in more primal regions of the brain (like the amygdala) and are highly resistant to interventions; changing the culture is therefore very unlikely to robustly “align” such people. And intuitively, a willingness to cause harm seems to be the most crucial component, while the other components seem more to channel malevolence in a fanatical direction.
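To make the skew point concrete, here is a minimal numerical sketch. The lognormal shape and every parameter below are illustrative assumptions I picked for the example, not empirical estimates:

```python
# Toy model: if harm-propensity is heavily right-skewed, screening out a tiny
# top fraction of people can remove a disproportionate share of expected harm.
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "propensity to cause harm" scores; a lognormal with sigma=2
# gives the kind of heavy right skew described above.
scores = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

cutoff = np.quantile(scores, 0.99)      # screen out only the top 1%
screened_out = scores > cutoff

share_of_people = screened_out.mean()
share_of_harm = scores[screened_out].sum() / scores.sum()

print(f"Screened out {share_of_people:.1%} of people, "
      f"removing {share_of_harm:.1%} of total modeled harm.")
```

Under these made-up parameters, the top 1% of scorers account for roughly a third of the total modeled harm, which is the intuition behind screening being cheap relative to its effect.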

Sorry, I’m kind of just rambling and hoping something useful comes out of this.

TLDR: Recent graduate with a B.S. in Psychology and a certificate in Computer Science. Looking for opportunities that involve (academic) research, writing, and/or administrative/ops work.

Skills & background: ~6 months doing academic research for Polaris Ventures on malevolence (dark personality traits). Before that, I was a leader of the UW–Madison chapter of EA and President of its chapter of Effective Animal Advocacy. I also have a Substack where I write mostly about EA and philosophy. I have experience writing articles for both academic and lay audiences, writing newsletters, and coordinating events.

Here's my Substack: https://kennethdiao.substack.com/
I should also have a forum post out soon that will showcase more of my research aptitude.

Location/remote: Would prefer remote but willing to relocate. I'm currently based in the Twin Cities, MN.

Availability & type of work: Currently, I am quite available and can start immediately. I am interested primarily in paid part-time (or full-time) opportunities, though I'm also open to volunteering. 

Resume/CV/LinkedIn: https://www.linkedin.com/in/kenneth-diao-292b02168/

Email/contact: kenneth.diao@outlook.com

Other notes: My principal cause areas are animal advocacy and suffering reduction, though I'm also interested in learning more about AI governance. My (admittedly fuzzy) vision for my eventual role is one that involves writing and research close enough to the public and policy worlds to stay grounded and have a concrete impact. I'm hoping my next couple of roles will help me test my fit and develop the aptitudes/career capital to reach that stage.


Some questions I'm currently mulling over:
  • Is it generally effective to go into academia?
    • Am I a good fit for an academic environment?
  • How should timelines (not) impact my decisions?
  • How effective is writing for public audiences?
  • How can I become a researcher who impacts policy (e.g., working at a think tank)? Do I need a policy or law degree?

Thanks everyone!

Hi Rob,

Thank you for writing this post. I am also highly disappointed that no institutional post-mortem has been conducted, so I'm glad that you're speaking out about it. Now that SBF's verdict has officially been handed down, there's no longer any excuse for not conducting an investigation.

Maybe somehow there are good excuses (and yes, they are excuses) for why a formal investigation has not taken place. But no matter how florid or sophisticated they are, they won't change my mind that a public investigation should take place. Pretty much no matter what, the reputation of the core EA leadership is going to take a hit if no public and formal investigation is carried out, at least in my eyes.

Regarding the comments about psychopathy/sociopathy: I recently did a good deal of research on malevolence, so I feel confident speaking on the subject. "Sociopathy" seems to be the less well-defined term, so I would somewhat advise against using it, at least until greater clarity arises. Psychopathy, however, is a fairly established construct in the literature, with a few widely used instruments from academia, so if you're choosing between the two terms, I would use psychopathy. But even psychopathy is a pretty confused term, because it captures many different characteristics (including callousness, grandiosity, impulsivity, and criminality) that don't necessarily coincide. My opinion is that the cleanest way of talking about all this is to list out more specific and well-defined traits, such as callousness.

But, and I stress this, just because he wasn't a violent criminal doesn't prove he was a good, compassionate person. Neuroscientific evidence suggests that deficiencies in empathy/caring for others have distinct origins from violent or socially unacceptable behavioral expressions. Indeed, the main distinction between psychopathy and Antisocial Personality Disorder (ASPD) is that psychopathy includes a component that does not theoretically relate to violent or socially unacceptable behavior (according to an authority on psychopathy). It would be most adaptive for a person to abide by the most explicit and universal social norms (e.g., don't kill people) while still doing harm in covert, neutral, or even socially desirable ways (e.g., being the CEO of a giant meat company). This is the type of malevolent person I expect SBF is, if he is indeed malevolent.

I also intend to publish a post on this topic, but I thought I'd clarify here since I saw a discussion regarding sociopathy in the comments.

Hi Brian,

I'm honored that you read my article and thought it was valuable!

For the record, I also think that it's good to know the truth. Maybe I wish it weren't necessary for us to know about these things, but I think it is, and I much prefer knowing about something, and thus being able to act in accordance with that knowledge, to not knowing about it. So yeah, don't let my adverse reaction fool you; I love your work and admire you as a person.

Regarding love and hatred, the points you brought up do make me think. I try to always keep an evolutionary perspective in mind; that is, I tend to assume something is adaptive, especially if it has survived across vast stretches of time. So I think that, at least in certain environments, things like the dark tetrad traits (narcissism, Machiavellianism, psychopathy, sadism) are adaptive even at the group level; maybe they reach some kind of local maximum of adaptiveness. My hope is that there is a better way to retain the adaptive behavioral manifestations of these traits while avoiding their volatile and maladaptive aspects, and my belief is that we can approach this by having more correct motivations. I really idealize the approaches of people like Gandhi and MLK, who recognized the wrongness of the status quo while trying to create positive change with love and peace; I believe we need more of that. That being said, I take your point that darkness and hate can lead to love and a reduction in hatred, and that this may always be true, especially in our non-ideal world.

Hi Alfredo, thanks for reading and suggesting those articles! I've skimmed the logarithmic scales article and for sure find that terrifying and depressing. All the more reason to lighten that heavy tail!

Thank you for writing about this. I am definitely a person whose concerns about AI are primarily about the massive suffering AI systems might cause, especially to already-marginal entities or potential entities like non-human animals or digital minds.

I'll note up front that I'm suffering-focused, but I also think that even a regular utilitarian using expected-value reasoning could reach the same conclusions I do.

I'm curious as to why this isn't a greater focus in the AI Safety community. At least from my vantage point and recollection, over 90% of the people who talk about AI Safety focus exclusively on the threat AI poses to the continued existence of humanity. When they elaborate at all on what's at stake in the far future, they emphasize the potential good that could come from massive populations in immense states of bliss, which could be lost if we are destroyed (again, this is just my experience).

I think this rests on the assumption that there is a high likelihood (say, >90% confidence) that humanity, should it survive, will become a force for net good in the long-term future. I think that, at the very least, this crux should be tested more than it currently is. I would argue that present-day humanity is almost certainly (>99% confidence) net harmful: factory farming alone is an immense harm that is hard to argue any good humans do outweighs. I would also argue, with similar confidence, that humanity's net impact has been consistently negative at least since the agricultural revolution (mistreatment and exploitation of non-human animals, slavery, and war, to name a few major things).

Suffice it to say that I would be very worried if an AGI were locked in with the values of a randomly selected person today (I know some AGI timelines are quite short), or even a randomly selected person 100 years from now (assuming we survive that long), especially if they decide to keep us alive. I can't give an estimate of how confident I am that humanity's continued existence with AGI would be a good or bad thing. However, I agree that the suffering risk from AGI is not emphasized in proportion to its expected consequences, and I'm curious to hear EA/AI Safety perspectives on this topic.
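To illustrate the expected-value point, here is a toy calculation. Every number is a hypothetical placeholder I invented for the example, not an estimate anyone has defended:

```python
# Toy expected-value comparison: even without a suffering-focused view,
# asymmetric stakes can make a locked-in bad future dominate the calculation.
# All inputs below are made-up placeholders.
p_good = 0.5        # chance the long-term future is net positive
p_bad = 0.5         # chance it is net negative (e.g., locks in vast suffering)
value_good = 1.0    # value of the good outcome (normalized units)
value_bad = -10.0   # bad outcome assumed 10x larger in magnitude

ev = p_good * value_good + p_bad * value_bad
print(f"EV under these assumptions: {ev:+.2f}")  # comes out negative here
```

The point is only structural: if bad outcomes can be much larger in magnitude than good ones, a symmetric utilitarian's expected value can still come out negative, so the real crux is the plausibility of the inputs.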

I'll also quickly throw in the idea of humans deliberately creating malicious AGI to serve their own ends, an idea I've heard around a few times but know practically nothing about. I will say, though, that I think the potential for such a scenario to arise and then become an S-risk is non-negligible (though I can't give a good estimate or back this with anything more than intuition).