Hauke Hillebrandt

CEO @ hauke.substack.com
3578 karma · Joined Dec 2014 · Working (6-15 years) · London, UK


Follow me on hauke.substack.com 

I'm an independent researcher working on EA topics (Global Priorities Research, Longtermism, Global Catastrophic Risks, and Economics).

How others can help me

Looking for collaborators, hires, job offers, or grant funding.

How I can help others

I can give advice and offer research collaborations.

My current research projects:


AI Competition


It's AI-generated with Gemini 1.5 Pro. I had initially indicated that, but then had formatting issues, had to repaste, and forgot to add the note back - now fixed.

Reimagining Malevolence: A Primer on Malevolence and Implications for EA - AI Summary

This extensive post delves into the concept of malevolence, particularly within the context of effective altruism (EA).

Key points:

Defining Malevolence:

The post critiques the limitations of the Dark Triad/Tetrad framework and proposes the Dark Factor (D) as a more comprehensive model. D focuses on the willingness to cause disutility to others, encompassing traits like callousness, sadism, and vindictiveness.

The post also distinguishes between callousness (lack of empathy) and antagonism (active desire to harm), and further differentiates reactive antagonism (vengefulness) from instrumental antagonism (premeditated harm for personal gain).

Why Malevolence Persists:

Despite its negative consequences, malevolence persists due to evolutionary factors such as varying environmental pressures, frequency-dependent selection, and polygenic mutation-selection balance.

Chaotic and lawless environments tend to favor individuals with malevolent traits, providing them with opportunities for power and survival.

Factors Amplifying Malevolence:

  • Admiration: The desire for power and recognition can drive individuals to seek positions of influence, amplifying the impact of their malevolent tendencies.
  • Boldness: The ability to remain calm and focused in stressful situations can be advantageous in attaining power.
  • Disinhibition/Planfulness: A balance of impulsivity and self-control can be effective in achieving goals, both good and bad.
  • Conscientiousness: Hard work and orderliness contribute to success in various domains, including those with potential for harm.
  • General Intelligence: Higher intelligence can enhance an individual's ability to plan and execute harmful actions.
  • Psychoticism: Paranoia and impaired reality testing can lead to harmful decisions and actions.

Recommendations for EA:

  • Screening: Implementing psychometric measures to assess malevolence in individuals seeking positions of power.
  • Awareness: Recognizing that malevolence is not always linked to overt antisocial behavior or mental illness.
  • Intervention: While challenging, interventions should ideally target the neurological and biological underpinnings of malevolence, particularly during early development.
  • EA Community: While EA's values and selection processes may offer some protection against malevolent actors, its emphasis on rationality and risk-neutrality could inadvertently attract or benefit such individuals. Vigilance and robust institutions are crucial.
  • Compassion and Action: The post concludes by acknowledging the complexity of human nature and the potential for evil within all individuals. However, it emphasizes the need to draw lines and prevent individuals with high levels of malevolence from attaining positions of power. This requires a combination of compassion, understanding, and decisive action to safeguard the well-being of society.

Great comment - thanks so much!

Regarding CCEI's effect of shifting deploy$ to RD&D$:

  • Yes, in the Guesstimate model the confidence interval ranged from 0.1% to 1%, lognormally distributed, with a mean of ~0.4%
  • With UseCarlo I used a metalog distribution with parameters 0%, 0.1%, 2%, 10%, resulting in a mean of ~5%
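As a quick sanity check, the ~0.4% mean of the Guesstimate input can be reproduced analytically. This is a sketch that assumes Guesstimate treats the 0.1% to 1% range as the 5th/95th percentiles (a 90% CI) of a lognormal distribution:

```python
import math

# Assumption: 0.1% and 1% are the 5th/95th percentiles of a 90% CI
# on a lognormal distribution, as in Guesstimate's default input style.
lo, hi = 0.001, 0.01
z90 = 1.6449  # z-score of the 95th percentile of a standard normal

# Fit the underlying normal: midpoint of the log-interval and its spread.
mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z90)

# Analytic mean of a lognormal: exp(mu + sigma^2 / 2).
mean = math.exp(mu + sigma**2 / 2)
print(f"{mean:.4%}")  # ≈ 0.40%, matching the Guesstimate figure
```

The lognormal mean sits above the geometric midpoint (~0.32%) because of the sigma²/2 term, which is why a 0.1% to 1% interval yields ~0.4% rather than ~0.3%.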

So you're right, there is indeed about an order of magnitude difference between the two estimates:

  • This is mostly driven by my assigning some credence to the possibility that CCEI might have had as much as a 10% influence, which I wouldn't rule out entirely.
  • However, the confidence intervals of the two estimates are overlapping.
  • I agree this is the weakest part of the analysis. As I highlighted, it's a guesstimate motivated by the qualitative analysis that CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D.
  • I think both estimates are roughly valid given the information available. Without further analysis, I don't have enough precision to zero in on the most likely value. 
  • I lost access to UseCarlo during the writeup, and the analysis was delayed for quite some time (I had initially pitched it to FTX as an Impact NFT).
  • I just wanted to get the post out rather than delay further. With more resources, one could certainly dig deeper and make the analysis more rigorous and detailed. But I hope it provides a useful starting point for discussion and further research. 
  • One could further nuance this analysis, e.g. by calculating the marginal effect of our $1M on US climate policy philanthropy at the ~$55M level at the time vs. the current level.

Thanks also for the astute observation about estimating expected cost-effectiveness in t/$ vs $/t. You raise excellent points and I agree it would be more elegant to estimate it as t/$ for the reasons you outlined.

I really appreciate you taking the time to engage substantively with the post.

AI Summary of the "Quick Update on Leaving the Board of EV" Thread (including comments):

Rebecca Kagan's resignation from the board of Effective Ventures (EV) due to disagreements regarding the handling of the FTX crisis has sparked an intense discussion within the Effective Altruism (EA) community. Kagan believes that the EA community needs an external, public investigation into its relationship with FTX and its founder, Sam Bankman-Fried (SBF), to address mistakes and prevent future harm. She also calls for clarity on EA leadership and their responsibilities to avoid confusion and indirect harm.

The post generated extensive debate, with many community members echoing the call for a thorough, public investigation and postmortem. They argue that understanding what went wrong, who was responsible, and what structural and cultural factors enabled these mistakes is crucial for learning, rebuilding trust, and preventing future issues. Some point to the concerning perception gap between those who had early concerns about SBF and those who seemingly ignored or downplayed these warnings.

However, others raise concerns about the cost, complexity, and legal risks involved in conducting a comprehensive investigation. They worry about the potential for re-victimizing those negatively impacted by the FTX fallout and argue that the key facts may have already been uncovered through informal discussions.

Alternative suggestions include having multiple individuals with relevant expertise conduct post-mortems, focusing on improving governance and organizational structures, and mitigating the costs of speaking out by waiving legal obligations or providing financial support for whistleblowers.

The thread also highlights concerns about recent leadership changes within EA organizations. Some argue that the departure of individuals known for their integrity and thoughtfulness regarding these issues raises questions about the movement's priorities and direction. Others suggest that these changes may be less relevant due to factors such as the impending disbanding of EV or reasons unrelated to the FTX situation.

Lastly, the discussion touches on the concept of "naive consequentialism" and its potential role in the FTX situation and other EA decisions. The OpenAI board situation is also mentioned as an example of the challenges facing the EA community beyond the FTX crisis, suggesting that the core issues may lie in the quality of governance rather than a specific blind spot.

Overall, the thread reveals a community grappling with significant trust and accountability issues in the aftermath of the FTX crisis. It underscores the urgent need for the EA community to address questions of transparency, accountability, and leadership to maintain its integrity and continue to positively impact the world.

What are the most surprising things that emerged from the thread?

Based on the summaries, a few surprising or noteworthy things emerged from the "Quick Update on Leaving the Board of EV" thread:

  1. The extent of disagreement and concern within the EA community regarding the handling of the FTX crisis, as highlighted by Rebecca Kagan's resignation from the EV board and the subsequent discussion.
  2. The revelation of a significant perception gap between those who had early concerns about Sam Bankman-Fried (SBF) and those who seemingly ignored or downplayed these warnings, suggesting a lack of effective communication and information-sharing within the community.
  3. The variety of perspectives on the necessity and feasibility of conducting a public investigation into the EA community's relationship with FTX and SBF, with some advocating strongly for transparency and accountability, while others raised concerns about cost, complexity, and potential legal risks.
  4. The suggestion that recent leadership changes within EA organizations may have been detrimental to reform efforts, with some individuals known for their integrity and thoughtfulness stepping back from their roles, raising questions about the movement's priorities and direction.
  5. The mention of the OpenAI board situation as another example of challenges facing the EA community, indicating that the issues extend beyond the FTX crisis and may be rooted in broader governance and decision-making processes.
  6. The discussion of "naive consequentialism" and its potential role in the FTX situation and other EA decisions, suggesting a need for the community to re-examine its philosophical foundations and decision-making frameworks.
  7. The emotional weight and urgency conveyed by many community members regarding the need for transparency, accountability, and reform, underscoring the significance of the FTX crisis and its potential long-term impact on the EA movement's credibility and effectiveness.

These surprising elements highlight the complex nature of the challenges facing the EA community and the diversity of opinions within the movement regarding the best path forward.

He did mention the head of the FTX Foundation, which was Nick Beckstead - I'm not sure about the others, but it would still seem weird for them to say it like that. Maybe one of the younger staff members said something like 'I care more about the far future', or something along the lines of 'GiveDirectly is too risk-averse'. But I'd still say he's painting quite the stereotype of EA here.

> Pointing to white papers from think tanks that you fund isn't a good evidentiary basis to support the claim of R&D's cost effectiveness.


I cite a range of papers from academia, government, and think tanks in the appendix. You don't cite anything; aren't those just, like... your opinions?

> The R&D benefit for advanced nuclear since the 1970s has yielded a net increase in price for that technology

Are you saying that the more we invest in R&D, the higher the costs? I agree that nuclear is getting more expensive on net, but that's still compatible with R&D driving the price down relative to what it would otherwise be.

> After that, all the technology gains came from scaling, not R&D.

What about the perovskite fever of the mid-2010s?

Also there's a long lag with research. 

And historic estimates are not necessarily indicative of future gains; we should expect diminishing returns.

> Furthermore, most of the money in BIL and IRA were for demonstration projects - advanced nuclear, the hydrogen hubs, DAC credits. Notably NOT research and development. You make a subtle shift in your cost effectiveness table where you use unreviewed historic numbers on cost-effectiveness for research and development, and then apply that to the much larger demonstration and deployment dollars. Apples and oranges. The needs for low TRL tech is very different from high TRL tech.

I've simplified R&D to RD&D here, but I do cite RD&D projections - see my calculation. Do you think these numbers are off? What do you think they are? All models are wrong, as they say.

> Lastly, a Bill Gates retweet is not the humble brag you think it is. Bill has a terrible track record of success in energy ventures; he's uninformed and impulsive. Saying Bill Gates likes your energy startup is like saying Jim Cramer likes your stock. Both indicate a money-making opportunity for those who do the opposite.

That was a straightforward brag, because he has millions of followers on X. I'm quite critical of Gates; I have blogged about this here. But also, maybe we should give more credit to doing high-risk, high-reward stuff even if it doesn't work out... like Solyndra?

It's just my inside view that he carelessly, and to some extent intentionally, plays fast and loose with the truth to the point of libel by claiming that Beckstead said 'To be honest I don't care that much about poverty' and then ended the call and went off to have lunch. Stewart then framed it as if Beckstead, in a very unreflective way, just cares about 'asteroid strikes and robot overlords' - you could also call it hyperbole. I think he just couldn't bear that someone younger - 'sitting in California in his hoodie' - didn't want to give him, Rory Stewart OBE, a grant for a charity whose effectiveness he probably understands less well than Beckstead does. I have a strong prior that he misrepresented Beckstead's view on this (Beckstead used to work for GiveWell), and also because of the Sam Harris incident (which I only came across incidentally, because I sometimes hate-read Sam Harris on this topic). I thought it was worth coming out strong with my inside view, and on the spectrum from misremembering to lying, I'm more inclined to call it lying.

I think Rory Stewart is lying... he has had problems with this recently:


(Not endorsing Sam Harris here, and not saying Stewart isn't directionally correct.)

I doubt that Nick Beckstead literally said 'I don't care about poverty'.

He seems bitter that his EA org was unable to raise funds from the Future Fund even though it had a different focus area and risk profile. Now he's shoehorning his peeves into the FTX fraud.

According to the guy who wrote the 2nd book on FTX, it was a fraud from mid-’21, when:

  1. FTX lost $1B when a trader took advantage of a software bug using a token called MobileCoin. The loss would’ve wiped out all the revenue FTX had ever made. SBF told employees to count the loss as Alameda's. This concealment enabled FTX to raise ~$1B from VCs.
  2. Even with that VC money, Alameda then borrowed more from FTX (especially for the Binance buyout). “We don’t really have the money for this,” Ellison testified that she told SBF.
  3. Then, even before SBF's spending spree really got going, Ellison warned him that Alameda's debts were risky. But SBF asked her to invest an additional $3B in VC, even though Alameda had already helped itself to ~$2B from FTX users and borrowed $9B from other lenders. Alameda's biggest asset was crypto that FTX had either created itself or was pushing (FTT, etc.), and without those tokens, FTX owed ~$3B more than it had. She testified that she told SBF that if they made the investments, and the market crashed and lending firms asked for their money back, Alameda would go broke and FTX would fail. Which then happened.


If they can now pay back users thanks to the Anthropic investment, that's ex post luck.

Global development EAs were very much looking into vaccines around 2015, and both then and now it seemed that the malaria vaccine is just not crazy cost-effective, because you have to administer it more than once and it's not 100% effective - see:

Public health impact and cost-effectiveness of the RTS,S/AS01 malaria vaccine: a systematic comparison of predictions from four mathematical models

Modelling the relative cost-effectiveness of the RTS,S/AS01 malaria vaccine compared to investment in vector control or chemoprophylaxis 
