Ulrik Horn

237 · Joined Apr 2021



I am currently working on an initiative to build a refuge (a.k.a. bunker, bioweapons shelter, etc.). So far I have been working on this refuge alongside my full-time job and two small kids, but I look forward to making it my full-time job in April 2023.

My EA journey started in 2007 as I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about my conclusion that climate change was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued E2G on my modest income, donating ~USD 20k to AMF (with another ~USD 15k to be donated this year). I have also done some limited volunteering to build the EA community here in Stockholm, Sweden.

How others can help me

Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should consider working directly on EA priority causes instead. Thus, I am determined to seek opportunities for entrepreneurship within EA, especially considering whether I could contribute to launching new projects. If you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project – my time might be better spent getting another project up and running and handing over the reins of my current project to a successor)!

How I can help others

I can share my experience working at the intersection of people and technology in deploying wind energy globally. I can also share my experience in coming from "industry" and doing entrepreneurship for direct work. Or anything else you think I can help with.

I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to help make EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).


Thanks for responding. Take only what you think is useful from my comments - you have thought much more deeply about this than I have and seem on top of the issues I have raised. Just a couple of responses in case it might be helpful (otherwise please disregard them):

  1. Sorry, I have not seen such numbers. I just thought there might be some lying around somewhere, e.g. results from surveys. I actually think the best number might be the rate among E2G EAs who pursue for-profit entrepreneurship – I am not sure even the quant traders have a high probability of becoming billionaire donors. But this number might be even harder to come by.
  2. I think I would not exclude the Ivy League base rate. Instead, some possibilities could be (and please disregard this if it does not seem promising – I have not thought deeply about it!): 
    1. Perhaps one path could be to discard the EA base rate instead. My intuition is that the number of EAs who later become billionaires is so low that a base rate calculated from it does not carry much weight (I am not sure "statistical significance" is the right term here, but something close to it). Instead, one could use an adjusted Ivy League base rate – adjusting it based on assumptions about "strength of talent", the fraction of the population that pursues becoming rich, and maybe some other factors, which would lower the final estimate.
    2. Alternatively, keep both base rates but still adjust the Ivy League base rate downwards due to the observations I made. That should also lower the final estimate.
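To make options (1) and (2) concrete, here is a minimal Python sketch. Every number in it is a made-up placeholder for illustration – none of these rates, weights, or adjustments come from real survey or base-rate data:

```python
# All numbers below are hypothetical placeholders, purely for illustration.
ivy_base_rate = 1e-3   # assumed P(billionaire | Ivy League graduate)
ea_base_rate = 1e-4    # assumed P(billionaire | EA), from very few observations

# Option (1): discard the noisy EA base rate and adjust the Ivy rate down
# for assumed differences in "strength of talent" and in the fraction of
# people actively pursuing wealth.
talent_adjustment = 0.5
pursuit_adjustment = 0.2
estimate_a = ivy_base_rate * talent_adjustment * pursuit_adjustment  # ~1e-4

# Option (2): keep both base rates, but give the EA rate little weight
# (it rests on very few observations) and still adjust the Ivy rate down.
ea_weight = 0.1
estimate_b = ea_weight * ea_base_rate + (1 - ea_weight) * ivy_base_rate * pursuit_adjustment  # ~1.9e-4

print(estimate_a, estimate_b)
```

Either way the adjusted estimate lands well below the raw Ivy League rate, which is the point of both options; the particular weights are where all the disagreement would live.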

Your point about having a simple model is a good one – I am not sure how much more accurate the forecast would be with a more complex model. And I think you point out well in the post that one should not lean too heavily on the model but take other sources of evidence into consideration.

Some perhaps naïve observations/questions:

  1. Is the P(billionaire|effective altruist) number fairly calculated? Most of your model seems to assume EA billionaires are people who first become an EA, then a billionaire. You do mention this being important, but it did not seem to be integrated into the model. Perhaps it would make sense to instead make two categories and predict each one separately:
    1. One category is billionaires who become EAs later (like Moskovitz). Maybe a base rate here could be EA billionaires / all billionaires in the world. Then you could estimate how many more billionaires there will be by 2027 and, from that, how many of them will later decide to donate to EA.
      1. Another refinement here could be to get a sense of how many current billionaires have heard of EA – maybe outreach here is poor, and current non-EA-donating billionaires might start donating in the future (like Musk, although he has heard about EA).
    2. The other category is EAs who become billionaires. I am not sure who that currently is, but it would have been SBF if that had not gone so badly. Here I would perhaps even use the number of E2G EAs, rather than the overall number of EAs, to calculate the base rate.
  2. Related to point 1.a.i above – I feel like using the Ivy League rate introduces bias. At least at UPenn, where I went, a seemingly very high proportion of graduates tried to get rich – I feel like this proportion might be in the 20%–70% range. This seems to differ from EA, where the proportion of people really trying to get rich is probably closer to 5%–20% (maybe the EA survey has ways to find out). Also, if the proportion of Ivy League graduates who become billionaires is significantly higher than for graduates of all universities, perhaps the rate is not applicable to EA. I am not sure what the makeup in terms of academic credentials is in the 9,500-EA number you use, but it might be reasonable to be less optimistic about EAs' ability to become billionaires (one anecdotal reason from my experience is that the network one builds at an Ivy League school seems to be a major factor in billionaire success).
  3. A last, and I think minor, point is the timeline. For the number of future EA billionaires you expect to come from future EAs becoming billionaires, you might want to take into account the time it takes from becoming an EA, through deciding to E2G, to finally starting to execute on a plan to become rich. That could take several years, meaning your number might be a bit optimistic for 2027 – but it might then be a good estimate for ~2035. One way to get information on this parameter could be to look at Crunchbase data on the time from incorporation to a billion-dollar valuation, and then add a few years for the time before incorporation.
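The timeline lag in point 3 can be sketched with equally made-up numbers – every lag below is an assumption for illustration, not measured Crunchbase data:

```python
# All lags are assumed placeholder values, not measured data.
ea_to_e2g_years = 2            # assumed: joining EA -> committing to E2G
e2g_to_founding_years = 1      # assumed: committing -> incorporating a startup
founding_to_billion_years = 8  # assumed: incorporation -> ~$1B valuation

total_lag_years = ea_to_e2g_years + e2g_to_founding_years + founding_to_billion_years

# Someone becoming an EA in 2023 would, under these assumed lags,
# plausibly first reach billionaire status around:
earliest_year = 2023 + total_lag_years
print(earliest_year)  # 2034 under these assumptions
```

With lags of this rough size, new EAs deciding to pursue wealth today contribute little probability mass before 2027, which is why the 2027 number may be optimistic while a ~2035 number looks more reasonable.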

That said, I find your analysis super helpful. I am using it to get a sense of the likelihood of financing a bioweapons shelter/refuge in the next few years, and after reading it I became much more enthusiastic about this being possible (I previously used the expected value of Founders Pledge and was quite pessimistic about significant funding being available 5–10 years from now). So thanks a ton for doing this analysis and posting it!

One idea from a layperson's perspective: just be open with participants about the reason for a remote location for the retreat (or any other "manipulative" tactic).

In Buddhism, especially for newcomers, there is a practice of strengthening one's dedication to Buddhist practice. It is usually taught with a clear message that "performing these practices will alter your motivations and what you value most in life." Crucially, these disclosures should come before people sign up, so they do not first learn about them when they are "trapped" on a retreat in a remote location.

I think we already do well on this front regarding value drift – we are consciously trying to mold ourselves into the people we wish to become.

Could someone please expand on the relevance of this post to EA? It is not clear to me how it contributes to the discussion around Bostrom nor what the "important means of improving the world" are.

I do not think this was the intention behind this post at all, but it struck me that this analysis could perhaps be used to refute the often-touted "it is cheaper to help immigrants where they came from" argument.

Absolutely agree! I like that Rethink Priorities explicitly asks people not to include a photo in their applications (if I remember correctly).

This seems somewhat overconfident, given what seem to me like robust results from e.g. resume experiments:

> I doubt he treats people significantly differently on the basis of race

I acknowledge that you did use the phrase "significantly differently", but I would still not consider discrimination in hiring processes insignificant. That said, I do not think this detracts much from your overall argument, and I might be nitpicky here, but it felt important to point out. In other words: I would expect most people to treat others somewhat differently based on race – to a degree I would call significant.

I am no expert in this – perhaps there is a solid refutation of the resume experiment results. I would be happy to learn that my current understanding is wrong, as it would indicate a less discriminatory world!

I can relate. I like fixing things and being helpful, and I like to know that what I am helping with is important. However, I am not sure I should spend significant time understanding the philosophical debates in EA, nor get into the details of cause prioritization. Instead, I think I should put my head down and try to fix something it seems I might be able to help with.

I am happy to defer to experts on philosophy and cause prioritization. EA seems so much better than the state of affairs outside EA, where there is very little information about whether what you are working on is among the most important things you could work on (everyone outside EA seems to be yelling about climate change, capitalism, species going extinct, water, etc., but nobody I came across could articulate clearly why "their" problem was more important than the others).

I kind of see EA as a menu of important problems to work on - with several of them seeming like something I can help with. It is fantastic!

I am not sure if it is still being maintained as much, but there is also the Donations List Website. I have not looked enough into the two to understand whether they are doing similar things, but it sounded similar.

Perhaps a bit controversial, but I am grateful for all the journalists, critical voices and others spending their valuable time vetting the EA movement. While not all their findings are communicated in the most productive manner, I am happy that the community I am associating with is being thoroughly investigated, so that I can feel more comfortable associating with EA. This is not what I am most grateful for, but I feel their work is somewhat underappreciated, perhaps because it is quite painful for many of us.
