Researcher @ Shapley Maximizers
11887 karma · Joined Nov 2018


I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers OÜ. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.

I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at / rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered and more pushy in terms of curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own. 

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.

I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed a search tool which aggregates predictions from many different platforms, and which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when FTX collapsed. 

Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor on various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <>, or subscribe to my posts' RSS here: <>


Vantage Points
Estimating value
Forecasting Newsletter


Topic contributions

I really liked it

This is an understatement. At the time, I thought they were the best teachers I'd ever had; the course majorly influenced my perspective on life, provided useful background knowledge, etc.

I have a review of two courses within it here. I really liked it. Given your economics major, though, my sense is that you might find some of the initial courses too basic. That said, they should be free online, so you might as well listen to the first lecture, or a random one, to see whether you're getting something out of it.

Someone reminded me that I have an admonymous. If some of y'all feel like leaving some anonymous feedback, I'd love to get it and you can do so here:

No, 3% is "chance of success". After adding a bunch of multipliers, it comes to about 0.6% reduction in existential risk over the next century, for $8B to $20B.

I happen to disagree with these numbers because I think the effectiveness numbers for x-risk projects are too low. E.g., for the "Small-scale AI Misalignment Project": "we expect that it reduces absolute existential risk by a factor between 0.000001 and 0.000055" — that seems like a lot of zeroes to me.

Ditto for the "AI Misalignment Megaproject": $8B+ expenditure to only have a 3% chance of success (?!), plus some other misc discounting factors. Seems like you could do better with $8B.
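To make the arithmetic behind those figures concrete, here is a back-of-the-envelope sketch. The 3% success chance, the ~0.6% final figure, and the $8B–$20B cost come from the comment above; the 0.2 conditional-reduction multiplier is a hypothetical value back-solved so that the numbers line up, standing in for the "bunch of multipliers" in the original estimate.

```python
# Hypothetical reconstruction of the estimate discussed above.
p_success = 0.03                    # stated "chance of success"
reduction_if_success = 0.2          # hypothetical combined multiplier (assumed)

expected_reduction = p_success * reduction_if_success
print(f"{expected_reduction:.1%}")  # ~0.6% absolute x-risk reduction

# Cost per basis point of existential risk reduced:
cost_low, cost_high = 8e9, 20e9
bp = expected_reduction * 10_000    # 0.6% = 60 basis points
print(f"${cost_low / bp:,.0f} to ${cost_high / bp:,.0f} per basis point")
```

Framed this way, the disagreement is about whether the multipliers (and hence the ~$130M–$330M per basis point figure) are too pessimistic for an $8B+ project.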

In case it's of interest, you can see some similar algebraic manipulations here:, as well as some explanations of how to get a normal from its 95% confidence interval here:
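The linked explanation isn't reproduced here, but the standard derivation is short: if [lo, hi] is the central 95% interval of a normal, then the mean is the midpoint and the standard deviation is the half-width divided by the 97.5th-percentile z-score (≈ 1.96). A minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

def normal_from_95ci(lo: float, hi: float) -> tuple[float, float]:
    """Recover (mu, sigma) of a normal whose central 95% interval is [lo, hi]."""
    z = NormalDist().inv_cdf(0.975)  # ≈ 1.96
    mu = (lo + hi) / 2
    sigma = (hi - lo) / (2 * z)
    return mu, sigma

mu, sigma = normal_from_95ci(10, 30)
print(mu, sigma)  # 20.0 and roughly 5.1
```

As a sanity check, the fitted normal's own 2.5th and 97.5th percentiles land back on the interval endpoints.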

Manifund funding went to... LTFF

This is explained by LTFF/Open Philanthropy doing the matching, which I consider misguided: it diverts funding from other places for no clear gain. A lump sum would have been a better option.

To elaborate a bit on the offer in case other people search the forum for printing to pdfs, this happens to be a pet issue. See here: for a way to compile a document like this to a pdf like this one. I am very keen on the method. However, it requires people to be on Linux, which is a nontrivial difficulty. Hence the offer.

I have extracted top questions to here: with the Linux command at the top of the page. Hope this is helpful enough.
