I'm an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.

In the past, I've studied Maths and Philosophy, then dropped out in exasperation at the inefficiency; picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor on various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and I remain keenly interested in Spanish poetry.

I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on

With regard to forecasting, I am LokiOdinevich on GoodJudgementOpen and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also enjoy winning bets against people too confident in their beliefs.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and I'm working on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." You can share feedback anonymously with me here.


Forecasting Newsletter


An experiment to evaluate the value of one researcher's work

Yeah, I agree that for forecasting setups self-fulfilling prophecies/feedback loops can be a problem, but it seems likely that they can be mitigated with a certain amount of exploration (e.g., occasionally trying things you'd expect to fail in order to test your system).

It's also not clear that this type of evaluation is worse than the alternative, informal evaluation system. For example, with a formal evaluation system you'd be able to pick up high-quality outputs even if they come from otherwise low-status people (and then give them further support).

NunoSempere's Shortform

Sure. So I'm thinking that for impact, you'd have sort of causal factors (Scale, importance, relation to other work, etc.) But then you'd also have proxies of impact, things that you intuit correlate well with having an impact even if the relationship isn't causal. For example, having lots of comments praising some project doesn't normally cause the project to have more impact. See here for the kind of thing I'm going for.

An experiment to evaluate the value of one researcher's work

Note that none of the projects wound up with a negative score, for example. I'm sure that at least one really should have if we were clairvoyant, although it's not obvious to me which one at this point.

"Expected Error, or how wrong you expect to be" ended up with a −1 because of the negative comments.

An experiment to evaluate the value of one researcher's work

Thanks! Willingness to pay is an interesting proxy; I'll keep it in mind. In particular, I imagine that it consolidates some intuitions, or makes them more apparent, though it probably won't help if your intuitions are just wrong.

NunoSempere's Shortform

Here is a more cleaned up — yet still very experimental — version of a rubric I'm using for the value of research:


  • Probabilistic
    • Probability of producing an output which reaches its goals
      • Past successes in area
      • Quality of feedback loops
      • Personal motivation
    • Probability of being counterfactually useful
      • Novelty
      • Neglectedness
  • Existential
    • Robustness: Is this project robust under different models?
    • Reliability: If this is a research project, how much can we trust the results?


  • Overall promisingness (intuition)
  • Scale: How many people affected
  • Importance: How important for each person
  • (Proxies of impact):
    • Connectedness
    • Engagement
    • De-confusion
    • Direct applicability
    • Indirect impact
      • Career capital
      • Information value

Per Unit of Resources

  • Personal fit
  • Time needed
  • Funding needed
  • Logistical difficulty

See also: Charity Entrepreneurship's rubric, geared towards choosing which charity to start.

A toy model for technological existential risk

So you'd in general be correct in applying Laplace's law to this kind of scenario, except that you run into selection effects (keywords to Google are anthropic effect or anthropic principle). I.e., suppose that the chance of human extinction were actually much higher, on the order of 10% per year. Then, after 250 years, Earth will probably not have any humans left, but if it does, and its inhabitants use Laplace's rule to estimate the extinction risk, they will underestimate it by a lot. That is, they can't actually update on extinction happening, because if it happens nobody will be there to update.
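To make the selection effect concrete, here is a quick back-of-the-envelope calculation (the 10%/year figure is just the hypothetical above, not an actual estimate):

```python
# Hypothetical world with a high true extinction rate.
p_true = 0.10                 # assumed extinction probability per year
years = 250

# Chance that such a world survives long enough to produce observers:
p_survive = (1 - p_true) ** years
print(p_survive)              # ~3.6e-12: almost no worlds make it

# Yet the rare survivors, applying Laplace's rule of succession to
# 250 event-free years, would estimate the yearly risk as:
laplace_estimate = 1 / (years + 2)
print(laplace_estimate)       # ~0.004, far below the true 0.10
```

So conditioning on survival, Laplace's rule is off by more than an order of magnitude in this hypothetical.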

There is a magic trick where I give you a deck of cards, tell you to shuffle it and choose a card however you want, and then I guess it correctly. Most of the time it doesn't work, but on the 1/52 chance that it does, it looks really impressive (or so I'm told; I didn't have the patience to do it enough times). There is also a scam based on a similar principle.

On the other hand, Laplace's law is empirically really quite brutal, and in my experience tends to output probabilities that are too high. In particular, I'd assign some chance to there being no black balls, and that would eventually bring my probability of extinction close to 0, whereas Laplace's law always predicts that an event will happen if given enough time (even if it has never happened before).
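As a sketch of that contrast: a minimal Bayesian comparison, assuming a 50/50 mixture prior between "the event is impossible" and a uniform prior over its per-period probability (the 50/50 split is an arbitrary illustrative choice):

```python
from fractions import Fraction

def laplace(n):
    # Laplace's rule of succession: P(event in the next period)
    # after n periods with no event.
    return Fraction(1, n + 2)

def mixture(n, p0=Fraction(1, 2)):
    # Mixture prior: mass p0 on "the event is impossible"
    # (theta = 0), the rest uniform on theta in [0, 1].
    # Marginal likelihood of n event-free periods:
    #   theta = 0 branch: 1;  uniform branch: 1/(n+1)
    like_zero = 1
    like_unif = Fraction(1, n + 1)
    # Posterior predictive of the event next period; only the
    # uniform branch contributes:
    #   integral of theta * (1-theta)^n dtheta = 1/((n+1)(n+2))
    num = (1 - p0) * Fraction(1, (n + 1) * (n + 2))
    den = p0 * like_zero + (1 - p0) * like_unif
    return num / den

n = 250  # periods observed without the event
print(float(laplace(n)))   # ~0.0040
print(float(mixture(n)))   # ~0.0000157
```

With the mass on "impossible", the predicted probability is driven toward 0 as event-free periods accumulate, instead of decaying only as 1/(n+2).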

Overall, I guess I'd be more interested in trying to figure out the pathways to extinction and their probabilities. For technologies which already exist, that might involve looking at close calls, e.g., nuclear close calls.

What is the marginal impact of a small donation to an EA Fund?

You can think about this in terms of Shapley values, rather than counterfactual/marginal values, and you'll get an answer which is, in my opinion, less confused. 

In particular, suppose that 2001 people donate $5 to an EA fund, and the fund gives out a grant of $100k. Then I claim that you should think of the impact of each $5 as closer to $100k/2001 (~the Shapley value), rather than as 0 (the counterfactual value of each donation: without any one $5 donation, the $100k grant would still have gone through). 

In other words, you might want to think of the 2001 $5 donors as coming together to enable one big $100k grant and sharing its impact, rather than emphasizing that each one's counterfactual/marginal impact is low. 
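A small toy computation of this, using a symmetric threshold game as a stand-in for the donation dynamics (the 5-donor version is just to keep exact enumeration cheap; the numbers are illustrative):

```python
from itertools import permutations
from math import factorial

def shapley_threshold(n, k, V):
    # Exact Shapley values for a symmetric threshold game:
    # a coalition of size >= k unlocks value V, otherwise 0.
    v = lambda size: V if size >= k else 0
    shap = [0.0] * n
    for order in permutations(range(n)):
        for pos, player in enumerate(order):
            # Marginal contribution of `player` joining at this point.
            shap[player] += v(pos + 1) - v(pos)
    return [s / factorial(n) for s in shap]

# Toy version: 5 donors, grant fires once 3 have given.
vals = shapley_threshold(n=5, k=3, V=100)
print(vals)  # each donor gets 100/5 = 20.0

# Counterfactual value of any one donor: v(all) - v(all but one) = 0,
# since the remaining 4 still clear the threshold of 3.
# Scaled up: 2001 donors sharing a $100k grant -> ~$49.98 each.
print(100_000 / 2001)
```

The threshold k turns out not to matter for symmetric players: each is equally likely to be the pivotal k-th donor in a random ordering, so the Shapley value is always V/n, even though each individual counterfactual is 0.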

Interrogating Impact

The Giving Pledge, which Bill Gates and Warren Buffett started, allegedly has pledges worth $1.2 trillion.
