NunoSempere

I'm an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.

In the past, I studied Maths and Philosophy but dropped out in exasperation at the inefficiency; picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor on various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and I remain keenly interested in Spanish poetry.

I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on nunosempere.github.io.

With regards to forecasting, I am LokiOdinevich on GoodJudgementOpen and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also enjoy winning bets against people who are too confident in their beliefs.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and I'm working on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." You can share feedback anonymously with me here.

Sequences

Estimating value
Forecasting Newsletter

Comments

DeepMind: Generally capable agents emerge from open-ended play

My hot take: This seems like a somewhat big deal to me. It's what I would have predicted, but that's scary, given my timelines.

Might be confirmation bias. But is it?

Buck's Shortform

But if you already have this coalition value function, you've already solved the coordination problem and there’s no reason to actually calculate the Shapley value! If you know how much total value would be produced if everyone worked together, in realistic situations you must also know an optimal allocation of everyone’s effort. And so everyone can just do what that optimal allocation recommended.

This seems correct


A related claim is that the Shapley value is no better than any other solution to the bargaining problem. For example, instead of allocating credit according to the Shapley value, we could allocate credit according to the rule “we give everyone just barely enough credit that it’s worth it for them to participate in the globally optimal plan instead of doing something worse, and then all the leftover credit gets allocated to Buck”, and this would always produce the same real-life decisions as the Shapley value.

This misses some considerations around cost-efficiency/prioritization. If you look at your distorted "Buck values", you come away thinking that Buck is super cost-effective: responsible for a large fraction of the optimal plan using just one salary. If we didn't have a mechanistic understanding of why that was, trying to get more Buck would become an EA cause area.

In contrast, if credit was allocated according to Shapley values, we could look at the groups whose Shapley value is the highest, and try to see if they can be scaled.


The section about "purely local" Shapley values might be pointing at something, but I don't quite know what it is, because the example looks like Shapley values with a term missing? I don't know. You also say "by symmetry...", and then break that symmetry by saying that one of the parts would have been able to create $6,000 in value and the other $0. It needs a crisper example.


Re: coordination between people who have different values using SVs, I have some stuff here, but looking back the writing seems too corny.


Lastly, to some extent, Shapley values are a reaction to people calculating their impact as their counterfactual impact. This leads to double/triple counting impact for some organizations/opportunities but not others, which makes comparisons between them trickier. Shapley values solve that by allocating impact such that it sums to the total impact, among other nice properties. Then someone like Open Philanthropy or some EA fund can come and see which groups have the highest Shapley value (perhaps the highest Shapley value per unit of money/resources) and then try to replicate or scale them. People might also make better decisions if they compare Shapley values instead of counterfactual values (because Shapley values mostly give a more accurate impression of the impact of a position).
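
To make the "sums to the total impact" property concrete, here is a minimal sketch of an exact Shapley value computation for a toy two-organization game; the value function and the dollar figures are hypothetical, not estimates of anything real:

```python
# A minimal sketch of exact Shapley value computation for a toy coalition game.
# The value function below is hypothetical and purely illustrative.
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical value function: either org alone produces $3,000 of impact,
# both together produce $10,000 (there is synergy).
def v(coalition):
    return {0: 0, 1: 3_000, 2: 10_000}[len(coalition)]

print(shapley_values(["org_a", "org_b"], v))
# {'org_a': 5000.0, 'org_b': 5000.0} -- the credit sums to the $10,000 total.
```

This brute-force version iterates over all join orders, so it only scales to a handful of players, but that's enough to see the "credit sums to total impact" property.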

So I see the benefits of Shapley values as fixing some common mistakes arising from using counterfactual values. This would make impact accounting slightly better, and coordination slightly better to the extent it relies on impact accounting for prioritization (which tbh might not be much.)

I'm not sure to what extent I agree with the claim that people are overhyping/misunderstanding Shapley values. It seems plausible.

A Sequence Against Strong Longtermism

I think that some of your anti-expected-value beef can be addressed by considering stochastic dominance as a backup decision theory in cases where expected value fails.

For instance, maybe I think that a donation to ALLFED in expectation leads to more lives saved than a donation to a GiveWell charity. But you could point out that the expected value is undefined, because maybe the future contains infinite amounts of both flourishing and suffering. Then donating to ALLFED can still be the superior option if I think that it's stochastically dominant.

There are probably also tweaks to make to stochastic dominance, e.g., if you have the following "games":

  • Game 1: Get X expected value in the next K years, then play game 3
  • Game 2: Get Y expected value in the next K years, then play game 3
  • Game 3: Some Pasadena-like game with undefined value

then one could also have a principle where Game 1 is preferable to Game 2 if X > Y, and this also sidesteps some more expected value problems.
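
To make the stochastic dominance criterion concrete, here is a minimal sketch of a first-order dominance check between two made-up empirical outcome distributions; the numbers are purely illustrative, not estimates for ALLFED or any GiveWell charity:

```python
# A minimal sketch of a first-order stochastic dominance check between two
# empirical outcome distributions (e.g., lives saved). All numbers are made up.
import numpy as np

def stochastically_dominates(a, b):
    """True if `a` first-order dominates `b`: its empirical CDF is everywhere
    <= that of `b`, and strictly lower somewhere."""
    grid = np.union1d(a, b)
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(cdf_a <= cdf_b) and np.any(cdf_a < cdf_b))

speculative = np.array([1.0, 3.0, 10.0, 50.0])  # hypothetical outcomes
robust      = np.array([1.0, 2.0, 4.0, 5.0])    # hypothetical outcomes

print(stochastically_dominates(speculative, robust))  # True
print(stochastically_dominates(robust, speculative))  # False
```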

NunoSempere's Shortform

Notes on: A Sequence Against Strong Longtermism

Summary for myself. Note: Pretty stream-of-thought.

Proving too much

  • The set of all possible futures is infinite, which somehow breaks some important assumptions longtermists are apparently making.
    • Somehow this fails to actually bother me
  • ...the methodological error of equating made up numbers with real data
    • This seems like a cheap/unjustified shot. In a world where we could calculate the expected values, it would seem fine to compare (wide, uncertain) speculative interventions with hardcore GiveWell data (note that the next step would probably be to get more information, not to stop donating to GiveWell charities)
  • Sometimes, expected utility is undefined (Pasadena game)
    • The Pasadena game also fails to bother me, because the series hasn't (yet) shown that longtermist bets are "Pasadena-like"
    • (Also, note that you can use stochastic dominance to solve many expected value paradoxes, e.g., to decide between two universes with infinite expected value, or with undefined expected value.)
  • ...mention of E.T. Jaynes
    • Yeah, I'm also a fan of E.T. Jaynes, and I think that this is a cheap shot, not an argument.
  • Subject, Object, Instrument
    • This section seems confused/bad. In particular, there is a switch from "credences are subjective" to "we should somehow change our credences if this is useful". No: if one's best guess is that "the future is vast in size", then noting that one could change one's opinions to better attain goals doesn't make it stop being one's best guess

Overall: The core of this section seems to be that expected values are sometimes undefined. I agree, but this doesn't deter me from trying to do the most good by seeking more speculative/longtermist interventions. I can use stochastic dominance when expected utility fails me. 

The post also takes issue with the following paragraph from The Case For Strong Longtermism:

Then, using our figure of one quadrillion lives, the expected good done by Shivani contributing $10,000 to [preventing world domination by a repressive global political regime] would, by the lights of utilitarian axiology, be 100 lives. In contrast, funding for the Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500. (Nuño: italics and bold from the OP, not from original article)

I agree that the paragraph just intuitively looks pretty bad, so I looked at the context:

Now, the argument we are making is ultimately a quantitative one: that the expected impact one can have on the long-run future is greater than the expected impact one can have on the short run. It's not true, in general, that options that involve low probabilities of high stakes systematically lead to greater expected values than options that involve high probabilities of modest payoffs: everything depends on the numbers. (For instance, not all insurance contracts are worth buying.) So merely pointing out that one might be able to influence the long run, or that one can do so to a nonzero extent (in expectation), isn't enough for our argument. But, we will claim, any reasonable set of credences would allow that for at least one of these pathways, the expected impact is greater for the long-run.

Suppose, for instance, Shivani thinks there's a 1% probability of a transition to a world government in the next century, and that $1 billion of well-targeted grants — aimed (say) at decreasing the chance of great power war, and improving the state of knowledge on optimal institutional design — would increase the well-being in an average future life, under the world government, by 0.1%, with a 0.1% chance of that effect lasting until the end of civilisation, and that the impact of grants in this area is approximately linear with respect to the amount of spending. Then, using our figure of one quadrillion lives to come, the expected good done by Shivani contributing $10,000 to this goal would, by the lights of a utilitarian axiology, be 100 lives. In contrast, funding for Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500.

Yeah, this is in the context of a thought experiment. I'd still do this with distributions rather than with point estimates, but ok.
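
As a sketch of what doing this with distributions rather than point estimates might look like, here is a small Monte Carlo version of the Shivani calculation; only the point estimates come from the quoted passage, and every distribution below is made up for illustration:

```python
# A minimal sketch of redoing the Shivani calculation with distributions rather
# than point estimates. Only the point estimates come from the quoted passage;
# every distribution below is made up, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Point estimate from the passage: 1e15 * 1% * 0.1% * 0.1% * ($10,000 / $1bn).
point_estimate = 1e15 * 0.01 * 0.001 * 0.001 * (10_000 / 1e9)  # ~100 lives

# Hypothetical distributions, roughly centred on the same parameters.
future_lives    = 10 ** rng.normal(15, 1, n)    # lives to come
p_world_gov     = rng.beta(1, 99, n)            # ~1% chance of a world government
wellbeing_boost = 10 ** rng.normal(-3, 0.5, n)  # ~0.1% improvement per life
p_persist       = rng.beta(1, 999, n)           # ~0.1% chance the effect persists
spend_fraction  = 10_000 / 1e9                  # Shivani's share of the $1bn

lives = future_lives * p_world_gov * wellbeing_boost * p_persist * spend_fraction

print(point_estimate)                     # ~100 lives, matching the quoted passage
print(lives.mean())                       # mean under the made-up distributions
print(np.percentile(lives, [5, 50, 95]))  # the spread, which the point estimate hides
```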

The Credence Assumption

  • Ok, so the OP wants to argue that expected value theory breaks => the tool is not useful => we should abandon credences => longtermism somehow fails.
    • But I think that "My best guess is that I can do more good with more speculative interventions" is fairly robust to that line of criticism; it doesn't stop being my best guess just because credences are subjective.
      • E.g., if my best guess is that ALLFED does "more good" (e.g., more lives saved in expectation) than GiveWell charities, pointing out that actually the expected value is undefined (maybe the future contains both infinite amounts of flourishing and suffering) doesn't necessarily change my conclusion if I still think that donating to ALLFED is stochastically dominant.
  • Cox's theorem requires that probabilities be real numbers
    • The OP doesn't buy that. Sure, a piano is not going to drop on his head, but he might e.g., make worse decisions on account of being overconfident because he has not been keeping track of his (numerical) predictions and thus suffers from more hindsight bias than someone who kept track.
  • But what alternative do we have?
    • One can use, e.g., upper and lower bounds on probabilities instead of real-valued numbers: sure, I do that, and longtermism still doesn't break (see the sketch after this list).
  • Some thought experiment which looks like The Whispering Earring.
  • Instead of relying on explicit expected value calculations, we should rely on evolutionary approaches
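
Here is a minimal sketch of that "bounds instead of point probabilities" idea: propagate a probability interval, rather than a single credence, through an expected-value comparison. All the numbers are made up; the point is only that an interval can still settle a choice:

```python
# A minimal sketch of propagating a probability interval (rather than a single
# credence) through an expected-value comparison. All numbers are made up.
def ev_bounds(p_low, p_high, value_if_success):
    """Lower and upper bounds on expected value given only a probability interval."""
    return p_low * value_if_success, p_high * value_if_success

spec_low, spec_high = ev_bounds(1e-6, 1e-4, 1e7)  # speculative option: 10 to 1,000 lives
safe_low, safe_high = ev_bounds(0.9, 1.0, 3.0)    # robust option: 2.7 to 3 lives

# If the worst case of one option beats the best case of the other, the
# imprecision in the credences does not change the decision.
print(spec_low > safe_high)  # True with these made-up numbers
```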

The Poverty of Longtermism

  • "In 1957, Karl Popper proved it is impossible to predict the future of humanity, but scholars at the Future of Humanity Institute insist on trying anyway"
    • Come on
  • Yeah, this is just fairly bad
  • Lesson of the 20th Century
    • This is going to be an ad Hitlerum, isn't it?
      • No, an ad failures-of-communism
        • At this point, I stopped reading.

Shallow evaluations of longtermist organizations

Sure, but it was particularly salient to me in this case because the evaluation was so negative.

Shallow evaluations of longtermist organizations

In what capacity are you asking? I'd be more likely to do so if you were asking as a team member, because the organization right now looks fairly small and I would almost be evaluating individuals.

Shallow evaluations of longtermist organizations

So what I specifically meant was: It's interesting that the current leadership probably thinks that CSER is valuable (e.g., valuable enough to keep working at it, rather than directing their efforts somewhere else, and presumably valuable enough to absorb EA funding and talent). This presents a tricky updating problem, where I should probably average my own impressions from my shallow review with their (probably more informed) perspective. But in the review, I didn't do that, hence the "unmitigated inside view" label. 

What should we call the other problem of cluelessness?

I like "opaqueness" for the reason that it is gradable.
