
trammell

1767 karma

Bio

Econ PhD student at Oxford, about to start a postdoc at the Digital Economy Lab at Stanford, and research affiliate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Comments (150)

Thanks for noting this possibility--I think it's the same as, or at least very similar to, an intuition Luisa Rodriguez had when we were chatting about this the other day, actually. To paraphrase the idea there, even if we have a phenomenal field that's analogous to our field of vision, and one being's can be bigger than another's, attention may be sort of like a spotlight that is smaller than the field. Inflicting pains on parts of the body lowers welfare up to a point, just as adding red dots to a wall in our field of vision with a spotlight on it adds redness to our field of vision; but once the area under the spotlight is full, not much (perhaps not any) more redness is perceived by adding red dots to the shadowy wall outside the spotlight. If in the human case the spotlight is smaller than "the whole body except for one arm", then it is about equally bad to put the amputee and the non-amputee in an ice bath--or, for that matter, about equally bad to put all but one arm of a non-amputee in an ice bath as to put the whole of a non-amputee in one.

Something like this seems like a reasonable possibility to me as well. It still doesn't seem as intuitive to me as the idea that, to continue the metaphor, the spotlight lights the whole field of vision to some extent, even if some parts are brighter than others at any given moment; if all of me except one arm were in an ice bath, I don't think I'd be close to indifferent about putting the last arm in. But it does seem hard to be sure about these things.

Even if "scope of attention" is the thing that really matters in the way I'm proposing "size" does, though, I think most of what I'm suggesting in this post can be maintained, since presumably "scope" can't be bigger than "size", and both can in principle vary across species. And as for how either of those variables scales with neuron count, I get that there are intuitions in both directions, but I think the intuitions I put down on the side of superlinearity apply similarly to "scope".

Glad to see you found my post thought-provoking, but let me emphasize that my own understanding is also partial at best, to put it mildly!

Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it--apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.

But they do seem like significantly different hypotheses to me. The reason is that it seems like the arguments presented against many experiences in a single brain can convince me that there is probably (something like) a single, highly "integrative" field of hedonic intensities, just as I don't doubt that there is a single visual processing system behind my single visual field, and yet leave me fully convinced that both fields can come in different sizes, so that one brain can have higher welfare capacity than another for size reasons.

Thanks for the second comment though! It's interesting, and to my mind more directly relevant, in that it offers reasons to doubt the idea that hedonic intensities are spread across locations at all. They move me a bit, but I'm still mostly left thinking
- Re 1, we don't need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.
- Re 2, it's not clear why we would have evolved to create valence (or experience) at all in the first place, so in some sense the fact that it would evidently be more efficient to have less of it doesn't help here. But assuming that valence evolved to motivate us in adaptive ways, it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.
- Re 3, the proposal I call (iii*) and try to defend is "that the welfare of a whole experience is, or at least monotonically incorporates, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field" (emphasis added). I put in the "incorporates" because I don't mean to take a stand on whether there are also things that contribute to welfare that don't correspond to particular locations in the phenomenal field, like perhaps the social pains you mention. I just find it hard to deny from first-hand experience that there are some "location-dependent" pains; and if so, I would think that these can scale with "size".
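
To make (iii*) concrete, here is one hedged formalization (my notation, not anything from the post). Writing $h_1, \dots, h_n$ for the hedonic intensities felt at the $n$ locations of the phenomenal field, and $s$ for any location-independent contributors to welfare (like the social pains above), the proposal only requires something of the form

$$W = f\big(g(h_1, \dots, h_n),\; s\big),$$

where the aggregation $g$ is monotonic in each $h_i$ and $f$ is monotonic in its first argument. This leaves room for $s$ to matter too; and since $n$ can in principle grow with "size", so can welfare capacity.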

Thanks! That's a really interesting thought. I hadn't thought of that possibility--I've been working on the assumption that they're not reducible--but now that you mention it, I don't have very strong intuitions about whether it seems more or less likely than there being two dimensions "at bottom".

One intuition against is that it seems a bit weirdly discrete to suppose that a "hedonic atom" can just be +1, 0, or -1. But I guess there's some discreteness at bottom with literal atoms (or perhaps a better analogy would be electrical charge) as well...

Thanks for sharing this. (Thank you very much as well for letting me start exploring a tricky idea like this without assuming this is all just an excuse for discriminating against those with disabilities!) I definitely agree that a risk of trying to account for differences in "experience size", even if the consideration is warranted, is that it could lead us to quickly dismiss experiences different from our own as smaller even if they aren't.

I am no expert on deafness or most of the other topics relevant here, but my understanding is that often, if someone loses a sensory faculty or body part but doesn't suffer damage to the relevant part of the brain, the brain in some sense rewires to give more attention (i.e., I would guess, more hedonic intensity and/or more "size") to the remaining sensory faculties. This is why, when bringing up the case of an amputee, I only consider the case of someone whose brain has not had time to make this adjustment. I think it could totally be the case that deaf people, at least post-adjustment (or throughout life, if they have been deaf from birth), have such richer experiences on other dimensions that their welfare capacities tend to be greater overall than non-deaf people.

I cited your post (at the end of the 2nd paragraph of "How these implications are revisionary") as an exploration of a different idea from mine, namely that one brain might have more moral weight than another because it contains more experiences at once. Your excerpt seems to highlight this different idea.

Are you saying your post should be read as also exploring the idea that one brain might have more moral weight than another even if they each contain one experience, because one experience is larger than the other? If so, can you point me to the relevant bit?

Ok great!

And ok, I agree that the answer to the first question is probably "yes", so maybe what I was calling an alternative anthropic principle in my original comment could be framed as SSA with this directly time-centric reference class. If so, instead of saying "that's not SSA", I should have said "that's not SSA with a standard reference class (or a reference class anyone seems to have argued for)". I agree that Bostrom et al. (2010) don't seem to argue for such a reference class.

On my reading (and Teru's, not coincidentally), the core insight Bostrom et al. have (and iterate on) is this: if you haven't observed something before, and you assign it a probability per unit of time equal to its past frequency, then you must be underestimating that probability. The response isn't that this insight is predicated on, or argues for, any weird view on anthropics, but that it has nothing to do with anthropics: it's true, but for the same reason that you'll underestimate the probability of rain per unit time from past frequency if it has never rained (though in the prose they suggest that what drives the insight is the fact that you wouldn't exist in the event of a catastrophe). The right thing to do in both cases is to have a prior and update the probability downward as the dry spell lengthens. A nonstandard anthropic principle (or reference class) is just what would be necessary to motivate a fundamental difference from "no rain".
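
To illustrate the "no rain" point with a toy calculation (my own sketch, with made-up numbers): the past-frequency estimate of rain per day is stuck at zero, whereas a Bayesian with a prior keeps a positive estimate that declines as the dry spell lengthens.

```python
# Toy illustration: estimating the daily rain probability by past frequency
# gives 0/n = 0 after n dry days (an underestimate), while a Bayesian with a
# uniform Beta(1, 1) prior keeps a positive estimate that declines with n.
a, b = 1, 1  # Beta(1, 1) prior: uniform over the daily rain probability

for n in [1, 10, 100]:
    frequency_estimate = 0 / n        # 0 rainy days out of n
    posterior_mean = a / (a + b + n)  # Beta posterior mean after n dry days
    print(f"after {n:>3} dry days: frequency {frequency_estimate:.2f}, "
          f"posterior mean {posterior_mean:.3f}")
```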

Good Ventures rather than Effective Ventures, no?

The title of this post might give the impression that Rory Stewart was a founder of GiveDirectly. To clarify, GiveDirectly was founded by other people in 2008; Stewart became its president in 2022.

Interesting, thanks for pointing this out! And just to note, that result doesn’t rely on any sort of suspicious knowledge about whether you’re on the planet labeled “x” or “y”; one could also just say “given that you observe that you’re in period 2, …”.

I don’t think it’s right to describe what’s going on here as anthropic shadow though, for the following reason. Let me know what you think.

To make the math easier, let me do what perhaps I should have done from the beginning and have A be the event that the risk is 50% and B be the event that it’s 0%. So in the one-planet case, there are 3 possible worlds:

  • A1 (prior probability 25%) -- risk is 50%, lasts one period
  • A2 (prior probability 25%) -- risk is 50%, lasts two periods
  • B (prior probability 50%) -- risk is 0%, lasts two periods

In period 1, whereas SIA tells us to put credence of 1/2 on A, SSA tells us to put something higher--

(0.25 + 0.25/2) / (0.25 + 0.25/2 + 0.5/2) = 3/5

--because a higher fraction of expected observers is at period 1 given A than given B. This is the Doomsday Argument. When we reach period 2, both SSA and SIA then tell us to update our credence in A downward. Both principles tell us to update downward fully, for the same reason we would update downward on the probability of an event that didn't change the number of observers: e.g. if A is the event you live in a place where the probability of rain per day is 50% and B is the event that it's 0%; you start out putting credence 50% [or 60%] on A; and you make it to day 2 without rain (and would live to see day 2 either way). But in the catastrophe case SSA further has you update downward because the Doomsday Argument stops applying in period 2.
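
To make the bookkeeping explicit, here's a small brute-force check of these numbers (a minimal sketch of my own, where an "observer" is one surviving period):

```python
# One-planet toy model from above. Each world: (label, prior, number of
# periods it survives), with one observer per surviving period.
worlds = [("A1", 0.25, 1),   # risk 50%, lasts one period
          ("A2", 0.25, 2),   # risk 50%, lasts two periods
          ("B",  0.50, 2)]   # risk 0%,  lasts two periods

def credence_in_A(t, principle):
    """Credence in 'risk is 50%' for an observer alive at period t."""
    weights = {}
    for label, prior, periods in worlds:
        if t > periods:
            continue                # no observer at period t in this world
        if principle == "SIA":
            weights[label] = prior  # one period-t observer per surviving world
        else:                       # SSA: prior x (share of this world's observers at period t)
            weights[label] = prior / periods
    total = sum(weights.values())
    return sum(w for lab, w in weights.items() if lab.startswith("A")) / total

print(credence_in_A(1, "SIA"))  # 0.5
print(credence_in_A(1, "SSA"))  # 0.6   (= 3/5: the Doomsday shift)
print(credence_in_A(2, "SIA"))  # 0.333...
print(credence_in_A(2, "SSA"))  # 0.333... (the two principles agree on 1/3)
```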

One way to put the general lesson is that, as time goes on and you learn how many observers there are, SSA has less room to shift probability mass (relative to SIA) toward the worlds where there are fewer observers.

  • In the case above, once you make it to period 2, that uncertainty is fully resolved: given A or B, you know you’re in a world with 2 observers. This is enough to motivate such a big update according to SSA that at the end the two principles agree on assigning probability 1/3 to A.
  • In cases where uncertainty about the number of observers is only partially resolved in the move from period 1 to period 2--as in my 3-period example, or in your 2-planet example*--then the principles sustain some disagreement in period 2. This is because
    • SSA started out in period 1 assigning a higher credence to A than SIA;
    • both recommend updating on the evidence given by survival as you would update on anything else, like lack of rain;
    • SSA further updates downward because the Doomsday Argument partially loses force; and
    • the result is that SSA still assigns a higher credence to A than SIA.

*To verify the Doomsday-driven disagreement in period 1 in the two-planet case explicitly (with the simpler definitions of A and B), there are 5 possible worlds:

  • A1 (prior probability 12.5%) -- risk is 50% per planet, both last one period
  • A2 (prior probability 12.5%) -- risk is 50% per planet, only x lasts two periods
  • A3 (prior probability 12.5%) -- risk is 50% per planet, only y lasts two periods
  • A4 (prior probability 12.5%) -- risk is 50% per planet, both last two periods
  • B (prior probability 50%) -- risk is 0% per planet, both last two periods

In period 1, SIA gives credence in A of 1/2; SSA gives

(0.125 + 0.125*2/3 + 0.125*2/3 + 0.125/2) / (0.125 + 0.125*2/3 + 0.125*2/3 + 0.125/2 + 0.5/2) = 17/29 ≈ 0.59.
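
And the same brute-force check for the two-planet case (again my own sketch, with an "observer" being a planet-period):

```python
# Two-planet toy model. Each world: (label, prior, periods planet x survives,
# periods planet y survives); an "observer" is one planet-period.
worlds = [("A1", 0.125, 1, 1),
          ("A2", 0.125, 2, 1),
          ("A3", 0.125, 1, 2),
          ("A4", 0.125, 2, 2),
          ("B",  0.500, 2, 2)]

def credence_in_A(t, principle):
    """Credence in 'risk is 50% per planet' for an observer at period t."""
    weights = {}
    for label, prior, px, py in worlds:
        observers_now = (px >= t) + (py >= t)  # planets still around at period t
        if observers_now == 0:
            continue
        if principle == "SIA":
            weights[label] = prior * observers_now
        else:  # SSA: prior x (share of this world's observers at period t)
            weights[label] = prior * observers_now / (px + py)
    total = sum(weights.values())
    return sum(w for lab, w in weights.items() if lab.startswith("A")) / total

print(credence_in_A(1, "SIA"))  # 0.5
print(credence_in_A(1, "SSA"))  # 0.586... (= 17/29 > 1/2)
print(credence_in_A(2, "SIA"))  # 0.333...
print(credence_in_A(2, "SSA"))  # 0.368... (= 7/19: some disagreement persists)
```

The period-2 outputs also illustrate the bullet points above: with uncertainty about the number of observers only partially resolved, SSA still assigns a higher credence to A than SIA.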

One could use the term “anthropic shadow” to refer to the following fact: As time goes on, in addition to inferring existential risks are unlikely as we would infer that rain is unlikely, SSA further recommends inferring that existential risks are unlikely by giving up the claim that we’re more likely to be in a world with fewer observers; but this second update is attenuated by the (possible) existence of other planets. I don’t have any objection to using the term that way and I do think it’s an interesting point. But I think the old arguments cited in defense of an “anthropic shadow” effect were pretty clearly arguing for the view that we should update less (or even not at all) toward thinking existential risk per unit time is low as time goes on than we would update about the probabilities per unit time of other non-observed events.
