trammell

1812 karma

Bio

Econ PhD student at Oxford, about to start a postdoc at the Digital Economy Lab at Stanford, and research affiliate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Comments (152)

My understanding is that the consumption of essentially all animal products seems to increase in income at the country level across the observed range, whether or not you control for various things. See the regression table on slide 7 and the graph of "implied elasticity on income" on slide 8 here.

I'm not seeing the paper itself online anywhere, but maybe reach out to Gustav if you're interested.
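To illustrate the kind of estimate being described, here is a minimal sketch of how an "implied income elasticity" falls out of a log-log regression on country-level data. The data and the elasticity value are simulated for illustration only; this is not the specification from the paper referenced above.

```python
import numpy as np

# Simulated country-level data: animal-product consumption rising with income.
# The elasticity of 0.6 is an arbitrary assumption for this sketch.
rng = np.random.default_rng(0)
log_income = rng.uniform(6, 11, size=200)        # log GDP per capita
true_elasticity = 0.6
log_consumption = 1.0 + true_elasticity * log_income + rng.normal(0, 0.3, 200)

# In a log-log regression, the OLS slope is the implied income elasticity:
# a 1% rise in income is associated with a beta[1]% rise in consumption.
X = np.column_stack([np.ones_like(log_income), log_income])
beta, *_ = np.linalg.lstsq(X, log_consumption, rcond=None)
print(f"estimated elasticity: {beta[1]:.2f}")
```

A positive estimated slope here is what "consumption increases in income" means in the comment above.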

Thank you!

And thanks for the IIT / Pautz reference, that does seem relevant. Especially to my comment on the "superlinearity" intuition that experience should probably be lost, or at least not gained, as the brain is "disintegrated" via corpus callosotomy... let me know (you or anyone else reading this) if you know whether IIT, or some reasonable precisification of it, says that the "amount" of experience associated with two split brain hemispheres is more or less than with an intact brain.

Thanks for noting this possibility--I think it's the same as, or at least very similar to, an intuition Luisa Rodriguez had when we were chatting about this the other day actually. To paraphrase the idea there, even if we have a phenomenal field that's analogous to our field of vision and one being's can be bigger than another's, attention may be sort of like a spotlight that is smaller than the field. Inflicting pains on parts of the body lowers welfare up to a point, like adding red dots to a wall in our field of vision with a spotlight on it adds redness to our field of vision, but once the area under the spotlight is full, not much (perhaps not any) more redness is perceived by adding red dots to the shadowy wall outside the spotlight. If in the human case the spotlight is smaller than "the whole body except for one arm", then it is about equally bad to put the amputee and the non-amputee in an ice bath, or for that matter to put all but one arm of a non-amputee and the whole of a non-amputee in an ice bath.

Something like this seems like a reasonable possibility to me as well. It still doesn't seem as intuitive to me as the idea that, to continue the metaphor, the spotlight lights the whole field of vision to some extent, even if some parts are brighter than others at any given moment; if all of me except one arm were in an ice bath, I don't think I'd be close to indifferent about putting the last arm in. But it does seem hard to be sure about these things.

Even if "scope of attention" is the thing that really matters in the way I'm proposing "size" does, though, I think most of what I'm suggesting in this post can be maintained, since presumably "scope" can't be bigger than "size", and both can in principle vary across species. And as for how either of those variables scales with neuron count, I get that there are intuitions in both directions, but I think the intuitions I put down on the side of superlinearity apply similarly to "scope".

Glad to see you found my post thought-provoking, but let me emphasize that my own understanding is also partial at best, to put it mildly!

Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it--apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.

But they do seem like significantly different hypotheses to me. The reason is that it seems like the arguments presented against many experiences in a single brain can convince me that there is probably (something like) a single, highly "integrative" field of hedonic intensities, just as I don't doubt that there is a single visual processing system behind my single visual field, and yet leave me fully convinced that both fields can come in different sizes, so that one brain can have higher welfare capacity than another for size reasons.

Thanks for the second comment though! It's interesting, and to my mind more directly relevant, in that it offers reasons to doubt the idea that hedonic intensities are spread across locations at all. They move me a bit, but I'm still mostly left thinking
- Re 1, we don't need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.
- Re 2, it's not clear why we would have evolved to create valence (or experience) at all in the first place, so in some sense the fact that it would evidently be more efficient to have less of it doesn't help here. But assuming that valence evolved to motivate us in adaptive ways, it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.
- Re 3, the proposal I call (iii*) and try to defend is "that the welfare of a whole experience is, or at least monotonically incorporates, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field" (emphasis added). I put in the "incorporates" because I don't mean to take a stand on whether there are also things that contribute to welfare that don't correspond to particular locations in the phenomenal field, like perhaps the social pains you mention. I just find it hard to deny from first-hand experience that there are some "location-dependent" pains; and if so, I would think that these can scale with "size".

Thanks! That's a really interesting thought. I hadn't thought of that possibility--I've been working on the assumption that they're not reducible--but now that you mention it, I don't have very strong intuitions about whether it seems more or less likely than there being two dimensions "at bottom".

One intuition against is that it seems a bit weirdly discrete to suppose that a "hedonic atom" can just be +1, 0, or -1. But I guess there's some discreteness at bottom with literal atoms (or perhaps a better analogy would be electrical charge) as well...

Thanks for sharing this. (Thank you very much as well for letting me start exploring a tricky idea like this without assuming this is all just an excuse for discriminating against those with disabilities!) I definitely agree that a risk of trying to account for differences in "experience size", even if the consideration is warranted, is that it could lead us to quickly dismiss experiences different from our own as smaller even if they aren't.

I am no expert on deafness or most of the other topics relevant here, but my understanding is that often, if someone loses a sensory faculty or body part but doesn't suffer damage to the relevant part of the brain, the brain in some sense rewires to give more attention (i.e., I would guess, more hedonic intensity and/or more "size") to the remaining sensory faculties. This is why, when bringing up the case of an amputee, I only consider the case of someone whose brain has not had time to make this adjustment. I think it could totally be the case that deaf people, at least post-adjustment (or throughout life, if they have been deaf from birth), have such richer experiences on other dimensions that their welfare capacities tend to be greater overall than those of non-deaf people.

I cited your post (at the end of the 2nd paragraph of "How these implications are revisionary") as an exploration of a different idea from mine, namely that one brain might have more moral weight than another because it contains more experiences at once. Your excerpt seems to highlight this different idea.

Are you saying your post should be read as also exploring the idea that one brain might have more moral weight than another even if they each contain one experience, because one experience is larger than the other? If so, can you point me to the relevant bit?

Ok great!

And ok, I agree that the answer to the first question is probably "yes", so maybe what I was calling an alternative anthropic principle in my original comment could be framed as SSA with this directly time-centric reference class. If so, instead of saying "that's not SSA", I should have said "that's not SSA with a standard reference class (or a reference class anyone seems to have argued for)". I agree that Bostrom et al. (2010) don't seem to argue for such a reference class.

On my reading (and Teru's, not coincidentally), the core insight Bostrom et al. have (and iterate on) is equivalent to the insight that if you haven't observed something before, and you assign it a probability per unit of time equal to its past frequency, then you must be underestimating its probability per unit of time. The response isn't that this is predicated on, or arguing for, any weird view on anthropics, but just that it has nothing to do with anthropics: it's true, but for the same reason that you’ll underestimate the probability of rain per unit time based on past frequency if it's never rained (though in the prose they convey their impression that the fact that you wouldn't exist in the event of a catastrophe is what's driving the insight). The right thing to do in both cases is to have a prior and update the probability downward as the dry spell lengthens. A nonstandard anthropic principle (or reference class) is just what would be necessary to motivate a fundamental difference from "no rain".
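The "no rain yet" point above can be sketched numerically. Laplace's rule of succession stands in here for "have a prior and update" (it is my illustrative choice of prior, not a model from Bostrom et al.): the empirical frequency of a never-observed event is exactly 0, which understates its per-period probability, while the Bayesian estimate falls toward zero as the dry spell lengthens but never reaches it.

```python
from fractions import Fraction

def empirical_frequency(successes, trials):
    # Past frequency: 0 for an event never yet observed.
    return Fraction(successes, trials)

def laplace_estimate(successes, trials):
    # Rule of succession under a uniform prior on the per-period
    # probability: P(event next period) = (k + 1) / (n + 2).
    return Fraction(successes + 1, trials + 2)

# After 10, 100, 1000 dry periods, the frequency estimate stays at 0,
# while the posterior estimate shrinks but remains positive.
for n in (10, 100, 1000):
    print(n, empirical_frequency(0, n), laplace_estimate(0, n))
```

Nothing in this calculation depends on whether the observer would survive the event, which is the sense in which the insight "has nothing to do with anthropics".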

Good Ventures rather than Effective Ventures, no?
