Brian_Tomasik

Comments

Differences in the Intensity of Valenced Experience across Species

Thanks for these astoundingly detailed posts. :)

Just to clarify this point:

others have speculated that animals with simpler nervous systems have characteristically much more intense experiences than humans. For example in his blog post “Is Brain Size Morally Relevant?” Brian Tomasik explores the idea that “to a tiny brain, an experience activating just a few pain neurons could feel like the worst thing in the world from its point of view.”

I didn't intend to suggest that small brains have characteristically greater intensities, but just that it would take fewer pain neurons to achieve the same (subjectively relative) intensity as in a larger brain.

In my opinion, the best way to argue for giving more moral weight to larger brains is not that larger brains have more intense experiences but that we just care more about them because they're more complex. As an analogy, we might care more if a very large painting was destroyed than if a small one was, not because the large painting is more "intense" but just because there's more of it. So I would say that

intrinsic value = duration * intensity * (how much we care about the brain),

where the last factor can be based on its complexity. (BTW, I didn't read most of this post, so sorry if you already discussed such things.)
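To make the weighting concrete, here's a minimal sketch with made-up numbers (just an illustration of the formula above, not figures from anywhere):

```python
def intrinsic_value(duration, intensity, care_weight):
    """Proposed weighting: duration * intensity * (how much we care about the brain)."""
    return duration * intensity * care_weight

# Two hypothetical experiences with the same duration and (subjectively relative) intensity;
# only the care factor, perhaps based on brain complexity, differs.
print(intrinsic_value(duration=10, intensity=1.0, care_weight=0.1))  # 1.0 for a simpler brain
print(intrinsic_value(duration=10, intensity=1.0, care_weight=1.0))  # 10.0 for a more complex brain
```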

"Disappointing Futures" Might Be As Important As Existential Risks

Ok. :) For that question I might give slightly less than a 50% chance that human-inspired space colonization would create more suffering than happiness (where the numerical magnitudes of happiness and suffering are as judged by a typical classical utilitarian). I think the default should be around 50% because, for a typical classical utilitarian, it seems unclear whether a random collection of minds contains more suffering or happiness. There are some scenarios in which a human-inspired future might be either relatively altruistic with wide moral circles or relatively egalitarian, such that selfishness alone could produce a significant surplus of happiness over suffering. However, there are also many possible futures where a powerful few oppressively control a powerless many with little concern for their welfare. Such political systems were very common historically and are still widespread today. And there may also be situations analogous to today's animal suffering, in which most of the sentience that exists goes largely ignored.

The expected value of human-inspired space colonization may be less symmetric than this because it may be dominated by a few low-probability scenarios in which the future is very good or very bad, with very good futures plausibly being more likely.
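To illustrate how a few tail scenarios can dominate, here's a toy expected-value calculation with invented probabilities and magnitudes (not estimates I'd endorse):

```python
# (probability, net happiness minus suffering in arbitrary units); purely illustrative numbers.
scenarios = [
    (0.05, +1000),  # rare, very good future
    (0.03, -1000),  # rare, very bad future
    (0.46, +1),     # mildly good futures
    (0.46, -1),     # mildly bad futures
]
expected_value = sum(p * v for p, v in scenarios)
print(expected_value)  # 20.0 -- driven almost entirely by the two low-probability tails
```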

"Disappointing Futures" Might Be As Important As Existential Risks

Nice post. :) My question "Human-inspired colonization of space will cause net suffering if it happens", which I, Pablo, and you answered, was worded poorly. I later rewrote it to be clearer: "Human-inspired colonization of space will cause more suffering than it prevents if it happens". As he explains in his post, Pablo (a classical utilitarian) interpreted my original wording to refer to the net balance of happiness minus suffering, while I (a negative utilitarian) meant merely the net balance of suffering. Which way did you read it?

While Pablo gave 1% probability of more suffering than happiness, he gave 99% probability that suffering itself would increase, saying: "But maybe Brian meant that colonization will cause a surplus of suffering relative to the amount present before colonization. I think this is virtually certain; I’d give it a 99% chance."

Physical theories of consciousness reduce to panpsychism

Cool post. :) I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

The table analogy to this argument is to note that an individual piece of wood has many of the same properties as a table: you can put things on it, eat food from it, move it around your house as furniture, knock on it to make noise, etc.

How good is The Humane League compared to the Against Malaria Foundation?

Good points. :) That post of mine isn't really about the mosquitoes themselves but more about the impacts that a larger human population would have on invertebrates (assuming AMF does increase the size of the human population, which is a question I also mention briefly).

Should Longtermists Mostly Think About Animals?

Thanks for this detailed post!

My guess would be that Greaves and MacAskill focus on the "10 billion humans, lasting a long time" scenario just to make their argument maximally conservative, rather than because they actually think that's the right scenario to focus on? I haven't read their paper, but on brief skimming I noticed that the paragraph at the bottom of page 5 talks about ways in which they're being super conservative with that scenario.

Assuming that the goal is just to be maximally conservative while still arguing for longtermism, adding the animal component, however reasonable on its own merits, doesn't serve that purpose. As an analogy, imagine someone who denies that any non-humans have moral value. You might start by pointing to other primates or maybe dolphins. Someone could come along and say "Actually, chickens are also quite sentient and are far more numerous than non-human primates", which is true, but it's slightly harder to convince a skeptic that chickens matter than that chimpanzees matter.

such as human’s high brain to body mass ratio

One might also care about total brain size because in bigger brains, there's more stuff going on (and sometimes more sophisticated stuff going on). As an example, imagine that you morally value corporations, and you think the most important part of a corporation is its strategic management (rather than the on-the-ground employees). You may indeed care more about corporations that have a greater ratio of strategic managers to total employees. But you may also care about corporations that have just more total strategic managers, especially since larger companies may be able to pull off more complex analyses that smaller ones lack the resources to do.

How Much Leverage Should Altruists Use?

That seems to be a common view, but I haven't yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.

How Much Leverage Should Altruists Use?

Good point. I think such a fund would want to be very clear that it's not for the faint of heart and that it's done in the spirit of trying new risky things. If that message was front and center, I expect the backlash would be less.

How Much Leverage Should Altruists Use?

Thanks! From my reading of the post, that critique is not really specific to leveraged ETFs? Volatility drag is inherent to leverage in general (and even to non-leveraged investing to a smaller degree).
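To see the effect, here's a hypothetical simulation of volatility drag under daily-rebalanced 3x leverage (made-up return parameters; fees and borrowing costs ignored):

```python
import numpy as np

rng = np.random.default_rng(0)
# One year of hypothetical daily returns: zero drift, 2% daily volatility.
daily_returns = rng.normal(loc=0.0, scale=0.02, size=252)

unleveraged = np.prod(1 + daily_returns) - 1
leveraged_3x = np.prod(1 + 3 * daily_returns) - 1  # daily-rebalanced 3x exposure

print(f"1x: {unleveraged:.1%}, 3x: {leveraged_3x:.1%}")
# With zero drift, the 3x product typically lands well below 3 times the 1x result:
# compounding the magnified ups and downs is the volatility drag.
```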

He says: "In my next post, I’m going to dive into more detail on what is to distinguish between good and bad uses of leverage." So I found his next post on leverage, which coincidentally is one mentioned in the OP: "The Line Between Aggressive and Crazy". There he clarifies why he doesn't like leveraged ETFs:

From this we start to see the problem with levered ETFs as they are currently constructed: they generally use too much leverage applied to too volatile of assets. Even with the plain vanilla S&P 500 3x leverage is too much. And after accounting for the hefty transactions costs and management fees these ETFs charge, even 2x might be suboptimal (especially if you believe returns will be lower in the future than they have in recent decades). And the S&P 500 is one of the most conservative targets for these products. Take a look at the websites of levered ETF providers and you will see ways to make levered bets on particular industries like biotech or the energy sector, or on commodities like oil and gold, or for more esoteric instruments yet, almost all of which are more volatile than a broadly diversified index like the S&P 500, and thus supporting much lower Kelly leverage ratios, probably less than 2x.

So unless transaction costs are a dealbreaker, it seems like he's mainly opposed to the fact that most leveraged ETFs use too much leverage for their level of volatility (relative to the Kelly Criterion, which assumes logarithmic utility of wealth), not that the instrument itself is flawed? Of course, leveraged ETFs implement a "constant leverage" strategy, and later in that post, Davis proposes adjusting the leverage ratio dynamically (which I agree is better, though it requires more work).
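For reference, under logarithmic utility of wealth the Kelly-optimal leverage is approximately the excess return divided by the variance of returns. A quick sketch with illustrative inputs (my assumed numbers, not Davis's):

```python
def kelly_leverage(expected_return, risk_free_rate, volatility):
    """Approximate Kelly-optimal leverage for log utility: excess return / variance."""
    return (expected_return - risk_free_rate) / volatility ** 2

# A broad index: 5% expected excess return, 16% annual volatility -> about 1.95x.
print(kelly_leverage(0.07, 0.02, 0.16))
# A more volatile sector bet with the same excess return supports far less: about 0.56x.
print(kelly_leverage(0.07, 0.02, 0.30))
```

This lines up with the quote's point that even 2x on the S&P 500 may be marginal and that more volatile targets support much lower Kelly ratios.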

How Much Leverage Should Altruists Use?

Leveraged ETFs are one way to keep your leverage ratio from blowing up, without any investor effort.

Keeping all the considerations in this post in mind seems very difficult, so perhaps the ideal solution would be if there were an institution to do it for individuals, such as EA Funds or something like it. You could donate to the fund and let them adjust leverage, correlation with other donors to the same cause, and everything else on your behalf.
