Artificial Suffering and Pascal's Mugging: What to think?

Similar to what Carl said, my main response to questions like the ones you raise is that we'll have to defer much of this thinking to future generations. One can generate an almost indefinite stream of plausible Pascalian wagers like this one. On this particular issue, the intervention of "improving our knowledge of artificial sentience so we can more efficiently promote their welfare" actually seems like it would help, because then more people could apply their minds to questions like these.

In addition to just trying to store larger and larger binary integers in a computer, you could try to develop other representations that express large numbers more compactly. One obvious approach is to use a floating-point number instead of an integer, since then you can have a large exponent. Maybe instead of the exponent signifying a power of 2 (as it does in standard floating-point formats), it could signify a power of 1000000, or a power of 3^^^3. In Python, you can represent infinity as float("inf"), and that could be the reward.
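As a concrete illustration, here's a minimal sketch (my own construction, not from any particular RL library) of such a compact representation: a number stored as mantissa * BASE**exponent for some hypothetical large base, so that astronomically large rewards never have to be materialized as full integers.

```python
# Minimal sketch of a "big exponent" number representation.
# BASE is a hypothetical choice; any large base would work the same way.
BASE = 1_000_000

class BigReward:
    """Represents mantissa * BASE**exponent without materializing the value."""
    def __init__(self, mantissa, exponent):
        self.mantissa = mantissa
        self.exponent = exponent

    def __mul__(self, other):
        # Multiplying two such numbers multiplies mantissas and adds exponents.
        return BigReward(self.mantissa * other.mantissa,
                         self.exponent + other.exponent)

    def __repr__(self):
        return f"{self.mantissa} * {BASE}**{self.exponent}"

r = BigReward(3, 2) * BigReward(2, 5)
print(r)                          # 6 * 1000000**7
print(float("inf") > 10**100)     # True: float("inf") exceeds any finite value
```

Of course, as with float("inf"), the representation only matters insofar as comparisons and arithmetic on it change the agent's behavior.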

My own view is that the absolute scale of numbers doesn't matter if it doesn't affect the functional behavior of the agent. Of course, as you say, there's some chance that utility does increase with the absolute scale of reward, but is that factual uncertainty or moral uncertainty? If it's moral uncertainty (as I think it is), then one view plausibly shouldn't be able to dominate others just by having higher stakes, just as deontology shouldn't dominate utilitarianism merely because deontology may regard murder as infinitely wrong while utilitarianism regards it as only finitely wrong.

By the way, I tend to assume that RL computations at the scale you'd run for a course would have pretty negligible moral (dis)value, because the agents are so barebones. Good luck with the course. :)

Animal Welfare Fund: Ask us anything!

Great discussion. :)

I think one thing Brian might not have been aware of at the time is that many wild fishes are caught to feed farmed fishes, so fish farming might be good for reducing wild fish populations.

For whatever it's worth, I was aware of that at the time. :) I'm uncertain about the net impact of fish farming, but like for most other farmed animals, I err on the side of thinking it's bad in expected value because it's bad for the farmed animals directly, and I'm fairly clueless about the indirect effects. For example, maybe reducing populations of small forage fish increases zooplankton populations. Or if the small forage fish are fished sustainably, then maybe fishing them just kills a bunch of them painfully without affecting their populations too much.

With things like crop cultivation, I'm also fairly uncertain. Some crop fields in the US Midwest have higher net primary productivity than native grassland, and in places like California, where there's a lot of irrigation, it seems pretty plausible that crop cultivation increases invertebrate populations.

That said, I tend to agree with Michael's thought that the indirect wild-animal impacts of diet may be more significant than many of the kinds of interventions that WAI could pull off. WAI-type interventions may not focus on reducing numbers of wild animals, and without reducing those numbers, it's difficult for me to know whether suffering is actually being reduced, in light of cluelessness.

Small animals have enormous brains for their size

I think densities of mites in soil are typically in the range 10^3 to 10^5 per square meter. For example, see the Brady (1974) and Curl and Truelove (1986) numbers here.

In 2016, I used my microscope camera to look for dust mites around my own house during the summer, and I mainly only found them in areas with lots of accumulated skin flakes. Even in the flake patches, they didn't seem dramatically more densely concentrated than the mites I filmed in the soil outside my house. Of course, this is just one data point. (Also, maybe I could only see the biggest ones? But that would apply to both indoor and outdoor mites.)

Differences in the Intensity of Valenced Experience across Species

Thanks for these astoundingly detailed posts. :)

Just to clarify on this:

others have speculated that animals with simpler nervous systems have characteristically much more intense experiences than humans. For example in his blog post “Is Brain Size Morally Relevant?” Brian Tomasik explores the idea that “to a tiny brain, an experience activating just a few pain neurons could feel like the worst thing in the world from its point of view.”

I didn't intend to suggest that small brains have characteristically greater intensities, but just that it would take fewer pain neurons to achieve the same (subjectively relative) intensity as in a larger brain.

In my opinion, the best way to argue for giving more moral weight to larger brains is not that larger brains have more intense experiences but that we just care more about them because they're more complex. As an analogy, we might care more if a very large painting was destroyed than if a small one was, not because the large painting is more "intense" but just because there's more of it. So I would say that

intrinsic value = duration * intensity * (how much we care about the brain),

where the last factor can be based on its complexity. (BTW, I didn't read most of this post, so sorry if you already discussed such things.)
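As a toy illustration of that decomposition (my own construction; the function names and the logarithmic care weight are assumptions for the example, not anything from the post):

```python
import math

def care_weight(neurons):
    # Hypothetical choice: care scales sublinearly (logarithmically)
    # with brain complexity, proxied here by neuron count.
    return math.log10(neurons)

def intrinsic_value(duration_s, intensity, neurons):
    # intrinsic value = duration * intensity * (how much we care about the brain)
    return duration_s * intensity * care_weight(neurons)

# Same experience (same duration and subjective intensity) in a
# small vs. large brain:
small = intrinsic_value(10, 0.8, 10**5)   # roughly insect-scale brain
large = intrinsic_value(10, 0.8, 10**10)  # roughly human-scale brain
print(small, large)  # 40.0 80.0: the larger brain gets 2x weight here
```

The choice of scaling function is doing all the moral work; a linear care_weight would instead give the larger brain 100,000x the weight in this example.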

"Disappointing Futures" Might Be As Important As Existential Risks

Ok. :) For that question I might give slightly less than a 50% chance that human-inspired space colonization would create more suffering than happiness (where the numerical magnitudes of happiness and suffering are as judged by a typical classical utilitarian). I think the default should be around 50%, because for a typical classical utilitarian, it seems unclear whether a random collection of minds contains more suffering or happiness. There are some scenarios in which a human-inspired future might be either relatively altruistic with wide moral circles or relatively egalitarian, such that selfishness alone could produce a significant surplus of happiness over suffering. However, there are also many possible futures where a powerful few oppressively control a powerless many with little concern for their welfare. Such political systems were very common historically and are still widespread today. And there may also be situations analogous to today's animal suffering, in which most of the sentience that exists goes largely ignored.

The expected value of human-inspired space colonization may be less symmetric than this because it may be dominated by a few low-probability scenarios in which the future is very good or very bad, with very good futures plausibly being more likely.
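A toy calculation (with made-up probabilities and values, purely to illustrate the structure of the claim) showing how low-probability tail scenarios can dominate an expected value:

```python
# Illustrative scenario table: (name, probability, net value in arbitrary units).
# The numbers are invented for the example, not estimates from the discussion.
scenarios = [
    ("very good future", 0.02,  1000.0),
    ("very bad future",  0.01, -1000.0),
    ("middling futures", 0.97,     1.0),
]

ev = sum(p * v for _, p, v in scenarios)
# 0.02*1000 - 0.01*1000 + 0.97*1 ≈ 10.97
print(ev)
```

Here 97% of the probability mass contributes under 1 unit to the expectation, while the 3% of tail scenarios contribute 10 units, so the sign of the expected value hinges almost entirely on one's credences about the tails.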

"Disappointing Futures" Might Be As Important As Existential Risks

Nice post. :) My question "Human-inspired colonization of space will cause net suffering if it happens", which Pablo, you, and I answered, was worded poorly. I later rewrote it to be clearer: "Human-inspired colonization of space will cause more suffering than it prevents if it happens". As he explains in his post, Pablo (a classical utilitarian) interpreted my original wording to refer to the net balance of happiness minus suffering, while I (a negative utilitarian) meant merely the net balance of suffering. Which way did you read it?

While Pablo gave 1% probability of more suffering than happiness, he gave 99% probability that suffering itself would increase, saying: "But maybe Brian meant that colonization will cause a surplus of suffering relative to the amount present before colonization. I think this is virtually certain; I’d give it a 99% chance."

Physical theories of consciousness reduce to panpsychism

Cool post. :) I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

The table analogy to this argument is to note that an individual piece of wood has many of the same properties as a table: you can put things on it, eat food from it, move it around your house as furniture, knock on it to make noise, etc.

How good is The Humane League compared to the Against Malaria Foundation?

Good points. :) That post of mine isn't really about the mosquitoes themselves but more about the impacts that a larger human population would have on invertebrates (assuming AMF does increase the size of the human population, which is a question I also mention briefly).

Should Longtermists Mostly Think About Animals?

Thanks for this detailed post!

My guess would be that Greaves and MacAskill focus on the "10 billion humans, lasting a long time" scenario just to make their argument maximally conservative, rather than because they actually think that's the right scenario to focus on? I haven't read their paper, but on brief skimming I noticed that the paragraph at the bottom of page 5 talks about ways in which they're being super conservative with that scenario.

Assuming that the goal is just to be maximally conservative while still arguing for longtermism, adding the animal component strengthens the numbers but works against that purpose, because it rests on a premise that skeptics are less likely to grant. As an analogy, imagine someone who denies that any non-humans have moral value. You might start by pointing to other primates or maybe dolphins. Someone could come along and say "Actually, chickens are also quite sentient and are far more numerous than non-human primates", which is true, but it's slightly harder to convince a skeptic that chickens matter than that chimpanzees matter.

such as humans' high brain-to-body mass ratio

One might also care about total brain size because in bigger brains, there's more stuff going on (and sometimes more sophisticated stuff going on). As an example, imagine that you morally value corporations, and you think the most important part of a corporation is its strategic management (rather than the on-the-ground employees). You may indeed care more about corporations that have a greater ratio of strategic managers to total employees. But you may also care about corporations that have just more total strategic managers, especially since larger companies may be able to pull off more complex analyses that smaller ones lack the resources to do.

How Much Leverage Should Altruists Use?

That seems to be a common view, but I haven't yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.
