Physical theories of consciousness reduce to panpsychism

Cool post. :) I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

The table analogy to this argument is to note that an individual piece of wood has many of the same properties as a table: you can put things on it, eat food from it, move it around your house as furniture, knock on it to make noise, etc.

How good is The Humane League compared to the Against Malaria Foundation?

Good points. :) That post of mine isn't really about the mosquitoes themselves but more about the impacts that a larger human population would have on invertebrates (assuming AMF does increase the size of the human population, which is a question I also mention briefly).

Should Longtermists Mostly Think About Animals?

Thanks for this detailed post!

My guess would be that Greaves and MacAskill focus on the "10 billion humans, lasting a long time" scenario just to make their argument maximally conservative, rather than because they actually think that's the right scenario to focus on? I haven't read their paper, but on brief skimming I noticed that the paragraph at the bottom of page 5 talks about ways in which they're being super conservative with that scenario.

Assuming the goal is just to be maximally conservative while still arguing for longtermism, adding the animal component, while sensible on its own terms, doesn't serve that purpose. As an analogy, imagine someone who denies that any non-humans have moral value. You might start by pointing to other primates or maybe dolphins. Someone could come along and say "Actually, chickens are also quite sentient and are far more numerous than non-human primates", which is true, but it's slightly harder to convince a skeptic that chickens matter than that chimpanzees matter.

such as humans’ high brain-to-body mass ratio

One might also care about total brain size because in bigger brains, there's more stuff going on (and sometimes more sophisticated stuff going on). As an example, imagine that you morally value corporations, and you think the most important part of a corporation is its strategic management (rather than the on-the-ground employees). You may indeed care more about corporations that have a greater ratio of strategic managers to total employees. But you may also care about corporations that have just more total strategic managers, especially since larger companies may be able to pull off more complex analyses that smaller ones lack the resources to do.

How Much Leverage Should Altruists Use?

That seems to be a common view, but I haven't yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.

How Much Leverage Should Altruists Use?

Good point. I think such a fund would want to be very clear that it's not for the faint of heart and that it's done in the spirit of trying new risky things. If that message were front and center, I'd expect less backlash.

How Much Leverage Should Altruists Use?

Thanks! From my reading of the post, that critique is not really specific to leveraged ETFs? Volatility drag is inherent to leverage in general (and even to non-leveraged investing, to a lesser degree).
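To make the volatility-drag point concrete, here's a toy sketch using the standard lognormal approximation (geometric growth ≈ arithmetic return minus half the variance). The parameter values (7% return, 16% volatility) are assumptions for illustration, not figures from the post:

```python
def geometric_growth(leverage, mu, sigma):
    """Approximate annualized geometric growth rate of a daily-rebalanced
    position: arithmetic return L*mu minus volatility drag (L*sigma)**2 / 2.
    mu and sigma are annualized; values passed in below are hypothetical."""
    return leverage * mu - (leverage * sigma) ** 2 / 2

# With assumed stock-like numbers (mu=7%, sigma=16%), more leverage helps
# at first, but the drag grows quadratically and eventually dominates:
for L in (1, 2, 3, 4):
    print(L, round(geometric_growth(L, 0.07, 0.16), 4))
```

Note that drag appears even at leverage 1 (the `(1 * 0.16)**2 / 2` term), which is why it applies to unleveraged investing too, just more weakly.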

He says: "In my next post, I’m going to dive into more detail on what is to distinguish between good and bad uses of leverage." So I found his next post on leverage, which coincidentally is one mentioned in the OP: "The Line Between Aggressive and Crazy". There he clarifies why he doesn't like leveraged ETFs:

From this we start to see the problem with levered ETFs as they are currently constructed: they generally use too much leverage applied to too volatile of assets. Even with the plain vanilla S&P 500 3x leverage is too much. And after accounting for the hefty transactions costs and management fees these ETFs charge, even 2x might be suboptimal (especially if you believe returns will be lower in the future than they have in recent decades). And the S&P 500 is one of the most conservative targets for these products. Take a look at the websites of levered ETF providers and you will see ways to make levered bets on particular industries like biotech or the energy sector, or on commodities like oil and gold, or for more esoteric instruments yet, almost all of which are more volatile than a broadly diversified index like the S&P 500, and thus supporting much lower Kelly leverage ratios, probably less than 2x.

So unless transaction costs are a dealbreaker, it seems like he's mainly opposed to the fact that most leveraged ETFs use too much leverage for their level of volatility (relative to the Kelly Criterion, which assumes logarithmic utility of wealth), not that the instrument itself is flawed? Of course, leveraged ETFs implement a "constant leverage" strategy, and later in that post, Davis proposes adjusting the leverage ratio dynamically (which I agree is better, though it requires more work).
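For reference, the Kelly leverage ratio under logarithmic utility (with lognormally distributed returns) is (mu - r) / sigma². A quick sketch with assumed inputs, chosen only to illustrate Davis's point that a broad index supports roughly 2x while more volatile assets support less:

```python
def kelly_leverage(mu, risk_free, sigma):
    """Kelly-optimal leverage under log utility:
    L* = (mu - r) / sigma**2, with annualized expected return mu,
    risk-free rate risk_free, and volatility sigma (all assumed values)."""
    return (mu - risk_free) / sigma ** 2

# Broad-index-like assumptions (mu=7%, r=2%, sigma=16%) give L* just
# under 2x; a more volatile asset (sigma=25%) supports less than 1x:
print(kelly_leverage(0.07, 0.02, 0.16))  # ~1.95
print(kelly_leverage(0.07, 0.02, 0.25))  # ~0.8
```

Since sigma enters squared, even modestly more volatile targets (sector or commodity ETFs) cut the Kelly-optimal leverage sharply, which matches the quoted passage.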

How Much Leverage Should Altruists Use?

Leveraged ETFs are one way to keep your leverage ratio from blowing up, without any investor effort.

Keeping all the considerations in this post in mind seems very difficult, so perhaps the ideal solution would be if there were an institution to do it for individuals, such as EA Funds or something like it. You could donate to the fund and let them adjust leverage, correlation with other donors to the same cause, and everything else on your behalf.
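As a toy illustration of why daily rebalancing keeps the leverage ratio from blowing up (numbers are assumed, not from the post): without rebalancing, losses mechanically push leverage up, since equity shrinks faster than exposure.

```python
def leverage_after_move(initial_leverage, asset_return, rebalance):
    """Leverage ratio after one period. A daily-rebalanced product resets
    exposure to L * equity; a static margin position lets leverage drift
    to L*(1+r) / (1 + L*r)."""
    if rebalance:
        return initial_leverage
    equity = 1 + initial_leverage * asset_return      # equity after the move
    exposure = initial_leverage * (1 + asset_return)  # position after the move
    return exposure / equity

# A 2x position after a -10% day: the rebalanced product stays at 2x,
# while a static margin position drifts up to 2.25x:
print(leverage_after_move(2, -0.10, True))   # 2
print(leverage_after_move(2, -0.10, False))  # 2.25
```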

What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)?

PETRL was (to my knowledge) the only organization focused on the ethics of AI-qua-moral patient

There seems to be a lot of academic and popular discussion about robot rights and machine consciousness, but yeah, I can't name offhand another organization explicitly focused on this topic. (To some degree, Sentience Institute has this as a long-run goal, and many organizations care about it as part of what they work on.)

There's a spoof organization called People for Ethical Treatment of Robots.

Update: I see there's another organization: American Society for the Prevention of Cruelty to Robots. On the FAQ page they say:

Q: Are you serious?

A: The ASPCR is, and will continue to be, exactly as serious as robots are sentient.

Ethical offsetting is antithetical to EA

A problem is that different people have different views on what's most effective. If most people are quasi-egoists, then for them, spending money on themselves or their families is "the most effective charity" they can give to. Or even within the realm of what's normally understood to be charity, people might donate to their local church or arts center. Relative to their values, this might be the best charity to give to.

Ethical offsetting is antithetical to EA

I think offsetting makes sense when seen as a form of moral trade with other people (or even possibly other factions within your own brain's moral parliament).

Regarding objection #1 about reference classes, the answer can be that you can choose a reference class that's acceptable to your trading partner. For example, suppose you do something that makes the global poor slightly worse off. Suppose that a large faction of society doesn't care much about non-human animals but does care about the global poor. Then donating to an animal charity wouldn't offset this harm in their eyes, but donating to a developing-world charity would.

Regarding objection #2, trade by its nature involves spending resources on things that you think are suboptimal because someone else wants you to.

An objection to this perspective can be that in most offsetting situations, the trading partner isn't paying enough attention or caring enough to actually reciprocate with you in ways that make the trade positive-sum for both sides. (For trade within your own brain, reciprocation seems more likely.)
