
Evira

50 karma · Joined Mar 2019

Comments (11)

I think that in the future, people will stop obtaining food from animals suffering in factory farms anyway. This is because people will presumably be able to live in virtual realities and efficiently create virtual food without causing any suffering. Thoughts?

Thank you for the detailed response. Some responses to your points:

Our values might get locked in this century through technology or totalitarian politics, in which case we need to rush to reach something tolerable as quickly as possible;

I'm having a hard time thinking of how technology could lock in our values. One possibility is that AGI would be programmed to value what we currently value, with no capacity for moral growth. However, it's not clear to me why anyone would do this. People, as best as I can tell, value moral growth and thus would want AGI to be able to exhibit it.

There is the possibility that programming AGI to value only what we value right now, without the possibility of moral growth, would be technically easier. I don't see why this would be the case, though. Implementing people's coherent extrapolated volition (CEV), as Eliezer proposed, would allow for moral growth. Narrow value learning, as Paul Christiano proposed, would presumably allow for moral growth if the AGI learns to avoid changing people's goals. AGI alignment via direct specification might be made easier by prohibiting moral growth, but the general consensus I've seen is that alignment via direct specification would be extremely difficult and is thus improbable.

There's the possibility of people creating technology for the express purpose of preventing moral growth, but I don't know why people would do that.

As for totalitarian politics, it's not clear to me how it would stop moral growth. If there is anyone in charge, I would imagine they would value their personal moral growth and thus would be able to realize that animal rights are important. After that, I imagine the leader would be able to spread their values to others. I know little about politics, though, so there may be something huge I'm missing.

I'm also a little concerned that campaigning for animal rights may backfire. Currently, many people seem unaware of just how bad animal suffering is. Many people also love eating meat. If people become informed of the extent of animal suffering, then, to minimize cognitive dissonance, I'm concerned they will stop caring about animals rather than stop eating meat.

So, my understanding is that getting a significant proportion of people to stop eating meat might make them more likely to exhibit moral growth by caring about other animals, which would be useful for one alignment strategy that is unlikely to be used. I'm not saying this is the entirety of your reasoning, but I suspect it would be much more efficient to work on AI alignment directly, either by doing alignment research or by convincing people that such research is important.

Another possibility is to attempt to spread humane values by directly teaching moral philosophy. Does this sound feasible?

Our values might end up on a bad but self-reinforcing track from which we can't escape, which is a reason to get to something tolerable quickly, in order to make that less likely;

Do you have any situations in mind in which this could occur?

Fixing the problem of discrimination against animals allows us to progress to other moral circle expansions sooner, most notably from a long-termist perspective, recognising the risks of suffering in thinking machines;

I'm wondering what your reasoning behind this is.

Animal advocacy can draw people into relevant moral philosophy, effective altruism and related work on other problems, which arguably increases the value of the long-term future.

I'm concerned this may backfire as well. Perhaps, after becoming vegan, people would figure they have done a sufficiently large amount of good and thus be less likely to pursue other forms of altruism.

This might seem unreasonable: performing one good deed does not seem to increase the costs or decrease the benefits of performing other good deeds by much. However, it does seem to be how people act. As evidence, I have heard that despite wealth having steeply diminishing returns to happiness, wealthy individuals give a smaller proportion of their money to charity. Further, some EAs have a policy of donating 10% of their income, even if after donating 10% they still have far more money than necessary to live comfortably.
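As a rough illustration of what "steeply diminishing returns" means here, consider a toy logarithmic utility-of-wealth model (a common simplification I'm assuming for the example, not a claim about anyone's actual happiness function):

```python
import math

# Toy model: happiness from wealth grows roughly logarithmically.
# The functional form and the dollar figures are illustrative assumptions.
def utility(wealth):
    return math.log(wealth)

# Doubling wealth adds the same utility whether you start at $50k or $1M,
# even though the second doubling costs twenty times as much money.
print(utility(100_000) - utility(50_000))       # ~0.69
print(utility(2_000_000) - utility(1_000_000))  # ~0.69
```

On that toy model, each extra dollar matters far less to a wealthy person, which makes the observation that they donate a smaller proportion of it all the more striking.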

I'm wondering what you mean when you say, "I think there are many 'AI catastrophes' that would be quite compatible with alien civs." Do you think that there are relatively probable existential catastrophes from rogue AI that would allow for alien colonization of Earth? I'm having a hard time thinking of any and would like to know your thoughts on the matter.

I'm not really considering AI ending all life in the universe. If I understand correctly, it is unlikely that we or future AI will be able to influence the universe outside of our Hubble sphere. However, there may be aliens that exist, or will exist in the future, in our Hubble sphere, and I think it would more likely than not be good if they were able to make use of our galaxy and the ones surrounding it.

As a simplified example, suppose there is on average one technologically advanced civilization for every group of 100 galaxies, and that each civilization can access its surrounding 100 galaxies as well as the 100 galaxies of neighboring civilizations.

If rogue AI takes over the world, then it would probably also be able to take over the surrounding hundred galaxies; colonizing some galaxies sounds feasible for an agent that can single-handedly take over the world. If the rogue AI did take over those galaxies, then I'm guessing they would be converted into paperclips or something of the like and thus have approximately zero value to us. The AI would also be unlikely to let any neighboring alien civilizations do anything we would value with those 100 galaxies.

Suppose instead there is an existential catastrophe due to a nanotechnology or biotechnology disaster. Then even if intelligent life never re-evolved on Earth, a neighboring alien civilization may be able to colonize those 100 galaxies and do something we would value with them.
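To make the comparison concrete, here is a toy expected-value sketch of the two scenarios. Every number in it is an illustrative assumption of mine, not something taken from the discussion above:

```python
# Toy comparison: how much of the local 100 galaxies ends up used for
# something we'd value, under two catastrophe scenarios.
# All probabilities and value weights below are made-up assumptions.

GALAXIES = 100

# Scenario 1: rogue AI takes over and converts the nearby galaxies
# into things of roughly zero value to us.
value_rogue_ai = GALAXIES * 0.0

# Scenario 2: a nano/bio catastrophe ends life on Earth, but a neighboring
# alien civilization may later colonize the region. Assume a 50% chance that
# happens, and that we'd value their use of the galaxies at 30% of our own.
p_alien_colonization = 0.5
relative_value_of_alien_use = 0.3
value_other_catastrophe = GALAXIES * p_alien_colonization * relative_value_of_alien_use

print(value_rogue_ai)           # 0.0  galaxy-equivalents of value
print(value_other_catastrophe)  # 15.0 galaxy-equivalents of value
```

On those made-up numbers, the non-AI catastrophe still leaves substantial expected value, while the AI takeover leaves essentially none; that asymmetry is the point I'm trying to make.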

Thus, I don't think the first two ifs you listed are essential for my reasoning to be relevant.

As for the third if: the claim that there isn't a single other alien civilization anywhere in the Universe is a conjunction over an enormous number of galaxies, and thus seems unlikely. However, if the density of current or future alien civilizations is so low that we will never be in the Hubble sphere of any of them, then that would make my reasoning less relevant.
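For a rough sense of scale, here is the arithmetic behind that intuition. The galaxy count is a commonly cited rough order of magnitude for the observable universe (the Hubble sphere is of a similar order); the civilization density is just the one assumed in the simplified example above:

```python
# Rough scale check: how many civilizations would we expect, and how low
# would the density have to be for the expected number to fall below one?
# The galaxy count is a rough commonly cited figure; the rest is illustrative.

galaxies = 2e11             # on the order of hundreds of billions of galaxies
civs_per_galaxy = 1 / 100   # density assumed in the simplified example

expected_civs = galaxies * civs_per_galaxy
print(f"expected civilizations: {expected_civs:.0e}")  # ~2e9

# For the expectation to drop below one, the density would need to fall
# below roughly one civilization per 2e11 galaxies.
print(f"break-even density: {1 / galaxies:.0e} civs per galaxy")  # ~5e-12
```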

Thoughts?

It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance.

People who are knowledgeable about investing, e.g. Ben Todd and Bayesian Investor, have already formed opinions about the future expected performance of different factors. Is there something wrong with non-advanced investors just following their advice? Perhaps this wouldn't be optimal, but I'm having a hard time seeing how it could be worse than not adding any tilts.

It also requires knowledge and time to determine how to appropriately incorporate factors into a portfolio and how to adjust exposure over time.

If a non-advanced investor using the recommended tilts merely maintains their current level of exposure when they shouldn't, it seems unlikely to me that they would under-perform a no-tilt strategy by much; even if the tilts no longer provide excess returns, I don't see why they would end up doing *worse* than the market. And perhaps some knowledgeable investor would eventually write a blog post saying you should stop tilting towards those factors.

I'm wondering why Todd suggests that "adding tilts to the portfolio for value, momentum and low volatility (either through security selection or asset selection or adding a long-short component) and away from assets owned for noneconomic reasons" should only be done if you know what you're doing. Bayesian Investor's recommendations seem to do this without requiring you to be very knowledgeable about investing.

What sort of other advice is out there that's somewhat conflicting but equally plausible? The only one I can think of is that you should basically just stick your money in whatever diversified index funds have the lowest fees. But even if this advice is just as plausible as your advice, your advice still seems worth taking. This is because if you're wrong and I follow your strategy anyway, pretty much the only cost I'm bearing is a small decrease in returns due to increased management fees. But if you're right and I don't follow your strategy, I'd miss out on a much larger amount of returns.
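To spell out that asymmetry with some illustrative numbers (the fee difference and the hoped-for factor premium below are my assumptions for the sake of the example, not figures from Todd or Bayesian Investor):

```python
# Toy cost/benefit asymmetry of following the tilted strategy.
# Both figures below are illustrative assumptions.

extra_fees = 0.002       # tilted funds cost, say, 0.2 percentage points more per year
factor_premium = 0.010   # hoped-for extra annual return from the tilts, if they work

# If the tilts don't work and I followed the advice, I lose roughly the extra fees.
cost_if_advice_wrong = extra_fees

# If the tilts do work and I ignored the advice, I miss the premium net of those fees.
cost_if_advice_ignored = factor_premium - extra_fees

print(f"cost of following the advice if it's wrong: {cost_if_advice_wrong:.1%}/yr")    # 0.2%/yr
print(f"cost of ignoring the advice if it's right:  {cost_if_advice_ignored:.1%}/yr")  # 0.8%/yr
```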

Interesting. Do you have any thoughts on why it's so hard to bring about behavior change?

Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?

Yes. Rationalists are likely to know about, and adjust for, overconfidence bias, and to avoid the base-rate fallacy. Presumably Bayesian Investor already knows that most people who thought they could beat the market were wrong, and thus took this into account when forming their belief that the strategy can beat the market.
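As a minimal sketch of the kind of base-rate adjustment I have in mind, with entirely hypothetical numbers:

```python
# Hypothetical base-rate adjustment for "this strategy beats the market".
# All three probabilities are made up for illustration.

p_works = 0.05                  # base rate: most proposed market-beating strategies fail
p_endorsed_given_works = 0.9    # a careful analyst endorses the strategy if it works
p_endorsed_given_fails = 0.3    # ...but might also endorse it even if it doesn't

# Bayes' rule: P(works | endorsed)
p_endorsed = (p_endorsed_given_works * p_works
              + p_endorsed_given_fails * (1 - p_works))
p_works_given_endorsed = p_endorsed_given_works * p_works / p_endorsed

print(f"{p_works_given_endorsed:.2f}")  # ~0.14: better than the 5% base rate, but far from certain
```

The point is just that someone who explicitly starts from the low base rate ends up far less confident than someone who ignores it.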

And it's not necessarily the case that Bayesian Investor's strategy is worth doing for everyone and that people just don't do it because they're stupid. The strategy carries more risk than average strategies, and this alone is possibly a good enough reason for most people to avoid it. Effective altruists, however, should probably be less risk averse and thus the strategy is more likely to be useful for them.

Also, we've been saying most people who think they can beat the market are wrong, but on reflection I'm not sure that's true. My understanding is that using leverage can, in expectation, result in you beating the market, and I suspect this is well known among those knowledgeable about investing. People just avoid doing it because it's very risky.
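The standard arithmetic behind that claim looks roughly like the following; the market return, volatility, and borrowing rate are all illustrative assumptions on my part:

```python
# Leverage raises expected return roughly linearly while also scaling risk.
# All figures are illustrative assumptions.

expected_market_return = 0.07   # assumed expected annual market return
market_volatility = 0.16        # assumed annual standard deviation of returns
borrow_rate = 0.03              # assumed cost of borrowing
leverage = 2.0                  # a 2x leveraged position

expected_levered_return = leverage * expected_market_return - (leverage - 1) * borrow_rate
levered_volatility = leverage * market_volatility

print(f"expected return: {expected_levered_return:.1%}")  # 11.0% vs. 7.0% unlevered
print(f"volatility:      {levered_volatility:.1%}")       # 32.0% vs. 16.0% unlevered
```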
