All of Evira's Comments + Replies

I think that in the future, people will stop acquiring food by making animals suffer in factory farms anyway. This is because people will presumably be able to live in virtual realities and efficiently create virtual food without causing any suffering. Thoughts?

2
Aaron Gertler
5y
If we reach a point in history where humans can upload themselves and survive without the consumption of physical resources, the world will look different in so many ways that almost every cause area we think about will be totally unrecognizable. This is a thing that could someday happen, but that doesn't mean there's any less value in bringing about the end of factory farming much sooner.

Thank you for the detailed response. Some responses to your points:

Our values might get locked in this century through technology or totalitarian politics, in which case we need to rush to reach something tolerable as quickly as possible;

I'm having a hard time thinking of how technology could lock in our values. One possibility is that AGI would be programmed to value what we currently value with no ability to have moral growth. However, it's not clear to me why anyone would do this. People, as best as I can tell, value moral growth and thus wou... (read more)

1
MichaelStJules
5y
I think many of your concerns will come down to views on the probabilities assigned to certain possibilities. Even then, the initial values given to the AGIs may have a huge influence, and some of these can be very subjective, e.g. how much extra weight more intense suffering should receive compared to less intense suffering (if any) or other things we care about, and how much suffering we think certain beings experience in given circumstances. Besides being sensitive to the initial views, people hold contradictory views, so could there not be more than one CEV here? Some will be better or worse than others according to EAs who care about the wellbeing of sentient individuals, and if we reduce the influence of worse views, this could make better solutions more likely.

It's of course possible, but is it almost inevitable that these leaders will value their own personal moral growth enough, and how many leaders will we go through before we get one that makes the right decision? Even if they do value personal moral growth, they still need to be exposed to ethical arguments or other reasons that would push them in a given direction. If the rights and welfare of certain groups of sentient beings are not on their radar, what can we expect from these leaders? Also, these seem to be extremely high expectations of politicians, who are fallible and often very self-interested, and especially in the case of totalitarian politics.

There is indeed evidence that people react this way. However, I can think of a few reasons why we shouldn't expect the risks to outweigh the possible benefits:

1. Concern for animal rights and welfare seems to be generally increasing (despite increasing consumption of animal products; this is not driven by changing attitudes on animals), and I think there is popular support for welfare reform in many places, with the success of corporate campaigns and improving welfare legislation as evidence for this, and attitude surveys generally. I think peop

I'm wondering what you mean when you say, "I think there are many 'AI catastrophes' that would be quite compatible with alien civs." Do you think that there are relatively probable existential catastrophes from rogue AI that would allow for alien colonization of Earth? I'm having a hard time thinking of any and would like to know your thoughts on the matter.

2
Max_Daniel
5y
Yes, that's what I think. First, consider the 'classic' Bostrom/Yudkowsky catastrophe scenario in which a single superintelligent agent with misaligned goals kills everyone and then, for instrumental reasons, expands into the universe. I agree that this would be a significant obstacle to alien civilizations (though not totally impossible - e.g. there's some, albeit perhaps tiny, chance that an expanding alien civilization could be a more powerful adversary, or there could be some kind of trade, or ...). However, I don't think we can be highly confident that this is what an existential catastrophe due to AI would look like. Cf. Christiano's What failure looks like, Drexler's Reframing Superintelligence, and also recent posts on AI risk arguments/scenarios by Tom Sittler and Richard Ngo. On some of the scenarios discussed there, I think it's hard to see whether they'd result in an obstacle to alien civilizations or not.

More broadly, I'd be wary of assigning very high confidence to any feature of a post-AI catastrophe world. AI that could cause an existential catastrophe is a technology we don't currently possess and cannot anticipate in all its details - therefore, I think it's quite likely that an actual catastrophe based on such AI would in at least some ways have unanticipated properties, i.e., would at least not completely fall into any category of catastrophe we currently anticipate. Relatively robust high-level considerations such as Omohundro's convergent instrumental goal argument can give us good reasons to nevertheless assign significant credence to some properties (e.g., a superintelligent AI agent seems likely to acquire resources), but I don't think they suffice for >90% credence in anything.

I'm not really considering AI ending all life in the universe. If I understand correctly, it is unlikely that we or future AI will be able to influence the universe outside of our Hubble sphere. However, there may be aliens that exist, or will in the future exist, in our Hubble sphere, and I think it would more likely than not be nice if they were able to make use of our galaxy and the ones surrounding it.

As a simplified example, suppose there is on average one technologically advanced civilization for every group of 100 galaxies. And each civilizatio... (read more)

It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance.

People who are knowledgeable about investing, e.g. Ben Todd and Bayesian Investor, have already formed opinions about the future expected performance of different factors. Is there something wrong with non-advanced investors just following their advice? Perhaps this wouldn't be optimal, but I'm having a hard time seeing how it could be worse than not adding any tilts.

It also requires
... (read more)
2
Brendon_Wong
5y
The potential downside (and upside) of diversifying by adding some tilts and consistently sticking with them is limited, so I don’t see a major problem with “non-advanced investors” following the advice. Investors should be aware of things like rebalancing and capital gains tax; perhaps “intermediate investor” is a better term.

I'm wondering why Todd suggests that "adding tilts to the portfolio for value, momentum and low volatility (either through security selection or asset selection or adding a long-short component) and away from assets owned for noneconomic reasons" should only be done if you know what you're doing. Bayesian Investor's recommendations seem to do this without requiring you to be very knowledgeable about investing.

2
Brendon_Wong
5y
It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance. It also requires knowledge and time to determine how to appropriately incorporate factors into a portfolio and how to adjust exposure over time. For example, what should be done if a factor underperforms the market for a noticeable period of time? An investor needs to decide whether to reduce or eliminate exposure to a factor or not. Holding an investment that will continue to underperform is bad, but selling an investment that is experiencing cyclical underperformance is a bad timing decision which will worsen performance each time such an error is made. As a concrete example, the momentum factor has had notable crashes throughout history that could cause concern and uncertainty among investors that were not expecting that behavior. Decisions to add factors to portfolios need to take into account maintaining an appropriate level of diversification, tax concerns (selling a factor fund could incur capital gains taxes, and factor mutual funds will pass on to investors the capital gains they incur while following factors, whereas factor ETFs almost certainly won't), and the impact of fees, among other considerations.
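To make the cost of a mistimed exit concrete, here is a toy sketch in Python with invented return and tax numbers (they are not from this thread); it compares holding a factor fund through a cyclical dip against selling after the dip, paying capital gains tax on the sale, and missing the rebound year:

```python
# Toy illustration with made-up numbers: holding a factor fund through a
# cyclical dip vs. selling after the dip and missing the rebound year.

returns = [0.10, 0.12, -0.15, 0.25, 0.08]  # hypothetical annual factor returns
cap_gains_rate = 0.15                       # hypothetical tax rate on realised gains

# Investor A: buys at 1.0 and holds for all five years.
hold = 1.0
for r in returns:
    hold *= 1 + r

# Investor B: identical until the -15% year, then sells in discouragement,
# pays tax on the gain realised so far, sits out the +25% rebound, and
# re-enters for the final year.
value = 1.0
for r in returns[:3]:                            # years 0-2, ending with the dip
    value *= 1 + r
value -= max(value - 1.0, 0.0) * cap_gains_rate  # tax on realised gain at sale
value *= 1 + returns[4]                          # misses returns[3], re-enters after

print(f"buy and hold:       {hold:.3f}x")   # about 1.41x
print(f"sell after the dip: {value:.3f}x")  # about 1.12x
```

Even a single error of this kind leaves a lasting gap, and repeated mistimed exits compound it further.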

What sort of other advice is out there that's somewhat conflicting but equally plausible? The only one I can think of is that you should basically just stick your money in whatever diversified index funds have the lowest fees. But even if this advice is just as plausible as your advice, your advice still seems worth taking. This is because if you're wrong and I follow your strategy anyways, pretty much the only cost I'm bearing is decreasing my returns by only a small amount due to increased management fees. But if you're right and I don't follow your strategy, I'd miss out on a much larger amount of returns.
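A rough expected-value sketch, with illustrative numbers that are my own rather than anything claimed in the thread, makes this asymmetry explicit:

```python
# Made-up numbers, just to make the asymmetry explicit: a plain index
# portfolio vs. a tilted one, under the two cases "the tilts add nothing"
# and "the tilts work as claimed".

p_tilts_work = 1 / 3          # hypothetical probability the advice is right
index_return = 0.07           # baseline expected annual return
tilt_edge = 0.02              # extra annual return if the tilts actually work
extra_fees = 0.002            # added management fees of the factor funds

tilted_if_wrong = index_return - extra_fees               # small loss: fees only
tilted_if_right = index_return + tilt_edge - extra_fees   # larger gain

expected_tilted = (p_tilts_work * tilted_if_right
                   + (1 - p_tilts_work) * tilted_if_wrong)

print(f"plain index:            {index_return:.3%}")
print(f"tilted, if tilts fail:  {tilted_if_wrong:.3%}")   # only 0.2% worse
print(f"tilted, if tilts work:  {tilted_if_right:.3%}")   # 1.8% better
print(f"tilted, expected value: {expected_tilted:.3%}")
```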

5
PeterMcCluskey
5y
Here are a few examples of strategies that look (or looked) equally plausible, from the usually thoughtful blog of my fellow LessWronger Colby Davis. This blog post recommends:
- emerging markets, which overlaps a fair amount with my advice
- put-writing, which sounds reasonable to me, but he managed to pick a bad time to advocate it
- preferred stock, which looks appropriate today for more risk-averse investors, but which looked overpriced when I wrote my post.

This post describes one of his failures. Buying XIV was almost a great idea. It was a lot like shorting VXX, and shorting VXX is in fact a good idea for experts who are cautious enough not to short too much (alas, the right amount of caution is harder to know than most people expect). I expect the rewards in this area to go only to those who accept hard-to-evaluate risks.

This post has some strategies that require more frequent trading. I suspect they're good, but I haven't given them enough thought to be confident.

Interesting. Do you have any thoughts on why it's so hard to invoke behavior change?

3
Brendon_Wong
5y
There are various frameworks like the transtheoretical model (TTM) that try to explain why individual behavior change is difficult. There are many prerequisites to change, like making people aware there may be an issue in the first place, convincing them that the possible problem is an actual problem, persuading them that the issue is urgent enough that they should work on it in the near future, and helping them develop an effective plan of action. There are reasons why people do not proceed ahead at every step of change, like smokers believing that smoking is not harmful to them, or a perceived lack of urgency or time to implement changes in the near future. This problem may be magnified within organizations, because multiple people within an organization often need to agree that change is necessary and should be implemented before anything gets done, and anyone in the chain of command who disagrees could prevent the intended change from happening.

Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?

Yes. Rationalists are likely to know about, and adjust for, overconfidence bias, and to avoid the base-rate fallacy. Presumably Bayesian Investor already knows that most people who thought they could beat the market were wrong, and thus took this into account when forming their belief that the strategy can beat the market.
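A minimal Bayes-rule sketch, with made-up numbers rather than anyone's actual estimates, shows what adjusting for this base rate looks like:

```python
# Rough Bayes-rule sketch (invented numbers, not anyone's in the thread) of
# the base-rate adjustment described above: start from a low prior that a
# given person who thinks they can beat the market actually can, then update
# on a long successful track record.

prior_skill = 0.05            # base rate: most who think they can beat it can't
p_record_given_skill = 0.8    # chance a genuinely skilled investor shows this record
p_record_given_luck = 0.1     # chance an unskilled investor shows it by luck

evidence = (p_record_given_skill * prior_skill
            + p_record_given_luck * (1 - prior_skill))
posterior_skill = p_record_given_skill * prior_skill / evidence

print(f"posterior P(skill | track record) = {posterior_skill:.2f}")  # about 0.30
```

With a low enough prior, even a strong track record leaves substantial probability that the outperformance was luck.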

And it's not necessarily the case that Bayesian Investor's strategy is worth doing for everyone and that p... (read more)

3
PeterMcCluskey
5y
Hi, I'm Bayesian Investor. I doubt that following my advice would be riskier than the S&P 500 - the low volatility funds reduce the risk in important ways (mainly by moving less in bear markets) that roughly offset the features which increase risk. It's rational for most people to ignore my advice, because there's lots of other (somewhat conflicting) advice out there that sounds equally plausible to most people. I've got lots of evidence about my abilities (I started investing as a hobby in 1980, and it's been my main source of income for 20 years). But I don't have an easy way to provide much evidence of my abilities in a single blog post.

I agree that Bayesian Investor's strategy has a high chance of not beating the market, has somewhat higher risk, and would probably result in you occasionally rebalancing your portfolio, but it still seems like it's very much worth using or at least worth having someone look into.

The funds Bayesian Investor suggests you invest in are ETFs, which I think decreases the need for doing much rebalancing. And rebalancing takes little time. All you need to do is buy and sell from a handful of ETFs; I doubt this would take much more than an hour or so. T... (read more)
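For illustration, a minimal sketch of what such a rebalance amounts to; the tickers and target weights below are placeholders, not recommendations:

```python
# Minimal sketch of an occasional rebalance: given current holdings and
# target weights (placeholder tickers and weights), compute the dollar
# amount to buy or sell in each fund to restore the targets.

holdings = {"ETF_A": 12000.0, "ETF_B": 5000.0, "ETF_C": 3000.0}  # current value ($)
targets = {"ETF_A": 0.50, "ETF_B": 0.30, "ETF_C": 0.20}          # target weights

total = sum(holdings.values())
for fund, weight in targets.items():
    trade = weight * total - holdings[fund]
    action = "buy" if trade > 0 else "sell"
    print(f"{fund}: {action} ${abs(trade):,.2f}")
```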

1
[anonymous]
5y
I don't know much about investing, but a couple of quick comments might be helpful:
- Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?
- 1/3 chance seems possibly orders of magnitude too high to me.