All of ESRogs's Comments + Replies

Democratising Risk - or how EA deals with critics

opinion which ... is mainly advocated by billionaires

Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate?

I don't think either claim is true (or even close to true).

It's also not the claim being made:

...minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by [them]...

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

The RSP idea is cool.

Dumb question — what part of the post does this refer to?

Ben_West (2mo): Apologies, that line was referring to something from an earlier draft of this post. I've removed it from my comment.
Formalizing the cause prioritization framework

One reason to keep Tractability separate from Neglectedness is to distinguish between "% of problem solved / extra dollars from anyone" and "% of problem solved / extra dollars from you".

In theory, anybody's marginal dollar is just as good as anyone else's. But by making the distinction explicit, it forces you to consider where on the marginal utility curve we actually are. If you don't track how many other dollars have already been poured into solving a problem, you might be overly optimistic about how far the next dollar will go.
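To make the marginal-utility-curve point concrete, here's a toy sketch (my own illustration, not from the post; the exponential curve and $1M scale are arbitrary assumptions):

```python
import math

# Toy illustration of why tracking prior funding matters: with diminishing returns,
# "% of problem solved per extra dollar" depends on how many dollars have already
# gone in. The functional form and scale below are made up.
def pct_solved(total_dollars: float, scale: float = 1e6) -> float:
    return 1 - math.exp(-total_dollars / scale)

def marginal_value_per_dollar(already_spent: float, extra: float = 1.0) -> float:
    return pct_solved(already_spent + extra) - pct_solved(already_spent)

print(marginal_value_per_dollar(0))      # a dollar into a neglected problem
print(marginal_value_per_dollar(5e6))    # the same dollar into a crowded one
```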

I think this may be close to the reason Holden(?) originally had in mind when he included neglectedness in the framework.

Michael_Wiebe (3mo): I'm not sure I follow. In my framework, "how many other dollars have already been poured into solving a problem" is captured by crowdedness, i.e., total resources allocated to the problem, i.e., the position on the x-axis.
Should Grants Fund EA Projects Retrospectively?

Note that Vitalik Buterin has also recently started promoting related ideas: Retroactive Public Goods Funding

Pablo (5mo): Thanks for linking to that article, which I hadn't seen. I updated the 'certificates of impact' entry with a brief summary of the proposal.
aaronhamlin (1y): We may have a campaign in 2021 (our initial play is riskier here), but we can't say yet for sure. We have other cities lined up for 2022. What I can say is all the cities we have in mind are over 750,000 people and are all very well known. We've laid out a strategic plan involving polling and legal analysis to factor in where to prioritize given our available funding. We're working for a surprise win in 2021 to excite our funders.
Uncorrelated Investments for Altruists

Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance

I wonder if it makes sense to rebalance more frequently when volatility (or trading volume) is high.

MichaelDickens (1y): That could help. "Standard" trendfollowing rebalances monthly because it's simple, frequent enough to capture most changes in trends, but infrequent enough that it doesn't incur a lot of transaction costs. But there could be more complicated approaches that do a better job of capturing trends without incurring too many extra costs. One idea I've considered is to look at buy-side signals monthly but sell-side signals daily, so if the market switches from a positive to a negative trend, you'll sell the following day, but if it switches back, you won't buy until the next month. On the backtests I ran, it seemed to work reasonably well. These were the results of a backtest I ran using the Ken French data on US stock returns 1926-2018:

              CAGR   Stdev   Ulcer   Trades/Yr
B&H            9.5    16.8    23.0
Monthly        9.3    11.7    14.4     1.4
Daily         10.7    11.0     9.6     5.1
Sell-Daily     9.7    10.3     9.2     2.3
Buy-Daily     10.6    12.3    12.3     1.8

("Ulcer" is the ulcer index [http://www.tangotools.com/ui/ui.htm], which IMO is a better measure of downside risk than standard deviation. It basically tells you the frequency and severity of drawdowns.)
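For what it's worth, here is a rough sketch of the asymmetric rule described above (not MichaelDickens's actual backtest code; it assumes a daily price series with a DatetimeIndex and uses a 200-day moving average as a stand-in trend signal):

```python
import pandas as pd

def asymmetric_trend_positions(prices: pd.Series, window: int = 200) -> pd.Series:
    """Evaluate the sell signal daily, but only act on the buy signal at month end."""
    ma = prices.rolling(window).mean()
    above_trend = prices > ma

    invested = False
    positions = []
    for date, above in above_trend.items():
        if invested and not above:
            invested = False                 # sell side: checked every day
        elif (not invested) and above and date.is_month_end:
            invested = True                  # buy side: only at (calendar) month end
        positions.append(invested)
    return pd.Series(positions, index=prices.index)
```

A real backtest would use the last trading day of the month rather than the calendar month end, and account for transaction costs; this is just meant to show the buy/sell asymmetry.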
Uncorrelated Investments for Altruists

The AlphaArchitect funds are more expensive than Vanguard funds, but they're just as cheap after adjusting for factor exposure.

Do you happen to have the numbers available that you used for this calculation? Would be curious to see how you're doing the adjustment for factor exposure.

MichaelDickens (1y): I'm not sure how to calculate it precisely, I think you'd want to run a regression where the independent variable is the value factor and the dependent variable is the fund or strategy being considered. But roughly speaking, a Vanguard value fund holds the 50% cheapest stocks (according to the value factor), while QVAL and IVAL hold the 5% cheapest stocks, so they are 10x more concentrated, which loosely justifies a 10x higher expense ratio. Although 10x higher concentration doesn't necessarily mean 10x more exposure to the value factor, it's probably substantially less than that.

I just ran a couple of quick regressions using Ken French data [https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html], and it looks like if you buy the top half of value stocks (size-weighted) while shorting the market, that gives you 0.76 exposure to the value factor, and buying the top 10% (equal-weighted) while shorting the market gives you 1.3 exposure (so 1.3 is the slope of a regression between that strategy and the value factor). Not sure I'm doing this right, though.

To look at it another way, the top-half portfolio described above had a 5.4% annual return (gross), while the top-10% portfolio returned 12.8% (both had similar Sharpe ratios). Note that most of this difference comes from the fact that the first portfolio is size-weighted and the second is equal-weighted; I did it that way because most big value funds are size-weighted, while QVAL/IVAL are equal-weighted.
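If it helps, this is roughly how the regression described above could be run (a sketch; it assumes you already have aligned monthly series of the fund's excess returns and the Ken French HML value factor):

```python
import numpy as np
import statsmodels.api as sm

# Regress a fund's excess returns on the value factor; the slope is the fund's
# value-factor loading. `fund_excess` and `hml` are assumed to be aligned arrays
# of monthly returns (fund return minus risk-free rate, and long-minus-short value).
def value_factor_loading(fund_excess: np.ndarray, hml: np.ndarray) -> float:
    X = sm.add_constant(hml)               # intercept = alpha, slope = factor exposure
    result = sm.OLS(fund_excess, X).fit()
    return result.params[1]
```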
Uncorrelated Investments for Altruists

Looking at historical performance of those Alpha Architects funds (QVAL, etc), it looks like they all had big dips in March 2020 of around 25%, at the same time as the rest of the market.

And I've heard it claimed that assets in general tend to be more correlated during drawdowns.

If that's so, it seems to mitigate to some extent the value of holding uncorrelated assets, particularly in a portfolio with leverage, because it means your risk of margin call is not as low as you might otherwise think.

Have you looked into this issue of correlations during drawdowns, and do you think it changes the picture?

MichaelDickens (1y): The AlphaArchitect funds (except for VMOT) are long-only, so they're going to be pretty correlated with the market. The idea is you buy those funds (or something similar) while simultaneously shorting the market.

This is true. Factors aren't really asset classes, but it's still true for some factors. This AQR paper [https://www.aqr.com/Insights/Research/Alternative-Thinking/It-Was-the-Worst-of-Times-Diversification-During-a-Century-of-Drawdowns] looked at the performance of a bunch of diversifiers during drawdowns and found that trendfollowing provided good return, as did "styles", by which they mean a long/short factor portfolio consisting of the value, momentum, carry, and quality factors. I'd have to do some more research to say how each of those four factors have tended to perform during drawdowns, so take this with a grain of salt, but IIRC:

* value and carry tend to perform somewhat poorly
* quality tends to perform well
* momentum tends to perform well during drawdowns, but then performs really badly when the market turns around (e.g., this happened in 2009)

I'm talking about long/short factors here, so e.g., if the value factor has negative performance, that means long-only value stocks perform worse than the market.

Also, short-term trendfollowing (e.g., 3-month moving average) tends to perform better during drawdowns than long-term trendfollowing (~12 month moving average), but it has worse long-run performance, and both tend to beat the market, so IMO it makes more sense to use long-term trendfollowing.

We never know how this will continue in the future. For example, the 2020 drawdown happened much more quickly than usual—the market dropped around 30% in a month, as opposed to, say, the 2000-2002 drawdown, where the market dropped 50% over the course of two years. Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance, although it happened to perform reasonably well this year. There's a lot more I c
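One rough way to check the "correlations rise in drawdowns" worry for a given pair of series is to compare the correlation during market drawdowns with the overall correlation (my own sketch; inputs are assumed to be aligned monthly return series, and the 10% drawdown threshold is arbitrary):

```python
import pandas as pd

# Compare an asset's correlation with the market during drawdowns vs. overall.
def drawdown_vs_overall_corr(market: pd.Series, asset: pd.Series, threshold: float = 0.10):
    wealth = (1 + market).cumprod()
    drawdown = 1 - wealth / wealth.cummax()      # fraction below the running peak
    in_drawdown = drawdown > threshold

    overall = market.corr(asset)
    during = market[in_drawdown].corr(asset[in_drawdown])
    return overall, during
```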
"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Ah, good point! This was not already clear to me. (Though I do remember thinking about these things a bit back when Piketty's book came out.)

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

I just feel like I don't know how to think about this because I understand too little finance and economics

Okay, sounds like we're pretty much in the same boat here. If anyone else is able to chime in and enlighten us, please do so!

Max_Daniel (1y): I thought about this for another minute, and realized one thing that hadn't been salient to me previously. (Though quite possibly it was clear to you, as the point is extremely basic. It also doesn't directly answer the question about whether we should expect stock returns to exceed GDP growth indefinitely.)

When thinking about whether X can earn returns that exceed economic growth, a key question is what share of those returns is reinvested into X. For example, suppose I now buy stocks that have fantastic returns, but I spend all those returns to buy chocolate. Then those stocks won't make up an increasing share of my wealth. This would only happen if I used the returns to buy more stocks, and they kept earning higher returns than other stuff I own.

In particular, the simple argument that returns can't exceed GDP growth forever only follows if returns are reinvested and 'producing' more of X doesn't have too steeply diminishing returns.

For example, two basic 'accounting identities' from macroeconomics are:

1. β = s/g
2. α = rβ

Here, s is the savings rate (i.e. the fraction of total income that is saved, which in equilibrium equals investments into capital), g is the rate of economic growth, and r is the rate of return on capital. These equations are essentially definitions, but it's easy to see that (in a simple macroeconomic model with one final good, two factors of production, etc.) β can be viewed as the capital-to-income ratio and α as capital's share of income.

Note that from equations 1 and 2 it follows that r/g = α/s. Thus we see that r exceeds g in equilibrium/'forever' if and only if α > s - in other words, if and only if (on average across the whole economy) not all of the returns from capital are re-invested into capital. (Why would that ever happen? Because individual actors maximize their own welfare, not aggregate growth. So e.g. they might prefer to spend some share of capital returns on consumption.)

Analog remarks apply to other situations where a basic model o
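Spelling out the one step of algebra behind that conclusion:

```latex
% From \beta = s/g and \alpha = r\beta:
\[
  \alpha = r\beta = r \cdot \frac{s}{g}
  \quad\Longrightarrow\quad
  \frac{r}{g} = \frac{\alpha}{s},
\]
% so r exceeds g in equilibrium exactly when \alpha > s, i.e. when not all capital
% returns are reinvested into capital.
```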
"Patient vs urgent longtermism" has little direct bearing on giving now vs later

My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn't really a consensus for what explains it.

Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns. Does that tell us about the difference between equity returns and GDP growth?

A priori, would you expect both equities and treasuries to have returns that match GDP growth?

Max_Daniel (1y): Yes, that's my understanding as well. I don't know, my sense is not directly but I could be wrong. I think I was gesturing at this because I took it as evidence that we don't understand why equities have such high return. (But then it is an additional contingent fact that these returns don't just exceed bond returns but also GDP growth.) I don't think I'd expect this, at least not with high confidence - but overall I just feel like I don't know how to think about this because I understand too little finance and economics. (In particular, it's plausible to me that there are strong a priori arguments about the relationships between GDP growth, bond returns, and equity returns - I just don't know what they are.)
"Patient vs urgent longtermism" has little direct bearing on giving now vs later

But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.

Shouldn't investing into the stock market be considered a source of average returns, by default? In the long run, the stock market grows at the same rate as GDP

If you think you have some edge, that might be a reason to pick particular stocks (as I sometimes do) and expect returns above GDP growth.

But generically I don't think the stock market should be considered a source of above-average returns. Am I m... (read more)

MichaelDickens (1y): The stock market should grow faster than GDP in the long run. Three different simple arguments for this:

1. This falls out of the commonly-used Ramsey model [https://plato.stanford.edu/entries/ramsey-economics/]. Specifically, because people discount the future, they will demand that their investments give better return than the general economy.
2. Corporate earnings should grow at the same rate as GDP, and stock price should grow at the same rate as earnings. But stock investors also earn dividends, so your total return should exceed GDP in the long run. (The reason this works is because in aggregate, investors spend the dividends rather than re-investing them.)
3. Stock returns are more volatile than economic growth, so they should pay a risk premium even if they don't have a higher risk-adjusted return.
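A worked version of argument 2, with made-up round numbers of my own (not from the comment):

```latex
% If prices track earnings, which track GDP, and dividends are paid out on top:
\[
  r_{\text{total}} \approx g_{\text{GDP}} + y_{\text{div}} = 3\% + 2\% = 5\% > g_{\text{GDP}},
\]
% and the gap can persist because, in aggregate, the dividends are consumed rather
% than reinvested.
```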
Max_Daniel (1y): [Low confidence as I don't really understand anything about finance.]

It sounds right to me that the stock market can't grow more quickly than GDP forever. However, it seems like it has been doing so for decades, and that there is no indication that this will stop very soon - say, within 10 years. (My superficial impression is that this phenomenon [https://en.wikipedia.org/wiki/Equity_premium_puzzle] is somewhat surprising a priori, but that there isn't really a consensus for what explains it.) Therefore, in particular, for the window of time made available by moving spending from now to, say, in 1 year, it seems you can earn returns on the stock market that exceed world economic growth.

If we know that this can't continue forever, it seems to me this would be more relevant for the part where I say "future longtermists would invest in the stock market rather than engaging in 'average activities' that earn average returns" etc. More precisely, the key question we need to ask about any longtermist investment-like spending opportunity seems to be: After the finite window of above-average growth from that opportunity, will there still be other opportunities that, from a longtermist perspective, have returns that exceed average economic growth? If yes, then it is important whether the distant returns from investment-like longtermist spending end up with longtermists; if no, then it's not important.
Thoughts on whether we're living at the most influential time in history

You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.

But, there (probably) weren't any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).

Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influ... (read more)

Thoughts on whether we're living at the most influential time in history

Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can't influence us, is pretty significant, and should be factored into the argument somewhere.)

I wouldn't call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.

ESRogs (1y): You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.

But, there (probably) weren't any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).

Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influence has grown.

Let's maybe call the first kind of influence time-priority, and the second agency. So, since the Big Bang, the level of time-priority influence available in the universe has gone way down, but the level of aggregate agency in the universe has gone way up.

On a super simple model that just takes these two into account, you might multiply them together to get the total influence available at a certain time (and then divide by the number of people alive at that time to get the average person's influence). This number will peak somewhere in the middle (assuming it's zero both at the Big Bang and at the Heat Death).

That maybe doesn't tell you much, but then you could start taking into account some other considerations, like how x-risk could result in a permanent drop of agency down to zero. Or how perhaps there's an upper limit on how much agency is potentially available in the universe.

In any case, it seems like the direction of causality should be a pretty important part of the analysis (even if it points in the opposite direction of another factor, like increasing agency), either as part of the prior or as one of the first things you update on.
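Here is a throwaway numerical version of that "super simple model" (my own toy illustration; the functional forms are arbitrary):

```python
import numpy as np

# Time-priority influence decays over time, aggregate agency grows from ~zero and
# saturates, and total influence is their product, which peaks somewhere in between.
t = np.linspace(0, 1, 101)                  # 0 = Big Bang, 1 = heat death (stylized)
time_priority = 1 - t                       # strictly decreasing
agency = 1 / (1 + np.exp(-12 * (t - 0.5)))  # ~0 early, saturating later (arbitrary form)
total_influence = time_priority * agency

print(t[np.argmax(total_influence)])        # the peak lands strictly between 0 and 1
```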
Thoughts on whether we're living at the most influential time in history

And the current increase in hinginess seems unsustainable, in that the increase in hinginess we’ve seen so far leads to x-risk probabilities that lead to drastic reduction of the value of worlds that last for eg a millennium at current hinginess levels.

Didn't quite follow this part. Are you saying that if hinginess keeps going up (or stays at the current, high level), that implies a high level of x-risk as well, which means that, with enough time at that hinginess (and therefore x-risk) level, we'll wipe ourselves out; and therefore that we can't have sust... (read more)

Buck (1y): Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.
Are we living at the most influential time in history?

Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point.

Doesn't the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you've done here.)

Who should / is going to win 2020 FLI award 2020?

The Nobel Prize comes with a million dollars (9,000,000 SEK). 50k doesn't seem like that much, in comparison.

EA reading list: miscellaneous

Another Karnofsky series that I thought was important (and perhaps doesn't fit anywhere else) is his posts on The Straw Ratio.

What's the big deal about hypersonic missiles?
ballistic ones are faster, but reach Mach 20 and similar speeds outside of the atmosphere

This seems notable, since there is no sound w/o atmosphere. So perhaps ballistic missiles never actually engage in hypersonic flight, despite reaching speeds that would be hypersonic if in the atmosphere? Though I would be surprised if they're reaching Mach 20 at a high altitude and then not still going super fast (above Mach 5) on the way down.

Lancer21 (2y): Exactly, ballistic missiles (or, at this point of the strike, their warheads) slow down when reentering the atmosphere - just like satellites and space capsules containing astro/cosmo/spationauts - to much slower speeds. The 2-digit Mach speeds are reached only outside of the atmosphere.
What's the big deal about hypersonic missiles?
according to Thomas P. Christie (DoD director of Operational Test and Evaluation from 2001–2005) current defense systems “haven’t worked with any degree of confidence”.[12] A major unsolved problem is that credible decoys are apparently “trivially easy” to build, so much so that during missile defense tests, balloon decoys are made larger than warheads--which is not something a real adversary would do. Even then, tests fail 50% of the time.

I didn't follow this. What are the decoys? Are they made by the attacki... (read more)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Thanks! Just read it.

I think there's a key piece of your thinking that I don't quite understand / disagree with, and it's the idea that normativity is irreducible.

I think I follow you that if normativity were irreducible, then it wouldn't be a good candidate for abandonment or revision. But that seems almost like begging the question. I don't understand why it's irreducible.

Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn't seem like it poses any... (read more)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
Don't Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
...
One could argue that R_CDT sympathists don't actually have much stronger intuitions regarding the first principle than the second -- i.e. that their intuitions aren't actually very "targeted" on the first o
... (read more)
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
There may be a pretty different argument here, which you have in mind. I at least don't see it yet though.

Perhaps the argument is something like:

  • "Don't make things worse" (DMTW) is one of the intuitions that leads us to favoring R_CDT
  • But the actual policy that R_CDT recommends does not in fact follow DMTW
  • So R_CDT only gets intuitive appeal from DMTW to the extent that DMTW was about R_'s, and not about P_'s
  • But intuitions are probably(?) not that precisely targeted, so R_CDT shouldn't get to claim the full intuitive endors
... (read more)
Ben Garfinkel (2y): Here are two logically inconsistent principles that could be true:

Don't Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.

Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.

I have strong intuitions that the first one is true. I have much weaker (comparatively negligible) intuitions that the second one is true. Since they're mutually inconsistent, I reject the second and accept the first. I imagine this is also true of most other people who are sympathetic to R_CDT.

One could argue that R_CDT sympathists don't actually have much stronger intuitions regarding the first principle than the second -- i.e. that their intuitions aren't actually very "targeted" on the first one -- but I don't think that would be right. At least, it's not right in my case.

A more viable strategy might be to argue for something like a meta-principle:

The 'Don't Make Things Worse' Meta-Principle: If you find "Don't Make Things Worse" strongly intuitive, then you should also find "Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse" just about as intuitive.

If the meta-principle were true, then I guess this would sort of imply that people's intuitions in favor of "Don't Make Things Worse" should be self-neutralizing. They should come packaged with equally strong intuitions for another position that directly contradicts it. But I don't see why the meta-principle should be true. At least, my intuitions in favor of the meta-principle are way less strong than my intuitions in favor of "Don't Make Things Worse" :)
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
both R_UDT and R_CDT imply that the decision to commit yourself to a two-boxing policy at the start of the game would be rational

That should be "a one-boxing policy", right?

Ben Garfinkel (2y): Yep, thanks for the catch! Edited to fix.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Thanks! This is helpful.

It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn't even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reactio
... (read more)
Ben Garfinkel (2y): Hey again! I appreciated your comment on the LW post. I started writing up a response to this comment and your LW one, back when the thread was still active, and then stopped because it had become obscenely long. Then I ended up badly needing to procrastinate doing something else today. So here's an over-long document [https://docs.google.com/document/d/1AOpIeU_vIqxxwviysBKZLnHnzhjwZGMPCN6oHu5Bmeg/edit?usp=sharing] I probably shouldn't have written, which you are under no social obligation to read.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
But the arguments I've seen for "CDT is the most rational decision theory" to date have struck me as either circular, or as reducing to "I know CDT doesn't get me the most utility, but something about it just feels right".

It seems to me like they're coming down to saying something like: the "Guaranteed Payoffs Principle" / "Don't Make Things Worse Principle" is more core to rational action than being self-consistent. Whereas others think self-consistency is more important.

Mind you, if the sentence
... (read more)
RobBensinger (2y): The main argument against CDT (in my view) is that it tends to get you less utility (regardless of whether you add self-modification so it can switch to other decision theories). Self-consistency is a secondary issue.

FDT gets you more utility than CDT. If you value literally anything in life more than you value "which ritual do I use to make my decisions?", then you should go with FDT over CDT; that's the core argument.

This argument for FDT would be question-begging if CDT proponents rejected utility as a desirable thing. But instead CDT proponents who are familiar with FDT agree utility is a positive, and either (a) they think there's no meaningful sense in which FDT systematically gets more utility than CDT (which I think is adequately refuted by Abram Demski [https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory#comments]), or (b) they think that CDT has other advantages that outweigh the loss of utility (e.g., CDT feels more intuitive to them). The latter argument for CDT isn't circular, but as a fan of utility (i.e., of literally anything else in life), it seems very weak to me.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Just want to note that I found the R_ vs P_ distinction to be helpful.

I think using those terms might be useful for getting at the core of the disagreement.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
is more relevant when trying to judge the likelihood of a criterion of rightness being correct

Sorry to drop in in the middle of this back and forth, but I am curious -- do you think it's quite likely that there is a single criterion of rightness that is objectively "correct"?

It seems to me that we have a number of intuitive properties (meta criteria of rightness?) that we would like a criterion of rightness to satisfy (e.g. "don't make things worse", or "don't be self-effacing"). And so far there doesn't se... (read more)

Ben Garfinkel (2y): Happy to be dropped in on :)

I think it's totally conceivable that no criterion of rightness is correct (e.g. because the concept of a "criterion of rightness" turns out to be some spooky bit of nonsense that doesn't really map onto anything in the real world.) I suppose the main things I'm arguing are just that:

1. When a philosopher expresses support for a "decision theory," they are typically saying that they believe some claim about what the correct criterion of rightness is.
2. Claims about the correct criterion of rightness are distinct from decision procedures.
3. Therefore, when a member of the rationalist community uses the word "decision theory" to refer to a decision procedure, they are talking about something that's pretty conceptually distinct from what philosophers typically have in mind. Discussions about what decision procedure performs best or about what decision procedure we should build into future AI systems [[EDIT: or what decision procedure most closely matches our preferences about decision procedures]] don't directly speak to the questions that most academic "decision theorists" are actually debating with one another.

I also think that, conditional on there being a correct criterion of rightness, R_CDT is more plausible than R_UDT. But this is a relatively tentative view. I'm definitely not a super hardcore R_CDT believer.

I guess here -- in almost definitely too many words -- is how I think about the issue here. (Hopefully these comments are at least somewhat responsive to your question.) It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another. One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn't even c
RobBensinger (2y): I mostly agree with this. I think the disagreement between CDT and FDT/UDT advocates is less about definitions, and more about which of these things feels more compelling:

1. On the whole, FDT/UDT ends up with more utility. (I think this intuition tends to hold more force with people the more emotionally salient "more utility" is to you. E.g., consider a version of Newcomb's problem where two-boxing gets you $100, while one-boxing gets you $100,000 and saves your child's life.)

2. I'm not the slave of my decision theory, or of the predictor, or of any environmental factor; I can freely choose to do anything in any dilemma, and by choosing to not leave money on the table (e.g., in a transparent Newcomb problem with a 1% chance of predictor failure where I've already observed that the second box is empty), I'm "getting away with something" and getting free utility that the FDT agent would miss out on. (I think this intuition tends to hold more force with people the more emotionally salient it is to imagine the dollars sitting right there in front of you and you knowing that it's "too late" for one-boxing to get you any more utility in this world.)

There are other considerations too, like how much it matters to you that CDT isn't self-endorsing. CDT prescribes self-modifying in all future dilemmas so that you behave in a more UDT-like way. It's fine to say that you personally lack the willpower to follow through once you actually get into the dilemma and see the boxes sitting in front of you; but it's still the case that a sufficiently disciplined and foresightful CDT agent will generally end up behaving like FDT in the very dilemmas that have been cited to argue for CDT. If a more disciplined and well-prepared version of you would have one-boxed, then isn't there something off about saying that two-boxing is in any sense "correct"? Even the act of praising CDT seems a bit self-destructive here, inasmuch as (a) CDT prescribes ditching CDT, an
'Longtermism'

IMHO the most natural name for "people at any time have equal value" should be something like temporal indifference, which more directly suggests that meaning.

Edit: I retract temporal indifference in favor of Holly Elmore's suggestion of temporal cosmopolitanism.

'Longtermism'
Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.

Why not include this in the definition of strong longtermism, but not weak longtermism?

Having longtermism just mean "caring a lot about the long-term future" seems the most natural and least likely to cause confusion. I think for it to mean anything other than that, you're going to have to keep beating people over the head with the definition (analogous to the sorry state of the phrase, "begs the que... (read more)


Inadequacy and Modesty
  1. Extra potency may arise if the product is important enough to affect the market or indeed the society it operates in, creating a feedback loop (what George Soros calls reflexivity). The development of credit derivatives and subsequent bust could be a devastating example of this. And perhaps ‘the Big Short’ is a good illustration of Eliezer’s points.

Could you say more about this point? I don't think I understand it.

My best guess is that it means that when changes to the price of an asset result in changes out in the world, which in turn cause the asset ... (read more)

LukeDing (4y): Thanks for the question and the opportunity to clarify. (I think I may have inadvertently overemphasised the negative potentials in my post.) Yes there is a feedback loop, but it doesn't have to result in a correction. I think cryptocurrencies and bitcoin could be a good example. You have a new product with a small group of users and uses initially. The user base grows and, due to limited increase in supply by design, the price rises. As the total value of bitcoin in circulation rises, the liquidity or the ability to execute larger transactions also rises, the number of services accepting the currency rises, and there are more providers providing new ways to access the currency; all these generate more demand, which causes the price to rise even further, and so on.

But what was just described is a feedback mechanism; that in itself does not suggest whether a correction should be due or not. Of course at some point a correction could be due if the feedback loop operates too far. I think that's why Soros said in 2009 "When I see a bubble forming, I rush in to buy" (I think he meant feedback loop when he said 'bubble'). What I was speculating is whether there are more chances for anti-consensual views to turn out to be correct in a fast evolving system.
Lunar Colony

Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth...

What if it's a big asteroid?

kbog (5y): There are no known Earth-crossing minor planets large enough that a shelter on the other side of the world would be destroyed. All of them are approximately the size of the dinosaur-killer asteroid or smaller. We've surveyed the large ones and there are no foreseeable impact risks from them. Large asteroids are easier to detect from a long distance. A very large asteroid would have to come in from some previously unknown, unexpected orbit for it to be previously undetected. So probably a comet-like orbit, which for a large asteroid is probably ridiculously unusual. I really don't know how big it would have to be to destroy a solid underground or underwater structure. Maybe around the size of the Vredefort asteroid if not larger. But we haven't had such an impact since the end of the late heavy bombardment period, three billion years ago, when these objects were cleared from Earth's orbit.
Robert_Wiblin (5y): If it's so big no bunkers work, how long would we have to wait on Mars before coming back?
Why I'm donating to MIRI this year

Note that this is particularly an argument about money. I think that there are important reasons to skew work towards scenarios where AI comes particularly soon, but I think it’s easier to get leverage over that as a researcher choosing what to work on (for instance doing short-term safety work with longer-term implications firmly in view) than as a funder.

I didn't understand this part. Are you saying that funders can't choose whether to fund short-term or long-term work (either because they can't tell which is which, or there aren't enough options to choose from)?

Owen_Cotton-Barratt (5y): I'm saying that the ratio at which you can advance the different agendas as a funder versus as a researcher skews towards advancing short-term stuff as a researcher, because it's less funding constrained (more talent constrained).
EA Ventures Request for Projects + Update

The project was successfully funded for $19,000. We found the fundraising process to take slightly longer and be slightly more difficult than we were expecting.

Hey Kerry, I'm listed as one of the funders on the eaventures.org front page, but I didn't hear anything about this fund raise. Should I have?

Kerry_Vaughan (7y): We sent it to a selected group of people whose funding priorities we thought it fit with. I think when I emailed you to ask about what things you're interested in funding I didn't hear back. I'll resend that email now.
Should you give your best now or later?

The world in which Bostrom did not publish Superintelligence, and therefore Elon Musk, Bill Gates, and Paul Allen didn't turn to "our side" yet.

Has Paul Allen come round to advocating caution and AI safety? The sources I can find right now suggest Allen is not especially worried.

http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/

Diego_Caleiro (7y): Confused my techno-tycoons; I had Wozniak in mind. Fixed.
Risk aversion and investment (for altruists)

You're multiplying by X inside the log.

Good catch, my bad.

Edit: how did you do the code? I had difficulty with formatting, hence the excess line breaks.

Add four spaces at the beginning of a line to make it appear as code.

~ E(log(1 + d/(Y+d)))

= E(log((Y + d + d)/Y))

How did you get from one of these steps to the other? Shouldn't the second be E[log((Y+d+d)/(Y+d))]?

Owen_Cotton-Barratt (7y): Yeah, it should. Hmm. Also, looking at my derivation as a whole it's implausible that E(d/X) ~ E(2d/Y) -- the second should be almost twice as big. So I think I made a mistake there. I suspect that to get the result we need to use the approximation d/X ~ d/Y (roughly right since d is very small in comparison to Y).
Risk aversion and investment (for altruists)

Hmm, I've thought about this some more and I actually still don't understand it. I might just be being dense, but I feel like you've made a very interesting claim here that would be important if true, so I'd really like to understand it. Perhaps others can benefit as well.

Here's what I was able to work out for myself. Given that log(X+d) ~ log(X) + d/X, then:

d/X ~ log(X+d) - log(X)
d/X ~ log((X+d)/X)
d/X ~ log(1 + d/X)

So maximizing E[d/X] should be approximately equivalent to maximizing E[log(1 + d/X)]. This is looking closer to what you said, but there ... (read more)
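A quick numerical sanity check of the approximation this derivation leans on (my own check, with a made-up wealth distribution):

```python
import numpy as np

# Sanity check: log(X + d) - log(X) ~ d/X when d is small relative to X.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=10, sigma=1, size=5)   # made-up "world wealth" draws
d = 1e-3 * X.mean()                           # a contribution that is small vs. X

exact = np.log(X + d) - np.log(X)
approx = d / X
print(np.max(np.abs(exact - approx) / approx))   # relative error; should be tiny
```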

Owen_Cotton-Barratt (7y): You're multiplying by X inside the log. That amounts to adding log(X), and an expectation of a sum is just the sum of the expectation. But this does seem to change exactly what you're maximising.

My derivation goes: Let Y denote the wealth of the world not controlled by you. By assumption Y is an independent variable (note: this assumption seems questionable, and without it the two conclusions definitely come apart, as the investor may have opportunities which increase the wealth of the rest of the world but not the wealth of the investor). So X = Y + d. So

E(d/X) = E(d/(Y+d))
       ~ E(log(1 + d/(Y+d)))
       = E(log((Y + d + d)/Y))
       = E(log(1 + 2d/Y))
       ~ E(2d/Y)
       = 2 E(d/Y)
       = 2 E(log(1 + d/Y))
       = 2 E(log(Y + d) - log(Y))
       = 2 E(log(X) - log(Y))
       = 2 E(log(X)) - 2 E(log(Y))

Now since Y is independent, E(log(Y)) is constant, so maximising this is equivalent to maximising E(log(X)).

Edit: how did you do the code? I had difficulty with formatting, hence the excess line breaks.
Risk aversion and investment (for altruists)

A simple argument suggests that an investor concerned with maximizing their influence ought to maximize the expected fraction of world wealth they control. This means that the value of an extra dollar of investment returns should vary inversely with the total wealth of the world. This means that the investor should act as if they were maximizing the expected log-wealth of the world.

Could someone explain how the final sentence follows from the others?

If I understand correctly, the first sentence says an investor should maximize E(wealth-of-the-investor / wealth-of-the-world), while the final sentence says they should maximize E(log(wealth-of-the-world)). Is that right? How does that follow?

Paul_Christiano (7y): log(X+d) ~ log(X) + d/X, for small d. So maximizing E[d/X] is equivalent to maximizing E[log(d+X)].
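Unpacking that step (under the assumption, questioned elsewhere in this thread, that the distribution of X itself doesn't depend on which option you choose):

```latex
\[
  \mathbb{E}[\log(X + d)]
  \;\approx\; \mathbb{E}\!\left[\log X + \frac{d}{X}\right]
  \;=\; \mathbb{E}[\log X] + \mathbb{E}\!\left[\frac{d}{X}\right],
\]
% so if E[log X] is the same across the options being compared, ranking options by
% E[d/X] and ranking them by E[log(X + d)] give the same answer.
```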
Brainstorming thread: ideas for large EA funders

The risk from investing in individual stocks rather than broad indices is pretty minor

This depends a lot on how many stocks you're buying, right? Or would you still make this claim if someone were buying < 10 stocks? < 5?
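For reference, the standard way to see how this scales with the number of holdings: for n equally weighted stocks, the portfolio variance splits into a diversifiable term and a covariance term,

```latex
\[
  \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} r_i\right)
  = \frac{\bar{\sigma}^2}{n} + \left(1 - \frac{1}{n}\right)\bar{c},
\]
% where \bar{\sigma}^2 is the average single-stock variance and \bar{c} the average
% pairwise covariance. The idiosyncratic term shrinks like 1/n, so it is small with a
% few dozen holdings but can still be a sizable share of total variance with fewer
% than ~10 stocks.
```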