All of ESRogs's Comments + Replies

I've developed a clean mathematical framework in which possibilities like this can be made precise, the assumptions behind them can be clearly stated, and their value can be compared.

Sorry if I'm missing something (I've only skimmed the paper), but is the "mathematical framework" just the idea of integrating value over time?

I'm quite surprised to see this idea presented as new. Isn't this idea very obvious? Haven't we been thinking this way all along?

Like, how else could you possibly think of the value of the future of humanity? (The other mathematically s... (read more)

I'm surprised by all the disagree votes on a comment that is primarily a question.

Do all the people who disagreed think it's obvious whether Ben meant while he was working at AR or subsequently? If so, which one?

(I'm guessing the disagree votes were meant to register disagreement with my claim that it's relatively normal for interviewers / employers to tell candidates reasons a job might not be a good fit for them. Is that it, or something else?)

These people knew about one of the biggest financial frauds in U.S. history but didn't try to stop it

I think you're stretching here. Nowhere in the article does it suggest that the EA leaders actually knew about ongoing fraud.

It just says (as in the quotes you cited), that they'd been warned Sam was shady. That's very different from having actual knowledge of ongoing fraud. If the article wanted to make that claim, I think it would have been more direct about it.

Sam was fine with me telling prospective AR employees why I thought they shouldn’t join (and in fact I did do this)

Didn't quite follow this part. Is this referring to while you were still at AR or subsequently?

If it was while you were still working there, that seems pretty normal. Not every candidate should be sold on the job. Some should be encouraged not to join if it's not going to be a good fit for them. Why would this even be controversial with Sam? Or were you telling them not to join specifically because of criticisms you had of the CEO?

If it was subsequent, how do you know he was fine with it? What would he have done if he wasn't fine with it?

It was both.

And yeah, the article reports Sam telling someone that he would "destroy them", but I don't fully understand the threat model. I guess the idea is that Sam would tell a bunch of people that I was bad, and then I wouldn't be able to get a job or opportunities in EA?

I guess I don't know for sure that Sam never attempted this, but I can't recall evidence of it. 


Your summary of the article's thesis doesn't seem right to me:

b. Even though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happen

c. This information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse

I interpreted the article as argu... (read more)

[anonymous]1y41
13
7

The article reads to me like it's trying to get away with insinuating that EA leaders somehow knew about or at least suspected the fraud, based on what they were told by employees who had no such suspicions.

They take pains to emphasise the innocence of their sources, of course - I agree that they're painted as the heroes of the story (emphasis mine): 

None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed

... (read more)

FWIW, I think such a postmortem should start w/ the manner in which Sam left JS. As far as I'm aware, that was the first sign of any sketchiness, several months before the 2018 Alameda walkout.

Some characteristics apparent at the time:

  • joining CEA as "director of development" which looks like it was a ruse to avoid JS learning about true intentions
  • hiring away young traders who were in JS's pipeline at the time

I believe these were perfectly legal, but to me they look like the first signs that SBF was inclined to:

  • choose the (naive) utilitarian path over the v
... (read more)

In the past two years, the technical alignment organisations which have received substantial funding include:

In context it sounds like you're saying that Open Phil funded Anthropic, but as far as I am aware that is simply not true.

I think maybe what you meant to say is that, "These orgs that have gotten substantial funding tend to have ties to Open Phil, whether OP was the funder or not." Might be worth editin... (read more)

I'll limit myself to one (multi-part) follow-up question for now —

Suppose someone in our community decides not to defer to the claimed "scientific consensus" on this issue (which I've seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?

I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the r... (read more)

I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here--there are lots of considerations at play. 

One view I hold, though, is something like "the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you're considering the [personal/community-level] social implications thereof, is non-zero." We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition... (read more)

Thanks, I appreciate the thoughtful response!

Generalizing a lot,  it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment

  1. As a "rationalist-EA", I would be curious if you could summarize what lessons you think should be drawn from this teachable moment (or link to such a summary that you endorse).
  2. In particular, do you disagree with the current top comment on this post?
    1. (To me, their Q1 seems like it highlights what should be the key lesson. While their Q2 provides important context that mitigates how censorious
... (read more)

My view is that the rationalist community deeply values the virtue of epistemic integrity at all costs, and of accurately expressing your opinion regardless of social acceptability.

The EA community  is focused on approximately maximising consequentialist impact.

Rationalist EAs should recognise when these virtues of epistemic integrity and epistemic accuracy are in conflict with maximising consequentialist impact, whether via direct, unintended consequences of expressing your opinions or via effects on EA's reputation.

Happy to comment on this, though I'll add a few caveats first:

- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists ("many of my best friends/colleagues are rationalists"), or to imply that there aren't some upsides to the communities' overlap
- I am not sure what "institutional EA" should be doing about all this
- Since some of these are complex topics and ideally I'd want to cite lots of sources etc. in a detailed positive statement on them, I am using the "things to t... (read more)

I think there’s evidence that both apologies are insincere, albeit for different reasons (though that may not be clear).

You literally listed the timeframe as a reason (among others) to reject both apologies.

Here are your words again:

The fact that Bostrom's statement comes 26 years after the post in question does little to support the idea that the apology might be motivated by genuine remorse.

and:

In my eyes, this timeframe really undermines the credibility of his previous apology, to the point of making it irrelevant. If you claim to reject views

... (read more)
3[anonymous]1y
Sure, I can see that. What I meant was the timeframe in context - 26 years later being the point at which someone threatened to dig things up. I think the apology could have succeeded, but owing to its content, it wasn’t. This is a position that's consistent with my original post, reading it back.

26 years does little to help Bostrom - why? Not because it’s a long time in itself, but because it’s clearly the point at which something outside happens - the threat of exposure. While I think the short timeframe is a reason itself to consider the first apology insincere, the long timeframe isn’t, at least intrinsically. Perhaps I should have made that clearer, but I hope you’ll forgive me for not doing so.

If you’re disagreeing with me in good faith, I appreciate that, but my position simply isn’t how you’ve characterised it. Contrary to what you've said, at no point do I claim that the timeframe is a reason to reject the later apology. My view is: Bostrom’s statement was inadequate and it came 26 years later. I maintain the fact that the timeframe doesn’t help him, as the apology is so clearly motivated by the threat of bad PR surfacing 26 years later, but the timeframe itself doesn’t make it inadequate. The content of Bostrom’s statement informs my position here, where he expects people to believe that he didn’t endorse the position even then… I don’t need to explain this.

My view is not: Bostrom’s statement was inadequate purely because it came 26 years later. In the case of the 26-year-later apology, the timeframe is salient to me because it represents the point at which Bostrom realised the risk that he might be exposed. This compromises how seriously I take the apology.

If you found my OP unclear, I apologise, but stand by its content - I make it clear at the end that I think he could’ve apologised successfully 26 years later, but this wasn’t the solution. My post was originally motivated by finding the apology ultimately unsatisfactory (which I still do),

How can both a 24 hour turnaround and a 26 year delay be evidence of an insincere apology? Where is the apology delay sweet spot in your eyes — one week later? A month later?

Maybe you think he should have apologized once a year every year on the anniversary of the email?

Sorry for snarky tone, but I feel that being in the business of nitpicking and rejecting apologies is quite a bad policy.

-2[anonymous]1y
I think there’s evidence that both apologies are insincere, albeit for different reasons (though that may not be clear).

  • 24-hour apology: timeframe too short
  • 26 years later: clearly motivated by fear of bad press

NB I think the 26-year-later apology could have been successful, but considering its content, it isn’t. I also think it’s uncharitable to characterise my position as nitpicking - I just think that some apologies can fail, and this is one of them.

The fact that Bostrom's statement comes 26 years after the post in question does little to support the idea that the apology might be motivated by genuine remorse.

Did you miss the fact that he also apologized within 24 hours of the original email?

Nit: I was very explicitly asking why not sell, not suggesting a commitment to sell; I don't appreciate the rhetorical pivot to argue against a point I was not making.

I don't get this nit. Wasn't Oliver's comment straightforwardly answering your question, "Why not sell it now?" by giving an argument against selling it now?

How is that a pivot? He added the word "commiting", but I don't see how that changes the substance. I think he was just emphasizing what would be lost if we sold now without waiting for more info. Which seems like a perfectly valid answer to the question you asked!

I had a similar impression. Some related thoughts here.

Copying over some comments I made on Twitter, in response to someone suggesting that Sam now appears to be "a sociopath who never gave a toss about EA or its ideals":
 

He does seem pretty sociopathic, but it's still unclear to me whether he really cared about EA.

I think it's totally possible that he genuinely wanted to improve the world by funding EA causes, and is also a narcissistic liar who is unwilling to place limits on his own behavior.

As Jess Riedel pointed out to me, it looks like Bill Gates ruthlessly exploited his monopoly in the 90s, and als

... (read more)

Yeah, "is a sociopath" is such a deceptively binary way to state it. He seems to be on that spectrum to a certain degree - likely aggravated by stress and psychopharmacology. I'm skeptical of the easy-out narrative to dismissively pathologize here; I also think that in doing so we lose the chance to more critically examine that spectrum as it relates to EAs at large

there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people, and I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity

Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective?

(Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn't? Or... what?)

My claims evoke cringe from some readers on this forum, I believe, so I can supply some examples:

  1. epistemology
    • ignore subjective probabilities assigned to credences in favor of unweighted beliefs.
    • plan not with probabilistic forecasting but with deep uncertainty and contingency planning.
    • ignore existential risk forecasts in favor of seeking predictive indicators of threat scenarios.
    • dislike ambiguous pathways into the future.
    • beliefs filter and priorities sort.
    • cognitive aids help with memory, cognitive calculation, or representation problems.
    • cognitive
... (read more)
The culture emphasizes analysis over practice, and it does not attract many of the leaders and builders that are critical for maximizing impact.

 

EA has a lot of rhetoric around openness to ideas and perspectives, but actual interaction with the EA universe can feel more like certain conclusions are encased in concrete.

It seems to me that there is some tension between these two criticisms — you want EA to focus less on analysis, but you also don't want us to be too wedded to our conclusions. So how are we supposed to change our minds about the conclusi... (read more)

3
Peter Elam
2y
Analysis is of course incredibly important no matter what you are trying to do. Analysis coupled with building/data gathering/experimentation is much better than analysis alone. “It’s much easier, and more reliable, to assess a project once it's already been tried.” Isn't not being wedded to your conclusions a core idea of the EA movement? So of course I am not suggesting EA take out the analysis. From my post:

  • It provides a place for analysis and practice to intersect - Many current EAs may be stronger in gathering and analyzing data to help guide and influence those on the front lines. At the same time, all analysis is improved by interaction with the real world. This represents a place where those that analyze and those that build can interact to their mutual benefit.

My second thought is: what is EA's core priority? Is it uniqueness or impact? If becoming less unique increases your impact, would you choose to become less unique? If the core value is maximizing impact, all secondary values should be subordinate to that one.

And then they can read the post above to have that question clearly answered!

Any tips on the 'how' of funding EA work at such think tanks?

Reach out to individual researchers and suggest they apply for grants (from SFF, LTFF, etc.)? Reach out as a funder with a specific proposal? Something else?

3
Davidmanheim
2y
It's more helpful to have a good idea of which think tanks, and which groups within them, you should talk to for different things, since they have different outputs and different strengths. Happy to walk through the details of how to do this if you have something specific, but yes, those general approaches would work.
2
DavidZhang
2y
I think reaching out with a proposal (as Founders Pledge did with Carnegie) is probably the best bet, but it would also be worth ensuring think tanks are aware of the existence of the EA funds and that they can just apply for them. E.g. I don't think my think tank knows about them.
4
weeatquince
2y
For longtermist stuff, directing think tank folk to the SFF would probably be best, and perhaps helping them think of a proposal that is aligned with longtermist ideas. I don’t think LTFF would fund this kind of thing (based on their payout reports etc).

For animal rights stuff, the Animal Welfare Fund (or Farmed Animal Funders) might be interested in funding think tank folk outside of Europe / North America. For other topics I have no good ideas.

If you have, say, £250k+, you could just put out a proposal (e.g. on the EA Forum) that you would fund think tanks, share it with people in the EA policy space, and see who gets in touch.

opinion which ... is mainly advocated by billionaires

Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate?

I don't think either claim is true (or even close to true).

It's also not the claim being made:

...minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by [them]...

The RSP idea is cool.

Dumb question — what part of the post does this refer to?

3
Ben_West
2y
Apologies, that line was referring to something from an earlier draft of this post. I've removed it from my comment.

One reason to keep Tractability separate from Neglectedness is to distinguish between "% of problem solved / extra dollars from anyone" and "% of problem solved / extra dollars from you".

In theory, anybody's marginal dollar is just as good as anyone else's. But by making the distinction explicit, it forces you to consider where on the marginal utility curve we actually are. If you don't track how many other dollars have already been poured into solving a problem, you might be overly optimistic about how far the next dollar will go.

I think this may be close to the reason Holden(?) originally had in mind when he included neglectedness in the framework.
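To make the marginal-utility-curve point a bit more concrete, here is a toy sketch (my own illustration, not anything from Holden or the original framework; the logarithmic returns model and the dollar figures are just assumptions):

```python
import numpy as np

def marginal_value_per_dollar(total_spent, scale=1e6):
    """Toy model: cumulative value of work on a problem = log(1 + total_spent / scale).

    The derivative -- the value of the next dollar, whoever spends it --
    shrinks as more money has already been poured into the problem.
    """
    return 1.0 / (scale + total_spent)

# The next dollar does far less once $100M is already being spent than when
# only $1M is -- which is the information that tracking neglectedness preserves.
for spent in [1e6, 1e7, 1e8]:
    print(f"${spent:>13,.0f} already spent -> next dollar worth {marginal_value_per_dollar(spent):.2e}")
```

Under a model like this, tractability is about the overall shape of the curve, while neglectedness tells you where on the curve the world currently sits -- the "how many other dollars have already been poured in" question.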

2
Michael_Wiebe
2y
I'm not sure I follow.  In my framework, "how many other dollars have already been poured into solving a problem" is captured by crowdedness, ie.,  total resources allocated to the problem, ie., the position on the x-axis.

Note that Vitalik Buterin has also recently started promoting related ideas: Retroactive Public Goods Funding

7
Pablo
3y
Thanks for linking to that article, which I hadn't seen. I updated the 'certificates of impact' entry with a brief summary of the proposal.

Which city is next?

3
aaronhamlin
3y
We may have a campaign in 2021 (our initial play is riskier here), but we can't say yet for sure. We have other cities lined up for 2022. What I can say is all the cities we have in mind are over 750,000 people and are all very well known. We've laid out a strategic plan involving polling and legal analysis to factor in where to prioritize given our available funding. We're working for a surprise win in 2021 to excite our funders.

Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance

I wonder if it makes sense to rebalance more frequently when volatility (or trading volume) is high.

3
MichaelDickens
3y
That could help. "Standard" trendfollowing rebalances monthly because it's simple, frequent enough to capture most changes in trends, but infrequent enough that it doesn't incur a lot of transaction costs. But there could be more complicated approaches that do a better job of capturing trends without incurring too many extra costs.

One idea I've considered is to look at buy-side signals monthly but sell-side signals daily, so if the market switches from a positive to negative trend, you'll sell the following day, but if it switches back, you won't buy until the next month. On the backtests I ran, it seemed to work reasonably well. These were the results of a backtest I ran using the Ken French data on US stock returns 1926-2018:

              CAGR   Stdev   Ulcer   Trades/Yr
B&H            9.5    16.8    23.0       -
Monthly        9.3    11.7    14.4      1.4
Daily         10.7    11.0     9.6      5.1
Sell-Daily     9.7    10.3     9.2      2.3
Buy-Daily     10.6    12.3    12.3      1.8

("Ulcer" is the ulcer index, which IMO is a better measure of downside risk than standard deviation. It basically tells you the frequency and severity of drawdowns.)
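For anyone curious what the "buy-side monthly, sell-side daily" rule looks like mechanically, here is a minimal sketch of the idea (my own illustration, not the backtest code referred to above; the random-walk prices and the ~10-month moving-average window are placeholder assumptions):

```python
import numpy as np
import pandas as pd

# Placeholder data: a random-walk price series standing in for a stock index.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2000-01-03", periods=2500)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, len(dates)))),
                   index=dates)

ma = prices.rolling(210).mean()  # roughly 10 months of trading days (placeholder window)

invested = False
position = []
for t, date in enumerate(dates):
    last_bday_of_month = (t == len(dates) - 1) or (date.month != dates[t + 1].month)
    if invested and prices.iloc[t] < ma.iloc[t]:
        invested = False      # sell-side signal is checked every day
    elif (not invested) and last_bday_of_month and prices.iloc[t] > ma.iloc[t]:
        invested = True       # buy-side signal is checked only at month end
    position.append(int(invested))

position = pd.Series(position, index=dates)  # 1 = in the market, 0 = in cash
```

The asymmetry means a broken trend gets you out the next day, while re-entry waits for the next monthly check, which I take to correspond to the "Sell-Daily" row in the table above.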

The AlphaArchitect funds are more expensive than Vanguard funds, but they're just as cheap after adjusting for factor exposure.

Do you happen to have the numbers available that you used for this calculation? Would be curious to see how you're doing the adjustment for factor exposure.

3
MichaelDickens
3y
I'm not sure how to calculate it precisely. I think you'd want to run a regression where the independent variable is the value factor and the dependent variable is the fund or strategy being considered.

But roughly speaking, a Vanguard value fund holds the 50% cheapest stocks (according to the value factor), while QVAL and IVAL hold the 5% cheapest stocks, so they are 10x more concentrated, which loosely justifies a 10x higher expense ratio. Although 10x higher concentration doesn't necessarily mean 10x more exposure to the value factor; it's probably substantially less than that.

I just ran a couple of quick regressions using Ken French data, and it looks like if you buy the top half of value stocks (size-weighted) while shorting the market, that gives you 0.76 exposure to the value factor, and buying the top 10% (equal-weighted) while shorting the market gives you 1.3 exposure (so 1.3 is the slope of a regression between that strategy and the value factor). Not sure I'm doing this right, though.

To look at it another way, the top-half portfolio described above had a 5.4% annual return (gross), while the top-10% portfolio returned 12.8% (both had similar Sharpe ratios). Note that most of this difference comes from the fact that the first portfolio is size-weighted and the second is equal-weighted; I did it that way because most big value funds are size-weighted, while QVAL/IVAL are equal-weighted.
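Here is roughly what the regression being described would look like in code (a sketch only; the synthetic series and the 1.3 "true exposure" are made-up stand-ins for monthly data you would actually pull from the Ken French data library):

```python
import numpy as np
import pandas as pd

# Synthetic placeholder data standing in for monthly observations of the
# HML (value) factor and a value strategy's excess returns.
rng = np.random.default_rng(0)
n_months = 240
value_factor = pd.Series(rng.normal(0.003, 0.03, n_months), name="hml")
true_exposure = 1.3
strategy_excess = true_exposure * value_factor + rng.normal(0, 0.01, n_months)

# Regress strategy excess returns on the value factor; the slope is the
# strategy's loading ("exposure") on that factor, the intercept its alpha.
slope, intercept = np.polyfit(value_factor, strategy_excess, deg=1)
print(f"estimated value-factor exposure ~ {slope:.2f}; alpha ~ {intercept:.2%}/month")
```

Comparing expense ratio per unit of that slope is then one way to operationalise the "just as cheap after adjusting for factor exposure" comparison.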

Looking at historical performance of those Alpha Architects funds (QVAL, etc), it looks like they all had big dips in March 2020 of around 25%, at the same time as the rest of the market.

And I've heard it claimed that assets in general tend to be more correlated during drawdowns.

If that's so, it seems to mitigate to some extent the value of holding uncorrelated assets, particularly in a portfolio with leverage, because it means your risk of margin call is not as low as you might otherwise think.

Have you looked into this issue of correlations during drawdowns, and do you think it changes the picture?

3
MichaelDickens
3y
The AlphaArchitect funds (except for VMOT) are long-only, so they're going to be pretty correlated with the market. The idea is you buy those funds (or something similar) while simultaneously shorting the market.

This is true. Factors aren't really asset classes, but it's still true for some factors. This AQR paper looked at the performance of a bunch of diversifiers during drawdowns and found that trendfollowing provided good return, as did "styles", by which they mean a long/short factor portfolio consisting of the value, momentum, carry, and quality factors. I'd have to do some more research to say how each of those four factors have tended to perform during drawdowns, so take this with a grain of salt, but IIRC:

  • value and carry tend to perform somewhat poorly
  • quality tends to perform well
  • momentum tends to perform well during drawdowns, but then performs really badly when the market turns around (e.g., this happened in 2009)

I'm talking about long/short factors here, so e.g., if the value factor has negative performance, that means long-only value stocks perform worse than the market.

Also, short-term trendfollowing (e.g., 3-month moving average) tends to perform better during drawdowns than long-term trendfollowing (~12-month moving average), but it has worse long-run performance, and both tend to beat the market, so IMO it makes more sense to use long-term trendfollowing.

We never know how this will continue in the future. For example, the 2020 drawdown happened much more quickly than usual—the market dropped around 30% in a month, as opposed to, say, the 2000-2002 drawdown, where the market dropped 50% over the course of two years. Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance, although it happened to perform reasonably well this year. There's a lot more I could say about the implementation of trendfollowing strategies, but I don't want to get too verbose so I'll stop there.
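One crude way to check the "more correlated during drawdowns" worry for any pair of return series (a sketch of the idea only; the synthetic series below are placeholders for real monthly returns):

```python
import numpy as np
import pandas as pd

# Synthetic placeholder data standing in for monthly returns of the market
# and some diversifying asset; in practice you'd use real return series.
rng = np.random.default_rng(0)
market = pd.Series(rng.normal(0.006, 0.045, 600))
diversifier = pd.Series(0.3 * market + rng.normal(0.002, 0.03, 600))

# Compare correlation in months when the market fell vs. months when it rose.
down = market < 0
corr_down = market[down].corr(diversifier[down])
corr_up = market[~down].corr(diversifier[~down])
print(f"correlation in down months: {corr_down:.2f}; in up months: {corr_up:.2f}")
```

Note that conditioning on down months changes the measured correlation for purely statistical reasons too, so this is only a rough diagnostic, not a substitute for the AQR-style drawdown analysis mentioned above.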

Ah, good point! This was not already clear to me. (Though I do remember thinking about these things a bit back when Piketty's book came out.)

I just feel like I don't know how to think about this because I understand too little finance and economics

Okay, sounds like we're pretty much in the same boat here. If anyone else is able to chime in and enlighten us, please do so!

9
Max_Daniel
3y
I thought about this for another minute, and realized one thing that hadn't been salient to me previously. (Though quite possibly it was clear to you, as the point is extremely basic. It also doesn't directly answer the question about whether we should expect stock returns to exceed GDP growth indefinitely.)

When thinking about whether X can earn returns that exceed economic growth, a key question is what share of those returns is reinvested into X. For example, suppose I now buy stocks that have fantastic returns, but I spend all those returns to buy chocolate. Then those stocks won't make up an increasing share of my wealth. This would only happen if I used the returns to buy more stocks, and they kept earning higher returns than other stuff I own. In particular, the simple argument that returns can't exceed GDP growth forever only follows if returns are reinvested and 'producing' more of X doesn't have too steeply diminishing returns.

For example, two basic 'accounting identities' from macroeconomics are:

  1. β = s/g
  2. α = rβ

Here, s is the savings rate (i.e. the fraction of total income that is saved, which in equilibrium equals investment into capital), g is the rate of economic growth, and r is the rate of return on capital. These equations are essentially definitions, but it's easy to see that (in a simple macroeconomic model with one final good, two factors of production, etc.) β can be viewed as the capital-to-income ratio and α as capital's share of income.

Note that from equations 1 and 2 it follows that r/g = α/s. Thus we see that r exceeds g in equilibrium/'forever' if and only if α > s - in other words, if and only if (on average across the whole economy) not all of the returns from capital are re-invested into capital. (Why would that ever happen? Because individual actors maximize their own welfare, not aggregate growth. So e.g. they might prefer to spend some share of capital returns on consumption.)

Analogous remarks apply to other situations where a ba
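Spelling out the algebra behind those two identities (just a restatement of the comment above, with the division written explicitly):

```latex
\begin{align*}
  \beta &= \frac{s}{g} \quad\text{(capital-to-income ratio)}, \qquad
  \alpha = r\,\beta \quad\text{(capital's share of income)} \\[4pt]
  \frac{\alpha}{s} &= \frac{r\,\beta}{\beta\, g} = \frac{r}{g}
  \quad\Longrightarrow\quad r > g \;\iff\; \alpha > s .
\end{align*}
```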

My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn't really a consensus for what explains it.

Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns. Does that tell us about the difference between equity returns and GDP growth?

A priori, would you expect both equities and treasuries to have returns that match GDP growth?

3
Max_Daniel
3y
Yes, that's my understanding as well.

I don't know, my sense is not directly but I could be wrong. I think I was gesturing at this because I took it as evidence that we don't understand why equities have such high return. (But then it is an additional contingent fact that these returns don't just exceed bond returns but also GDP growth.)

I don't think I'd expect this, at least not with high confidence - but overall I just feel like I don't know how to think about this because I understand too little finance and economics. (In particular, it's plausible to me that there are strong a priori arguments about the relationships between GDP growth, bond returns, and equity returns - I just don't know what they are.)

But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.

Shouldn't investing into the stock market be considered a source of average returns, by default? In the long run, the stock market grows at the same rate as GDP

If you think you have some edge, that might be a reason to pick particular stocks (as I sometimes do) and expect returns above GDP growth.

But generically I don't think the stock market should be considered a source of above-average returns. Am I m... (read more)

9
MichaelDickens
3y
The stock market should grow faster than GDP in the long run. Three different simple arguments for this:

  1. This falls out of the commonly-used Ramsey model. Specifically, because people discount the future, they will demand that their investments give better return than the general economy.
  2. Corporate earnings should grow at the same rate as GDP, and stock prices should grow at the same rate as earnings. But stock investors also earn dividends, so your total return should exceed GDP growth in the long run, as sketched below. (The reason this works is because, in aggregate, investors spend the dividends rather than re-investing them.)
  3. Stock returns are more volatile than economic growth, so they should pay a risk premium even if they don't have a higher risk-adjusted return.
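Argument 2 can be written as a one-line decomposition (a standard return identity, added here only for clarity):

```latex
\[
  r_{\text{total}}
  \;=\; \underbrace{g_{\text{price}}}_{\approx\, g_{\text{earnings}} \,\approx\, g_{\text{GDP}}}
  \;+\; \underbrace{D/P}_{\text{dividend yield}\,>\,0}
  \;>\; g_{\text{GDP}} .
\]
```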
3
Max_Daniel
3y
[Low confidence as I don't really understand anything about finance.]

It sounds right to me that the stock market can't grow more quickly than GDP forever. However, it seems like it has been doing so for decades, and that there is no indication that this will stop very soon - say, within 10 years. (My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn't really a consensus for what explains it.) Therefore, in particular, for the window of time made available by moving spending from now to, say, one year from now, it seems you can earn returns on the stock market that exceed world economic growth.

If we know that this can't continue forever, it seems to me this would be more relevant for the part where I say "future longtermists would invest in the stock market rather than engaging in 'average activities' that earn average returns" etc. More precisely, the key question we need to ask about any longtermist investment-like spending opportunity seems to be: after the finite window of above-average growth from that opportunity, will there still be other opportunities that, from a longtermist perspective, have returns that exceed average economic growth? If yes, then it is important whether the distant returns from investment-like longtermist spending end up with longtermists; if no, then it's not important.

You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.

But, there (probably) weren't any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).

Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influ... (read more)

Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can't influence us, is pretty significant, and should be factored into the argument somewhere.)

I wouldn't call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.

3
ESRogs
3y
You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.

But, there (probably) weren't any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).

Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influence has grown. Let's maybe call the first kind of influence time-priority, and the second agency. So, since the Big Bang, the level of time-priority influence available in the universe has gone way down, but the level of aggregate agency in the universe has gone way up.

On a super simple model that just takes these two into account, you might multiply them together to get the total influence available at a certain time (and then divide by the number of people alive at that time to get the average person's influence). This number will peak somewhere in the middle (assuming it's zero both at the Big Bang and at the Heat Death).

That maybe doesn't tell you much, but then you could start taking into account some other considerations, like how x-risk could result in a permanent drop of agency down to zero. Or how perhaps there's an upper limit on how much agency is potentially available in the universe.

In any case, it seems like the direction of causality should be a pretty important part of the analysis (even if it points in the opposite direction of another factor, like increasing agency), either as part of the prior or as one of the first things you update on.
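A minimal numerical version of that "super simple model" (the functional forms for time-priority and agency are made up purely for illustration; the only point is that a decreasing factor times an increasing one peaks somewhere in between):

```python
import numpy as np

# Toy timescale: t = 0 is the Big Bang, t = 1 is the heat death.
t = np.linspace(0.0, 1.0, 1001)

time_priority = 1.0 - t   # earlier moments can causally affect more of what follows
agency = t                # made-up: aggregate agency grows from zero over time

total_influence = time_priority * agency  # zero at both endpoints by construction
print("total influence peaks at t =", t[np.argmax(total_influence)])  # ~0.5 here
```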

And the current increase in hinginess seems unsustainable, in that the increase in hinginess we’ve seen so far leads to x-risk probabilities that lead to drastic reduction of the value of worlds that last for eg a millennium at current hinginess levels.

Didn't quite follow this part. Are you saying that if hinginess keeps going up (or stays at the current, high level), that implies a high level of x-risk as well, which means that, with enough time at that hinginess (and therefore x-risk) level, we'll wipe ourselves out; and therefore that we can't have sust... (read more)

7
Buck
3y
Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.

Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point.

Doesn't the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you've done here.)
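For reference, the rule of succession being discussed, in its usual form (how exactly it gets applied to the hinge-of-history question is a separate matter):

```latex
\[
  P(\text{success on trial } n+1 \mid s \text{ successes in the first } n \text{ trials})
  \;=\; \frac{s+1}{n+2}
\]
```

The remaining arbitrariness is in choosing the point from which the n past trials are counted, but no end point is needed.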

The Nobel Prize comes with a million dollars (9,000,000 SEK). 50k doesn't seem like that much, in comparison.

Another Karnofsky series that I thought was important (and perhaps doesn't fit anywhere else) is his posts on The Straw Ratio.

ballistic ones are faster, but reach Mach 20 and similar speeds outside of the atmosphere

This seems notable, since there is no sound w/o atmosphere. So perhaps ballistic missiles never actually engage in hypersonic flight, despite reaching speeds that would be hypersonic if in the atmosphere? Though I would be surprised if they're reaching Mach 20 at a high altitude and then not still going super fast (above Mach 5) on the way down.

2
Lancer21
4y
Exactly, ballistic missiles (or, at this point of the strike, their warheads) are slowed down when reentering the atmosphere - just like satellites and space capsules containing astro/cosmo/spationauts - to much slower speeds. The 2-digit Mach speeds are reached only outside of the atmosphere.
according to Thomas P. Christie (DoD director of Operational Test and Evaluation from 2001–2005) current defense systems “haven’t worked with any degree of confidence”.[12] A major unsolved problem is that credible decoys are apparently “trivially easy” to build, so much so that during missile defense tests, balloon decoys are made larger than warheads--which is not something a real adversary would do. Even then, tests fail 50% of the time.

I didn't follow this. What are the decoys? Are they made by the attacki... (read more)

Thanks! Just read it.

I think there's a key piece of your thinking that I don't quite understand / disagree with, and it's the idea that normativity is irreducible.

I think I follow you that if normativity were irreducible, then it wouldn't be a good candidate for abandonment or revision. But that seems almost like begging the question. I don't understand why it's irreducible.

Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn't seem like it poses any... (read more)

Don't Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
...
One could argue that R_CDT sympathists don't actually have much stronger intuitions regarding the first principle than the second -- i.e. that their intuitions aren't actually very "targeted" on the first o
... (read more)
There may be a pretty different argument here, which you have in mind. I at least don't see it yet though.

Perhaps the argument is something like:

  • "Don't make things worse" (DMTW) is one of the intuitions that leads us to favoring R_CDT
  • But the actual policy that R_CDT recommends does not in fact follow DMTW
  • So R_CDT only gets intuitive appeal from DMTW to the extent that DMTW was about R_'s, and not about P_'s
  • But intuitions are probably(?) not that precisely targeted, so R_CDT shouldn't get to claim the full intuitive endors
... (read more)
4
bgarfinkel
4y
Here are two logically inconsistent principles that could be true:

Don't Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.

Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.

I have strong intuitions that the first one is true. I have much weaker (comparatively negligible) intuitions that the second one is true. Since they're mutually inconsistent, I reject the second and accept the first. I imagine this is also true of most other people who are sympathetic to R_CDT.

One could argue that R_CDT sympathists don't actually have much stronger intuitions regarding the first principle than the second -- i.e. that their intuitions aren't actually very "targeted" on the first one -- but I don't think that would be right. At least, it's not right in my case.

A more viable strategy might be to argue for something like a meta-principle:

The 'Don't Make Things Worse' Meta-Principle: If you find "Don't Make Things Worse" strongly intuitive, then you should also find "Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse" just about as intuitive.

If the meta-principle were true, then I guess this would sort of imply that people's intuitions in favor of "Don't Make Things Worse" should be self-neutralizing. They should come packaged with equally strong intuitions for another position that directly contradicts it. But I don't see why the meta-principle should be true. At least, my intuitions in favor of the meta-principle are way less strong than my intuitions in favor of "Don't Make Things Worse" :)
both R_UDT and R_CDT imply that the decision to commit yourself to a two-boxing policy at the start of the game would be rational

That should be "a one-boxing policy", right?

1
bgarfinkel
4y
Yep, thanks for the catch! Edited to fix.

Thanks! This is helpful.

It seems like following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have property Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn't even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reactio
... (read more)
5
bgarfinkel
4y
Hey again! I appreciated your comment on the LW post. I started writing up a response to this comment and your LW one, back when the thread was still active, and then stopped because it had become obscenely long. Then I ended up badly needing to procrastinate doing something else today. So here’s an over-long document I probably shouldn’t have written, which you are under no social obligation to read.