Robert_Wiblin


@Linch Thanks for these questions, I will definitely use them. Two quick thoughts:

"where there is strong evidence of their internal feelings (e.g. autobiographies or other detailed biographies/interview) are pretty hard on themselves (John Stuart Mill, Maurice Hilleman, Elon Musk..."

Given the way self-compassion is used in this research I would actually expect Elon Musk to show up as self-compassionate (or at least in the middle of the scale), because I doubt that he spends much time ruminating and feeling ashamed of his past mistakes, or feeling "disapproving and judgmental about his flaws and inadequacies".

The issue is that this construct of self-compassion is related to, but somewhat different from, the way the term is used in ordinary speech.

For the same reason I wouldn't assume that adult Mill or Hilleman or the other people mentioned would show up as lacking self-compassion the way it's defined here (though some surely would).

"self-compassion is pretty opposed to growth-mindset and "I want to be stronger" attitudes"

I don't think self-compassion in the way that she uses it is in any way opposed to growth-mindset or aiming for self-improvement. You can listen to the interview and see if you're convinced! :)

That's interesting. My personal observation is that the most productive/successful people I know are more self-compassionate on average.

One of course needs to also look at people who achieve the least (or otherwise have bad lives) to avoid selecting on the dependent variable.

Among that group lack of self-compassion seems to have very high prevalence.

(If I recall correctly, John Stuart Mill is famous for having a severe mental breakdown at 20 and radically adjusting his world-view in order to make life more liveable: https://ps.psychiatryonline.org/doi/10.1176/appi.ps.54.10.1347 .)

This argument has some force but I don't think it should be overstated.

Re perpetual foundations: Every mention of perpetual foundations I can recall has opened with the Franklin example, among other historical parallels, so I don't think its advocates could be accused of being unaware that the idea has been attempted!

It's true at least one past example didn't pan out. But cost-benefit analysis of perpetual foundations builds in an annual risk of misappropriation or failure. In fact such analyses typically expect 90%+ of such foundations to achieve next to nothing, maybe even 99%+. As with business start-ups, the argument is that the 1 in 100 that succeeds will succeed big enough to pay for all the failures.
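To make the structure of that argument concrete, here's a minimal sketch with made-up numbers (the growth rate, annual failure probability and horizon below are assumptions for illustration, not figures from any actual analysis):

```python
# Hypothetical illustration: expected value of a perpetual foundation
# when there is a fixed annual risk of misappropriation or failure.
# All numbers are assumed purely for illustration.

initial_endowment = 1.0      # normalised starting capital
real_growth_rate = 0.05      # assumed annual real return
annual_failure_prob = 0.03   # assumed chance per year the foundation is lost
horizon_years = 200          # assumed payout date

# Probability the foundation survives the whole horizon.
survival_prob = (1 - annual_failure_prob) ** horizon_years

# Value conditional on surviving to the payout date.
value_if_survives = initial_endowment * (1 + real_growth_rate) ** horizon_years

expected_value = survival_prob * value_if_survives

print(f"Survival probability: {survival_prob:.4f}")      # ~0.0023, i.e. >99% fail
print(f"Value if it survives: {value_if_survives:,.0f}")  # ~17,000x the endowment
print(f"Expected value:       {expected_value:,.1f}")     # still ~39x the endowment
```

On those invented numbers more than 99% of foundations are lost before ever paying out, yet the expected value is still dozens of times the original endowment.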

So seeing failed past examples is entirely consistent with the arguments for them and the conclusion that they are a good idea.

Re communist revolutions: Many groups have tried to change how society is organised or governed in the hope of producing a better world. Almost all the past examples of such movements I can think of expected benefits to come fairly soon — within a generation or two at most — and though advocates for such changes usually hoped the benefits would be long-lasting, benefits millions of years in the future are hardly what motivated their participants.

Many, especially the most violent ones, have been disastrous, like the various communist revolutions you refer to. Others have of course been by and large positive, such as the movements to broaden the franchise, end slavery, weaken the power of monarchies or phase out non-self-governing territories. Many people also died for those causes. Advocates for those ideas hoped to benefit both current and future generations in much the same way as the communists did, and on the whole I think human governance has improved as a result of all these efforts in aggregate over the last thousand years.

On balance I don't think communists are a closer match to longtermism than many other incremental and radical political movements (culturally it's probably the opposite). And if you broaden the range of reform movements considered then it's hard to know whether they've been a success or failure.

(Or, as they say, it's still too soon to tell whether the French revolution was a good idea!)

Re religions: Religions are a possible analogy for longtermism, but while they are very numerous, I don't think the similarities are strong enough to make them an especially compelling reference class.

What's most distinctive about religions isn't that they're focused on the very long term. In fact many of them are millennialist, or see the universe as fundamentally cyclical, or are focused on reaching an unchanging Platonic realm in which time is a meaningless concept.

For comparison one could also argue from 'almost all religions proposed are wrong' (necessarily so because they are numerous and contradictory) to 'almost all broad worldviews or opinions are wrong, including the pro-incrementalist one you present in this blog post'. I don't find that a very strong rebuttal to your view for the same reason I don't find it a strong rebuttal of longtermism.

Re the longtermist paradox: I agree 99%+ of improvements to the world have been driven by people trying to improve things in a more immediate way than longtermists typically do. But using the same definition I also think 99.9%+ of all human effort has gone towards such things.

We need to divide the impact by the inputs to see how cost-effective those actions are. Even if longtermism is far more cost-effective than non-longtermism, we should expect non-longtermism to dominate total impact because it is vastly larger in scale.
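As a toy illustration of that division (the shares below are just the rough figures from the previous two paragraphs, not measurements of anything):

```python
# Toy arithmetic: total impact tells you little about cost-effectiveness
# until you divide by the effort that produced it. The shares below are
# the rough figures from the comment above, used purely for illustration.

longtermist_share_of_impact = 0.01    # "99%+ of improvements" came from near-term work
longtermist_share_of_effort = 0.001   # "99.9%+ of all human effort" went to near-term work

near_term_share_of_impact = 1 - longtermist_share_of_impact
near_term_share_of_effort = 1 - longtermist_share_of_effort

longtermist_effectiveness = longtermist_share_of_impact / longtermist_share_of_effort
near_term_effectiveness = near_term_share_of_impact / near_term_share_of_effort

# Per unit of effort, longtermist work comes out ~10x more effective on these
# numbers, even though it accounts for only ~1% of total impact.
print(longtermist_effectiveness / near_term_effectiveness)  # ~10.1
```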

Through history so few people have done things that would only be particularly recommended by longtermism that we just don't know yet whether it will pan out in practice. The distribution of impact across projects should be expected to be very fat tailed, so even after the fact it will require a huge sample to empirically assess in a statistically sound way. Sometimes life just sucks that way!

It seems to me these amounts would be most useful if they were adjusted for inflation (or, if you want to be fancy, adjusted for an index of knowledge workers' wages). As it is, the real value of funding dispersed in the early years is being understated.
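For what it's worth the adjustment itself is straightforward; a minimal sketch, where the deflator values are placeholders rather than real CPI data:

```python
# Convert nominal amounts to a common price level using a deflator index.
# The deflator values below are placeholders, not actual CPI figures.

deflator = {2015: 0.85, 2018: 0.92, 2022: 1.00}  # base year 2022 = 1.00

def to_real(amount_nominal: float, year: int, base_year: int = 2022) -> float:
    """Express a nominal amount in base-year dollars."""
    return amount_nominal * deflator[base_year] / deflator[year]

print(to_real(10_000_000, 2015))  # ~11.8M in 2022 dollars
```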

It's hard to follow your argument, but how is any of this different from "someone thought X was very unlikely but then X happened, so this shows estimating the likelihood of future events is fundamentally impossible and pointless"?

That line of reasoning clearly doesn't work.

Things we assign low probability to in highly uncertain areas happen all the time — but that is exactly what we should expect and is consistent with our credences in many areas being informative and useful.

I use Anki and would also be interested to see any decks people have made about EA-related topics.

Let's say I offered you a bet that we'll have a commercially viable nuclear fusion plant operating by 2030, and said you could take either side at 100-1 odds: the bet in favour, or the bet against.

(So in the first case you ~100x your money if it happens, in the second you ~100x your money if it doesn't.)

Would you be neutral between taking the 'yes' bet and the 'no' bet?

If not, I think it's because you know we can form roughly informed views and expectations about how likely various advances are, using all kinds of different methods, and need not be completely agnostic.

If you would be indifferent, I think your view is untenable and I would like to make a lot of bets about future technological/scientific progress with you.
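A minimal sketch of why indifference between the two sides pins down your credence (the payoffs below just formalise the 100-1 odds above, with an arbitrary 1-unit stake):

```python
# Expected value of each side of the 100-1 fusion bet, as a function of
# your credence p that commercial fusion arrives by 2030. Stake = 1 unit.

def ev_yes(p: float) -> float:
    # Win 100 units if fusion happens, lose the 1-unit stake if it doesn't.
    return p * 100 - (1 - p) * 1

def ev_no(p: float) -> float:
    # Win 100 units if fusion doesn't happen, lose the 1-unit stake if it does.
    return (1 - p) * 100 - p * 1

for p in (0.01, 0.10, 0.50, 0.90):
    print(f"p={p:.2f}  EV(yes)={ev_yes(p):6.2f}  EV(no)={ev_no(p):6.2f}")

# The two sides have equal expected value only at p = 0.5. Preferring one
# side at these odds already expresses an informed view about how likely
# the technology is — which is the point.
```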

"There's no consensus in the field of AI that AGI poses a real risk."

I'm not sure what the threshold is for consensus but a new survey of ML researchers finds:

"Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016. ...

The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. ... Many respondents put the chance substantially higher: 48% of respondents gave at least 10% chance of an extremely bad outcome. Though another 25% put it at 0%."

This is certainly closer to what folks involved in EA actually think (and reflects that there are many topics where people hold a wide range of views and aesthetics).

I interpreted them not as saying that Terminator underplays the issue but rather that it misrepresents what a real AI would be able to do (in a way that probably makes the problem seem far easier to solve). But that may be me suffering from the curse of knowledge.
