Davidmanheim's Comments

Are there superforecasts for existential risk?

I think I speak for the consensus when I say there's no clear way to decide whether this is correct without actually doing it - and the outcome would depend a lot on how much engagement the superforecasters had already had with these ideas. (If I got to pick the 5 superforecasters, even excluding myself, I could guarantee the result came out closer to either FHI's viewpoints or Will's.) Even if we picked from a "fair" reference class, if I could have them spend 2 weeks at FHI talking to people there, I think a reasonable proportion would be convinced - though perhaps that is less a function of neutrally updating towards correct ideas than of the way consensus emerges in groups.

Lastly, I have tremendous respect for Will, but I don't know that he's particularly well calibrated for making a prediction like this. (Not that I know he isn't - I just have no reason to think he's spent much time building that skillset.)

Are there superforecasts for existential risk?

Yes, but it's hard, and they don't work well. Such forecasts can, however, be made at least slightly better.

Good Judgment was asked to forecast the risk of a nuclear war in the next year - which helps somewhat with the time-frame question. Unfortunately, the Brier-score incentives for questions like this are still very weak.
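
To make "weak incentives" concrete, here is a minimal sketch in Python (the 1% one-year probability is an assumption for illustration, not a real estimate): under a Brier score, a forecaster who rounds a rare event down to zero loses almost nothing in expectation.

```python
# Sketch: why Brier-score incentives are weak for rare events.
# The 1% "true" probability below is illustrative, not an estimate.

def expected_brier(forecast: float, true_p: float) -> float:
    """Expected Brier score of `forecast` when the event occurs
    with probability true_p: p*(1-f)^2 + (1-p)*f^2."""
    return true_p * (1 - forecast) ** 2 + (1 - true_p) * forecast ** 2

true_p = 0.01
honest = expected_brier(0.01, true_p)  # report the true probability
lazy = expected_brier(0.00, true_p)    # round the rare event down to zero

print(f"honest: {honest:.6f}")  # 0.009900
print(f"lazy:   {lazy:.6f}")    # 0.010000
# The expected difference is 0.0001 - essentially no reward for
# distinguishing a 1% risk from a 0% risk on a one-year question.
```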

Ozzie Gooen and others have talked a lot about how to make forecasting better, and some of the ideas he has suggested relate to forecasting longer-term questions. I can't find a link to a public document, but here's one example (which may have been someone else's suggestion):

You ask people to forecast what probability people will assign in 5 years to the question "will there be a nuclear war by 2100?" (You might also ask whether there will be a nuclear war in the next 5 years, of course.) With this trick, the question(s) resolve in 5 years, and you get an approximate answer via iterated expectation. Extending this, you can also have them predict what probability people will assign in 5 years to the probability they will assign in another 5 years to the question "will there be a nuclear war by 2100?" - and by chaining predictions like this, you can transform very long-term questions into a series of shorter-term questions.
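
Here is a toy simulation of the chaining idea, with entirely made-up numbers: by the law of iterated expectations, the expected value of the forecast people will make in 5 years equals today's best probability, so forecasting the future forecast is, in expectation, equivalent to forecasting the long-term event itself.

```python
import random

# Toy model of a chained forecast (all numbers made up for illustration).
# p0 is today's "true" probability of nuclear war by 2100. In 5 years,
# forecasters see new evidence and publish an updated probability p5.
# Law of iterated expectations: E[p5] = p0, so predicting p5 today
# recovers p0, while the question still resolves in only 5 years.

random.seed(0)
p0 = 0.10
n = 100_000

total = 0.0
for _ in range(n):
    on_war_path = random.random() < p0    # crude evidence model
    base = 0.55 if on_war_path else 0.05  # chosen so E[p5] = p0 exactly
    total += base + random.uniform(-0.04, 0.04)

print(total / n)  # ~0.10: the mean future forecast matches p0
```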

There is other work in this vein, but to simplify, all of it takes the form "can we do something clever to slightly reduce the issues inherent in the fundamentally hard problem of getting short-term answers to long-term questions?" As far as I can see, there are no simple answers.

Civilization Re-Emerging After a Catastrophic Collapse

I disagree somewhat on a few things, but I'm not very strongly skeptical of any of these points. I do have a few points to consider about these issues.

Re: stable long-term despotism, you might look into the idea of "hydraulic empires" and their stability. I think that short of a similar monopoly, or a global singleton, other systems are unstable enough that they should evolve towards whatever is optimal. However, nuclear weapons, if developed early by one state, could also create a quasi-singleton. And I think the Soviet Union was actually less stable than it appears in retrospect, except for their nuclear monopoly.

I do worry that some aspects of central control could be more effective at producing robust technological growth when the tech ladder is clear, compared to uncontrolled competition in market economies: markets are better at the explore side of the explore-exploit tradeoff, and dictatorships are arguably better at exploitation. (In more than one sense.)

Re: China, the current level of technology is stabilizing their otherwise fragile control of the country. I would be surprised if similar stability were possible longer term without either a hydraulic empire, per the above, or similarly invasive advanced technologies - which would only arrive fairly late in a recovery. It's possible faster technological development would make this more likely.

In retrospect, 1984 seems far less worrying than a Brave New World-style anti-utopia. (But it's unclear that lots of happy people, guided centrally, is actually as negative as it's portrayed, at least according to some versions of utilitarianism.)

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

"The right question" has 2 components. First is that the thing you're asking about is related to what you actually want to know, and second is that it's a clear and unambiguously resolvable target. These are often in tension with each other.

One clear example is COVID-19 cases - you probably care about total cases much more than confirmed cases, but confirmed cases are much easier to use as a resolution criterion. You can construct more complex questions to deal with this, but that makes them harder to forecast. Forecasting excess deaths, for example, gets into whether people are more or less likely to die in car accidents during COVID-19, and whether COVID-reduction measures also blunt the spread of influenza. And forecasting retrospective population percentages that are antibody-positive runs into issues with sampling, test accuracy, and the timeline for when such estimates are made - not to mention relying on data that might not have been gathered by the time you want to resolve the question.
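
As a concrete instance of the test-accuracy issue, here's a small sketch using the standard Rogan-Gladen correction (the sensitivity and specificity figures are assumptions for illustration, not real test characteristics): the corrected seroprevalence can differ from the raw positive rate by enough to change how a question resolves.

```python
# Sketch: why "% antibody-positive" is a messy resolution target.
# Standard Rogan-Gladen correction; the sensitivity/specificity
# values are illustrative assumptions, not real test data.

def rogan_gladen(apparent: float, sensitivity: float,
                 specificity: float) -> float:
    """Corrected prevalence = (apparent + spec - 1) / (sens + spec - 1)."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

apparent = 0.04          # 4% of the sample tests positive
sens, spec = 0.90, 0.98  # assumed test characteristics

print(round(rogan_gladen(apparent, sens, spec), 4))  # 0.0227, not 0.04
# With a 2% false-positive rate, nearly half of a 4% raw positive rate
# can be noise - so the "answer" depends on which test you assume.
```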

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

I think that as you forecast across different domains, common themes start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer.

And re:

How does the distribution skill / hours of effort look for forecasting for you?

I would say there's a sharp cutoff in terms of needing a minimal level of understanding (the bar seems fairly high, but certainly doesn't require being in, say, the top 10%). After that, it's mostly effort, plus skill that is gained via feedback.

Civilization Re-Emerging After a Catastrophic Collapse

I'm very uncertain about the details, and have low confidence in all of these claims, even the ones we agree on, but I agree with your overall assessment.

I've assumed that while the speed changes, the technology tree is fairly unalterable - you need good metals and similar inputs to make many things up through 1800s-level technology, you need large-scale industry to make good metals, etc. But that's low confidence, and I'd want to think about it more. (This paper looks interesting: http://gamestudies.org/1201/articles/tuur_ghys.)
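
One way to picture a "fairly unalterable" tech tree: treat technologies as a dependency graph that any recovery must traverse in topological order, however quickly it moves. (The dependencies below are simplified illustrations, not historical claims.)

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy tech-dependency DAG (simplified, illustrative dependencies only):
# each technology maps to the set of prerequisites it needs.
tech_tree = {
    "agriculture": set(),
    "basic_metallurgy": {"agriculture"},
    "large_scale_industry": {"basic_metallurgy"},
    "good_metals": {"large_scale_industry"},
    "railroads": {"good_metals"},
    "electrification": {"good_metals"},
}

# Any recovery trajectory is some topological order of this graph -
# the speed can change, but not the ordering of the dependencies.
print(list(TopologicalSorter(tech_tree).static_order()))
# e.g. ['agriculture', 'basic_metallurgy', 'large_scale_industry',
#       'good_metals', 'railroads', 'electrification']
```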

Regarding political systems, I think that market economies with some level of distributed control, and political systems that allow feedback in somewhat democratic ways, are social technologies to which we have no clearly superior alternatives, despite centuries of thought. I'd argue that Fukuyama was right in "The End of History" about the triumph of democracy and capitalism; it's just that the end state seems to be taking longer to arrive than he assumed.

And finally, yes, the details of how these technologies and social systems play out in terms of cosmopolitan attitudes and the societal goals they reflect are much less clear. In general, I think humans are far more culturally plastic than people assume, and very different values are possible and compatible with flourishing in the general sense. But (if it were possible to know the answer) I wouldn't be too surprised to find that nearly fixed tech trees plus nearly fixed social-technology trees make cosmopolitan attitudes a very strong default, rather than an accidental, contingent reality.

Civilization Re-Emerging After a Catastrophic Collapse

I was focusing on "how much similarity we should expect between a civilization that has recovered and one that never collapsed in the first place," and I was saying that the degree of similarity in terms of likely progress is low, conditioning on any level of societal memory of the idea that progress is possible, and knowing (or seeing artifacts of the fact) that there once were billions of people who had flying machines and instant communication.

Civilization Re-Emerging After a Catastrophic Collapse

I think there's a clear counterargument, which is that the central ingredient historically missing for developing technologies was awareness that progress in a given area is possible. Unless almost literally all knowledge is destroyed, a recovery doesn't have this problem.

(Note: this seems to be a consensus view among people I talk to who have thought about collapse scenarios, but I can claim that only very loosely, based on a few conversations.)

Why "animal welfare" is a thing?

You still seem confused. You say your views are controversial, as if this community doesn't allow for and value controversial opinions, and you assume the negative reception was about the claims you made. That is not the case. Hopefully this comment is clear enough to explain.

1. This was a low-effort post. It was full of half-formed ideas; it had neither a title and introduction that related to the rest of the post, nor a clear conclusion. The sentences were not complete, and it clearly wasn't checked for grammar.

2. Look at successful posts on the forum. They contain full sentences, have a clear topic, explain their thoughts about that topic clearly, and engage with past discussion. It's important to notice the standards of a given forum before participating. In this case, you didn't bother looking at other posts or understanding the community norms.

3. You have not engaged with other posts, and may not have even read them. Your first attempt to post or comment reflects that lack of broader engagement. You have no post history to make people think you have given this any thought whatsoever.

4. Your unrelated comments link to your other irrelevant work, which seems crass.
