
David Mathers🔸

5948 karma · Joined

Bio

Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance. 

Posts (11)


Comments (722)

Less clearly, sure. I'm mostly warning against the complacent assumption that liberals are safe from error just because you can use liberal ideas to criticize the bad things liberals have done; I'm not defending communism. Certainly lots of communists have, for example, attacked Stalinism in communist terms.

I don't really understand why liberalism is getting the prefix "classical" here, though. The distinction between "classical" and other forms of liberalism, like social liberalism, is mostly about how much the state should support the poor through the welfare state and how strong a presumption we should have in favour of market solutions over government ones, with agreement on secularism, individual human rights, free speech, pluralism, a non-zero-sum conception of markets and trade, etc. I also think that insofar as "liberals" have an unusually good record, this doesn't distinguish "liberals" in the narrow sense from other pro-democratic traditions that accept pluralism: European social democracy on the left, and European Christian democracy and Anglosphere mainstream conservatism 1965-2015 on the right. If anything, classical liberals might have a worse record than many of these groups, because I think classical liberal ideas were used by the British Empire in the 19th century to justify not doing anything about major famines. Of course there is a broad sense of "liberal" in which all these people are liberals too, and they may well have been influenced by classical liberalism. But they aren't necessarily on the same side as classical liberals in typical policy debates.

I think there is something to this, but the US didn't just "prop up" Suharto in the sense of having normal relations of trade and mutual favours even though he did bad things. (That may well indeed be the right attitude to many bad governments, and it's one that many leftists might demand the US take towards bad left-wing governments, yes.) They helped install him, a process which was incredibly bloody and violent, even apart from the long-term effects of his rule: https://en.wikipedia.org/wiki/Indonesian_mass_killings_of_1965%E2%80%9366

Remember also that the same people are not necessarily making all of these arguments. Relatively few of the radical leftists saying the first two things are also making a huge moral deal about the US failing to help Ukraine, I think, even if they are strongly against the Russian invasion. It's mostly liberals who are saying the third one.

Communism is a "reason-based" ideology, at least originally, in that it sees itself as secular, scientific, and dispassionate, based on hard economics rather than tradition or God. Yes, Marxists tend to be keener than liberals on invoking sociological explanations for people's beliefs, but even Marxists usually believe social science is possible, and even liberals admit that people's beliefs are distorted by bias all the time, so the difference is one of emphasis rather than fundamental commitment, I think.

This isn't a defence of communism particularly. The mere fact that people claim that something is the output of reason and science doesn't mean it actually is. That goes for liberalism too. 

"Classical liberalism provides the intellectual resources to condemn the Jakarta killings. " 

Communism probably also provides intellectual resources that would enable you to condemn most of the many very bad things communists have done, but that doesn't mean that those outcomes aren't relevant to assessing how good an idea communism is in practice. 

Not that you said otherwise, and I am a liberal, not a communist. But I do think liberals can sometimes be a bit too quick to conclude that all crimes of liberal regimes have nothing distinctive to do with liberalism, while presuming that communist, fascist, and theocratic crimes are inherent products of communism/fascism/theocracy. (I have less than zero time for fascism or theocracy, to be clear.)

The report has many authors, some of whom may be much less concerned, or think the whole thing is silly. I never claimed that Bengio and Hinton's views were a consensus, and in any case, I was citing their views as evidence for taking seriously the idea that AGI may arrive soon, not their views on how risky AI is. I'm pretty sure I've seen them give relatively short timelines when speaking individually, but I guess I could be misremembering. For what it's worth, Yann LeCun seems to think 10 years is about right, and Gary Marcus seems to think a guess of 10-20 years is reasonable: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have

I guess I'm just slightly confused about what economists actually think here, since I'd always thought they took fairly seriously the idea that markets and investors are mostly quite efficient most of the time.

I don't know if or how much EA money should go to AI safety either. EAs are trying to find the single best thing, it's very hard to know what that is, and many worthwhile things will fail that bar. Maybe David Thorstad is right, and small X-risk reductions have relatively low value because another X-risk will get us in the next few centuries anyway*. What I do think is that it is likely optimal for society as a whole to spend some resources caring about the risk of AGI arriving in the next ten years, and that doing so is no sillier than doing many other obviously good things. I don't actually give to AI safety myself; I give to animal welfare. I only work on AI-related stuff (forecasting etc., I'm not a techy person) because it's what people are prepared to pay more for, and people being prepared to pay me to work on near-termist causes is less common, though it does happen.

If you really believe that everyone putting money into OpenAI etc. will only see returns if they achieve AGI, that seems to me to be a point in favour of "there is a non-negligible risk of AGI in the next 10 years". I don't believe that, but if I did, that alone would significantly raise the chance I give to AGI within the next 10 years. But yes, they have some incentive to lie here, or to lie to themselves, obviously. Nonetheless, I don't think that means their opinion should get zero weight. For it to actually have been some amazing strategy for them to talk up the chances of AGI *because it attracted cash*, you'd have to believe they can fool outsiders with serious money on the line, and that this will be profitable for them in the long term rather than crashing and burning when AGI does not arrive. I don't think that is wildly unlikely or anything; indeed, I think it is somewhat plausible (though my guess is Anthropic in particular believe their own hype). But it does require a fairly high degree of foolishness on the part of other quite serious actors. I'm much more sure of "raising large amounts of money for stuff that obviously won't work is relatively hard" than I am of any argument about how far we are from AGI that looks at the direct evidence, since the latter sort of arguments are very hard to evaluate. I'd feel very differently here if we were arguing about a 50% chance of AGI in ten years, or even a 10% chance. It's common for people to invest in things that probably won't work but have a high pay-off if they do. But what you're saying is that Richard is wrong to think there is a non-negligible risk, because the chance is significantly under 1%. I doubt there are many takers for something like a "1 in 1000" chance of a big pay-off.

It is of course not THAT unlikely that they are fooling the serious money: serious investors make mistakes, and even the stock market does. Nonetheless, being able to attract serious investors who are genuinely only investing because they think you'll achieve X, while simultaneously being under huge media attention and scrutiny, is a credible signal that you'll eventually achieve X.

I don't think the argument I've just given is all that definitive, because the labs have other incentives to hype, like attracting top researchers (who I think are probably easier to fool: even if they are fooled about AGI, working at a big lab was probably good for them anyway, quite unlike the people funding the labs, who just lose money if they are fooled). So it's possible that the people pouring serious money in don't take any of the AGI stuff seriously. Nonetheless, I trust "serious organisations with technical prowess seem to be trying to do this" as a signal to take something minimally seriously, even if they have some incentive to lie.

Similarly, if you really think Microsoft and Google have taken decisions that will crash their stock if AGI doesn't arrive, I think a similar argument applies: are you really sure you're better than Microsoft and Google at evaluating whether there is a non-negligible chance that the tech industry will achieve a given technology? Eventually, if AGI is not arriving from the huge training runs being planned in the near future, people will notice, and Microsoft and Google don't want to lose money 5 years from now either. Again, it's not THAT implausible that they are mistaken; mistakes happen. But you aren't arguing that there probably won't be AGI in ten years (a claim I actually strongly agree with!) but rather that Richard was way off in saying that it's a tail risk we should take seriously given how important it would be.

Slower progress on one thing than another does not mean no progress on the slower thing. 

"despite those benchmarks not really being related to AGI in any way." This is your judgment, but clearly it is not the judgment of some of the world's leading scientific experts in the area. (Though there may well be other experts who agree with you.) 

*Actually, Thorstad's opinion is more complicated than that: he says this is true conditional on X-risk currently being non-negligible, but as far as I can tell he doesn't himself endorse the view that it currently is non-negligible.

METR has an official internal view on what time horizons correspond to "takeover not ruled out"? 

Yeah, I am inclined to agree (for what my opinion is worth, which on this topic is probably not that much) that there will be many things AIs can't do even once they have a METR 80% time horizon of, say, 2 days. But I am less sure of that than I am of the meta-level point about this being an important crux.

Sure, but I wasn't really thinking of people on LessWrong, but rather of the fact that at least some relevant experts outside the LW milieu seem worried and/or think that AGI is not THAT far off. I.e. Hinton, Bengio, and Stuart Russell (for danger), and even people often quoted as skeptical experts*, like Gary Marcus or Yann LeCun, often give back-of-the-envelope timelines of 20 years, which is not actually THAT long. Furthermore, I actually do take the predictions of relatively near-term AGI by Anthropic, and the fact that DeepMind and OpenAI have building AGI as a goal, to carry some weight here. Please don't misunderstand me, I am not saying that these orgs are trustworthy exactly: I expect them to lie in their own interests to some degree, including about how fast their models will get better, and also to genuinely overestimate how fast they will make progress. Nonetheless they are somewhat credible in the sense that a) they are at the absolute cutting edge of the science here and have made some striking advances, b) they have some incentive not to overpromise so much that no one ever believes anything they say again, and c) they are convincing enough to outsiders with money that those outsiders keep throwing large sums at them, which suggests the outsiders at least expect reasonably rapid advancement, whether or not they expect AGI itself, and which is also evidence that these are serious organizations.

I'd also say that near-term AGI is somewhat disanalogous to Hinduism, ESP, Christianity, crystal healing etc., in that all of those things conflict fairly directly with a basic scientific worldview: they describe things that would plausibly violate known laws of physics, or are clearly supernatural in a fairly straightforward sense. That's not true of near-term AGI.

Having said that, I certainly agree that it is not completely obvious that there is enough real expertise behind predictions of near-term AGI to treat them with deference. My personal judgment is that there is, but once we get away from obvious edge cases, like textbook hard science on the one hand and "experts" in plainly supernatural things on the other, it gets hard to tell how much deference people deserve.

There's also an issue of priors here, of course: I don't think "AGI will be developed in the next 100 years" is an "extraordinary" claim in the same sense as supernatural claims, or even just something unlikely but possible like "Scotland will win the next football World Cup". We know it is possible in principle; we know technology can advance quickly over a timespan of decades (just compare where flight was in 1900 with where it was in 1960); trillions of dollars are going to be spent advancing AI in the near term; and while Moore's law is breaking down, we haven't actually hit the in-principle theoretical limits on how good chips can be, and more money and labour is currently going into making advanced AI than ever before. If we say there's a 25% chance of it being developed in the next hundred years, dividing that evenly per decade gives a 2.5% chance of it arriving in the next decade. Even if we cut that by 5x for the next decade, that would still give a 0.5% chance, which I think is worth worrying about given how dramatic the consequences of AGI would be. (Of course, you personally have lots of arguments against it being near, but I am just talking about what it's reasonable to expect from broad features of the current situation before we get into the details.) But of course, forecasting technology 100 years out is extraordinarily hard, since forecasting in general is hard beyond the next few years, so maybe 25% is way too high (although the uncertainty cuts both ways).
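To spell the back-of-the-envelope arithmetic out explicitly, here is a minimal sketch; the 25% century-level figure and the 5x discount for the coming decade are just the illustrative numbers above, not considered estimates:

```python
# Back-of-the-envelope prior for AGI arriving in the next decade,
# using the illustrative numbers from the paragraph above.
p_century = 0.25                  # assumed chance of AGI within 100 years
p_per_decade = p_century / 10     # spread evenly across ten decades: 2.5%
p_next_decade = p_per_decade / 5  # extra 5x discount for the coming decade
print(f"{p_per_decade:.1%} per decade, {p_next_decade:.1%} for the next decade")
# -> 2.5% per decade, 0.5% for the next decade
```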

I get that the worry here is that people can just say any possible technology might happen soon, so if it were very consequential we should worry about it now. But my response is just that if several of the world's largest, fastest-growing, or most innovative companies claim or hint that they are building a technology, and a Nobel-winning scientist in the relevant area agrees that they very well might be, then probably this is right, whereas if no one serious is working towards a technology, a higher degree of skepticism is probably warranted. (That presumption could be overcome if almost all scientists with relevant expertise thought Bengio and Hinton were complete cranks on this, but I haven't seen strong evidence of that.)

*In fairness, my guess is that they are actually more bullish on AGI than many people in machine learning, but that is only a guess. 
