I'm not sure what difference in prioritization this would imply or whether we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse, and so the erosion of liberal norms is worth some people's time to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn't currently seem like thinking or working on this issue should be a priority for me (even within EA, other people seem to have a clear comparative advantage over me). I would feel differently if this were an existential issue or had a high enough impact, and I mostly dropped the conversation once it no longer seemed like that was at issue and the problem seemed to fall in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I'm happy enough to discuss, but I'm not sure there's a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like "what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future?" I think I have a probability of perhaps 2-5% on "meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence."
I'm always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that "think about it more" is cost-effective for me, but I'm more skeptical of that (I expect they'd instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn't intend for the grandparent to be pushing against that.
For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about "cancel culture" is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it's more likely I should do something. In that case I do think it's clear there is going to be a lot of damage, though again I think we differ a bit in that I'm more scared about the health of our institutions than people like me losing influence.
My process was to check the "About the forum" link on the left hand side, see that there was a section on "What we discourage" that made no mention of hiring, then search for a few job ads posted on the forum and check that no disapproval was expressed in the comments of those posts.
I think that a scaled up version of GPT-3 can be directly applied to problems like "Here's a situation. Here's the desired result. What action will achieve that result?" (E.g. you can already use it to get answers like "What copy will get the user to subscribe to our newsletter?" and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)
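To make the "GPT-3 plus very simple search" idea concrete, here is a minimal sketch: generate candidate actions, score each with a model, and pick the best. The scoring function below is a toy heuristic standing in for a model fine-tuned on actual customer behavior; `score_copy`, `best_action`, and the candidate strings are all hypothetical illustrations, not any real system.

```python
# Sketch of "language model + very simple search": score candidate actions
# and take the argmax. A real system would replace score_copy with a model
# fine-tuned to predict customer behavior (e.g. sign-up probability).

def score_copy(copy: str) -> float:
    """Stand-in for a fine-tuned model's predicted sign-up probability."""
    # Toy heuristic: shorter copy with an explicit call to action scores higher.
    base = 1.0 / (1 + len(copy))
    bonus = 0.5 if "subscribe" in copy.lower() else 0.0
    return base + bonus

def best_action(candidates: list[str]) -> str:
    """The 'very simple search algorithm': exhaustive argmax over candidates."""
    return max(candidates, key=score_copy)

candidates = [
    "Sign up for our weekly digest of long reads.",
    "Subscribe now: one email a week, no spam.",
    "We write about AI alignment. Maybe you'd like emails?",
]
print(best_action(candidates))
```

The point of the sketch is only that once a model can score outcomes, even trivial search turns it into an action-selection system.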
I think that if GPT-3 was more powerful then many people would apply it to problems like that. I'm concerned that such systems will then be much better at steering the future than humans are, and that none of these systems will be actually trying to help people get what they want.
A bunch of people have written about this scenario and whether/how it could be risky. I wish that I had better writing to refer people to. Here's a post I wrote last year to try to communicate what I'm concerned about.
Hires would need to be able to move to the US.
No, I'm talking somewhat narrowly about intent alignment, i.e. ensuring that our AI system is "trying" to do what we want. We are a relatively focused technical team, and a minority of the organization's investment in safety and preparedness.
The policy team works on identifying misuses and developing countermeasures, and the applied team thinks about those issues as they arise today.
The conclusion I draw from this is that many EAs are probably worried about cancel culture ("CC") but are afraid to talk about it publicly, because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to the development of CC do not depend on the balance of opinions favoring CC; they only require that those who are against it are afraid to speak up honestly and publicly (cf. "preference falsification"). That seems to already be the situation today.
It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.
That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.
If someone convinced me to get more pessimistic about "cancel culture" then I'd definitely think about it more. I'd be interested in concrete forecasts if you have any. For example, what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?
Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I'd want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like "Censure those who don't censure others for not censuring others for problematic speech" and take that to its extreme, but the rest of the world will get along fine without them and it's not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn't look like they will win elections).
in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)
I don't feel that way. I think that "exclude people who talk openly about the conditions under which we exclude people" is a deeply pernicious norm and I'm happy to keep blithely violating it. If a group excludes me for doing so, then I think it's a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)
I'm generally supportive of pro-speech arguments and efforts and I was glad to see the Harper's letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.
I generally try to state my mind if I believe it's important, don't talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can't discuss in public then my intention is to discuss them.
I would only intend to join an internet discussion about "cancellation" in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).
Thanks, super helpful.
(I don't really buy an overall take like "It seems unlikely," but it doesn't feel that mysterious to me where the difference in take comes from. From the super zoomed-out perspective, 1200 AD is just yesterday relative to 1700 AD, and random fluctuations over 500 years seem super normal, so my money would still be on "in 500 years there's a good chance that China would have again been innovating and growing rapidly, and if not, then in another 500 years it's reasonably likely..." It makes sense to describe that situation as "nowhere close to IR," though. And it does sound like the super fast growth is a blip.)
I took numbers from Wikipedia but have seen different numbers that seem to tell the same story although their quantitative estimates disagree a ton.
The first two numbers are both higher than growth rates could plausibly have been in a sustained way during any previous part of history (and the 0-1000 AD one probably is as well), and they seem to be accelerating rather than returning to a lower mean (as must have happened during any historical period of similar growth).
My current view is that China was also historically unprecedented at that time and probably would have had an IR shortly after Europe. I totally agree that there is going to be some mechanistic explanation for why Europe caught up with and then overtook China, but from the perspective of the kind of modeling we are discussing I feel super comfortable calling it noise (and expecting similar "random" fluctuations going forward that also have super messy contingent explanations).
If one believed the numbers on Wikipedia, it seems like Chinese growth was also accelerating a ton, and China was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.
If you are trying to model things at the level that Roodman or I are, the difference between 1400 and 1600 just isn't a big deal; the noise terms are on the order of 500 years at that point.
So maybe the interesting question is if and why scholars think that China wouldn't have had an IR shortly after Europe (i.e. within a few centuries, a gap small enough that it feels like you'd have to have an incredibly precise model to be justifiably super surprised).
Maybe particularly relevant: is the claimed population growth from 1700-1800 just catch-up growth to Europe? (More than doubling in 100 years! And over the surrounding time period the observed growth seems very rapid even if there are moderate errors in the numbers.) If it is, how does that work given claims that Europe wasn't so far ahead by 1700? If it isn't, then how does that not very strongly suggest incredible acceleration in China, given that it had very recently had some of the fastest growth in history and was then experiencing even more unprecedented growth? Is it a sequence of measurement problems that just happen to suggest acceleration?
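For a sense of scale on that doubling claim: compound-growth algebra says doubling over a century implies a sustained rate of roughly 0.7% per year, which is very high by pre-modern standards. A quick check (the factor of 2 comes from the discussion above; the rest is arithmetic):

```python
# Annualized growth rate implied by total growth over a period:
# factor^(1/years) - 1.

def annual_rate(growth_factor: float, years: float) -> float:
    """Annualized growth rate implied by a total growth factor over `years`."""
    return growth_factor ** (1 / years) - 1

rate = annual_rate(2.0, 100)  # population doubling in a century
print(f"{rate:.4%}")          # roughly 0.70% per year
```

Sustained over long stretches of pre-modern history, even 0.7%/year would compound to implausible totals, which is why rates like this read as acceleration rather than a stable baseline.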
My model is that most industries start with fast s-curve-like growth, then plateau, then often decline.
I don't know exactly what this means, but it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries. The obvious examples to me are semiconductors and AI since I deal most with those. But it also seems true of e.g. manufacturing, agricultural productivity, batteries, construction costs. It seems like industries where the productivity vs time curve is a "fast S-curve" are exceptional, which I assume means we are somehow reading the same data differently. What kind of industries would you characterize this way?
(I agree that e.g. "adoption" is more likely to be an s-curve given that it's bounded, but productivity seems like the analogy for growth rates.)
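The shape disagreement can be made explicit. Below is a sketch contrasting a logistic curve (the bounded "fast s-curve" that plateaus, natural for adoption) with a plain exponential (sustained compounding productivity improvement). All parameters are illustrative, not fit to any industry's data:

```python
import math

# Logistic: S-shaped, saturates at a ceiling (e.g. market adoption).
# Exponential: keeps compounding at a fixed rate (e.g. productivity).

def logistic(t: float, ceiling: float = 100.0, rate: float = 0.5,
             midpoint: float = 10.0) -> float:
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def exponential(t: float, start: float = 1.0, rate: float = 0.05) -> float:
    return start * math.exp(rate * t)

# The logistic flattens out after its midpoint, while the exponential's
# later gains dwarf its early ones.
for t in (0, 10, 20, 40):
    print(t, round(logistic(t), 1), round(exponential(t), 2))
```

On a log plot the exponential is a straight line while the logistic bends over, so decades of data usually suffice to tell which regime an industry's productivity series is in.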