I'm a researcher at the London School of Economics and Political Science, working at the intersection of moral psychology and philosophy.
I also guess cry-wolf effects won't be as large as one might think - e.g., I think people will look more at how strong AI systems appear at a given point than at whether people have previously warned about AI risk.
Thanks, very interesting.
Regarding the political views, there are two graphs showing different numbers. Does the first include people who didn't respond to the political views question, whereas the second excludes them? If so, it might be good to clarify that. You might also clarify that the first graph/set of numbers doesn't sum to 100%. Alternatively, you could just present the data that excludes non-responses, since that's in my view the more interesting data.
Yes, I think that him being interviewed by 80K, for example, didn't make much of a difference. I think that EA's reputation would inevitably be tied to his to an extent, given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.
"Co-writing with Julia would be better, but I suspect it wouldn't go well. While we do have compatible views, we have very different writing styles, and I understand taking on projects like this is often hard on relationships."
Perhaps there are ways of addressing this. For instance, you could write separate chapters or parts, or have some kind of dialogue between the two of you. The idea would be that each person owns part of the book. I'm unsure about the details, but maybe you could find a solution.
Informed speculation might ... confuse people, since there's already plenty of work people call "AI forecasting" that looks similar to what I'm talking about.
Yes, I think using the term "forecasting" for what you do is established usage - it's effectively a technical term. Calling it "informed speculation about AI" in the title would not be helpful, in my view.
Great post, btw.
I find some of the comments here a bit implausible and unrealistic.
What people write online will often affect their reputation, positively or negatively. It may not mean that they have, e.g., no chance of getting an EA job, but there are many other reputational consequences.
I also don't think that updating one's views of someone based on what they write on the EA Forum is necessarily always wrong (even though there are no doubt many updates that are unfair or unwarranted).
Hm, Rohin has some caveats elaborating on his claim.
(Not literally so -- you can construct scenarios like "only investors expect AGI while others don't" where most people don't expect AGI but the market does expect AGI -- but these seem like edge cases that clearly don't apply to reality.)
Unless they were edited in after these comments were written (which doesn't seem to be the case, afaict), it seems you should have taken those caveats into account instead of just critiquing the uncaveated claim.
I gave an argument for why I don't think the cry-wolf effects in World A would be as large as one might think. Afaict your comment doesn't engage with my argument.
I'm not sure what you're trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we've solved the problem. Whether some people will then be accused of having cried wolf seems far less important by comparison.