Don’t Be Comforted by Failed Apocalypses

I'll re-word my comment to clarify the part re: "the dangers of anthropic reasoning". I always forget whether "anthropic" applies to making claims without conditioning on our existence, or to the claim that we need to condition on our existence when making claims.

Don’t Be Comforted by Failed Apocalypses

This is a good thing to flag. I actually agree re: anthropic reasoning (though frankly I always feel a bit unsettled by its fundamentally unscientific nature). 

My main claim re: AI—as I saw it—was that the contours of the AI risk claim matched quite closely to messianic prophecies, just in modern secular clothing (I'll note that people both agreed and disagreed with me on this point, and interested people should read my short post and the comments). I still stand by that fwiw—I think it's at minimum an exceptional coincidence.

One underrated response that I have been thinking about was by Jason Wagner, who paraphrased one reading of my claim as:

"AI might or might not be a real worry, but it's suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy.  Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend."

In this reading AI risk is real, no one has a great sense of how to explain it because much of its nature is unknown and simply weird, and so we fall back on narratives that we understand—so Christian-ish messiah-type stories.

How to apply for a PhD

This was a good post overall, I just have one modification.

  1. Your advisor is the most important choice you can make. Talk to as many people as possible in the lab before you join it. If you and your advisor do not get along, your experience will be terrible.

I received this advice, and things worked out for me, but it's dangerously incomplete. It is true that you need a good relationship with an advisor, and their recommendation letter matters when you're on the job market. But in many fields the prestige of the department and university matters more. Put simply: you should probably go to the most prestigious PhD program that will take you. See this, for example: "Across disciplines, we find that faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality." Prestige is especially important if you want an academic position.

Could economic growth substantially slow down in the next decade?

Personally, I'm more worried about this paper. Here is a Vox writeup. I don't know whether the linear growth story is true, and even if it were, we could easily hit another break point (AI, anyone?), but I'm more worried about this kind of decline than a blowup like LTG suggests.

I'm not an expert in this area, but I think the paper you're pointing to leans far too hard on a complicated model with a bad track record, and I'm put off by how little they compare model predictions against real data (e.g., using graphs). If I wanted to show off how good a model was, I'd be much more transparent.

(note: Jackson makes a similar point re: lack of transparency).

The AI Messiah

I'm not sure that it's purely "how much to trust inside vs outside view," but I think that is at least a very large share of it. I also think the point on what I would call humility ("epistemic learned helplessness") is basically correct. All of this is by degrees, but I think I fall more to the epistemically humble end of the spectrum when compared to Thomas (judging by his reasoning). I also appreciate any time that someone brings up the train to crazy town, which I think is an excellent turn of phrase that captures an important idea.

The AI Messiah

I really appreciate this response, which I think understands me well. I also think it expresses some of my ideas better than I did. Kudos Thomas. I have a better appreciation of where we differ after reading it.

The AI Messiah

I appreciate the pushback. I'm thinking of all claims that go roughly like this: "a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish." This is narrower than "all transformative change" but broader than something that conditions on a specific kind of technology. To me personally, this feels like the natural opening position when considering concerns about AGI.

I think we probably agree that claims of this type are rarely correct, and I understand that some people have inside view evidence that sways them towards still believing the claim. That's totally okay. My goal was not to try to dissuade people from believing that AGI poses a possibly large risk to humanity, it was to point to the degree to which this kind of claim is messianic. I find that interesting. At minimum, people who care a lot about AGI risk might benefit from realizing that at least some people view them as making messianic claims.

The AI Messiah

Thanks for the kind words Richard.

Re: your first point: I agree people have inside view reasons for believing in risk from AGI. My point was just that it's quite remarkable to believe that, while all those other times the god-like figure didn't show up, this time we're right. I realize this argument will probably sound unsatisfactory to many people. My main goal was not to persuade people away from focusing on AI risks; it was to point out that the claims being made are very messianic, and that that is interesting sociologically.

Re: your second point: I should perhaps have been clearer: I am not making a parallel to religion as a way of criticizing EA. I think religions are kind of amazing. They're one of the few human institutions that have been able to reproduce themselves and shape human behaviour in fairly consistent ways over thousands of years. That's an incredible accomplishment. We could learn from them.

Is EA "just longtermism" now?

I expected you to be right, but when I looked at the 80k job board just now, of the 962 roles: 161 were in AI, 105 were in pandemics, and 308 were in global health and development. It's hard to say exactly how that relates to funding, but regardless, I think it shows development is also a major area of focus when measured by jobs instead of dollars.
