I did not ask for impressions about CGD, J-PAL, etc. I did ask an EA "feeling thermometer" question about EA in general (of the subset of people who said they knew enough about EA to discuss it with a friend), and I got the distribution below (0 is as negative as possible and 100 is as positive as possible):

[Histogram of feeling-thermometer responses]

That spike at 50 is an answer of total indifference, which again suggests that many of the people who said they knew about EA probably didn't know very much about it.
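For anyone who wants to poke at a distribution like this themselves, here is a minimal sketch of the figure. The numbers are synthetic stand-ins (with extra mass piled at 50 to mimic the indifference spike), not the actual survey responses:

```python
# Sketch only: synthetic placeholder data, not the real survey responses.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
thermometer = np.concatenate([
    rng.integers(0, 101, size=150),  # spread of genuine opinions
    np.full(50, 50),                 # pile-up of "50 = indifferent" answers
])

plt.hist(thermometer, bins=np.arange(0, 105, 5))
plt.xlabel("EA feeling thermometer (0 = negative, 100 = positive)")
plt.ylabel("Respondents")
plt.show()
```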
The question about "which subsets of the profession might be more or less interested in EA" is a very good one...
This is the breakdown of "discipline of PhD" in my sample.
| Academic discipline | Canada | United States | United Kingdom |
|---|---|---|---|
| Anthropology | 4 | 13 | 6 |
| Economics | 11 | 47 | 16 |
| Geography | 7 | 8 | 14 |
| History | 2 | 2 | 2 |
| International Development Studies | 10 | 4 | 23 |
| Linguistics and languages | 0 | 1 | 0 |
| Philosophy | 1 | 0 | 1 |
| Political science | 24 | 59 | 17 |
| Psychology | 0 | 1 | 1 |
| Public Policy or Public Administration | 0 | 3 | 0 |
| Sociology | 3 | 12 | 2 |
| Other | 10 | 21 | 25 |
| Nothing selected | 0 | 0 | 1 |
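(For anyone curious, a cross-tab like this is a one-liner to build from raw responses. A minimal sketch, assuming the data sit in a CSV with hypothetical column names "phd_discipline" and "country"; the real export may differ:

```python
# Sketch: build the discipline-by-country table from raw survey responses.
# Column names and file name are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")
table = pd.crosstab(df["phd_discipline"], df["country"])
print(table)
```
)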
Development economics is a subfield of economics, while international development is an interdisciplinary research area. The two are related but not the same. I think most international development p...
Thanks. I basically agree with what you say; I'd just note that lots of IDEV profs aren't economists. I'm writing something I'll aim at World Development (then JDS, then JID, etc.) based on the survey data, for exactly the reasons you describe.
I'll re-word my comment to clarify the part re: "the dangers of anthropic reasoning". I always forget whether "anthropic" refers to making claims without conditioning on existence, or to the claim that we need to condition on existence when making claims.
This is a good thing to flag. I actually agree re: anthropic reasoning (though frankly I always feel a bit unsettled by its fundamentally unscientific nature).
My main claim re: AI—as I saw it—was that the contours of the AI risk claim matched quite closely to messianic prophecies, just in modern secular clothing (I'll note that people both agreed and disagreed with me on this point, and interested people should read my short post and the comments). I still stand by that fwiw—I think it's at minimum an exceptional coincidence.
One underrated respo...
This was a good post overall; I just have one modification.
> Your advisor is the most important choice you can make. Talk to as many people as possible in the lab before you join it. If you and your advisor do not get along, your experience will be terrible.
I received this advice, and things worked out for me, but it's dangerously incomplete. It is true that you need a good relationship with an advisor, and their recommendation letter matters when you're on the job market. But for many areas the prestige of the department and university is more important. Pu...
Personally, I'm more worried about this paper. Here is a Vox writeup. I don't know that I think the linear growth story is true, and even if it were, we could easily hit another break point (AI, anyone?), but I'm more worried about this kind of decline than a blowup like LTG suggests.
I'm not an expert in this area, but I think the paper you're pointing to is leaning way too hard on a complicated model with a bad track record, and I'm weirded out by how little they compare model predictions and real data (e.g. using graphs). If I wanted to show off how awesome som...
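To make the complaint concrete, this is the sort of basic check I'd want to see. A sketch only: the numbers below are synthetic placeholders, and the real gross world product series and the paper's model output would replace them:

```python
# Sketch of a model-vs-data plot; all numbers are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1960, 2021)
observed = 1.5e13 * np.exp(0.035 * (years - 1960))   # placeholder "observed" series
predicted = 1.5e13 * (1 + 0.05 * (years - 1960))     # placeholder linear-growth model

plt.plot(years, observed, label="Observed (placeholder)")
plt.plot(years, predicted, "--", label="Model prediction (placeholder)")
plt.yscale("log")
plt.xlabel("Year")
plt.ylabel("Gross world product (USD, log scale)")
plt.legend()
plt.show()
```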
I'm not sure that it's purely "how much to trust inside vs outside view," but I think that is at least a very large share of it. I also think the point on what I would call humility ("epistemic learned helplessness") is basically correct. All of this is by degrees, but I think I fall more to the epistemically humble end of the spectrum when compared to Thomas (judging by his reasoning). I also appreciate any time that someone brings up the train to crazy town, which I think is an excellent turn of phrase that captures an important idea.
I really appreciate this response, which I think understands me well. I also think it expresses some of my ideas better than I did. Kudos, Thomas. I have a better appreciation of where we differ after reading it.
I appreciate the pushback. I'm thinking of all claims that go roughly like this: "a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish." This is narrower than "all transformative change" but broader than something that conditions on a specific kind of technology. To me personally, this feels like the natural opening position when considering concerns about AGI.
I think we probably agree that claims of this type are rarely correct, and I understand th...
I am commenting here and upvoting this specifically because you wrote "I appreciate the pushback." I really like seeing people disagree while being friendly/civil, and I want to encourage us to do even more of that. I like how you are exploring and elaborating ideas while being polite and respectful.
> I appreciate the pushback. I'm thinking of all claims that go roughly like this: "a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish."
I do think Jackson's example of what it might feel like to non-European cultures with lower military tech to have white conquerors arrive with overwhelming force is a surprisingly fitting case study for this paragraph.
Thanks for the kind words, Richard.
Re: your first point: I agree people have inside view reasons for believing in risk from AGI. My point was just that it's quite remarkable to believe that, sure, all those other times the god-like figure didn't show up, but that this time we're right. I realize this argument will probably sound unsatisfactory to many people. My main goal was not to try to persuade people away from focusing on AI risks, it was to point out that the claims being made are very messianic and that that is kind of interesting sociologically.
Re: ...
I completely agree.
I expected you to be right, but when I looked at the 80k job board just now, of the 962 roles, 161 were in AI, 105 were in pandemics, and 308 were in global health and development. It's hard to say exactly how that relates to funding, but regardless I think it shows development is also a major area of focus when measured by jobs instead of dollars.
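For concreteness, the shares those counts imply:

```python
# Shares implied by the job-board counts above (962 roles total).
counts = {"AI": 161, "Pandemics": 105, "Global health and development": 308}
total = 962
for area, n in counts.items():
    print(f"{area}: {n} roles ({n / total:.1%})")
# AI: 161 roles (16.7%)
# Pandemics: 105 roles (10.9%)
# Global health and development: 308 roles (32.0%)
```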
I think that longtermism has grown very dramatically, but that it is wrong to equate it with EA (both as a matter of accurate description and for strategic reasons, as are nicely laid out in the above post).
I think the confusion here exists in part because the "EA vanguard" has been quite taken up with longtermism, and this has led people to see it as more prominent in EA than it actually is. If you look at organizations like The Life You Can Save or Giving What We Can, they either lead with "global health and wellbeing"-type cause areas or focus on that...
EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys.
This is somewhat less true when one looks at the results across engagement levels. Among the less engaged ~50% of EAs (levels 1-3), neartermist causes are much more popular than longtermism. For level 4/5 engagement EAs, the average ratings of neartermist, longtermist and meta causes are roughly similar, though with neartermism a bit lower. And among the most highly engaged EAs, longtermist and meta causes are dramaticall...
One additional useful starting point might be to look through EA Funds grants to find researchers who won. For example, in this batch I won one (and would be interested in chatting about your journal ideas whenever), as did Theron Pummer and, it looks like, a few other academics. These aren't nicely formatted lists, but they shouldn't be too hard to skim through. Someone over there might also be willing and able to make a list for you, as they presumably track all this in some database software.
I think that's fair (see also footnote 2). Fwiw this was the actual question:

> "Consider a charity whose programs are among the most cost-effective ways of saving the lives of children. In other words, thinking across all charities that currently exist, this one can save a child's life for the smallest amount of money.
>
> Roughly what do you think is the minimum amount of money that you would have to donate to this charity in order to expect that your money has saved the life of one child?"