I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
But neglectedness as a heuristic is very good precisely for narrowing down where the good opportunity actually lies. Every neglected field is a subset of a non-neglected field, so pointing out that great grants have come from some subset of a non-neglected field doesn't tell us anything by itself.
To be specific, it's really important that EA identify the area within the field where resources aren't flowing, to minimize funging risk. Imagine that AI safety polling had not been neglected, and that in fact there were tons of think tanks planning to do AI safety polling and tons of funders who wanted to make that happen. Then even though it would be important and tractable, EA funding would not be counterfactually impactful, because those think tanks and funders would make AI safety polling happen with or without us. So ignoring neglectedness would lead us to have low impact.
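To put a toy formula on the funging point (my rough framing, nothing precise):

$$\text{counterfactual impact} \approx \Pr(\text{the work doesn't happen without EA}) \times \text{value of the work}.$$

Neglectedness is essentially a proxy for the first term. In the hypothetical above that term is near zero, so the product is small no matter how important and tractable AI safety polling is.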
I consider myself good at sniffing out edited images, but I can't spot any signs of editing in the Balenciaga Pope image. Besides, for a deepfake to be useful, it only has to be convincing to a large minority of people, including very technologically unsophisticated people.
I've been thinking about this post for days, which is a great sign, and in particular I think there's a deep truth in the following:
Indeed, my guess is that people’s utility in the goods available today does have an upper asymptote, that new goods in the future could raise our utility above that bound, and that this cycle has been played out many times already.
I realize this is tangential to your point about GDP measurement, but I think Uzawa's theorem probably set growth theory back by decades. By axiomatizing that technical change is labor-augmenting, we became unable to speak coherently about automation, something that has only recently begun to change. I think there is much more we could understand about technical change than we currently do. My best guess of the nature of technological progress is as follows: new products tend to enter production labor-intensive, and productivity growth comes from gradually automating the production of existing products, so aggregate technical change ends up looking labor-augmenting because labor keeps shifting into the newest, least-automated goods.
This idea is given some empirical support by Hubmer 2022 and theoretical clarity by Jones and Liu 2024, but it's still just a conjecture. So I think the really important question about AI is whether the tons of new products it will enable will themselves be labor-intensive or capital-intensive. If the new products are capital-intensive, breaking with the historical trend, then I expect that the phenomenon you describe (good 2's productivity doesn't grow) will not happen.
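For anyone who hasn't seen it, a rough statement of the Uzawa result I'm referring to (my paraphrase, so take the details with a grain of salt): if production $Y(t) = F(K(t), L(t); t)$ admits a balanced growth path on which output and capital grow at the same constant rate, then along that path technical change can be represented as purely labor-augmenting,

$$Y(t) = F\big(K(t),\, A(t)L(t)\big),$$

which is exactly the representation that leaves no natural slot for automation as capital-augmenting or task-replacing change.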
Similar to Ollie's answer, I don't think EA is prepared for the world in which AI progress goes well. I expect that if that happens, there will be tons of new opportunities for us to spend money/start organizations that improve the world in a very short timeframe. I'd love to see someone carefully think through what those opportunities might be.
A history of ITRI, Taiwan's national electronics R&D institute. It was established in 1973, when Taiwan's income was less than Pakistan's income today. Yet it was single-handedly responsible for the rise of Taiwan's electronics industry, spinning out UMC, MediaTek and most notably TSMC. To give you a sense of how insane this is, imagine that Bangladesh announced today that they were going to start doing frontier AI R&D, and by 2045 they were the world leaders in AI. ITRI is arguably the most successful development initiative in history, but I've never seen it brought up in either the metascience/progress community or the global dev community.
Something that I personally would find super valuable is to see you work through a forecasting problem "live" (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast: the information you seek out, how you analyze it, and especially how you make it quantitative.
This exercise does double duty as "substantive take about the world for readers who want an answer" and "guide to forecasting for readers who want to do the same".
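To make "how you make it quantitative" concrete, here is a toy sketch of the kind of workflow I have in mind (hypothetical question and made-up numbers, purely for illustration):

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Step 1: start from a base rate, e.g. how often comparable
# milestones have been hit on schedule in the past.
base_rate = 0.20  # made-up number

# Step 2: give each piece of evidence a subjective log-odds nudge,
# positive if it pushes the forecast up, negative if it pushes it down.
evidence = {
    "benchmark progress faster than expected": +0.5,
    "key labs deprioritizing the capability": -0.3,
    "expert surveys shortening their timelines": +0.4,
}

# Step 3: combine and convert back to a probability.
log_odds = logit(base_rate) + sum(evidence.values())
forecast = inv_logit(log_odds)
print(f"Forecast probability: {forecast:.0%}")  # ~31% with these numbers
```

Seeing how you would actually pick the base rate and size those adjustments is the part I can't learn from the final numbers alone.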