Author, The Roots of Progress (rootsofprogress.org)
Good observations. I wonder if it makes sense to have a role for this, a paid full-time position to seek out and expose liars. Think of a policeman, but for epistemics. Then it wouldn't be a distraction from, or a risk to, that person's main job—it would be their job. They could make the mental commitment up front to be ready for a fight from time to time, and the role would select for the kind of person who is ready and willing to do that.
This would be an interesting position for some EA org to fund. A contribution to clean up the epistemic commons.
Thanks. That is an interesting argument, and this isn't the first time I've heard it, but I think I see its significance to the issue more clearly now.
I will have to think about this more. My gut reaction is: I don't trust my ability to extrapolate that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the arguments that stop at pointing out that “the Earth is finite.”) But once we're even a factor of 10^12 beyond where we are now, let alone 10^200, who knows what we'll find? Maybe we'll discover FTL travel (OK, unlikely). Maybe we'll at least be expanding out to other galaxies. Maybe we'll have seriously decoupled economic growth from physical matter: maybe value to humans lies in the combinations and arrangements of things, rather than in the things themselves—bits, not atoms—and so we have many more orders of magnitude to play with.
If you're not willing to apply a moral discount factor against the far future, shouldn't we at least, at some point, apply an epistemic discount? Are we so certain about progress/growth being a brief, transient phase that we're willing to postpone the end of it by literally the length of human civilization so far, or longer?
First, PS is almost anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.
I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.
Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).
Followup: I did write that essay about five months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.
I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
Thanks ADS. I'm pretty close to agreeing with all those bullet points actually?
I wonder if, to really get to the crux, we need to outline the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development,” although both of those formulations are vague/general.
a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
By the same logic, would a 0.001% reduction in XR be worth a delay of 10,000 years? Because that seems like the kind of Pascal's Mugging I was talking about.
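The arithmetic here is just linear scaling of the quoted exchange rate. A toy calculation makes the proportionality explicit (the 10-million-years-per-percentage-point figure is taken as an assumption from the quote above, not an estimate of mine):

```python
# Toy calculation: the linear exchange rate implied by the quoted claim.
# Assumption (from the quote): 1 percentage point of XR reduction is
# "worth" a delay of over 10 million years.
YEARS_PER_PCT_POINT = 10_000_000

def acceptable_delay_years(xr_reduction_pct_points: float) -> float:
    """Delay the linear logic says is worth accepting, in years."""
    return xr_reduction_pct_points * YEARS_PER_PCT_POINT

print(acceptable_delay_years(1.0))    # 10000000.0 (the quoted tradeoff)
print(acceptable_delay_years(0.001))  # 10000.0 (the 0.001% case above)
```

On this logic, a 0.001-percentage-point reduction does indeed "buy" a 10,000-year delay, which is exactly the scale of tradeoff I'm questioning.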
(Also for what it's worth, I think I'm more sympathetic to the “person-affecting utilitarian” view that Bostrom outlines in the last section of that paper—which may be why I lean more towards speed on the speed/safety tradeoff, and why my view might change if we already had immortality. I wonder if this is the crux?)
OK, so maybe there are a few potential attitudes towards progress studies: (1) positive, (2) neutral, or (3) negative.
I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I'm somewhere between (1) and (2)… I think there are valuable things to do here, and I'm glad people are doing them, but I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.
(But I don't think that's all of it.)
That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.
The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.
(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or at least it can be if pursued intelligently. I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)
As to whether my four questions are cruxy or not, that's not the point! I wasn't claiming they are all cruxes. I just meant that I'm trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!
I'm not making a claim about how effective our efforts can be. I'm asking a more abstract, methodological question about how we weigh costs and benefits.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.
If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.
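To make the worry concrete with a toy example (all numbers are assumed for illustration, not estimates): with a stake of 1e15 future lives, a naive expected-value comparison lets vanishingly small risk reductions "justify" enormous present costs:

```python
# Illustrative only: assumed numbers, not estimates.
FUTURE_LIVES = 1e15  # the stake cited above

def min_xr_reduction_to_justify(cost_in_lives: float) -> float:
    """Smallest risk reduction whose expected benefit exceeds the cost,
    under a naive expected-value comparison (benefit = lives * delta)."""
    return cost_in_lives / FUTURE_LIVES

# Even a cost of a billion lives is "justified" by a one-in-a-million
# reduction in risk:
print(min_xr_reduction_to_justify(1e9))  # 1e-06
```

The point isn't these particular numbers; it's that unless some discount or bound enters the calculation, a large enough stake can absorb any finite cost.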
And so then I just want to know, OK, what's the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it's unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.
But maybe that's not actually the proposal from any serious EA/XR folks? I am still unclear on this.