I wouldn't say speed limits are for no one in particular; I'd say they are for everyone in general, because they are a case where a preference (not dying in car accidents) is universal. But many preferences are not universal.
I know that egoism is technically an ethical framework, but I don't see how it could ever yield meaningful rules of the kind I think we'd all agree we want as a society. It would be hard even to come up with rules like "You shouldn't murder others" if your starting point is your own ego and maximizing your own self-interest.
Than...
I'm not using purely deontological reasoning, that is true. I have issues with deontological ethics as well.
I can understand not prioritizing these issues for grant-making, because of tractability. But if something is highly important, and no one is making progress on it, shouldn't there at least be a lot of discussion about it, even if we don't yet see tractable approaches? Like, shouldn't there be energy in trying to find tractability? That seems missing, which makes me think that the issues are underrated in terms of importance.
Yes, but I don't see why we have to evaluate any of those things on the basis of arguments or thinking like the population ethics thought experiments.
Increased immigration is good because it gives people freedom to improve their lives, increasing their agency.
The demographic transition (including falling fertility rates) is good because it results from increased wealth and education, which indicates that it is about women becoming better-informed and better able to control their own reproduction. If in the future fertility rates rise because people become ...
“What is the algorithm that we would like legislators to use to decide which legislation to support?”
I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.
Re the China/US scenario: this does see...
I can't imagine a way to guide my actions in a normative sense without thinking about whether the future states my actions bring about are preferable or not.
Preferable to whom? Obviously you could think about whether they are preferable to yourself. I'm against the notion that there is such a thing as "preferable" to no one in particular.
Of course many people de facto think about their preferences when making a decision and they often give that a lot of weight, but I see ethics as standing outside of that…
Hmm, I don't. I see egoism as an alternative ethical framework, rather than as non-ethical.
These are good examples. But I would not decide any of these questions with regard to some notion of whether the world was better or worse with more people in it.
Good observations. I wonder if it makes sense to have a role for this, a paid full-time position to seek out and expose liars. Think of a policeman, but for epistemics. Then it wouldn't be a distraction from, or a risk to, that person's main job—it would be their job. They could make the mental commitment up front to be ready for a fight from time to time, and the role would select for the kind of person who is ready and willing to do that.
This would be an interesting position for some EA org to fund. A contribution to clean up the epistemic commons.
Thanks. That is an interesting argument, and this isn't the first time I've heard it, but I think I see its significance to the issue more clearly now.
I will have to think about this more. My gut reaction is: I don't trust my ability to extrapolate out that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite”). But once we're even 10^12 away from where we are now, let alone 10^200, who knows what we'll ...
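To make the scale concrete, here is a rough illustration (assuming, purely for arithmetic's sake, steady 2% annual growth as a stand-in for the actual trajectory):

$$t_{10^{12}} = \frac{\ln 10^{12}}{\ln 1.02} \approx 1{,}400 \text{ years}, \qquad t_{10^{200}} = \frac{\ln 10^{200}}{\ln 1.02} \approx 23{,}000 \text{ years}$$

So a factor of $10^{200}$ is tens of millennia of compounding away; that is the horizon over which any such extrapolation has to hold.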
First, progress studies (PS) is almost anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.
I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.
Thanks JP!
Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).
By that token most particular scientific experiments or contributions to political efforts may be such: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.
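As a toy illustration of that delta-vs-cost structure (all numbers below are hypothetical placeholders, not estimates of anything real), a minimal sketch:

```python
# Toy expected-value comparison for a marginal contribution to a collective effort.
# All numbers are hypothetical placeholders, chosen only to show the structure.

P_TIP = 1e-6           # chance the marginal donation counterfactually tips the outcome
VALUE_IF_TIPPED = 5e9  # value of the reform passing, in arbitrary units
COST = 1e3             # cost of the donation, in the same units

expected_benefit = P_TIP * VALUE_IF_TIPPED  # = 5e3
print(f"expected benefit {expected_benefit:.0f} vs. cost {COST:.0f}")
# The "mugging" worry only bites when the probability delta is so small or so
# unmeasurable that the expected value is driven entirely by the size of the
# stakes rather than by anything we can actually estimate.
```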
Followup: I did write that essay about five months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.
I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
Thanks ADS. I'm pretty close to agreeing with all those bullet points actually?
I wonder if, to really get to the crux, we need to outline what are the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.
Re Bostrom:
...a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years
Side note: Bostrom does not hold or argue for 100% weight on total utilitarianism such as to take overwhelming losses on other views for tiny gains on total utilitarian stances. In Superintelligence he specifically rejects an example extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings even if posthumans would derive more wellbeing from a given unit of resources).
I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc) for a 0.001% reduction in existential risk.
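For concreteness, this is the kind of calculation being rejected here (using the $10^{15}$ future-lives figure that appears later in this thread, purely as an illustration):

$$\Delta EV = \underbrace{10^{-5}}_{0.001\%\ \text{risk reduction}} \times \underbrace{10^{15}\ \text{lives}}_{\text{potential future}} = 10^{10}\ \text{expected lives}$$

On a naive total-utilitarian reading this would swamp the cost of a 10-million-year delay; the point of the two comments above is that this arithmetic should not be taken at face value.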
OK, so maybe there are a few potential attitudes towards progress studies:
I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I'm somewhere between (1) and (2)...
Your 3 items cover good + top priority, good + not top priority, and bad + top priority, but not a fourth option: bad + not top priority.
I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent ...
I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?
Yup.
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.
That's what I would say.
I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
If you have opportunity A where you get a benefit of 200 per $ invested, and opportunity B where you get a benefit of 50 per $ invested, you w...
That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.
The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.
(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or...
I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.
If you mean like 10x greater chance, I think that's plausible (though larger than I would say). If you mean 1000x greater chance, that doesn't seem defensible.
In both fields you basically ~can't experiment with the actual thing you care about (you can't just build a superintelligent AI and check whether it is aligned; you mostly can't run an intervention on the entire world and check whether world GDP went up). Y...
As to whether my four questions are cruxy or not, that's not the point! I wasn't claiming they are all cruxes. I just meant that I'm trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!
I'm not making a claim about how effective our efforts can be. I'm asking a more abstract, methodological question about how we weigh costs and benefits.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.
If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.
And so then I just want to know, OK, what'...
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.
Sure. I think most longtermists wouldn't endorse this (though a small minority probably would).
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.
I don't think this is negative, I think there are better opportunities to affect the future (along the lines of Ben's comment)....
Good points.
I haven't read Ord's book (although I read the SSC review, so I have the high-level summary). Let's assume Ord is right and we have a 1/6 chance of extinction this century.
My “1e-6” was not an extinction risk. It's a delta between two choices that are actually open to us. There are no zero-risk paths open to us, only one set of risks vs. a different set.
So:
But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.
I think there's a fear of progress in specific areas (e.g. AGI and certain kinds of bio) but not a general one? At least I'm in favor of progress generally and against progress in some specific areas where we have good object-level...
As someone fairly steeped in Progress Studies (and actively contributing to it), I think this is a good characterization.
From the PS side, I wrote up some thoughts about the difference and some things I don't quite understand about the EA/XR side here; I would appreciate comments: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
As someone who is more on the PS side than the EA side, this does not quite resonate with me.
I am still thinking this issue through and don't have a settled view. But here are a few scattered reactions I have to this framing.
On time horizon and discount rate:
Hi Jason, thank you for sharing your thoughts! I also much appreciated you saying that the OP sounds accurate to you, since I hadn't been sure how good a job I'd done of describing the Progress Studies perspective.
I hope to engage more with your other post when I find the time; for now, just one point:
...
- I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
- You say “what does it matter if we accelerate progress by a few hundred or even a few thousand year
I haven't forgotten this, but my response has turned into an entire essay. I think I'll do it as a separate post, and link it here. Thanks!
I don't have strong opinions on the reproducibility issues. My guess is that if it has contributed to stagnation it's been more of a symptom than a cause.
As for where to spend funding, I also don't have a strong answer. My feeling is that reproducibility isn't really stopping anything, it's a tax/friction/overhead at worst? So I would tend to favor a promising science project over a reproducibility project. On the other hand, metascience feels important, and more neglected than science itself.
I think advances in science leading to technology is only the proximal cause of progress. I think the deeper causes are, in fact, philosophical (including epistemic, moral, and political causes). The Scientific Revolution, the shift from monarchy to republics, the development of free markets and enterprise, the growth of capitalism—all of these are social/political causes that underlie scientific, technological, industrial, and economic progress.
More generally, I think that progress in technology, science, and government are tightly intertwined in history ...
It's hard to prioritize! I try to have overarching / long-term goals, and to spend most of my time on them, but also to take advantage of opportunities when they arise. I look for things that significantly advance my understanding of progress, build my public content base, build my audience, or better, all three.
Right now I'm working on two things. One is continued curriculum development for my progress course for the Academy of Thought and Industry, a private high school. The other, more long-term project is a book on progress. Along the way I intend to keep writing semi-regularly at rootsofprogress.org.
I am broadly sympathetic to Patrick's way of looking at this, yes.
If progress studies feels like a miss on EA's part to you… I think folks within EA, especially those who have been well within it for a long time, are better placed to analyze why/how that happened. Maybe rather than give an answer, let me suggest some hypotheses that might be fruitful to explore:
I have a theory of change but not a super-detailed one. I think ideas matter and that they move the world. I think you get new ideas out there any way you can.
Right now I'm working on a book about progress. I hope this book will be read widely, but above all I'd like it to be read by the scientists, engineers and entrepreneurs who are creating, or will create, the next major breakthroughs that move humanity forward. I want to motivate them, to give them inspiration and courage. Someday, maybe in twenty years, I'd love to meet the scientist who solved human...
Let me say up front that there is a divergence here between my ideological biases/priors and what I think I can prove or demonstrate objectively. I usually try to stick to the latter because I think that's more useful to everyone, but since you asked I need to get into the former.
Does government have a role to play? Well, taking that literally, then absolutely, yes. If nothing else, I think it's clear that government creates certain conditions of political stability, and provides legal infrastructure such as corporate and contract law, property law i...
I don't really have great thoughts on metrics, as I indicated to @monadica. Happy to chat about it sometime! It's a hard problem.
Re measuring progress, it's hard. No one metric captures it. The one that people use if they have to use something is GDP, but that has all kinds of problems. In practice, you just have to look at multiple metrics, some of which are narrow but easy to measure, and some of which are broad aggregates or indices.
Re “piecewise” process, it's true that progress is not linear! I agree it is stochastic.
Re a golden age, I'm not sure, but see my reply to @BrianTan below re “interventions”.
I'll have to read more about progress in “renewables” to decide how big a breakthrough that is, but at best it would have to be counted, like genetics, as a potential future revolution, not one that's already here. We still get most of our energy from fossil fuels.
Well, the participants are high school students, so for most of them the work they are doing immediately is going to university. Like all education, it is more of a long-term investment.
Maybe there's just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.
There isn't a lot out there. In addition to my own work, I would suggest Steven Pinker's Enlightenment Now and perhaps David Deutsch's The Beginning of Infinity. Those are some of the best sources on the philosophy of progress. Also Ayn Rand's Atlas Shrugged, which is the only novel I know of that portrays science, engineering and business as a noble quest for the betterment of humanity.
The Roots of Progress was really about following an opportunity at a specific moment in time, for me and for the world. Both starting the project as a hobby, when I was personally fascinated by the topic, and going full-time on it right when the “progress studies” movement was taking off. So I don't see how it could have happened any differently.
I think being an engineer helps me dig into the technical details of the history I'm researching, and to write explanations that go deeper into that detail. Many histories of technology are very light on technical detail and don't really explain how the inventions worked. One thing that makes me unique is actually explaining how stuff works. This is probably the most important thing.
I think being a founder is helpful in understanding some business fundamentals like marketing or finance. And I am constantly drawing parallels and making comparisons between t...
Alan Kay suggested that progress in education should be measured in “Sistine Chapel ceilings per lifetime.” Ultimately my goal is something similar, but maybe substitute “Nobel-worthy scientific discoveries”, “Watt-level inventions” or “trillion-dollar businesses” for the artistic goal. I'll know if I'm successful if in twenty years, or fifty, people who did those things are telling me they were given inspiration and courage from my work.
The problem with Sistine Chapel ceilings is that it's a lagging metric. We all need leading metrics to steer ourselves b...
Maybe when I have some interventions I'm more sure of! (And/or if some powerful person or agency was directly asking me for input.)
Epistemically, before I can recommend interventions I need to really understand causation, and before I can explain or hypothesize causation, I need to get clear on the specific timeline of events. And in terms of personal motivation, I'm much more interested in the detailed history of progress than in arguing policy with people.
But, yes, eventually the whole point of progress studies is to figure out how to make more (and bett...
“Are new fields getting harder to find?” I think this is the trillion-dollar question! I don't have an answer yet though.
Is progress open indefinitely? I think there is probably at least a theoretic end to progress, but it's so unimaginably far away that for our purposes today we should consider progress as potentially infinite. There are still an enormous number of things to learn and invent.
Hmm, I thought that running discussion sessions with the students might be hard, but it was quite natural! I was lucky to get a great group of students in the first cohort.
There were some gaps in their knowledge I didn't anticipate. They weren't very familiar with simple machines and mechanical advantage, with basic molecular biochemistry such as proteins and DNA, or with basic financial/accounting concepts such as fixed vs. variable cost.
Not sure what to say about an EA course, sorry!
Re my own focus:
The irony is that my original motivation for studying progress was to better ground and validate my epistemic and moral ideas!
One challenge with epistemic, moral, and (I'll throw in) political ideas is that we've literally been debating them for 2,500 years and we still don't agree. We've probably come up with many good ideas already, but they haven't gotten wide enough adoption. So I think figuring out how to spread best practices is more high-leverage than making progress in these fields as such.
Before I got into what would come to be cal...
I don't know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction.
In particular, I would say:
I should add, though, that I think there is an important truth in the concern about whether progress makes us happier. Material progress doesn't make us happier on its own: it also requires good choices and a healthy psychology.
Technology isn't inherently good or bad, it is made so by how we use it. Technology generally gives us more power and more choices, and as our choices expand, we need to get better at making choices. And I'm not sure we're getting better at making choices as fast as our choices are expanding.
The society-level version of this is that...
Off the top of my head:
In brief, I think: (1) subjective measures of well-being don't tell us the full story about whether progress is real, and (2) the measures we have are actually inconsistent, with some showing positive benefits of progress, others flat, and a few slightly negative (but most of them not epidemics).
To elaborate, on the second point first:
The Easterlin Paradox, to my understanding, dissolved over time with more and better data. Steven Pinker addresses this pretty well in Enlightenment Now, which I reviewed here: https://rootsofprogress.org/enlightenment-now
Our...
Oh, I should also point to the SSC response to “ideas getting harder to find”, which I thought was very good: https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/
In particular, I don't think you can measure “research productivity” as percent improvement divided by absolute research input. I understand the rationale for measuring it this way, but I think for reasons Scott points out, it's just not the right metric to use.
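For reference, the metric in question, as I understand the Bloom et al. definition, is roughly:

$$\text{research productivity} = \frac{\dot{A}/A}{S}$$

where $\dot{A}/A$ is the proportional rate of improvement (e.g., TFP growth, or the Moore's-law doubling rate) and $S$ is the absolute number of (effective) researchers; the objection is to treating a decline in this ratio, a proportional numerator over an absolute denominator, as evidence of stagnation.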
Another way to look at this is: one generative model for exponential growth is a thing that is growing in proportion to its si...
Only a little bit. In part they were a reaction to the religious wars that plagued Europe for centuries.