All of jasoncrawford's Comments + Replies

Only a little bit. In part they were a reaction to the religious wars that plagued Europe for centuries.

4
Wei Dai
2y
It seems key to the project of "defense of Enlightenment ideas" to figure out whether the Age of Enlightenment came about mainly through argumentation and reasoning, or mainly through (cultural) variation and selection. If the former, then we might be able to defend Enlightenment ideas just by, e.g., reminding people of the arguments behind them. But if it's the latter, then we might suspect that the recent decline of Enlightenment ideas was caused by weaker selection pressure towards them (allowing "cultural drift" to happen to a greater extent), or even a change in the direction of the selection pressure. Depending on the exact nature of the changes, either of these might be much harder to reverse.

A closely related line of inquiry is: what exactly were/are the arguments behind Enlightenment ideas? Did the people who adopted them do so for the right reasons? (My shallow investigation linked above suggests that the answer is at least plausibly "no".) In either case, how sure are we that they're the right ideals/values for us? While it seems pretty clear that Enlightenment ideas historically had good consequences in terms of, e.g., raising the living standards of many people, how do we know that they'll still have net positive consequences going forward?

To try to steelman the anti-Enlightenment position:

1. People in "liberal" societies "reason" themselves into harmful conclusions all the time, and are granted "freedom" to act out their conclusions.
2. In an environment where everyone has easy access to worldwide multicast communication channels, "free speech" may lead to virulent memes spreading uncontrollably (and we're already seeing the beginnings of this).
3. If everyone adopts Enlightenment ideas, then we face globally correlated risks of (1) people causing harm on increasingly large scales and (2) cultures evolving into things we wouldn't recognize and/or endorse.

I wouldn't say speed limits are for no one in particular; I'd say they are for everyone in general, because they are a case where a preference (not dying in car accidents) is universal. But many preferences are not universal.

I know that egoism is technically an ethical framework, but I don't see how it could ever get meaningful rules to come out of it that I think we'd agree we'd want as a society. It would be hard to even come up with rules like "You shouldn't murder others" if your starting point is your own ego and maximizing your own self interest.

Than... (read more)

I'm not using purely deontological reasoning, that is true. I have issues with deontological ethics as well.

I can understand not prioritizing these issues for grant-making, because of tractability. But if something is highly important, and no one is making progress on it, shouldn't there at least be a lot of discussion about it, even if we don't yet see tractable approaches? Like, shouldn't there be energy in trying to find tractability? That seems missing, which makes me think that the issues are underrated in terms of importance.

Yes, but I don't see why we have to evaluate any of those things on the basis of arguments or thinking like the population ethics thought experiments.

Increased immigration is good because it gives people freedom to improve their lives, increasing their agency.

The demographic transition (including falling fertility rates) is good because it results from increased wealth and education, which indicates that it is about women becoming better-informed and better able to control their own reproduction. If in the future fertility rates rise because people become ... (read more)

2
NunoSempere
2y
FWIW, to me it does seem that you are using some notion of aggregate welfare across a population when considering these cases,  rather than purely deontological reasoning
2
NunoSempere
2y
Seems underspecified. E.g., not sure how you would judge a ban or nudge against cousin marriage.
2
NunoSempere
2y
I've also seen the explanation that as child mortality dwindles, people choose to invest more of their resources into fewer children.

Not sure, maybe both? I am at least somewhat sympathetic to consequentialism though

“What is the algorithm that we would like legislators to use to decide which legislation to support?”

I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.

Re the China/US scenario: this does see... (read more)

I can't imagine a way to guide my actions in a normative sense without thinking about whether the future states my actions bring about are preferable or not.

Preferable to whom? Obviously you could think about whether they are preferable to yourself. I'm against the notion that there is such a thing as “preferable” to no one in particular.

Of course many people de facto think about their preferences when making a decision and they often give that a lot of weight, but I see ethics as standing outside of that…

Hmm, I don't. I see egoism as an alternative ethical framework, rather than as non-ethical.

1
Devon Fritz
2y
Preferable to people in general. I don't think no one in particular means no one. When people set speed limits on roads they are for no one in particular, but it seems reasonable to assume people don't want to die in car accidents and legislate accordingly. I know that egoism is technically an ethical framework, but I don't see how it could ever get meaningful rules to come out of it that I think we'd agree we'd want as a society. It would be hard to even come up with rules like "You shouldn't murder others" if your starting point is your own ego and maximizing your own self interest. But I don't know much about egoism so I am probably missing something here.  

These are good examples. But I would not decide any of these questions with regard to some notion of whether the world was better or worse with more people in it.

  • Senator case: I think social engineering through the tax code is a bad idea, and I wouldn't do it. I would not decide on the tax reform based on its effect on birth rates. (If I had to decide separately whether such effects would be good, I would ask what is the nature of the extra births? Is the tax reform going to make hospitals and daycare cheaper, or is it going to make contraception and abort
... (read more)

Good observations. I wonder if it makes sense to have a role for this, a paid full-time position to seek out and expose liars. Think of a policeman, but for epistemics. Then it wouldn't be a distraction from, or a risk to, that person's main job—it would be their job. They could make the mental commitment up front to be ready for a fight from time to time, and the role would select for the kind of person who is ready and willing to do that.

This would be an interesting position for some EA org to fund. A contribution to clean up the epistemic commons.

4
Linch
2y
I think Ozzie Gooen and QURI are pretty interested in stuff like this.

Thanks. That is an interesting argument, and this isn't the first time I've heard it, but I think I see its significance to the issue more clearly now.

I will have to think about this more. My gut reaction is: I don't trust my ability to extrapolate out that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite”). But once we're even 10^12 away from where we are now, let alone 10^200, who knows what we'll ... (read more)

6
Max_Daniel
3y
I think this actually does point to a legitimate and somewhat open question on how to deal with uncertainty between different 'worldviews'. Similar to Open Phil, I'm using worldview to refer to a set of fundamental beliefs that are an entangled mix of philosophical and empirical claims and values. E.g., suppose I'm uncertain between:

* Worldview A, according to which I should prioritize based on time scales of trillions of years.
* Worldview B, according to which I should prioritize based on time scales of hundreds of years.
  * This could be for a number of reasons: an empirical prediction that civilization is going to end after a few hundred years; ethical commitments such as pure time preference, person-affecting views, egoism, etc.; or epistemic commitments such as high-level heuristics for how to think about long time scales or situations with significant radical uncertainty.

One way to deal with this uncertainty is to put both on a "common scale", and then apply expected value: perhaps on worldview A, I can avert quintillions of expected deaths, while on worldview B "only" trillions of lives are at stake in my decision. Even if I only have a low credence in A, after applying expected value I will then end up making decisions based just on A.

But this is not the only game in town. We might instead think of A and B as two groups of people with different interests trying to negotiate an agreement. In that case, we may have the intuition that A should make some concessions to B even if A was a much larger group, or was more powerful, or similar. This can motivate ideas such as variance normalization or the 'parliamentary approach'. (See more generally: normative uncertainty.)

Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about the next few decades might prioritize. (Both because of my views on normative uncerta
2
Max_Daniel
3y
Thanks for sharing your reaction! I actually agree with some of it:

* I do think it's good to retain some skepticism about our ability to understand the relevant constraints and opportunities that civilization would face in millions or billions of years. I'm not 100% confident in the claims from my previous comment.
* In particular, I have non-zero credence in views that decouple moral value from physical matter. And on such views it would be very unclear what limits to growth we're facing (if any).
* But if 'moral value' is even roughly what I think it is (in particular, requires information processing), then this seems similarly unlikely as FTL travel being possible: I'm not a physicist, but my rough understanding is that there is only so much computation you can do with a given amount of energy or negentropy or whatever the relevant quantity is.
* It could still turn out that we're wrong about how information processing relates to physics (relatedly, look what some current longtermists were interested in during their early days ;)), or about how value relates to information processing. But this also seems very unlikely to me.

However, for practical purposes my reaction to these points is interestingly somewhat symmetrical to yours. :)

* I think these are considerations that actually raise worries about Pascal's Mugging. The probability that we're so wrong about fundamental physics, or that I'm so wrong about what I'd value if only I knew more, seems so small that I'm not sure what to do with it.
* There is also the issue that if we were so wrong, I would expect that we're very wrong about a number of different things as well. I think the modal scenario on which the above "limits to growth" picture is wrong is not "how we expect the future to look, but with FTL travel" but very weird things like "we're in a simulation". Unknown unknowns rather than known unknowns. So my reaction to the possibility of being in such a world is not "let's priori

First, PS is almost anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.

I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.

2
kbog
3y
OK, sorry for misunderstanding. I make an argument here that marginal long run growth is dramatically less important than marginal x-risk. I'm not fully confident in it. But the crux could be what I highlight - whether society is on an endless track of exponential growth, or on the cusp of a fantastical but fundamentally limited successor stage. Put more precisely, the crux of the importance of x-risk is how good the future will be, whereas the crux of the importance of progress is whether differential growth today will mean much for the far future. I would still ceteris paribus pick more growth rather than less, and from what I've seen of Progress Studies researchers, I trust them to know how to do that well. It's important to compare with long-term political and social change too. Arguably a higher priority than either effort, but also something that can be indirectly served by economic progress. One thing the progress studies discourse has persuaded me of is that there is some social and political malaise that arises when society stops growing. Healthy politics may require fast nonstop growth (though that is a worrying thing if true).

Thanks JP!

Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).

By that token most particular scientific experiments or contributions to political efforts may be such: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.
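
A minimal expected-value sketch of this point; every number below is purely illustrative and none of them come from the comment above:

```python
# Illustrative expected-value sketch (hypothetical numbers, not estimates
# from the discussion above).
p_tip = 1e-6            # assumed chance one extra donation tips the referendum
value_if_passed = 5e10  # assumed social value of the reform, in dollars
cost = 1e4              # cost of the donation, in dollars

expected_benefit = p_tip * value_if_passed  # = $50,000
print(f"Expected benefit: ${expected_benefit:,.0f} vs. cost ${cost:,.0f}")
# Even though the donation almost certainly changes nothing by itself, its
# expected value here exceeds its cost, and the collective effort still has
# a large chance of success.
```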

7
SiebeRozendal
3y
Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change (vs. the cost of that choice)?" and the answer to that seems to be "yes."

Is your issue more along the following lines?

1. Humans are bad at estimating very small percentages accurately, and can be orders of magnitude off (and the same goes for astronomical values in the long-term future).
2. Arguments for the cost-effectiveness of x-risk reduction rely on estimating very small percentages (and the same goes for astronomical values in the long-term future).
3. (Conclusion) Arguments for the cost-effectiveness of x-risk reduction cannot be trusted.

If so, I would reject 2, because I believe we shouldn't try to quantify things at those levels of precision. This does get us to your question "How does XR weigh costs and benefits?", which I think is a good question that I don't have a great answer to. It would be something along the lines of "there's a grey area where I don't know how to make those tradeoffs, but most things do not fall into the grey area, so I'm not worrying too much about this. If I wouldn't fund something that supposedly reduces x-risk, it's either that I think it might increase x-risk, or because I think there are better options available for me to fund". Do you believe that many more choices fall into that grey area?

Followup: I did write that essay some ~5 months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.

I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

Thanks ADS. I'm pretty close to agreeing with all those bullet points actually?

I wonder if, to really get to the crux, we need to outline what are the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.

Re Bostrom:

a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 mi

... (read more)

Side note:  Bostrom does not hold or argue for 100% weight on total utilitarianism such as to take overwhelming losses on other views for tiny gains on total utilitarian stances. In Superintelligence he specifically rejects an example extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings even if posthumans would derive more wellbeing from a given unit of resources).

I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc) for a 0.001% reduction in existential risk.

8
AppliedDivinityStudies
3y
Good to hear! In the abstract, yes, I would trade 10,000 years for 0.001% reduction in XR. In practice, I think the problem with this kind of Pascal Mugging argument is that it's really hard to know what a 0.001% reduction looks like, and really easy to do some fuzzy Fermi estimate math. If someone were to say "please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X", they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.

OK, so maybe there are a few potential attitudes towards progress studies:

  1. It's definitely good and we should put resources to it
  2. Eh, it's fine but not really important and I'm not interested in it
  3. It is actively harming the world by increasing x-risk, and we should stop it

I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?

Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I'm somewhere between (1) and (2)... (read more)

Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.

I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent ... (read more)

3
atlas
3y
There's a variant of attitude (1) which I think is worth pointing out:

1b) Progress studies is good and we should put resources into it, because it is a good way to reduce X-risk on the margin.

Some arguments for (1b):

* Progress studies helps us understand how tech progress is made, which is useful for predicting X-risk.
* The more wealthy and stable we are as a civilization, the less likely we are to end up in arms-race type dynamics.
* Some technologies help us deal with X-risk (e.g. mRNA for pandemic risks, or intelligence augmentation for all risks). This argument only works if PS accelerates the 'good' types of progress more than the 'bad' ones, which seems possible.

I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?

Yup.

Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.

That's what I would say.

I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).

If you have opportunity A where you get a benefit of 200 per $ invested, and opportunity B where you get a benefit of 50 per $ invested, you w... (read more)

That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.

The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.

(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or... (read more)

I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.

If you mean like 10x greater chance, I think that's plausible (though larger than I would say). If you mean 1000x greater chance, that doesn't seem defensible.

In both fields you basically ~can't experiment with the actual thing you care about (you can't just build a superintelligent AI and check whether it is aligned; you mostly can't run an intervention on the entire world  and check whether world GDP went up). Y... (read more)

As to whether my four questions are cruxy or not, that's not the point! I wasn't claiming they are all cruxes. I just meant that I'm trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!

3
Rohin Shah
3y
I kinda sorta answered Q2 above (I don't really have anything to add to it).

Q3: I'm not too clear on this myself. I'm just an object-level AI alignment researcher :P

Q4: I broadly agree this is a problem, though I think this: seems pretty unlikely to me, where I'm interpreting it as "civilization stops making any progress and regresses to the lower quality of life from the past, and this is a permanent effect". I haven't thought about it much, but my immediate reaction is that it seems a lot harder to influence the world in a good way through the public, and so other actions seem better. That being said, you could search for "raising the sanity waterline" (probably more so on LessWrong than here) for some discussion of approaches to this sort of social progress (though it isn't about educating people about the value of progress in particular).

I'm not making a claim about how effective our efforts can be. I'm asking a more abstract, methodological question about how we weigh costs and benefits.

If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.

If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.

And so then I just want to know, OK, what'... (read more)
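
For concreteness, here is a toy sketch of the arithmetic behind that worry; every number in it is an assumption chosen for illustration, not an estimate anyone in this thread has made:

```python
# Toy sketch of the Pascal's Mugging concern (all numbers made up).
future_lives = 1e15     # assumed stake if existential catastrophe is averted
delta_risk = 1e-8       # tiny assumed expected reduction in x-risk from a program
cost_in_lives = 1e6     # very large assumed present cost of that program

expected_lives_saved = delta_risk * future_lives  # = 1e7 expected lives
print(expected_lives_saved > cost_in_lives)  # True
# With stakes this large, even a one-in-a-hundred-million delta "justifies"
# a cost of a million lives, so almost any proposed action clears the bar.
# Weighing costs against benefits only constrains decisions if both the
# delta and the cost estimates are taken seriously rather than assumed away.
```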

2
Chris Leong
2y
I'd suggest that this is a failure of imagination (sorry, I'm really not trying to criticise you, but I can't find another phrase that captures my meaning!). Like, let's just take it for granted that we aren't going to be able to make any real research progress until we're much closer to AGI. It still seems like there are several useful things we could be doing:

• We could be helping potential researchers to understand why AI safety might be an issue, so that when the time comes they aren't like "That's stupid, why would you care about that!". Note that views tend to change generationally, so you need to start here early.
• We could be supporting the careers of policy people (such as by providing scholarships), so that they are more likely to be in positions of influence when the time comes.
• We could iterate on the AGI safety fundamentals course so that it is the best introduction to the issue possible at any particular time, even if we need to update it.
• We could be organising conferences, fellowships and events so that we have experienced organisers available when we need them.
• We could run research groups so that our leaders have experience in the day-to-day of these organisations and already have a pre-vetted team in place for when they are needed. We could try some kinds of drills or practise instead, but I suspect that the best way to learn how to run a research group is to actually run a research group.

(I want to further suggest that if someone had offered you $1 million and asked you to figure out ways of making progress at this stage, then you would have had no trouble in finding things that people could do.)
8
Benjamin_Todd
3y
Cool to see this thread! Just a very quick comment on this: I don't think anyone is proposing this.

The debate I'm interested in is about which priorities are most pressing at the margin (i.e. create the most value per unit of resources). The main claim isn't that speeding up tech progress is bad,* just that it's not the top priority at the margin vs. reducing x-risk or speeding up moral progress.**

One big reason for this is that lots of institutions are already very focused on increasing economic productivity / discovering new tech (e.g. ~2% of GDP is spent on R&D), whereas almost no-one is focused on reducing x-risk. If the amount of resources reducing x-risk grows, then it will drop in effectiveness relatively speaking.

In Toby's book, he roughly suggests that spending 0.1% of GDP on reducing x-risk is a reasonable target to aim for (about what is spent on ice cream). But that would be ~1000x more resources than today.

*Though I also think speeding up tech progress is more likely to be bad than reducing x-risk, my best guess is that it's net good.

**This assumes resources can be equally well spent on each. If someone has amazing fit with progress studies, that could make them 10-100x more effective in that area, which could outweigh the average difference in pressingness.

If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.

Sure. I think most longtermists wouldn't endorse this (though a small minority probably would).

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.

I don't think this is negative, I think there are better opportunities to affect the future (along the lines of Ben's comment).... (read more)

Good points.

I haven't read Ord's book (although I read the SSC review, so I have the high-level summary). Let's assume Ord is right and we have a 1/6 chance of extinction this century.

My “1e-6” was not an extinction risk. It's a delta between two choices that are actually open to us. There are no zero-risk paths open to us, only one set of risks vs. a different set.

So:

  • What path, or set of choices, would reduce that 1/6 risk?
  • What would be the cost of that path, vs. the path that progress studies is charting?
  • How certain are we about those two estimates? (Or
... (read more)
4
AppliedDivinityStudies
3y
Thanks for clarifying, the delta thing is a good point. I'm not aware of anyone really trying to estimate "what are the odds that MIRI prevents XR", though there is one SSC sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

I absolutely agree with all the other points. This isn't an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes: "People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later... the philosophical side of this seems like ineffective posturing. Tyler wouldn’t necessarily recommend that these people switch to other areas of focus because people motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view

That's a bit harsh, but this was in 2014. Hopefully Tyler would agree efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands on and practical.

R.e. safety for something that hasn't been invented: I'm not an expert here, but my understanding is that some of it might be path dependent. I.e. research agendas hope to result in particular kinds of AI, and it's not necessarily a feature you can just add on later. But it doesn't sound like there's a deep disagreement here, and in any case I'm not the best person to try to argue this case.

Intuitively, one analogy might be: we're building a rocket, humanity is already on it, and the AI Safety people are saying "let's add life support before the rocket takes off". The exacerbating factor is that once the rocket is built, it might take off immediately, and no one is quite sure when this will happen.

But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.

I think there's a fear of progress in specific areas (e.g. AGI and certain kinds of bio) but not a general one? At least I'm in favor of progress generally and against progress in some specific areas where we have good object-level... (read more)

As someone fairly steeped in Progress Studies (and actively contributing to it), I think this is a good characterization.

From the PS side, I wrote up some thoughts about the difference and some things I don't quite understand about the EA/XR side here; I would appreciate comments: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

As someone who is more on the PS side than the EA side, this does not quite resonate with me.

I am still thinking this issue through and don't have  a settled view. But here are a few, scattered reactions I have to this framing.

On time horizon and discount rate:

  • I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
  • You say “what does it matter if we accelerate progress by a few hundred or even a few thousand years”? I don't understand that framing
... (read more)

Hi Jason, thank you for sharing your thoughts! I also much appreciated you saying that the OP sounds accurate to you since I hadn't been sure how good a job I did with describing the Progress Studies perspective.

I hope to engage more with your other post when I find the time - for now just one point:

  • I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
  • You say “what does it matter if we accelerate progress by a few hundred or even a few thousand year
... (read more)
5
AppliedDivinityStudies
3y
Hey Jason, I share the same thoughts on pascal-mugging type arguments. Having said that, The Precipice convincingly argues that the x-risk this century is around ~1/6, which is really not very low. Even if you don't totally believe Toby, it seems reasonable to put the odds at that order of magnitude, and it shouldn't fall into the 1e-6 type of argument.

I don't think the Deutsch quotes apply either. He writes "Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology". That might be true when it comes to warring human civilizations, but not when it comes to global catastrophes. In the past, there was no way to say "let's not move on to the bronze age quite yet", so any individual actor who attempted to stagnate would be dominated by more aggressive competitors. But for the first time in history, we really do have the potential for species-wide cooperation. It's difficult, but feasible. If the US and China manage to agree to a joint AI resolution, there's no third party that will suddenly sweep in and dominate with their less cautious approach.

I haven't forgotten this, but my response has turned into an entire essay. I think I'll do it as a separate post, and link it here. Thanks!

6
jasoncrawford
3y
Followup: I did write that essay some ~5 months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum. I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

I don't have strong opinions on the reproducibility issues. My guess is that if it has contributed to stagnation it's been more of a symptom than a cause.

As for where to spend funding, I also don't have a strong answer. My feeling is that reproducibility isn't really stopping anything, it's a tax/friction/overhead at worst? So I would tend to favor a promising science project over a reproducibility project. On the other hand, metascience feels important, and more neglected than science itself.

I think advances in science leading to technology is only the proximal cause of progress. I think the deeper causes are, in fact, philosophical (including epistemic, moral, and political causes). The Scientific Revolution, the shift from monarchy to republics, the development of free markets and enterprise, the growth of capitalism—all of these are social/political causes that underlie scientific, technological, industrial, and economic progress.

More generally, I think that progress in technology, science, and government are tightly intertwined in history ... (read more)

1
gavintaylor
3y
Thanks for the perspective, this is interesting and a useful update for me.

It's hard to prioritize! I try to have overarching / long-term goals, and to spend most of my time on them, but also to take advantage of opportunities when they arise. I look for things that significantly advance my understanding of progress, build my public content base, build my audience, or better, all three.

Right now I'm working on two things. One is continued curriculum development for my progress course for the Academy of Thought and Industry, a private high school. The other, more long-term project is a book on progress. Along the way I intend to keep writing semi-regularly at rootsofprogress.org.

I am broadly sympathetic to Patrick's way of looking at this, yes.

If progress studies feels like a miss on EA's part to you… I think folks within EA, especially those who have been well within it for a long time, are better placed to analyze why/how that happened. Maybe rather than give an answer, let me suggest some hypotheses that might be fruitful to explore:

  • A focus on saving lives and relieving suffering, with these seen as more moral or important than comfort, entertainment, enjoyment, or luxury; or economic growth; or the advance of knowledge?
  • A data-
... (read more)

I have a theory of change but not a super-detailed one. I think ideas matter and that they move the world. I think you get new ideas out there any way you can.

Right now I'm working on a book about progress. I hope this book will be read widely, but above all I'd like it to be read by the scientists, engineers and entrepreneurs who are creating, or will create, the next major breakthroughs that move humanity forward. I want to motivate them, to give them inspiration and courage. Someday, maybe in twenty years, I'd love to meet the scientist who solved human... (read more)

Let me say up front that there is a divergence here between my ideological biases/priors and what I think I can prove or demonstrate objectively. I usually try to stick to the latter because I think that's more useful to everyone, but since you asked I need to get into the former.

Does government have a role  to play? Well, taking that literally, then absolutely, yes. If nothing else, I think it's clear that government creates certain conditions of political stability, and provides legal infrastructure such as corporate and contract law, property law i... (read more)

I don't really have great thoughts on metrics, as I indicated to @monadica. Happy to chat about it sometime! It's a hard problem.

Re measuring progress, it's hard. No one metric captures it. The one that people use if they have to use something is GDP but that has all kinds of problems. In practice, you have to just look at multiple metrics, some which are narrow but easy to measure, and some which are broad aggregates or indices.

Re “piecewise” process, it's true that progress is not linear! I agree it is stochastic.

Re a golden age, I'm not sure, but see my reply to @BrianTan below re “interventions”.

I'll have to read more about progress in “renewables” to decide how big a breakthrough that is, but at best it would have to be counted, like genetics, as a potential future revolution, not one that's already here. We still get most of our energy from fossil fuels.

Well, the participants are high school students, so for most of them the work they are doing immediately is going to university. Like all education, it is more of a long-term investment.

2
Jakob_J
3y
I would also highlight the contribution towards creating an educational platform that extends beyond the immediate participants in the course. I believe most of the talks are available on Youtube: https://www.youtube.com/channel/UCR4WNZP7Uxfe4F1XNugu5_g A great resource!

Maybe there's just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.

There isn't a lot out there. In addition to my own work, I would suggest Steven Pinker's Enlightenment Now and perhaps David Deutsch's The Beginning of Infinity. Those are some of the best sources on the philosophy of progress. Also Ayn Rand's Atlas Shrugged, which is the only novel I know of that portrays science, engineering and business as a noble quest for the betterment of humanity.

2
Aaron Gertler
3y
Thank you for the reply!

See my reply to @BrianTan on a similar question, thanks!

The Roots of Progress was really about following an opportunity at a specific moment in time, for me and for the world. Both starting the project as a hobby, when I was personally fascinated by the topic, and going full-time on it right when the “progress studies” movement was taking off. So I don't see how it could have happened any differently.

I think being an engineer helps me dig into the technical details of the history I'm researching, and to write explanations that go deeper into that detail. Many histories of technology are very light on technical detail and don't really explain how the inventions worked. One thing that makes me unique is actually explaining how stuff works. This is probably the most important thing.

I think being a founder is helpful in understanding some business fundamentals like marketing or finance. And I am constantly drawing parallels and making comparisons between t... (read more)

Alan Kay suggested that progress in education should be measured in “Sistine Chapel ceilings per lifetime.” Ultimately my goal is something similar, but maybe substitute “Nobel-worthy scientific discoveries”, “Watt-level inventions” or “trillion-dollar businesses” for the artistic goal. I'll know if I'm successful if in twenty years, or fifty, people who did those things are telling me they were given inspiration and courage from my work.

The problem with Sistine Chapel ceilings is that it's a lagging metric. We all need leading metrics to steer ourselves b... (read more)

Maybe when I have some interventions I'm more sure of! (And/or if some powerful person or agency was directly asking me for input.)

Epistemically, before I can recommend interventions I need to really understand causation, and before I can explain or hypothesize causation, I need to get clear on the specific timeline of events. And in terms of personal motivation, I'm much more interested in the detailed history of progress than in arguing policy with people.

But, yes, eventually the whole point of progress studies is to figure out how to make more (and bett... (read more)

“Are new fields getting harder to find?” I think this is the trillion-dollar question! I don't have an answer yet though.

Is progress open indefinitely? I think there is probably at least a theoretic end to progress, but it's so unimaginably far away that for our purposes today we should consider progress as potentially infinite. There are still an enormous number of things to learn and invent.

1
So-Low Growth
3y
Quick thought here Jack and Jason (caveat - haven't thought about this much at all!). Yes, the creation of new fields is important. However, even if there are diminishing returns to new fields (sidenote - I've been thinking about ways to try and measure this empirically), what matters more is the applicability of a new field to existing fields. Even if we only create one new field, if that field is incredibly powerful, for example APM (atomically precise manufacturing) or an AGI of some sort, then it will have major ramifications on progress across all fields. However, if we create a lot of insignificant new fields, then even if we create hundreds of them, progress won't be substantially improved across other domains. I guess what I'm trying to say is that the emphasis is not just on new fields per se.

I will answer this, but there's a lot to read here, so I will come back to it later—thanks!

7
jasoncrawford
3y
I haven't forgotten this, but my response has turned into an entire essay. I think I'll do it as a separate post, and link it here. Thanks!

Hmm, I thought that running discussion sessions with the students might be hard, but it was quite natural! I was lucky to get a great group of students in the first cohort.

There were some gaps in their knowledge I didn't anticipate. They weren't very familiar with simple machines and mechanical advantage, with basic molecular biochemistry such as proteins and DNA, or with basic financial/accounting concepts such as fixed vs. variable cost.

Not sure what to say about an EA course, sorry!

2
Aaron Gertler
3y
Thank you for the reply! Just wanted to let you know I'd seen it :-)

Re my own focus:

The irony is that my original motivation for studying progress was to better ground and validate my epistemic and moral ideas!

One challenge with epistemic, moral, and (I'll throw in) political ideas is that we've literally been debating them for 2,500 years and we still don't agree. We've probably come up with many good ideas already, but they haven't gotten wide enough adoption. So I think figuring out how to spread best practices is more high-leverage than making progress in these fields as such.

Before I got into what would come to be cal... (read more)

7
Ozzie Gooen
3y
Thanks so much for the comment. This is obviously a complicated topic so I won’t aim to be complete, but here are some thoughts.

From my perspective, while we don’t agree on everything, there has been a lot of advancement during this period, especially if one looks at pockets of intellectuals. The Ancient Greek schools of thought, the Renaissance, the Enlightenment, and the growth of atheism are examples of what seems like substantial progress (especially to people who are in agreement with them, like myself). I would agree that epistemic, moral, and political progress seems to be far slower than technological progress, but we definitely still have it and it seems more net positive.

Real effort here also seems far more neglected. There are clearly a fair number of academics in these areas, but I think in terms of number of people, resources, and “get it done” abilities, regular technical progress has been strongly favored. This means that we may have less leverage, but the neglectedness could also mean that there are some really nice returns to highly competent efforts. The second thing that I’d flag is that it’s possible that advances in the Internet and AI could mean that progress in these areas becomes much more tractable in the next 10 to 100 years.

I think I much agree with you here, though I myself am less interested in technical progress.

I agree that they can’t be separated. This is all the more reason I would encourage you to emphasize it in future work of yours :-). I imagine any good study of epistemic and moral progress would include studies of technology for the reasons you mention. I’m not suggesting that you focus on epistemic and moral progress only, but rather that they could either be the primary emphasis where possible, or just a bit more emphasized here and there. Perhaps this could be a good spot to collaborate directly with Effective Altruist researchers.

My take was written quickly and I think your impression is very diff

I don't know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction.

In particular, I would say:

  • Technology is not inherently risk-creating or safety-creating. Technology can create safety, when we set safety as a conscious goal.
  • However, technology is probably risk-creating by default. That is, when our goal is anything other than safety—more power, more speed, more efficiency, more abundance, etc.—then it might create risk as a side effect.
  • Historically, we have been reactive rather than proactive a
... (read more)

I should add, though, that I think there is an important truth in the concern about whether progress makes us happier. Material progress doesn't make us happier on its own: it also requires good choices and a healthy psychology.

Technology isn't inherently good or bad, it is made so by how we use it. Technology generally gives us more power and more choices, and as our choices expand, we need to get better at making choices. And I'm not sure we're getting better at making choices as fast as our choices are expanding.

The society-level version of this is that... (read more)

Off the top of my head:

  • Maximum life expectancy. We've pushed up life expectancy at birth enormously, and life expectancy at all ages has increased somewhat. But 80–90 years is still “old” and we haven't cured aging itself.
  • Art? I haven't looked into it much, but I don't really know of any significant improvement in fine arts for a very long time—not in style/technique and not even in the technology (e.g., methods of casting a bronze sculpture). I'd also suggest that music has gotten less sophisticated, but this is super-subjective and treads in culture-war
... (read more)
1
Erich_Grunewald
3y
  I'm a little bit late to the party here, but there are examples of improvements in sculpture technology/technique/style leading to new (& very beautiful) works of art, see e.g. Barry X Ball's works made with a combination of 3d-scanning, CAD software, CNC mills & traditional techniques. Not to mention he has a wide variety of stone available to him thanks to the global trade system. As for music, I guess that totally depends on what you're comparing. The proper comparison for today's popular music isn't Beethoven or Bach but folk music & perhaps music for drawing rooms & salons, which, although they had their own beauties, were nowhere near as complex & intricate as the traditional European art music that is most listened to today. Of the past, only the best survives, but in the present the good & the bad coexist. That said, I think maybe there's a kernel of truth in what you suggest. But we shouldn't trust our intuitive judgment on this.
1
BrownHairedEevee
3y
Housing affordability: There are new construction technologies on the horizon, such as modular construction and mass timber; mass timber is being incorporated into new versions of the International Building Code, so it's gradually being normalized. However, my colleagues in the YIMBY movement tell me that zoning laws limit competition among construction companies, which discourages them from investing in these innovations. (Also, construction unions seem to hate modular construction.) What makes you think there haven't been major breakthroughs in energy technology? As I understand it, there has been significant progress in making renewable energy cheap.

In brief, I think: (1) subjective measures of well-being don't tell us the full story about whether progress is real, and (2) the measures we have are actually inconsistent, with some showing positive benefits of progress, others flat, and a few slightly negative (but most of them not epidemics).

To elaborate, on the second point first:

The Easterlin Paradox, to my understanding, dissolved over time with more and better data. Steven Pinker addresses this pretty well in Enlightenment Now, which I reviewed here: https://rootsofprogress.org/enlightenment-now

Our... (read more)

4
MichaelPlant
3y
Hello. Thanks for engaging!

First, there are a few different versions of the Easterlin paradox. The most relevant one, for this discussion, is whether economic growth over the long term (i.e. 10+ years for economists - longer than the business cycle) increases subjective well-being. This version of the paradox holds in quite a few developed nations (see linked paper). That leaves it open what we might find for developing nations.

Second, the only paper I know of that looks globally at SWB over time is Neve et al. (2018). Those authors use affect data from the Gallup World Poll and find: Which indicates we should not expect further global growth will increase happiness. At least, there's a case to answer.

Third, the OWID point about flat rates of MH is interesting. I'd not seen that and I'll see if I can find out more.

Fourth, you make this hypothetical point along the lines of "if SWB data told us this, we should disbelieve it" and then you sort of assume it does show us that. But it doesn't. If you look at the causes and correlates of SWB, they tell a pretty intuitive story, for the most part: higher SWB (measured as happiness or life satisfaction) is associated with greater health and wealth, being in a relationship, lower crime, lower suicide rates, less air pollution, etc. The only result that's puzzling is the Easterlin paradox. But if you think SWB measures get the 'wrong' result with Easterlin, that implies the measures aren't valid, e.g. life satisfaction measures don't actually measure life satisfaction. But then you need to explain how they get the 'right' answers basically everywhere else. What's more, the Easterlin Paradox isn't that surprising when you try to explain it, e.g. the effect of income on SWB is mostly relative.
3
jasoncrawford
3y
I should add, though, that I think there is an important truth in the concern about whether progress makes us happier. Material progress doesn't make us happier on its own: it also requires good choices and a healthy psychology.

Technology isn't inherently good or bad, it is made so by how we use it. Technology generally gives us more power and more choices, and as our choices expand, we need to get better at making choices. And I'm not sure we're getting better at making choices as fast as our choices are expanding.

The society-level version of this is that technology can be used for evil at a society level too, for instance, when it enables authoritarian governments or destructive wars. And just as at the individual level, I'm not sure our “moral technology” is advancing at the same rate as our physical technology.

So, I do see problems here. I just don't think that technology is the problem! Technology is good and we need more of it. But we also need to improve our psychological, social, and moral “technology”. More in this dialogue: https://pairagraph.com/dialogue/354c72095d2f42dab92bf42726d785ff 

Oh, I should also point to the SSC response to “ideas getting harder to find”, which I thought was very good: https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/

In particular, I don't think you can measure “research productivity” as percent improvement divided by absolute research input. I understand the rationale for measuring it this way, but I think for reasons Scott points out, it's just not the right metric to use.

Another way to look at this is: one generative model for exponential growth is a thing that is growing in proportion to its si... (read more)
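
As a toy illustration of why that ratio can be misleading (all numbers below are assumptions for the sake of the sketch, not data): if knowledge compounds at a constant percentage rate while measured research inputs also grow, then "percent improvement divided by absolute research input" falls by construction, even though the growth rate of knowledge never changes.

```python
# Toy sketch (assumed numbers): knowledge compounds at a constant percentage
# rate while measured research inputs also grow. The ratio "percent
# improvement / absolute research input" then falls by construction.
growth_rate = 0.02        # knowledge grows 2% per year, constant by assumption
researchers = 100.0       # initial research input (arbitrary units)
researcher_growth = 0.04  # research inputs grow 4% per year (assumed)

for year in (0, 25, 50, 75, 100):
    inputs = researchers * (1 + researcher_growth) ** year
    metric = growth_rate / inputs  # the contested "research productivity" ratio
    print(f"year {year:3d}: inputs {inputs:10.0f}, metric {metric:.2e}")
# The metric drops steadily even though knowledge keeps compounding at the
# same 2% per year, which is one way to see why this ratio by itself may not
# tell us whether research has actually become less productive.
```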
