Hi Alexis, thank you for the post. I roughly agree with the case made here.
I thought I would share some of my thoughts on the "diffusion of institutional innovations":
* I worked in government for a while. When there is an incentive to make genuine policy improvements and a motivation to do so, this matters. One of the key questions asked of any major new policy would be: what do other countries do? (Of course a lot of policy making is political, so the motivation to actually make good policy may be lacking.)
* Global shocks also force governments to learn. There was work done in the UK after Fukushima to make sure our nuclear facilities are safe. I expect after the Beirut explosion countries are learning about fertiliser storage.
* On the other hand, I have also worked outside government trying to get new policies adopted, including policies other countries already have, and it is hard, so this does not happen easily.
* I would tentatively speculate that it is easier for innovations to diffuse when the evidence for the usefulness of the policy is concrete. This might be a factor against some of the longtermist institutional reforms that Tyler and I have written about. For example, "policing style x helped cut crime significantly" is more likely to diffuse than "longtermist policy y looks like it might lead to a better future in 100 years". That said, I could imagine diffusion happening where there are large public movements and very minimal costs, for example tokenistic policies like "declare a climate emergency". This could work in favour of longtermist ideas, as making a policy now to have an effect in many years' time, if the cost now is low enough, might match this pattern.
I also think that senior government positions, even in smaller countries, can have a long-term impact on the world in other ways:
* Technological innovation. A new technological development in one country can spread globally.
* Politics. Countries can have a big impact on each other. A simple example: the EU is made up of many member states who influence each other.
* Spending. Rich countries especially, like those in Scandinavia, can impact others with spending, e.g. climate financing.
* Preparation for disasters. Firstly, building global resilience -- e.g. Norway has the seed bank -- innovations like that don't need to spread to make the world more resilient to shocks, they just need to exist. Secondly, countries copy each other a lot in disaster response -- e.g. look at how uniform the response to COVID has been -- so having good disaster plans can help everyone else when a disaster actually hits.
I think it matters not to forget the direct impact on the citizens of that country. Even a small country will have $10-$100m annual budgets. Having a small effect on that can have a truly large-scale positive direct impact.
Hi, quick question, not sure this is the best place for it but curious:
Does work to "align GPT-3" include work to identify the most egregious uses of GPT-3 and develop countermeasures?
This is a fascinating question – thank you.
Let us think through the range of options for addressing Pascal's mugging. There are basically 3 options:
It is also possible that all of A and B and C fail for different reasons.*
Let's run through.
I think that in practice no one does A. If I emailed everyone in the EA/longtermism community saying "I am an evil wizard: please give me $100 or I will cause infinite suffering!", I doubt I would get any takers.
You made three suggestions for addressing Pascal's mugging. I think I would characterise suggestions 1 and 2 as ways of adjusting your expected value calculations to aim for more accurate expected value estimates (not as using an alternative decision-making tool).
I think it would be very difficult to make this work, as it leads to problems such as the ones you highlight.
You could maybe make this work by applying a heavy discount, based on "optimiser's curse"-type factors, to reduce the expected value of high-uncertainty, high-value decisions. I am not sure.
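To illustrate what such a discount could look like, here is a toy sketch of my own (not anything proposed in the original suggestions): treat a naive expected value estimate as a noisy measurement of the true value and shrink it toward a sceptical prior, so the noisier the estimate, the less weight it gets. The `shrunk_ev` helper and all the numbers are hypothetical.

```python
def shrunk_ev(naive_ev, estimate_sd, prior_mean=0.0, prior_sd=100.0):
    """Posterior mean of value under a normal prior and normal estimate noise.

    The weight on the naive estimate falls as its standard deviation grows,
    which is one way to formalise an optimiser's-curse style discount.
    """
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight on the estimate
    return w * naive_ev + (1 - w) * prior_mean

# A well-evidenced intervention: modest value, modest uncertainty.
print(shrunk_ev(naive_ev=100, estimate_sd=20))   # roughly 96: close to the naive 100

# The mugger's claim: astronomical value, astronomical uncertainty.
print(shrunk_ev(naive_ev=1e12, estimate_sd=1e9))  # roughly 0.01: almost fully discounted
```

Under this toy model the mugger's wildly uncertain claim contributes almost nothing to the decision, while concrete, well-measured interventions are barely discounted at all.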
(The GPI paper on cluelessness basically says that expected value calculations can never work to solve this problem. It is plausible you could write a similar paper about Pascal's mugging. It might be interesting to read the GPI paper and mentally replace "problem of cluelessness" with "problem of Pascal's mugging" and see how it reads.)
I do think you could make your third option, the common sense version, work. You just say: if I follow this decision rule it will lead to very perverse outcomes, such as having to give everything I own to anyone who claims they will otherwise cause me infinite suffering. It seems so counter-intuitive that I should do this that I will decide not to do it. I think this is roughly the approach most people follow in practice. It is similar to how you might dismiss a proof that 1+1=3 even if you cannot see the error. It is, however, a somewhat dissatisfying answer because it is not very rigorous: it is unclear when a conclusion is so absurd as to require outright rejection.
It does seem hard to apply most of the DMDU approaches to this problem. An assumption-based modelling approach would lead you to write out all of your assumptions and look for flaws -- I am not sure where that would lead.
If looking for a more rigorous approach, the flexible risk planning approach might be useful. Basically, make the assumption that when uncertainty goes up, the ability to pinpoint the exact nature of the risk goes down. (I think you could investigate this empirically.) So placing a reasonable expected value on a highly uncertain event means that, in reality, events vaguely of that type are more likely, but events specifically as predicted are themselves unlikely. For example, you could worry about future weapons technology that could destroy the world and try to explore what this would look like -- but you can safely say it is very unlikely to look like your explorations. This might allow you to avoid the Pascal mugger and invest appropriate time into more general, more flexible evil wizard protection.
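The intuition above can be put in toy numbers (my own, purely hypothetical): hold fixed the probability that *some* risk of the vague type materialises, and note that the more distinct specific forms it could take, the less likely any single imagined scenario becomes.

```python
# Toy illustration with hypothetical numbers: the probability of the vague
# category stays fixed while the probability of each specific imagined
# scenario shrinks with the number of possible concrete forms.
p_vague_risk = 0.10          # chance some technology of this type emerges
n_specific_scenarios = 1000  # distinct concrete forms it could plausibly take

# If no single imagined scenario is privileged, each specific prediction is
# a thousand times less likely than the vague category it belongs to.
p_each_scenario = p_vague_risk / n_specific_scenarios
print(p_each_scenario)
```

So a mugger's highly specific story inherits only a sliver of the probability you assign to the broad class of threats, which is why general-purpose resilience can beat scenario-specific payouts.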
Does that help?
* I worry that I have made this work by defining C as everything else, and that the above is just saying: paradox -> no clear solution -> everything else must be the solution.
Thanks Ben, super useful. @Linch I was taking a very, very broad view of judgement. Ben's post is much better and breaks things down in a much nicer way. I also made a (not particularly successful) stab at explaining some aspects of non-foresight-driven judgement here: https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other#Story_1__RAND_and_the_US_military
Hi Andreas, excited you are doing this. As you can maybe tell, I really liked your paper on Heuristics for Clueless Agents (although I am not sure my post above has sold it particularly well). Excited to see what you produce on RDM.
Firstly, measures of robustness may seem to smuggle probabilities
This seems true to me (although I am not sure I would consider it to be "by the backdoor"). Insofar as any option selected through a decision process will in a sense be the one with the highest expected value, any decision tool will have probabilities inherent in it, either implicitly or explicitly. For example, you could see a basic scenario planning exercise as implicitly stating that all the scenarios are of reasonable (maybe equal) likelihood.
I don't think the idea of RDM is to avoid probabilities; it is to avoid the traps of decisions driven by expected value calculations. For example, by avoiding explicit predictions it prevents users from making important shifts to plans based on highly speculative estimates. I'd be interested to see if you think it works well in this regard.
Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to adoption of a satisficing criterion of choice
Honestly, I don't know (or fully understand this), so good luck finding out. Some thoughts:

In engineering you design your lift or bridge to hold many times the capacity you think it needs, even after calculating all the things you can think of that could go wrong -- this helps prevent the things you didn't think of from going wrong.

I could imagine a similar principle applying to DMDU decision making: aiming for the option that is satisfactorily robust to everything you can think of might give a better outcome than aiming elsewhere, as it may be the option that is most robust to the things you cannot think of.

But I am not sure, and I am not sure how much empirical evidence there is on this. It also occurs to me that some of the anti-optimising sentiment could be driven by rhetoric and a desire to be different.
Dear MichaelStJules and rohinmshah, thank you very much for all of these thoughts. This is very interesting and I will have to read all of these links when I have the time. I admit I took the view that the EA community relies a lot on EV calculations somewhat based on vague experience, without doing a full assessment of the level of reliance (which would have been ideal), so the posted examples are very useful.
To clarify one point:
If the post is against the use of quantitative models in general, then I do in fact disagree with the post.
I was not arguing against quantitative models at all. Most of the DMDU stuff is quantitative models. I was arguing against the overuse of quantitative models of a particular type.
To answer one question:
would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done?
Yes. I would have been happy to say that, in general, I expect work of this type to be less likely to be useful than other research work that does not try to predict the long-run future of humanity. (This is in a general sense, not considering factors like the researchers' backgrounds, skills, and so forth.)
I find this hard to engage with -- you point out lots of problems that a straw longtermist might have, but it's hard for me to tell whether actual longtermists fall prey to these problems.
Thank you ever so much, this is really helpful feedback. I took the liberty of making some minor changes to the tone and approach of the post (not the content) to hopefully make it clearer. I will try to proofread more in future.
I tried to make the crux of the argument more obvious and less storylike here: https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other#Why_expect_value_calculations_might_not_be_the_best_tools_for_thinking_about_the_longterm
The aim was not to create a strawman, but rather to see what conclusions would be reached if the reader accepts a need for more uncertainty-focused decision-making tools for thinking about the future.
On your points:
I'm not sure which of GPI's and CLR's research you're referring to (and there's a good chance I haven't read it)
Examples: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://www.emerald.com/insight/content/doi/10.1108/FS-04-2018-0037/full/html (the latter of which I have not read)
the Open Phil research you link to seems obviously relevant to cause prioritization. If it's very unlikely that there's explosive growth this century, then transformative AI is quite unlikely and we would want to place correspondingly more weight on other areas like biosecurity -- this would presumably directly change Open Phil's funding decisions.
I don't see the Open Phil article as that useful -- it is interesting, but I would not expect it to have a big impact on how we should approach AI risk. For example, on the point you raise about prioritising AI over bio: who is to say, based on this article, that we do not get extreme growth due to progress in biotech and human enhancement rather than AI?
I assume from the phrasing of this sentence that you believe longtermists have concrete plans more than 30 years ahead, which I find confusing. I would be thrilled to have a concrete plan for 5 years in the future (currently I'm at ~2 years). I'd be pretty surprised if Open Phil had a >30 year concrete plan (unless you count reasoning about the "last dollar").
Sorry, my bad writing. The point I was trying to make was that it would be nice to have some plans for a few years ahead -- maybe 3, maybe 5, maybe (but not more than) 30 -- about what we want the world to look like.
Hi, I love that you are doing this.
One little bit of feedback: I really dislike the idea that being cause neutral is in some way "less emotional". I see myself as cause neutral because I am emotional: I care about how much impact I can have and I want to create as much change as possible. And I know many others who are highly passionate, highly emotional, and cause neutral. I think this framing perpetuates the unhelpful stereotype that EA is all about being a cold, hard calculation machine (rather than a calculation machine driven by love and concern for others).
Here is my breakdown that I did for myself: https://docs.google.com/document/d/1cYoPBcuV1jgFTli7OIZi6ylXJ8mlQHZePV0rZq1QorU/edit?usp=drivesdk
As you can see it's quite different. Hope it helps.
Maybe I've misunderstood, but in my humble opinion and limited experience, forecasting is just a tiny fraction of good judgement (maybe about 1%, depending on how broadly you define forecasting). It can be useful, but it is somewhat overrated by the EA community.
Other aspects of good judgment may include things like:
Hi evelynciara, thank you so much for your positivity and for complimenting my writing.
Also, do not feel discouraged. It is super unclear exactly what the community needs; we should each do what we can with the skills we have and see what form that takes.