weeatquince

Comments

A practical guide to long-term planning – and suggestions for longtermism

Your pushback here seems fair. These orgs certainly do some good work across this whole spectrum. My shoddy diagrams were meant to be illustrative of a high-level point rather than accurate, but perhaps they are somewhat exaggerated and overly critical. I still think the high-level point about expectations and reality is worth making (like the point about people's expectations of GovAI).

A practical guide to long-term planning – and suggestions for longtermism

Hi Jack, some semi-unstructured thoughts for you on this topic. And, as the old quote goes, "sorry, if I had more time, I would have written you a shorter letter":

  • What are we aiming to do here with this discussion? It seems like we are trying to work out what the best thinking tools are for the various kinds of questions an altruist might want answered. And we are deciding between two somewhat distinct, somewhat overlapping, and not well defined 'ways of thinking', whilst still acknowledging that both are going to be at least somewhat useful across the spectrum and that everyone will have a preference for the tools they are used to using (and speaking at least for myself, I have limited expertise in judging the value of academic-type work)! ... Well sure, why not. Let's do that. Happy to throw some uninformed opinions out into the void and see where it goes ...
  • How should or could we even make such a judgement call? I believe we have evidence from domains such as global health and policy design that if you are not evaluating and testing, you are likely not having the impact you expect. I don't see why this would not apply to research. In my humble view, if you wanted to know what research was most valuable you would want monitoring and evaluation and iteration, clear theories of change, and justification for research topics. Research could be judged by evidence of real-world impact, feedback from users of the research, prediction markets, expert views, etc. Space for more exploratory research would be carved out. This would all be done transparently, and maybe there would be competing research organisations* that faced independent evaluation. And so forth. And then you wouldn't need to guess at what tools were useful, you'd find out.
  • But easier said than done? I think people would push back, say they don't need evaluating, creativity will be stifled, evaluation is difficult, we wouldn't agree on how to evaluate, that's not how academia works, we already know we are great, etc. They might be right. I have mostly looked at EA research orgs as a potential donor, so this is at least the direction I would want things to move in – but I am sure there are other perspectives. And either way I don't know if anyone is making a push to move EA research organisations in that direction.
  • So what can be done? I guess one thing would be if the people who use research to make decisions gave feedback to researchers about what research they find more useful and less useful – that could be a step in the right direction, as I have done somewhat here. And my feedback from the policy change side of the fence is that the research that looks more like the long-term planning tools (that I am used to using) is more useful. I have found it more useful to date and expect it would be more useful going forward. And I would like to see more of it. I don't think that feedback from me is sufficient to answer the question – it is a partial answer at best! There are (I am sure) at least some other folk in the world that use FHI/GPI/EA research who will hopefully have different views.

So with that lengthy caveat about partial answers / plea for more monitoring and evaluation out of the way, you asked for specific examples:

  • Long-term planning and worldview. I think the basic idea that the long-term future matters is probably true. I think how tractable this is and how much it should affect our decisions is less clear. How do we make decisions about influencing the world given uncertainty/cluelessness? How tractable is it to have that influence? Are attractor states the only way to have that long-run influence? Are stopping risks the only easy-to-reach attractor states? I think the long-term planning tools give us at least one way to approach these kinds of questions (as I set out above in Section B. 4.-5.): design plans to influence the future, ensure their robustness, then put them into action and see how they work, and so on. Honestly I doubt you would get radically different answers (although I also doubt other ways of approaching this question would lead to radically different answers either; I am just quite uncertain how tractable more worldview research work is).
  • Long-term planning and cause choice. This seems obvious to me. If you want to know, as per your example, which risks to care about – then mapping out future paths and scenarios the world might take, doing estimates of risks on 5, 10, 25, 50, 100 year timescales (see the rough sketch after this list), explicitly evaluating your assumptions, doing forecasting, using risk management tools, identifying the signs to watch out for that would warrant a change in priorities, and so on – all seems to be super useful.
    Also, I think there might be a misunderstanding: the whole point of all the tools listed above is that they are for use in situations where you are dealing with the "unprecedented" and the unknown black swans. If something is easy to predict then you can just predict it, do a simple expected value calculation, and you are done (and EA folk are already good at that).
    Overall I doubt you would get drastically different answers to which risks matter, although I would expect there may be a greater focus on building a world that is robust to other "unforeseen anthropogenic risks", not just AI and bio. I also think that in specific circumstances people might find themselves in, say a meeting with a politician or writing a strategy, they would hopefully have a better sense of which risks to focus on.
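As a purely illustrative sketch of the "estimates of risks on different timescales" idea above: if you assume a constant annual probability of catastrophe, you can convert it into cumulative probabilities over the horizons mentioned. The 0.2% annual figure below is a made-up placeholder, not an estimate from this post, and a real exercise would use scenario-dependent, time-varying rates.

```python
# Illustrative only: cumulative risk over different horizons, assuming a
# constant, made-up annual probability of catastrophe.
annual_risk = 0.002  # hypothetical 0.2% per year, not a real estimate

for years in [5, 10, 25, 50, 100]:
    # Probability the event happens at least once within the horizon
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>3} years: {cumulative:.1%}")
```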

 

 * P.S. Not sure anyone will have read this far, but if anyone is reading this and actually thinks it could be a good (or very bad) idea to start a research org with a focus on demonstrating impact, policy research, and planning-type tools – then do get in touch.

A practical guide to long-term planning – and suggestions for longtermism

I guess my main point is that I’d like to see some applications of this framework (and some of the other frameworks you mention too) to important longtermist problems, before I accept it as useful.

I 100% fully agree. Totally. I think a key point I want to make is that we should be testing all our frameworks against the real world and seeing what is useful. I would love to sit down with CLR or FHI or other organisations and see how this framework can be applied. (Also expect a future post with details of some of the policy work I have been doing that uses some of this).

(I would also love people who have alternative frameworks to be similarly testing them in terms of how much they lead to real world outputs or changes in decisions.)

 

I’m still unsure if this would be the best approach to reducing existential risk 

The aim here is to provide a tool kit that folk can use when needed.

For example, these tools are not that useful where solutions are technical and fairly obvious. I don’t think you need to go through all these steps to conclude that we should be doing interpretability research on AI systems. But if you want to make plans to ensure the incentives of the future researchers who develop a transformative AI are aligned with the global good, then you have a complex, high-uncertainty, long-term problem, and I expect these kinds of tools become the sort of thing you would want to use.

Also, as I say in the post, more bespoke tools beat more general tools. Even in specific cases there will be other toolboxes to use: organisational design methods for aligning future actors' incentives, vulnerability assessments for identifying risks, etc. The tools above are the most general form for anyone to pick up and use.

 

I’m also sceptical about the claim that we can’t affect probabilities of lock-in events that may happen beyond the next few decades. As I also say here, what about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values?

I think this is a misunderstanding. You totally can affect those events. (I gave the example of patient philanthropy, which has non-negligible expected value even over 300 years.) But in most cases a good way of having an impact more than a few decades out is to map out high-level goals on shorter, decade-long timelines. On climate change we are trying to prevent disaster in 2100 but we do it by setting targets for 2050. The forestry commission might plant oak trees that will grow for hundreds of years but they will make planting plans on 10-year cycles. And so on.
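To make the patient philanthropy example a little more concrete, here is a rough sketch with entirely made-up numbers (the growth and loss rates are illustrative assumptions, not figures from the post): even after accounting for some annual chance that the fund is lost or expropriated, a few centuries of compounding can leave the expected value far from negligible.

```python
# Illustrative only: expected value of an invested donation after many years,
# with made-up real growth and annual loss/expropriation rates.
initial = 1_000          # starting donation, arbitrary units
real_growth = 0.05       # hypothetical 5% real return per year
annual_loss_risk = 0.01  # hypothetical 1% chance per year the fund is lost

for years in [30, 100, 300]:
    survival = (1 - annual_loss_risk) ** years        # chance the fund still exists
    expected_value = initial * (1 + real_growth) ** years * survival
    print(f"{years:>3} years: expected value ~ {expected_value:,.0f}")
```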

 

What would the 30 year vision be? What would intermediate targets be? 

Some examples here if helpful.

A practical guide to long-term planning – and suggestions for longtermism

Part 2 – Also, a note on expectations

On a slightly separate point, maybe some of the challenge I feel here comes from me having misplaced expectations. I think that before I dived into the longtermist academic research I was hoping that the world looked like this:

Most important qtns.: Ethics – Worldview – Cause – Intervention – Charity – Plan
People solving them: GPI and FHI, etc (covering the whole spectrum)

and I could find the answers I needed and get on with driving change – YAY.

 

But maybe the world actually looks more like this:

Most important qtns.: Ethics – Worldview – Cause – Intervention – Charity – Plan
People solving them: GPI and FHI (covering only a small part of the spectrum)

and there is so much more to do – Awww. 


(Reminds me of talking to GovAI about policy: they said GovAI does not do applied policy research, but people often think that they do.)

I know it is not going to be top of anyone's to-do list, but I would love at some point to see an FHI post like this one from 80K setting out what is in scope and what out-of-scope work could be great for others in the ecosystem to do.


(* diagrams oversimplified again but hopefully they make the point)

A practical guide to long-term planning – and suggestions for longtermism

Hi Toby,

Thank you for the thoughts and for engaging. I think this is a really good point. I mostly agree.

To check what you are saying. It seems like the idea here is that there are different aims of altruistic research. In fact we could imagine a spectrum, something like:

Ethics – Worldview – Cause – Intervention – Charity – Plan

At the top end, for people thinking about ethics etc., traditional longtermist ways of thinking are best, and at the lower end, for someone thinking about plans etc., long-term planning tools are best.

I think this is roughly correct. 

My hope is to provide a set of long-term planning tools that people might find useful, not to rubbish the old tools.

That said, reading between the lines a bit, it feels like there is still some disconnect about the usefulness and importance of different kinds of research. I go into each of these a little bit below.

 

On the usefulness of different ways of thinking 

You said:

A useful analogy would be the difference between using cost-effectiveness as a tool for selecting a top cause or intervention to work on, vs using it to work out the most cost-effective way to do what you are already doing.

I am speculating a bit (so correct me if I am wrong) but I get the impression from that analogy that you would see the best tools to use a bit like this:

Ethics – Worldview – Cause – Intervention – Charity – Plan
Traditional longtermist ways of thinking (most of the spectrum) | Long-term planning (the plan end)

(Diagram is an over-simplification, as both ways of thinking will be good across the spectrum so the cut-off would be vague, but this chart is the best I can do on this forum.)
 

However I would see it a bit more like this:

Ethics – Worldview – Cause – Intervention – Charity – Plan
Trad. longtermism (the ethics/worldview end) | Long-term planning (most of the spectrum)

 

And the analogy I would use would be something like:

A useful analogy would be the difference between using philosophical "would you rather?" thought experiments as a tool for selecting an ethical view, vs using thought experiments to work out the most important causes to work on.


Deciding which ways of thinking are best suited to different intellectual challenges is a huge debate. I could give views but I am not sure we are going to solve it here. And it makes sense that we each prefer to rely on the ways of thinking that we are experienced in using.

One of my aims in writing this post is to give feedback to researchers, as a practitioner, about what kind of work I find useful. Basically I am trying to shorten the feedback loop as much as I can to help guide future research.

So what I can do is provide my own experiences. I do have to make decisions on a semi-regular basis about which causes and interventions to focus on (do we engage politicians about AI, or bio, or improving institutions, etc.). In making these high-level decisions there is some good research and some less useful research (of the type I discuss in my post), and my general view is that more work like scenarios, short-term estimates of x-risk, vulnerability assessments, etc. would be particularly useful for me in making even very high-level cause decisions.

Maybe that is useful for you or others (I hope so).

 

On the value of different research

Perhaps we also have different views on what work is valuable. I guess I already think that the future matters, and so see less value in more work on whether longtermism is true, and more value in work on what risks we face now and how we can address them.

You said:

[Long term planning] is not always required in order to get large gains

Let's flesh out what we mean by "gains".

  • If we mean gains at philosophy / ethics / deciding if longtermism is true, then yes, this would apply.
  • If we mean gains at actually reducing the chance of an existential catastrophe (other than in cases where the solution is predominantly technical), then I don't think this would be true.

I expect we agree on that. So maybe the question is less about the best way of thinking about the world and more about what the aims of additional research should be? Should we be pushing resources towards more philosophy or towards more actionable plans to affect the long term and/or reduce risks?

(Also worth considering the extent to which demonstrating practical actionable plans is useful for the philosophy, either for learning how to deal with uncertainty or for making the case that the long-term is a thing people can act on).

How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs

Great question


Clarification:

I don’t think I said that the US military was good at risk management. I think I said that:
a) the DMDU community (RAND, the US military and others) was good at making plans that manage uncertainty, and
b) industry was good at risk management.


Slight disagreement:

It feels wrong to use reference classes of X to implicitly say that actions the reference class does is good and we ought to emulate them, without ever an explicit argument that the reference classes' actions or decision procedures are good!

I do think that where reference class X is large and dominant enough, it makes sense to place some trust in its approach, or at least to see its approach as worth some investigation before dismissing it. For example most (large) businesses have a board and a CEO and a hierarchical management structure, so unless I had a good reason to do otherwise that sets a reasonable prior for how I think it is best to run a business.

For more on this see Common sense as a prior.

So even if I had zero evidence, I think it would make sense for someone in the EA community to spend time looking into which tools have worked well in the past for dealing with uncertainty, and the US military would be a good place to look for ideas.


Answer:

Answering the question: is the US military good at making plans that manage uncertainty?

  • Historical evidence – no.
    I have zero empirical historical evidence that DMDU tools have worked well for the US military.
  • Theoretical evidence – yes.
    I think the theoretical case for these tools is strong,  see the case here and here.
  • Interpersonal evidence – yes.
    I believe Taleb in Black Swan describes that the people he met in the US military had very good empirical ways of thinking about risk and uncertainty (I don’t have the book here so cannot double check). Similarly to Taleb, I have been much impressed by folk in the UK working on counter-terrorism etc., compared to other policy folk who work on risks.
  • Evidence from trust – mixed.
    I mostly expect the US military have the right incentives in place to aim to do this well and the ability to test ideas in the field, but I also would not be surprised if there were a bunch of perverse incentives that corrupted this.

So all in all pretty weak evidence.


Caveat:

My views have probably moved on somewhat from when I wrote this post a year ago. I should revisit it at some point.

A personal take on longtermist AI governance

Thank you Luke – great to hear this work is happening but still surprised by the lack of progress and would be keen to see more such work out in public!

(FWIW, a minor point, but I am not sure I would phrase a goal as "make government generically smarter about AI policy"; just being "smart" is not good. Ideally you want a combination of smart + has good incentives + has space to take action. To be more precise, when planning I often use COM-B models, as used in international development governance reform work, to ensure all three factors are captured and balanced.)

 

EA for Jews - Proposal and Request for Comment

Also Ben, is there a Jews and EA Facebook group – any plans to set one up? Or if I set one up do you think you could email / share it?

A personal take on longtermist AI governance

Thank you Luke for sharing your views. I just want to pick up one thing you said where your experience of the longtermist space seems sharply contrary to mine.

You said: "We lack the strategic clarity ... [about] intermediate goals". Which is a great point and I fully agree. Also I am super pleased to hear you have been working on this. You then said:

I caution that several people have tried this ... such work is very hard

This surprised me when I read it. In fact my intuition is that such work is highly neglected – almost no one has done any of this – and I expect it is reasonably tractable. Upon reflection I came up with three reasons for my intuition on this.


1. Reading longtermist research and not seeing much work of this type.

I have seen some really impressive forecasting and trend-analysis work, but if anyone had worked on setting intermediate goals I would expect to see some evidence of basic steps such as listing out a range of plausible intermediate goals, or consensus-building exercises to set viable short- and mid-term visions of what AI governance progress looks like (maybe it's there and I've just not seen it). If anyone had made a serious stab at this I would expect to have seen thorough exploration exercises to map out and describe possible near-term futures, assumption-based planning, scenario-based planning, strategic analysis of a variety of options, tabletop exercises, etc. I have seen very little of this.


2. Talking to key people in the longtermist space and being told this research is not happening.

For a policy research project I was considering recently I went and talked to a bunch of longtermists about research gaps (eg at GovAI, CSET, FLI, CSER, etc). I was told time and time again that policy research (which I would see as a combination of setting intermediate goals and working out what policies are needed to get there) was not happening, was a task for another organisation, was a key bottleneck that no-one was working on, etc. 
 

3. I have found it fairly easy to make progress on identifying intermediate goals and short-term policy goals that seem net-positive for long-run AI governance

I have an intermediate goal of: key actors in positions of influence over AI governance are well equipped to make good decisions if needed (at an AI crunch time). This leads to specific policies such as: ensuring clear lines of responsibility exist in military procurement of software/AI, or that if regulation happens it should be expert-driven, outcome-based regulation, or some of the ideas here. I would be surprised if longtermists looking into this (or the other intermediate goals I routinely use) would disagree with the above intermediate goal, or dispute that the policy suggestions move us towards that goal. I would say this work has not been difficult.

– – 

So why is our experience of the longtermist space so different? One hunch I have is that we are thinking of different things when we consider "strategic clarity on intermediate goals".

My experience supporting governments to make long-term decisions has given me a sense of what long-term decision making and "intermediate goal setting" involve. This colours the things I would expect to see if the longtermist community was really trying to do this kind of work, and I compare longtermists' work to what I understand to be best practice in other long-term fields (from forestry to tech policy to risk management). This approach leaves me thinking that there is almost no longtermist "intermediate goal setting" happening. Yet maybe you have a very different idea of what "intermediate goal setting" involves, based on other fields you have worked in.

It might also be that we read different materials and talk to different people. It might be that this work has happened and I've just missed it or not read the right stuff.

– –
Does this matter? I guess I would be much more encouraging about someone doing this work than you are and much more positive about how tractable such work is. I would advise that anyone doing this work should have a really good grasp of how wicked problems are addressed and how long-term decision making works in a range of non-EA fields and the various tools that can be used.

EA for Jews - Proposal and Request for Comment

I have an idea and thought a comment here would be a good place to put it:
I wonder if there should be a Jewish-run EA charity or charitable fund that directs funds to good places (such as assorted EA organisations).


I think lots of Jews want to give to a Jewish-run organisation or give within the Jewish community. If a Jewish-run EA charity existed it could be helpful for making the case for more global effective giving.

It could be run with Jewish grant managers who ensure that funds are used well and in line with Jewish principles (there could be a Pikuach nefesh fund for saving the most lives, or a Maimonides ladder sustainable growth fund, etc).

To argue against this idea: one of the nice things about EA is that it is not us asking for your money, it is us advising on where you should give your money, which feels nicer and is maybe an easier pitch. So maybe if there was an EA-run Jewish charity or fund it might detract from that, or it should be kept separate from the outreach efforts.

Happy to help a bit with this if it happens.

 
