jackmalde

I am working as an economist at the Confederation of British Industry (CBI) and previously worked in management consulting.

I am interested in longtermism, global priorities research and animal welfare.

Please get in touch if you would like to have a chat sometime.

Feel free to connect with me on LinkedIn: https://www.linkedin.com/in/jack-malde/


Comments

A practical guide to long-term planning – and suggestions for longtermism

But maybe the world actually looks more like this:

[Table: the most important questions run Ethics → Worldview → Cause → Intervention → Charity → Plan; the people solving them (GPI and FHI) sit only at the left-hand end of that chain]

and there is so much more to do – Awww. 

Is this fair? FHI's research seems to me to venture into the Cause and Intervention buckets, and they seem to be working with government and industry to spur implementation of important policies/interventions that come out of their research? E.g. for each of FHI's research areas:

  • Macrostrategy: the most recent publication, Bostrom's Vulnerable World Hypothesis, calls for greatly amplified capacities for preventive policing and global governance (Cause)
  • AI Governance: the research agenda discusses AI safety as a cause area, and much of the research should lead to interventions. For example, the inequality/job displacement section discusses potential governance solutions, the AI race section discusses potential routes for avoiding races / ending those underway (e.g. Third-Party Standards, Verification, Enforcement, and Control), and there is discussion of optimal design of institutions. Apparently researchers are active in international policy circles, regularly hosting discussions with leading academics in the field, and advising governments and industry leaders.
  • AI Safety: Apparently FHI collaborates with and advises leading AI research organisations, such as Google DeepMind on building safe AI.
  • Biosecurity: As well as research on impacts of advanced biotech, FHI regularly advises policymakers including the US President’s Council on Bioethics, the US National Academy of Sciences, the Global Risk Register, the UK Synthetic Biology Leadership Council, as well as serving on the board of DARPA’s SafeGenes programme and directing iGEM’s safety and security system.

Overall it seems to me FHI is spurring change from their research?

You may also find this interesting regarding AI interventions.

A practical guide to long-term planning – and suggestions for longtermism

Toby has articulated what I was thinking quite well.

I also think this diagram is useful in highlighting the core disagreement:

However I would see it a bit more like this:

[Table: Ethics → Worldview → Cause → Intervention → Charity → Plan, with "Trad. longtermism" spanning the left-hand buckets and "Long-term planning" spanning the right-hand ones]

Personally I'm surprised to see long-term planning stretching so far to the left. Can you expand on how long-term planning helps with worldview and cause choices?

Worldview: I presume this is something like the contention that reducing existential risk should be our overriding concern? If so, I don't really understand how long-term planning tools help us get there. Longtermists got here essentially through academic papers like this one, which relies on EV reasoning and the contention that existential risk reduction is neglected and tractable.

Cause: I presume this would be identifying the most pressing existential risks? Maybe your tools (e.g. vulnerability assessments) would help here, but I don't really know enough to comment. Longtermists essentially got to the importance of misaligned AI, for example, through writings like Bostrom's Superintelligence, which I would say relies to some extent on thought experiments. Remember that existential risks are different from global catastrophic risks, with the former being unprecedented - so we may have to think outside the box a bit to identify them. I'm still unsure if established tools are genuinely up to the task (although they may be) - do you think we might get radically different answers on the most pressing existential risks if we use established tools as opposed to traditional longtermist thinking?

EDIT: long-term planning extending into worldview may be a glitch, as it does so on my laptop but not on my phone...

A practical guide to long-term planning – and suggestions for longtermism

I provided some comments on a draft of this post where I said that I was sceptical of the use of many of these tools for EA longtermists, although I felt they were very useful for policymakers looking to improve the future over a shorter timeframe. On a second read I feel more optimistic about their usefulness for EA longtermists, but am still slightly uncertain.

For example, you suggest setting a vision for a good future in 20-30 years and then designing a range of 10 year targets that move the world towards that vision. This seems reasonable a lot of the time, but I’m still unsure if this would be the best approach to reducing existential risk (which is currently the most accepted approach to improving the far future in expectation amongst EAs).

Take the existential risk of misaligned AI for example. What would the 30 year vision be? What would intermediate targets be? What is wrong with the current approach of “shout about how hard the alignment problem is to make important people listen, whilst also carrying out alignment research, and also getting people into influential positions so that they can act on this research”?

I guess my main point is that I’d like to see some applications of this framework (and some of the other frameworks you mention too) to important longtermist problems, before I accept it as useful. I think the framework does work well for more general goals like “let’s make a happier world in the next few decades” which is very vague and needs to be broken down systematically, but I'm unsure it would work well for more specific goals such as “let’s not let AI / nuclear weapons etc. destroy us”. I’m not saying the framework won’t work, but I’d like to see someone try to apply it.

Longtermists are going to have to make plans along the lines of: let’s minimise the chance we fall into a bad attractor state and maximise the chance we fall into a good attractor state within the length of time that we can reasonably influence, which is 10-100s of years

I’m also sceptical about the claim that we can’t affect probabilities of lock-in events that may happen beyond the next few decades. As I also say here, what about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values? These are all things that many EAs think can be credible longtermist interventions and could reasonably affect chances of lock-in beyond the next few decades as they essentially increase the number of thoughtful/good people in the future or the amount of resources such people have at their disposal. I do think it is important for us to carefully consider how we can affect lock-in events over longer timescales.

Should I have my full name as my username?

I'm not worried about doxxing; I'm worried about people I know googling me, coming across this and then thinking it's all a bit weird. For example, I'm a single guy open to dating non-EAs, who may not have an interest in philosophy or animal welfare, or may find thinking about millions of years in the future just plain stupid. At some point they may google me and wonder to themselves if they're dating a bit of a weirdo!

Should I have my full name as my username?

I think we can get it changed by asking an admin. I probably will.

Honoring Petrov Day on the EA Forum: 2021

Surely we don't, as anyone bringing down a site next year would still be some sort of reckless nihilist who just doesn't care. So tit-for-tat this year wouldn't actually change anything?

Why I am probably not a longtermist

I think you mean to say 'existential risk' rather than 'extinction risk' in this comment?

I think that, even with totalitarianism, reaching existential security is really hard - the world would need to be permanently locked into a totalitarian state.

Something I didn't say in my other comment is that I do think the future could be very, very long under a misaligned AI scenario. Such an AI would have some goals, and it would probably be useful to have a very long time to achieve those goals. This wouldn't really matter if there was no sentient life around for the AI to exploit, but we can't be sure that this would be the case as the AI may find it useful to use sentient life.

Overall I am interested to hear your view on the importance of AI alignment as, from what I've heard, it sounds like it could still be important taking into account your various views.

Major UN report discusses existential risk and future generations (summary)

Great stuff. Surprised not to see any comments here. 

One thing I'd be interested for someone to look into is why the UN appears to have (unexpectedly) woken up to these concerns to the extent that they have. Could this be in part due to the EA community? An understanding of the relevant factors here might help us increase the amount of attention given to longtermism/existential risk by other important institutions.

Why I am probably not a longtermist

Thanks for that. To be honest, I would say the inaccuracies I made are down to sloppiness on my part rather than any lack of clarity in your communication. Having said that, none of your corrections change my view on anything else I said in my original comment.

Why I am probably not a longtermist

Thanks for this post. I am always interested to hear why people are sceptical of longtermism.

If I were to try to summarise your view briefly (which is helpful for my response), I would say:

  1. You have person-affecting tendencies which make you unconcerned with reducing extinction risks
  2. You are suffering-focused
  3. You don’t think humanity is very good now nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you unenthusiastic about making the future long or big
  4. You don’t think the future will be long (unless we have totalitarianism) which reduces the scope for doing good by focusing on the future
  5. You’re sceptical that there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point in trying to affect them beyond this time horizon

I’m going to accept 1 and 2 as your personal values and I won’t try to shift you on them. I don’t massively disagree on point 3.

I’m not sure I completely agree on point 4 but I can perhaps accept it as a reasonable view, with a caveat. Even if the future isn’t very long in expectation, surely it is kind of long in expectation? Like probably more than a few hundred years? If this is the case, might it be better to be some sort of “medium-termist” as opposed to a “traditional neartermist”. For example, might it be better to tackle climate change than to give out malarial bednets? I’m not sure if the answer is yes, but it’s something to think about.

Also, as has been mentioned, if we can only have long futures under totalitarianism, which would be terrible, might we want to reduce risks of totalitarianism?

Moving on to point 5 and lock-in scenarios. Firstly, I do realise that the constellation of your views means that the only type of x-risk you are likely to care about is s-risks, so I will focus on lock-in events that involve vast amounts of suffering. With that in mind, why aren’t you interested in something like AI alignment? Misaligned AI could lock in vast amounts of suffering. We could also create loads of digital sentience that suffers vastly. And all this could happen this century. We can’t be sure of course, but it does seem reasonable to worry about this given how high the stakes are and the uncertainty over timelines. Do you not agree? There may also be other s-risks with potential lock-ins in the nearish future, but I’d have to read more.

My final question, still on point 5, is why don’t you think we can affect probabilities of lock-in events that may happen beyond the next few decades? What about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values? These are all things that many EAs think can be credible longtermist interventions and could reasonably affect chances of lock-in (including of the s-risk kind) beyond the next few decades as they essentially increase the number of thoughtful/good people in the future or the amount of resources such people have at their disposal. Do you disagree?
