weeatquince's Comments

Reducing long-term risks from malevolent actors

Thank you for the insight. I really have no strong view on how useful any of the ideas I suggested are. They were just ideas.

I would add on this point that the narcissistic politicians I have encountered worried about appearance and bad press. I am pretty sure that transparency, fact-checking, etc. discouraged them from making harmful decisions. Not every narcissistic leader is like Trump.

Update from the Happier Lives Institute

Amazing job Clare and Michael and everyone else involved. Keep up the good work.

As mentioned previously, I would be interested, further down the line, to see a broad cause prioritisation assessment that looks at how SWB metrics might shed insight on how we compare global health, global economic growth, improving decisions, farmed animals' well-being, existential risk prevention, etc.

Reducing long-term risks from malevolent actors

Hi, interesting article. Thank you for writing.

I felt that this article could have said more about possible policy interventions, and that it dismisses policy and political interventions as crowded too quickly. Having thought a bit about this area in the past, I thought I would chip in.


Even within established democracies, we could try to identify measures that avoid excessive polarization and instead reward cross-party cooperation and compromise. ... (For example, effective altruists have discussed electoral reform as a possible lever that could help achieve this.)

There are many things that could be done to prevent malevolent leaders within established democracies. Reducing excessive polarization and electoral reform are two minor ones. Other ideas you do not discuss include:

  • Better mechanisms for judging individuals. E.g. ensuring 360° feedback mechanisms are used routinely to guide hiring and promotion decisions as people climb political ladders. (I may do work on this in the not-too-distant future.)
  • Less power to individuals. E.g. having elections for parties rather than leaders. (The Conservative MPs in the UK could at any time decide that Boris Johnson is no longer fit to be leader and replace him with someone else; Republicans cannot do this with Trump, and Labour MPs in the UK cannot do this with a Labour leader to the same extent.)
  • Reduce the extent to which corruption / malevolence is beneficial for success. There are many ways to do this. In particular, reducing the extent to which raising money is a key factor in individuals' political success (in the UK most political fundraising is for parties, not for individuals). Also reducing the extent to which dishonesty pays, for example with better fact-checking services.
  • More checks and balances on power. A second house. A constitution. More independent government institutions (central banks, regulators, etc – I may do some work in this space soon too). More transparency of political decision making. Better complaint and whistle-blowing mechanisms. Limits on use of emergency powers. Etc.


Alternatively, we could influence political background factors that make malevolent leaders more or less likely... interventions to promote democracy and reduce political instability seem valuable—though this area seems rather crowded.

You might be correct, but this feels a bit like saying the AI safety space is crowded because lots of groups are trying to develop AI. However it may not be the case that those groups are focusing as much on safety as you would like. Although there are many groups (especially nation states) that want to promote democracy there may be very specific interventions that prevent malevolent leaders that are significantly under-discussed, such as elections for parties rather than leaders, or other points listed above. It seems plausible that academics and practitioners in this space may be able to make valuable shifts in the way fledgling democracies are developing that are not otherwise being considered.

And as someone in the improving-government-institutions space in the UK, it is not evident to me that there is much focus on the kinds of interventions that would limit malevolent leaders.

What will 80,000 Hours provide (and not provide) within the effective altruism community?

Hi Ben, I think you are correct that the main difference in our views is likely to be the trade-off between breadth/inclusivity versus expected impact in key areas. I think you are also correct that this is not a topic that either of us could do justice to in this thread (I am not sure I could truly do it justice in any context without a lot of work, although I am always happy to try). And ultimately my initial disappointment may just be from this angle.

I do think historically 80K has struggled more than others (CEA / GiveWell / etc.) in communicating its priorities to the EA community, and it seems like you recognise this has been a challenge. I think perhaps it was overly harsh of me to say that 80K was "clearly doing something wrong"; I was focusing only on the communications front. Maybe the problems were unavoidable, or the past decisions made were the net best decisions given various trade-offs. For example, maybe the issues I pointed to were just artifacts of 80K at the time transitioning its messaging from more of a "general source of EA careers advice" to more of a cause-focused approach. (It is still unclear to me whether this is a messaging shift or a strategy shift.) Always getting messaging spot on is super difficult and time-consuming.

Unfortunately, I am not sure my thoughts here have led to much that is concretely useful (but thank you for engaging). I guess if I had to summarise some key points I would say: I am super in favour of transparency about priorities (and in that regard this whole post is great); if you are focusing more on your effect on the effective altruism movement then local community organisers might have useful insights (and CEA etc. have useful expertise); if 80K gets broader over time that would be exciting to me; and although I have been critical, I am really impressed by how successful you have made 80K.

Coronavirus and long term policy [UK focus]

Hi, thank you, some super useful points here. I will look at some of the BBSRC reports. I know about NC3Rs and think it is a good approach.

Only point I disagree with:

In terms of having a minister for dual use research, this seems quite a high cost ask and of low worth; I think Piers Millett's suggestion of a liaison officer is more useful.

To clarify, this is not a new Minister but adding this area of responsibility to an existing Ministerial portfolio, so it is not at all a high-cost ask (although ideally this would be done in legislation, which would be higher cost).

I think this is needed because, however capable the civil service is at coordination, there needs to be a Minister who is interested and can be held accountable in order to drive change and maintain momentum.

What will 80,000 Hours provide (and not provide) within the effective altruism community?

Hi Ben, Thank you for the thoughtful reply. Super great to see a greater focus on community culture in your plans for 2020. You are always 2 steps ahead :-)

That said I disagree with most of what you wrote.

Most of your reply talks about communications hurdles. I don’t think these pose the barrier you think they do. In fact, the opposite: I think the current approach makes communications and mistrust issues worse.

You talk about the challenge of being open about your prioritisation while also being open to giving advice across causes, the risks of appearing to bait and switch, and transparency vs. demoralising people. All of these issues can be overcome, and have been overcome by others in the effective altruism community and elsewhere. Most local community organisers and CEA staff have a view on which cause they care about most yet still manage an impartial community and impartial events. Most civil servants have political views but still provide impartial advice to Ministers. Solutions involve separating your prioritisation from your impartial advice, having a strong internal culture of impartiality, being open about your aims and views, being guided by community interests, etc. This is certainly not always easy (hence why I had so many conversations about how to do this well) but it can be done.

I say the current approach makes these problems worse. Firstly, thinking back to my time focused on local community building (see examples above), it appeared to me that 80000 Hours had broken some of the bonds of trust that should exist between 80000 Hours and its readership. It seemed clear that 80000 Hours was doing something wrong and that more impartiality would be useful. (Although take this with a pinch of salt, as I have been less in this space for a few years now.) Secondly, it seems surprising to me that you think the best communications approach for the effective altruism community is to have multiple organisations in this space for different causes, with 80000 Hours being an odd mix of everything and future-focused. A single central organisation with a broader remit would be much clearer. (Maybe something like franchising out the 80000 Hours brand to these other organisations, if you trust them, could solve this.)

I fully recognise there are some very difficult trade-offs here: there is huge value in doing one thing really well, costs of growing a team too quickly to delve into more areas, costs of having lower impact on the causes you care about, costs of switching strategy, etc.

Separately to the above I expect that I would place a much stronger emphasis than you on epistemic humility and have more uncertainty than you about the value of different causes and I imagine this pushes me towards a more inclusive approach.

What will 80,000 Hours provide (and not provide) within the effective altruism community?

Hi Michelle, Firstly I want to stress that no one in 80,000 Hours needs to feel bad because I was unimpressed with some coaching a few years ago. I honestly think you are all doing a really difficult job and doing it super well and I am super grateful for all the coaching I (and others) have received. I was not upset, just concerned, and I am sure any concerns would have been dealt with at the time.

(Also worth bearing in mind that this may have been an odd case as I know the 80K staff and in some ways it is often harder to coach people you know as there is a temptation to take shortcuts, and I think people assume I am perhaps more certain about far future stuff than I am.)

I have a few potentially constructive thoughts about how to do coaching well. I have included them in case they are helpful, although I am slightly wary of writing these up because they are a bit basic and you are a more experienced career coach than me, so do take them with a pinch of salt:

  • I have found it works best for me to break sessions into areas where I am only doing traditional coaching (mostly asking questions) and a section (or sections), normally at the end, where I step back from the coach role into an adviser role and give an opinion. I clearly demarcate the difference, tend to ask permission before giving my opinion, and tend to caveat how they should take my advice.
  • Recording and listening back to sessions has been useful for me.
  • I do coaching for people who have different views from me about which beneficiaries count. I do exercises like asking them how much they care about 1 human vs. 100 pigs, or humans in 100 years, and work up plans from there. (This approach could be useful to you, but I expect it is less relevant, as I would expect much more ethical alignment among the people you coach.)
  • I often feel that personally being highly uncertain about which cause paths are most important helps me keep an open mind when coaching. This may be a consideration when hiring new coaches.

Always happy to chat if helpful. :-)

What will 80,000 Hours provide (and not provide) within the effective altruism community?

In many ways this post leaves me feeling disappointed that 80,000 Hours has turned out the way it did and is so focused on long-term future career paths.

- -

Over the last 5 years I have spent a fair amount of time in conversation with staff at CEA and with other community builders about creating communities and events that are cause-impartial.

This approach is needed for making a community that is welcoming to and supportive of people with different backgrounds, interests and priorities; for making a cohesive community where people with varying cause areas feel they can work together; and where each individual is open-minded and willing to switch causes based on new evidence about what has the most impact.

I feel a lot of local community builders and CEA have put a lot of effort into this aspect of community building.

- -
Meanwhile it seems that 80000 Hours has taken a different tack. They have been more willing, as part of trying to do the most good, to focus on the causes that the staff at 80000 Hours think are most valuable.

Don’t get me wrong, I love 80000 Hours; I am super impressed by their content and glad to see them doing well. And I think there is a good case to be made for the cause-focused approach they have taken.

However, in my time as a community builder (admittedly a few years ago now) I saw the downsides of this. I saw:

  • People drifting from EA. E.g. someone telling me they were no longer engaging with the EA community because they felt that it was now all long-term-future focused, pointing to 80000 Hours as the evidence.
  • People feeling that they needed to pretend to be long-termism focused to get support from the EA community. E.g. someone telling me that when seeking career coaching they “read between the lines and pretended to be super interested in AI”.
  • Personally feeling uncomfortable because it seemed to me that my 80000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else (including paths that progressed my career yet kept my options more open to different causes).
  • Concerns that the EA community is doing a bait-and-switch tactic of “come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.”

- -

“80,000 Hours’ online content is also serving as one of the most common ways that people get introduced to the effective altruism community”

So, Ben, my advice to you would firstly be: be super proud of what you have achieved. But also be aware of the challenges that 80000 Hours’ approach creates for building a welcoming and cohesive community. I am really glad that 20% of the content on the podcast and the job board goes into broader areas than your priority paths, and I would encourage you to find ways that 80000 Hours can put more effort into these areas, produce more online content on them, and think carefully about how to avoid the risks of damaging the EA brand or the EA community.

And best of luck with the future.

What posts you are planning on writing?

Hi, I’d be interested and have been thinking about similar stuff (measuring the impact of lobbying, etc.) from a UK policy perspective.

If helpful happy to chat and share thoughts. Feel free to get in touch to: sam [at] appgfuturegenerations.com

Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction'

This is excellent. Very well done.

It crossed my mind to ponder whether much can be said about where different categories* of risk prevention are under-resourced. For example, it may be that the globe spends enough resources on preventing natural risks, since we have seen them in the past and so understand them. It may be that the militarisation of states means that we are prepared for malicious risk. It may be that we under-prepare for large risks as they have fewer small-scale analogues.

I am not sure how useful following that kind of thinking is, but it could potentially help with prioritisation. I would be interested to hear if the authors have thought this through.

*(The authors break down risks into different categories: Natural Risk / Accident Risk / Malicious Risk / Latent Risk / Commons Risk, and Leverage Risk / Cascading Risk / Large Risk, and capability risk / habitat risk / ubiquity risk / vector risk / agency risk).
