The Center on Long-term Risk is looking for a Community Manager, to work with Chi Nguyen and me on growing and supporting the community around our mission of reducing risks of astronomical suffering. The application deadline is October 16th. Details and application form on our website here: https://longtermrisk.org/community-manager/
The work in this role will span areas like event & project management, 1:1 outreach & advising calls, setting up & improving IT infrastructure, writing, giving talks, and attending in-person networking events – depending on the skill set of the successful candidate. Since we are a small team, each person can meaningfully shape our strategy, propose new ideas, and take ownership of projects early. The successful candidate will also have the chance to engage with our research team.
Previous community-building experience is a good demonstration of the relevant skills, but no specific experience or qualifications are required.
This question has been considered to some extent by people in the community already. Consider the following posts:
It would also be worth pointing out that most people in this community who hold views that can be categorized as negative utilitarian or suffering-focused don't endorse bringing about human extinction, e.g.:
I am not claiming that these posts/articles have settled the debate, but I think any post on a sensitive topic like this would benefit from including such content.
Yes, the CLR Fund is still accepting applications. I will see that we clarify this in the appropriate places.
Answering this or similar questions will be challenging for any worldview that takes into account second-order and long-run consequences of actions, not just negative utilitarianism.
Saving a child has many such effects that will be very difficult to account for: not just effects on loved ones but also effects on the ecosystem, climate change, demand for meat, the economy more generally, etc. So assessing the grief experienced by loved ones is probably only a small piece of the answer to your overall question. At the same time, it might be particularly salient or important because the bond is personal and irreplaceable. If this life is not saved, we can do little to offset that harm.
For what it’s worth, a negative utilitarian theory might also include the frustration of preferences in the evaluation of an action. To the extent that the child wants to continue living, this would provide a reason to save them, even by negative utilitarian lights. Whether it is a decisive reason is another matter, of course.
If you do find negative utilitarianism or other suffering-focused views compelling, I think it makes more sense to ask a different question: according to this view, what is the very best thing I could be doing with my time and money? Most people who have asked this question have come up with interventions that seem much more impactful than saving lives directly – regardless of whether the latter would be a good thing overall. Here is one person's attempt to answer this very difficult question: https://reducing-suffering.org/
and 10% for Nicolás Maduro.
The time horizon for this is "before 1 June 2020." That seems reasonable.
Thanks for writing this! This seems very important if we want the community to tap increasingly into professional networks.
I agree with all of what you say here. Building things for others can often go badly wrong. Thanks for sharing this perspective!
I was referring to the option "Building the EA and related communities." If building such institutions is a form of community-building, then this gives some indication of its importance compared to other areas. That said, respondents may not have had this in mind when answering, and if they had, they might have given it a much lower score.
This introduction might in some ways be more accessible: S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017)