Thanks a lot for posting this! I really enjoyed reading it and the linked Google document - would anyone in the EA Philippines team be interested in a short meeting with me about this? I currently run EA Oxford and have some specific questions.
Thanks for the thoughtful comment Amber! I appreciate the honesty in saying both that you think people should think more about prioritisation and that you haven't always done so yourself. I have definitely been like this at times, and I think it is good/important to be able to say both statements together. I would be happy to talk through your thinking about prioritisation if you wanted; people have often found me helpful to talk to about this kind of thing, as it comes up frequentlyly in my community building work.
Re. (1), I agre...
Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential, and I don't think I would want a single person/worldview to have that influence, given the risk of systematic mistakes/biases. To be clear, this is not intended as a personal comment - I have no context on you besides this post.
I am excited about having better community mediation, though. Perhaps your coordinating a group/arrangement with external people could be a great idea.
Also I think this kind of post about personal career plans with detailed considerations is great so thanks for writing it.
Thanks David, that all makes sense. Perhaps my comment was poorly phrased, but I didn't mean to argue for caring about infohazards per se; I was curious for opinions on it as a consideration (mainly poking to build my/others' understanding of the space). I agree that imposing ignorance on affected groups is bad by default.
Do you think the point I made below in this thread regarding pressure from third party states is important? Your point "it doesn't matter to them whether it also devastates agriculture in Africa or Australia" doesn't seem obviously true ...
Thanks for the reply and link to the study - I am quite surprised by how minor the effect of impact awareness is, though I suppose nuclear war feels quite salient for most people. I wonder if this could be used as a metric for evaluating the baseline awareness of a danger (ie. I would be very interested to see the same study applied to pandemics, AI, animals etc).
Re. the effects on government decision making, I think I agree intuitively that governments are sufficiently scope insensitive (and self-interested in nuclear war circumstances?) that it woul...
Thanks for writing this - it seems very relevant for thinking about prioritisation and more complex X-risk scenarios.
I haven't engaged enough to have a particular object-level take, but was wondering if you/others had a take on whether we should consider this kind of conclusion somewhat infohazardous? Ie. should we be making this research public if it at all increases the chance that nuclear war happens?
This feels like a messy thing to engage with, and I suppose it depends on beliefs around honesty and trust in governments to make the right call with fuller information (of course there might be some situations where initiating a nuclear war is good).
Thanks for writing this post Victor, I think your context section shows a really good, truth-seeking attitude to come into this with. From my perspective, it is also always good to have good critiques of key EA ideas. To respond to your points:
1 and 2. I agree that the messaging about maximisation carries the danger of people taking it too far, but I think it is quite defensible as an anchor point. Maybe this should be more prominent in the handbook, but I think it is worth initially saying that >95% of EAs' lives don't look like some extre...
Thanks for writing this, I found it helpful for understanding the biosecurity space better!
I wanted to ask whether you had advice, as a community builder, for handling the difficulty biosecurity poses for cause prioritisation.
I think it is easy to build an intuitive case for biohazards not being very important or an existential risk, and my group members often do this (even good fits for biosecurity like biologists and engineers), then dismiss the area in favour of other things. They (and I) do not have access to the threat models which p...
This is more a response to "it is easy to build an intuitive case for biohazards not being very important or an existential risk", rather than your proposals...
My feeling is that it is fairly difficult to make the case that biological hazards present an existential rather than a catastrophic risk, and that this matters for some EA types selecting their career paths, but matters less on the grand scale of advocacy? The set of philosophical assumptions under which "not an existential risk" can be rounded to "not very important" seems common in th...
What kind of information do you mean by semi-objective? Something comparable to this, for instance? Nuclear Threat Initiative’s Global Biological Policy and Programs (founderspledge.com) (particularly the "why we recommend them" section)
I think it could be bad if it relies too much on a particular worldview for its conclusions, causing people to unnecessarily anchor on it. It could also be bad from a certain perspective if you think it could lead to preferential treatment for longtermist causes which are easier to evaluate (eg. climate change relative to AI safety).
Nice post - I think I agree that Ben's argument isn't particularly sound.
Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation? If not, I imagine you could look at this through a different lens and consider one historical perspective which says something like "One large driver of humanity's moral circle expansion/moral improvement has been technological progress, which has reduced resource competition and allowed groups to expand concern for others' suffering without undermin...
From a brief glance, it does appear that Founders Pledge's work is far more analogous to typical longtermist EA grantmaking than to GiveWell's. Ie. it relies primarily on heuristics like organiser track record and higher-level reasoning about plans.
Thanks for the comment Jeff! I admit that I didn't have biosecurity consciously in mind; I think you perhaps have an unusually clear paradigm compared to other longtermist work (eg. AI alignment/governance, space governance etc), and my statement was likely too strong besides.
However, I think there is a clear difference between what you describe and the types of feedback in eg. global health. In your case, you are acting with multiple layers of proxies for what you care about, which is very different to measuring the number of lives saved by AMF...
Thanks for the detailed response! Your examples were helpful to illustrate your general thinking, and I did update slightly towards thinking some version of this could work, but I am still getting stuck on a few points:
Re. the GHD comparison: firstly, to clarify, I meant "quality of reasoning" primarily in terms of the stated theory of change rather than a much more difficult-to-assess general statement. I would expect the quality of reasoning around a ToC to correlate quite strongly with expected impact. Of course this might not always cash out in actual i...
What's the version/route to value of this that you are excited about? I feel quite skeptical that anything like this could work (see my answer on this post), but would be eager for people to change my mind.
I agree that the lack of feedback loops and complicated nature of all of the things is a core challenge in making this useful.
I think you really can't do better than trying to evaluate people's track records and the quality of their higher level reasoning, which is essentially the meaning of grantmakers' statements like "just trust us".
I do have this sense that we can do better than illegible "just trust us". For example, in the GHD regime, it seems to me like the quality of the reasoning associated with people championing different interventions cou...
I am surprised no one has directly made the obvious point that there are no concrete feedback loops in longtermist work, which means that it would be very messy to try to compare. While some people have tried to get at the cost-effectiveness of X-risk reduction, it is essentially impossible to be objective in evaluating how much a given charity has actually reduced X-risk. Perhaps there is something about creating clear proxies which allows for better comparison, but I am guessing that there would still be major disagreements over what could be best...
I find this post interesting, though I am quite unsure what my actual take is on the correctness of this updated version. I am worried about community epistemics in a world where we encourage people to defer on what the most important thing is.
It seems like there are a bunch of other plausible candidates for where the best marginal value-add is even if you buy AI X-risk arguments, eg. S-risks, animal welfare, digital sentience, space governance etc. I am excited about most young EAs thinking about these issues for themselves.
How much do you...
This seems cool, thanks for running it!
What was the primary route to value of this retreat in your opinion? I'd be curious to know whether it was mainly about providing community and thus making participants more motivated, or if there were concrete new collaborations or significant individual updates derived from interactions at this retreat.
Do you plan on doing any research into the cruxes of disagreement with ML researchers?
I realise that there is some information on this within the qualitative data you collected (which I will admit to not reading all 60 pages of), but it surprises me that this isn't more of a focus. From my incredibly quick scan of the qualitative data (so apologies for any inaccurate conclusions), it seems like many of the ML researchers were familiar with basic thinking about safety but seemed not to buy it for reasons that didn't look ...
Hi Kynan, thanks for writing this post.
It is great to see other people looking into more rigorous community building work! I really like the objective and methodology you set out, and do think that there are currently huge inefficiencies and losses in how information is transferred between groups.
I think one thing I am worried about with doing this on a large scale is the loss of qualitative nuance behind quantitative data. It seems difficult to really develop good models of why things work and what the key factors to consider are, without act...
Thanks for your comment. On your first point, I definitely agree that benchmarks for improvement would be useful in an ideal world, but I would be hesitant for a few reasons.
Firstly, you face quite a risk of putting people off a certain career when you really don't have the certainty to give that advice (especially as I am not a specialist in the field), and that could be really damaging and maybe not that useful. Secondly, these things are generally really context-specific regarding how good X amount of progress is in Y amount of time. Eg. for your exam...
I think this is really cool and a great way of breaking things down - thanks for writing this up!
I downvoted this forum post because I think the quoted part of the text, while obviously informal, is an annoying straw man of the criticisms EA has faced, and represents an attitude towards critique that I think is quite counterproductive. I think the rest of the linked post is significantly better, though, and I agree with the general point.