echoward

31 karma · Joined Mar 2021

Comments (3)

Thanks for the detailed response kokotajlod, I appreciate it.

Let me summarize your viewpoint back to you to check I've understood correctly. It sounds as though you are saying that AI (broadly defined) is likely to be extremely important and that the EA community currently underweights AI safety relative to its importance. Therefore, while you do think that not everyone will be suited to AI safety work and that the EA community should take a portfolio approach across problems, you think it's important to highlight where projects do not seem as important as working on AI safety, since that will help nudge the EA community towards a better-balanced portfolio. Outside of AI safety, there are a few other things you think are also important, mostly in the existential-risk category but also including improving collective sanity/rationality/decision-making and maybe others. The critique of QRI, then, is mostly part of keeping the portfolio properly balanced, though you also have some additional skepticism that learning more about what we mean by happiness and suffering is useful.

Is that roughly right?

If that is approximately your view, I think I have a couple of disagreements/things I'm confused about.

A. Firstly, I don't think the WW2 example is quite right for this case. In the case of war, we already understand the concept well enough to take the relevant actions, and we wouldn't expect a more precise definition to change those actions. I don't think we understand the concepts of suffering or happiness well enough to act on them in the same way.

B. Secondly, I would have guessed that the EA community overweights AI safety, so I'm curious why you think that is not the case. It could be that my intuitions are wrong about the focus it actually receives (versus the hype in the community), or it could be that I think it should receive less focus than you do. That's not so much a judgement about its importance as about its tractability once safety and the challenges of coordination are factored in. I worry that by focusing so heavily on the technical side, we risk speeding up development more than we increase safety.

C. While I don't know much about QRI's research in particular, my concerns from point B make me more inclined to support research in areas related to the social sciences that might improve our understanding of coordination and our ability to coordinate.

D. And finally, why include "improving collective sanity/rationality/decision-making" in the list of other important things but exclude QRI? I'm not necessarily disagreeing here; I just don't quite understand the underlying model that puts existential threats first, then includes something like this, and then excludes something like QRI.

To be clear, these are not confident viewpoints; they are simply areas where I notice my views seem to differ from many in the EA community, and I expect I'd learn something useful from understanding why that is.

This type of reasoning seems to imply that everyone interested in the flourishing of beings, and thinking about that from an EA perspective, should focus on projects that contribute directly to AI safety. I take that to be the implication of your comment because it is your main argument against working on something else (and it could equally be applied to any number of projects discussed on the EA Forum, not just this one). That implies, to me at least, extremely high confidence in AI safety being the most important issue, because at lower confidence we would want to encourage a wider range of bets by those who share our intellectual and ideological views.

If the implication I'm drawing from your comment matches your views and confidence, can you help me understand why you are so confident in this being the most important issue?


If I'm misunderstanding the implication of your comment, can you help me understand where I'm confused?
 

While I think this post was worth sharing and the topic is worth discussing, I want to throw out a potential challenge that seems at least worth considering: perhaps the name "effective altruism" is not the true underlying issue here?

My (subjective, anecdotal) experience is that topics like this crop up every so often. By topics "like this" I mean things such as:

  • concerns about the name of the movement/community/set of ideas,
  • concerns about respected people adjacent to the movement not wanting to associate with "effective altruism" in some way, and
  • discussions of potential other movements (for example, having a separate long-term-focused movement) and names (see comments about Global Priorities instead).

I wonder whether what underpins these discussions is less the accuracy or branding of particular names and more the difficulty of coordinating a growing community.

As the number of people interested in the ideas associated with effective altruism grows, more people enter the space with different values and interpretations of those ideas. It becomes harder for everyone to get what they want from the community, and less likely that all those involved will agree that things are moving in a positive direction.

My concern would be that even if one were to wave a magic wand and successfully rebrand the movement with a new name, at some point the same issues would arise: people would again begin to feel dissatisfied with something about the movement (or how others perceive it) and start casting around for a solution. Unfortunately, I think the solution is unlikely to be a matter of branding; instead, it may require us to figure out a lot more about what the goals of this endeavor are and how to successfully coordinate large groups of people who will inevitably have competing values and viewpoints.