All of echoward's Comments + Replies

Thanks for the detailed response, kokotajlod; I appreciate it.

Let me summarize your viewpoint back to you to check that I've understood correctly. It sounds as though you are saying that AI (broadly defined) is likely to be extremely important and that the EA community currently underweights AI safety relative to its importance. Therefore, while you do think that not everyone will be suited to AI safety work and that the EA community should take a portfolio approach across problems, you think it's important to highlight where projects do not seem as important as work... (read more)

4 · kokotajlod · 3y
Thanks for the detailed engagement! Yep, that's roughly correct as a statement of my position. I guess I'd put it slightly differently in some respects -- I'd say something like: "A good test for whether to do some EA project is how likely it is that it's within a few orders of magnitude as good as AI safety work. There will be several projects for which we can tell a not-too-implausible story for how they are close to as good or better than AI safety work, and then we can let tractability/neglectedness/fit considerations convince us to do them. But if we can't even tell such a story in the first place, that's a pretty bad sign."

The general thought is: AI safety is the "gold standard" to compare against, since it's currently the No. 1 priority in my book. (If something else were No. 1, it would be my gold standard.) I think QRI actually can tell such a story; I just haven't heard it yet. In the comments it seems that a story like this was sketched, and I would be interested to hear it in more detail. I don't think the very abstract story of "we are trying to make good experiences but we don't know what experiences are" is plausible enough as a story for why this is close to as good as AI safety. (But I might be wrong about that too.)

Re: A: Hmm, fair enough that you disagree, but I have the opposite intuition.

Re: B: Yeah, I think even the EA community underweights AI safety. I have loads of respect for people doing animal welfare stuff and global poverty stuff, but it just doesn't seem nearly as important as preventing everyone from being killed or worse in the near future. It also seems much less neglected -- most of the quality-adjusted AI safety work is being done by EA-adjacent people, whereas that's not true (I think?) for animal welfare or global poverty stuff. As for tractability, I'm less sure how to make the comparison -- it's obviously much more tractable to make SOME improvement to animal welfare or the lives of the global poor, but if we compare helping

This type of reasoning seems to imply that everyone interested in the flourishing of beings, and thinking about that from an EA perspective, should focus on projects that contribute directly to AI safety. I take that to be the implication of your comment because it is your main argument against working on something else (and it could equally be applied to any number of projects discussed on the EA Forum, not just this one). That implies, to me at least, extremely high confidence in AI safety being the most important issue, because at lower confidence we would wan... (read more)

5 · kokotajlod · 3y
Good question. Here are my answers:

1. I don't think I would say the same thing to every project discussed on the EA Forum. For every non-AI-focused project I'd say something similar (why not focus instead on AI?), but the bit about how I didn't find QRI's positive pitch compelling was specific to QRI. (I'm a philosopher, I love thinking about what things mean, but I think we've got to have a better story than "We are trying to make more good and less bad experiences, therefore we should try to objectively quantify and measure experience." Compare: suppose it were WW2, 1939, and we were thinking of various ways to help the Allied war effort. An institute designed to study "What does war even mean anyway? What does it mean to win a war? Let's try to objectively quantify this so we can measure how much we are winning and optimize that metric" is not obviously a good idea. Like, it's definitely not harmful, but it wouldn't be a top priority, especially if there are various other projects that seem super important, tractable, and neglected, such as preventing the Axis from getting atom bombs. I think of the EA community's position with respect to AI as analogous to the position re: atom bombs held by the small cohort of people in 1939 "in the know" about the possibility. It would be silly for someone who knew about atom bombs in 1939 to instead focus on objectively defining war and winning.)

2. But yeah, I would say to every non-AI-related project something like, "Will your project be useful for making AI go well? How?" And I think that insofar as one could do good work on both AI safety stuff and something else, one should probably choose AI safety stuff. This isn't because I think AI safety stuff is DEFINITELY the most important, merely that I think it probably is. (I also think it's more neglected AND tractable than many, though not all, of the alternatives people typically consider.)

3. Some projects I think are still worth pursuing even if they don't help make A

While I think this post was useful to share and the topic is worth discussing, I want to throw out a potential challenge that seems at least worth considering: perhaps the name "effective altruism" is not the true underlying issue here?

My (subjective, anecdotal) experience is that topics like this crop up every so often. By topics "like this", I mean things like:

  • concerns about the name of the movement/community/set of ideas, 
  • concerns about respected people adjacent to the movement not wanting to associate with "effective altruism"
... (read more)