
Vanessa

533 karma · Joined Jan 2019

Comments (37)

Is there going to be a post-mortem including an explanation for the decision to sell?

Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)

The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".

I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.

I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.) 

IMO you should be thinking about things like how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but here too, IMO, the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".

I strongly disagree that utilitarianism isn't a sound moral philosophy, and I don't understand the black-and-white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.

I don't know if it's a "black-and-white distinction", but surely there's a difference between:

  • Existential risk is bad because the future could have a zillion people, so their combined moral weight dominates all other considerations.
  • Existential risk is bad because (i) I personally am going to die, (ii) my children are going to die, (iii) everyone I love is going to die, (iv) everyone I know is going to die, and also (v) humanity is not going to have a future (regardless of the number of people in it).

For example, something that "only" kills 99.99% of the population would be comparably bad by my standards (because i-iv still apply), whereas it would be way less bad by longtermism standards. Even something that "only" kills (say) everyone I know and everyone they know would be comparably bad for me, whereas utilitarianism would judge it a mere blip in comparison to human extinction.

Out of interest, if you aren't an effective altruist or a longtermist, then what do you call yourself?

I call myself "Vanessa" :) Keep your identity small and all that. If you mean, do I have a name for my moral philosophy then... not really. We can call it "antirealist contractarianism", I guess? I'm not that good at academic philosophy.

Strongly agreed.

Personally, I made the mitigation of existential risk from AI my life mission, but I'm not a longtermist and not sure I'm even an "effective altruist". I think that utilitarianism is at best a good tool for collective decision making under some circumstances, not a sound moral philosophy. When you expand it from living people to future people, it's not even that.

My values prioritize me and people around me far above random strangers. I do care about strangers (including animals) and even hypothetical future people more than zero, but I would not make the radical sacrifices demanded by utilitarianism for their sake, without additional incentives. On the other hand, I am strongly committed to following a cooperative strategy, both for reputational reasons and for acausal reasons. And, I am strongly in favor of societal norms that incentivize making the world at large better (because this is in everyone's interest). I'm even open to acausal trade with hypothetical future people, if there's a valid case for it. But, this is not the philosophy of EA as commonly understood, certainly not longtermism.

The main case for preventing AI risk is not longtermism. Rather, it's just that otherwise we are all going to die (and even going by conservative-within-reason timelines, it's at least a threat to our children or grandchildren).

I'm certainly hoping to recruit people to work with me, and I'm not going to focus solely on EAs. I won't necessarily even focus on people who care about AI risk: as long as they are talented, and motivated to work on the problems for one reason or another (e.g. "it's math and it's interesting"), I would take them in.

Nice work! Many good hopes in there, but it's hard to compete with "make furries real".

I'm confused. What are you trying to say here? You linked a proposal to prioritize violence against women and girls as an EA cause area (which I assume you don't object to?) and a tweet by some person unknown to me saying that critics of EA hold it to a standard they don't apply to feminism (which probably depends a lot on what kind of critics, and on their political background in particular). What do you expect the readers to learn from this or do about it?

Thanks so much for replying, I learned a lot from your response and its clarity helped me update my thinking.

You're very welcome, I'm glad it was useful!

I would expect these to be exceptions rather than norms (because if, e.g., wanting to have a career were the norm, then over enough time that would tend to become culturally normative, and even while it was becoming more normative the difference with an SWB measure should diminish).

I'm much more pessimistic. The processes that determine what is culturally normative are complicated, there are many examples of norms that discriminate against certain groups or curtail freedoms lasting over time, and if you're optimizing for the near future then "over enough time" is not a satisfactory solution.

I suppose I'm also thinking about the potential difference between specific SWB scales. Something like the SWLS scale or the single-item measures would not be very domain-specific, but scales in the e.g. Wheel of Life tradition tell you a lot more about the different facets of your life (e.g. you can see a high overall score but low job satisfaction), so it seems to me that with the right scales and enough items you can address cultural or other variance even further.

I don't know how those scales work, but (as I wrote in my reply to Joel) I would be much more optimistic about scales that are relative, i.e. that ask you to compare your well-being in situation A to situation B (whether these situations are familiar or hypothetical), rather than absolute (in which case it's not clear what the reference frame is).

What I was unable to articulate well is that your individual preferences are not stable (or I suppose: per person, rather than across people), i.e. Alice, when she has $5, will exchange a different amount of free time for an extra $1 than when she has $10.

This is considered a consistent preference in standard (VNM) decision theory. It is entirely consistent that U($6 and X free time) > U($5 and Y free time) but U($11 and X free time) < U($10 and Y free time).
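To make this concrete, here is a minimal sketch (my own illustrative utility function, not anything from the original comment): a single fixed utility function that is concave in money and linear in free time reproduces exactly this pattern, because the marginal value of a dollar shrinks as wealth grows.

```python
import math

# Hypothetical utility: concave (sqrt) in dollars, linear in free hours.
# The 0.18/hour weight is chosen purely to make the example work.
def U(dollars, free_hours):
    return math.sqrt(dollars) + 0.18 * free_hours

X, Y = 0, 1  # option Y comes with one more hour of free time than X

# With $5 at stake, the extra dollar is worth giving up the hour...
assert U(6, X) > U(5, Y)
# ...but with $10 at stake it no longer is. Both preferences follow
# from the same fixed utility function, so they are VNM-consistent.
assert U(11, X) < U(10, Y)
```

The apparent "instability" is just diminishing marginal utility of money, not a change in Alice's underlying preferences.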

Hi Joel,

Thank you for the informative reply!

I think there's a big difference between asking people to rate their present life satisfaction and asking people what would make them more satisfied with their life. The latter is a comparison: either between several options or between future and present, depending on the phrasing of the questions. In a comparison it makes sense people report their relative preferences. On the other hand, the former is in some ill-posed reference frame. So I would be much more optimistic about a variant of WELLBY based on the former than on the latter.
