just_a_dude

Comments
A few thoughts from a long-time EtG'er working in finance:

* The US tax code caps the tax-deductibility of donations at 50% or 60% of adjusted gross income (depending on whether you're giving cash or appreciated assets). I give 50%, because that's the most I'm able to deduct. (Credit to Tyner for stating this first.)

* For the first years of my career, I gave 10-20% of my income, thinking that once I'd saved up $X I'd give much more (where $X is a fairly large number). Once I hit $X, I started giving 35% and then 50%. Would I say I was EtG during the 10-20% years? At the time I did, but in retrospect I think of those years as doing the prerequisites for EtG: first, getting to a financial spot where I was comfortable giving 50%, and second, climbing the career ladder and increasing my earnings so that my future giving would be as large as possible.

* Personally, I'd define EtG as someone giving >30-35% and trying to maximize income (or someone early-career working towards that eventual goal, maybe call that "aspiring EtG").

* I strongly reject the idea of EtG as a "sacrifice" in any significant way. I enjoy my job in finance, and there's a good chance that if I'd never heard of EA I'd still work in finance (with some mildly negative feelings about spending my career on something zero-sum). I'm not sacrificing some desire to do direct work, nor enduring an unpleasant grind of a job towards altruistic ends. I also don't think of donating as sacrificing potential wealth, but as an opportunity to be a part of some tremendously good projects. I'd argue the only thing I sacrifice is retiring at an unusually young age, and I don't have a huge desire to do that anyway.

Perhaps it would be useful to talk to someone who was alive in 1960 about how they went about their lives under the constant threat of nuclear war?

I'm a neartermist with 0.01 < P(doom from AI) < 0.05 on a 30-year horizon. I don't consider myself a doomer, but I think this qualifies as taking AI risk seriously (or at least not dismissing it entirely).

I think of my neartermism as a result of 3 questions:

  1. how much x-risk is there from AI?  
    As I said above, I think there's between a 1% and 5% chance of extinction from AI in the next 30 years. In my mind, this is high. If I were a longtermist, this would be sufficient to motivate me to work on AI safety.
     
  2. how bad is x-risk?
    I am sympathetic to person-affecting views, which to me means thinking of x-risk as primarily impacting people (& animals) alive today. I'm also sympathetic to the idea that it's somewhat good to create a positive life. However, I'd really rather not create negative lives, and I think there is uncertainty about the sign of all not-yet-existent lives. As an example of this uncertainty, consider that many people raised in excellent conditions (loving family, great education, good healthcare, good friends) still struggle with depression. Because of this uncertainty and my risk-aversion, even the non-person-affecting part of me is roughly neutral on creating lives as an altruistic act.
     
  3. how much can I lower x-risk?
    I have a technical skillset and could directly do AI safety work. However, I think most technical AI safety work still accelerates AI and therefore may accelerate extinction. As an example, I believe (weakly! convince me otherwise please!) that RLHF and instruction-tuning led to the current LLM gold rush and that if LLMs were more toxic (aka less safe?) there would be less investment in them right now. Along these lines, I'm not sure that any technical AI safety work done thus far has decreased AI x-risk.
    I think the best mechanism for lowering AI x-risk is to slow down AI development, both to give us more time in the current safe-ish technological world and perhaps to buy time to shift into a paradigm where we can develop clearly beneficial technical safety tools. I imagine this deceleration happening primarily through policy. Policy is outside my skillset, but I'd happily write a letter to my congressperson.
    If I could lower AI x-risk by 0.0001% in absolute terms (i.e. lowering P(doom) from 0.020000 to 0.019999, or 1 part in 20,000), I'd consider this worth 8 billion people * 1e-6 probability = 8e3 = 8,000 expected deaths averted (a rough sketch of this arithmetic is below). I think I have better options for adding that many QALYs over the course of my life - without the downside risk of potentially accelerating extinction!
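
A minimal sketch of that back-of-the-envelope arithmetic, assuming a world population of roughly 8 billion and treating the 0.0001% as an absolute reduction in P(doom); the starting P(doom) of 0.02 and the variable names are just illustrative:

```python
# Back-of-the-envelope: expected deaths averted from a small absolute
# reduction in P(doom), assuming everyone alive today dies in the doom scenario.
world_population = 8_000_000_000   # roughly 8 billion people alive today
p_doom_before = 0.020000           # illustrative starting P(doom)
p_doom_after = 0.019999            # after a 1e-6 absolute reduction

absolute_reduction = p_doom_before - p_doom_after               # ~1e-6
expected_deaths_averted = world_population * absolute_reduction

print(f"Absolute reduction in P(doom): {absolute_reduction:.6f}")   # 0.000001
print(f"Expected deaths averted: {expected_deaths_averted:,.0f}")   # ~8,000
```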

 

Other reasons I'm not a longtermist / I don't do technical AI safety work:

  • I aspire to serve the poor and to serve animals rather than neglecting them or being served by them. I'm interested in working on problems that disproportionately impact the poor (eg pandemics) and not problems that would primarily impact the rich or even impact everyone equally (eg AI) in order to provide a preferential option for the poor. I'd like a world where more people live to 60 rather than one where some people live forever.
  • I'm risk-averse with my life's work. If I spent my life working on something that seemed like it might be good and ended up being totally useless, I'd consider that a wasted life.
  • I'm not impressed by things like the 80K problem profiles page putting "space governance" above "factory farming" and "easily preventable or treatable illness", or the Wytham Abbey purchase, or FTX, or the trend of spending money on elite students in rich countries, with no evidence of impact, rather than on people in poor countries, where there is great evidence of the good that could be done. This is not the sort of altruism I want to be associated with.