sammyboiz

Comments
Oh I see. I was quick to bifurcate between deontology and utilitarianism; I guess I'm less familiar with other branches of consequentialism. Sorry for being unclear in my critique. My whole reply was centered on why this would be bad deontologically.

I see your point.

In the interest of the people alive today, there is an argument to be made for taking on some risk of extinction. However, outside a purely utilitarian framework, I think it's extremely careless and condemnable to impose this risk on humanity just because you have personally deemed it acceptable. This would be a deontological nightmare. Who gave AI labs the right to risk the lives of 8 billion people?

I was reluctant to get into the weeds here, but how can anything near this model be possible when 2^300 is roughly the number of atoms in the universe and we have already conquered 2^150 of them? At some point there will likely be no more growing, and then there will be millions of stable utopia years.
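As a rough sketch of that bound (the 2^300 and 2^150 figures are the comment's own approximations, not precise cosmology):

```python
import math

# Rough resource ceiling assumed in the comment above (illustrative numbers only).
atoms_in_universe = 2 ** 300   # assumed total atoms available
atoms_conquered = 2 ** 150     # assumed atoms already under our control

# How many further doublings of resources are even possible before the ceiling.
remaining_doublings = math.log2(atoms_in_universe // atoms_conquered)
print(remaining_doublings)  # 150.0 -- exponential growth must stop after ~150 more doublings
```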

Reaping the benefits of AGI a few years later is pretty insignificant in my opinion. If we get an aligned-AGI utopia, we will have utopia for millions of years. Acceleration by a few years is negligible if it increases p(doom) by >1%.

1% × 1 million utopia years = 10 thousand expected utopia years lost, which far outweighs the ~2 utopia years gained by accelerating.
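A quick sketch of that expected-value comparison, using only the illustrative numbers above:

```python
# Expected utopia-years lost by accepting +1% p(doom), versus the ~2 years
# gained by accelerating AGI. All figures are the comment's illustrative ones.
p_doom_increase = 0.01
utopia_years_at_stake = 1_000_000
years_gained_by_acceleration = 2

expected_years_lost = p_doom_increase * utopia_years_at_stake  # 10,000
print(expected_years_lost, expected_years_lost > years_gained_by_acceleration)  # 10000.0 True
```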

Dario gives a 25% p(doom) if I'm not mistaken. He still continues to build the tech that he knows could bring doom. Dario and Anthropic are pro-acceleration in their messaging and actions, according to a LW'er. How is this position coherent?

I don't think you can name another company that admits to building technology with a >1% chance of killing everyone... besides maybe OpenAI.

Are you implying that vegans will not eat lab-grown meat because it is still an imitation of flesh, which is symbolically bad (or something similar)?

There are probably many vegans who aren't like this.

I've grown accustomed to not bringing up EA and AI safety in my regular non-EA circles. It is too much of a headache trying to explain them to people, and although some are curious, I don't enjoy being pushy and opinionated.

In EA circles, I get a lot of validation and a sense of connection. My novel ideas and culture are shared so tightly with other EAs. I almost feel as if there is a recoil effect where I am now too lazy to explore EA tenets with non-EAs. I don't care for trying to build EA in others to give myself someone to talk to. (I am incredibly grateful for the EA community and I think that the recoil is minor compared to the benefit)

I generally find it extremely easy to be a normal dude day to day but mixing my EA world with my normal world is difficult. I don't talk about EA with my family or my buddies. I don't try to convince anyone outside of EA of anything and I probably should! I know AGI lab people who I could probably sway towards quitting their jobs or something haha.

And some people have even suggested saving more than normal if you have relatively short TAI timelines


Short-timeline AGI is not priced into the stock market. AI labs, big tech, and GPU and data companies are good bets that map to short-term AGI. If this is your belief, you can expect thousandfold returns on investment during the singularity, which is far superior to holding cash. While mitigating X-risk via donations might be the maximally altruistic thing to do, it wouldn't hurt to leave yourself in a good position in the case of a post-AGI world where money still matters.

 

I would say the major exception is if you think your job is going to be automated soon. Then savings would be more valuable, though I suspect that even more valuable than savings would be working on being flexible and being able to pivot quickly.

I am a CS undergrad. As soon as I have money to save, I am going to hedge against job automation by investing in AGI stock. This will offer better financial protection than holding cash.

What should you do with the money that you are not putting into retirement? Some have suggested doing things on your bucket list, because you might not have an opportunity to later.

I personally only care about being happy day to day. I can't be sad that I didn't check off my bucket list, because I would be dead. I assume I'm different from most people in this respect, however.

 

I would recommend donating much of that extra money to the causes you care about while you can still make a difference, whether that is reducing existential risk, increasing animal welfare, reducing global poverty

Given short timelines, certain charities become less appealing. For example, animal welfare campaigns that convince companies to make commitments by a date many years from now may never have any impact. Existential risk probably has the highest expected utility, with the right animal welfare charities in second place.

 

Given the vibes of this post, I would highly recommend investing in AGI stock for mostly non-EA reasons. It seems like it would give you the safety net to then donate to EA causes.
