Jordan Arel

Researcher @ Independent
331 karma · Joined Mar 2022
jordanarel.com/

Bio

Participation
6

I am on a gap year from studying for a Master of Social Entrepreneurship degree at the University of Southern California.

I have thought along EA lines for as long as I can remember, and I recently wrote the first draft of a book, “Ways to Save The World,” about my top innovative ideas for broad approaches to reducing existential risk.

I am now doing more research on how these approaches interact with AI X-risk.

Comments
69

Topic contributions
1

Mmm yeah, I really like this compromise; it leaves room for being human. But indeed, I’m thinking more about career currently. Since I’ve struggled to find a career that is impactful and that I am good at, I’m thinking I might actually choose a relatively stable, normal job that I like (like being a therapist for enlightened people / people who meditate), and then use my free time to work on projects that could be massively impactful.

Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative; it seems like there is a very small chance that I might make a huge difference on an issue that ends up being absurdly important, and it is hard to use my intuition on that kind of thing. My intuitions clearly help me with things that are close by, where it is easier to see that I am doing some good, but it is more difficult to make wild speculations that I might be having a hugely positive impact. So I guess part of the issue is to what degree I depend on these wildly speculative EV calculations. I feel like I really want to maximize impact, yet it is always a tenuous balancing act with so much uncertainty.

Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?

Like emotionally I’m like “save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human animal utopia, yay big tent EA…”

But logically I’m like “AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from progress on that is a distraction and should be completely and deliberately ignored. Optimally we will build an AI or other system that determines maximum utility per unit of matter, possibly including agency as a factor and quite possibly not, so that we can tile the universe with sentient simulations of whatever the answer is.”

OR, a similar discordance between what was just described and the view that we should also co-optimize for agency, diversity of values and experience, fun, decentralization, etc., EVEN IF that means possibly locking in a state where ~99.9999+ percent of possible utility goes unrealized.

Very frustrating. I usually try to push myself toward my rational conclusion of what is best, with a wide berth for uncertainty and epistemic humility, but it feels depressing, painful, and self-dehumanizing to do so.

Good question. Like most numbers in this post, it is just a very rough approximation used because it is a round number that I estimate is relatively close (~within an order of magnitude) to the actual number. I would guess that the number is somewhere between $50 and $200.

Thanks Mo! These estimates were very interesting.

As to discount rates, I was a bit confused reading William MacAskill's discount rate post; it wasn't clear to me that he was talking about the moral value of lives in the future, and it seemed like it might have had something to do with the value of resources instead. In "What We Owe The Future," which is much more recent, I think MacAskill argues quite strongly that we should have a zero discount rate for the moral patienthood of future people.

In general, I tend to use a zero discount rate; I will add this to the background assumptions section, as I do think it is an important point. In my opinion, the experiences of future people are neither more nor less valuable than those of people alive today, though of course other people may differ. I try to address this somewhat in the section titled "Inspiration."

Thank you so much for this reply! I’m glad to know there is already some work on this; it makes my job a lot easier. I will definitely look into the articles you mentioned, and perhaps just study AI risk / AI safety a lot more in general to get a better understanding of how people think about this. It sounds like what people call “deployment” may be very relevant, so I’ll especially look into that.

Yes, I agree this is somewhat what Bostrom is arguing. As I mentioned in the post, I think there may be solutions which don’t require totalitarianism, i.e. massive universal moral progress. I know this sounds intractable; I might address why I think this may be mistaken in a future post, but it is a moot point if a vulnerable-world-induced x-risk scenario is unlikely, hence why I am wondering if there has been any work on this.

Ah yes! I think I see what you mean.

I hope to research topics related to this in the near future, including in-depth research on anthropics, as well as on what likely/desirable end-states of the universe are (including that we may already be in an end-state simulation) and what that implies for our actions.

I think this could be a third reason for acting to create a high amount of well-being for those close to you, including yourself.

Hey Carl! Thanks for your comment. I am not sure I understand. Are you arguing something like “comparing x-risk interventions to other interventions such as bed nets is invalid because the universe may be infinite, or there may be a lot of simulations, or some other anthropic reason may make other interventions more valuable”?

Highly Pessimistic to Pessimistic-Moderate Estimates of Lives Saved by X-Risk Work

This short-form supplements a post estimating how many lives x-risk work saves on average.

Following are four alternative pessimistic scenarios, two of which are highly pessimistic, and two of which fall between pessimistic and moderate.

Except where stated, each has the same assumptions as the original pessimistic estimate, and each is adjusted from the baseline estimates of 10^16 lives possible and one life saved per hour of work or $100 donated. (A short worked sketch of this arithmetic follows the list.)

  1. It is 100% impossible to prevent existential risk, or it is 100% impossible to accurately predict what will reduce x-risk. In this case, we get an estimate that in expectation, on average, x-risk work may extremely pessimistically have zero positive impact, plus the negative impact of wasting resources. I think it is somewhat unreasonable to conclude with absolute certainty that an existential catastrophe is inevitable or unpredictable, but others may disagree.
  2. Humanity lasts as long as the typical mammalian species, ~1 million years. This would lead to a three-order-of-magnitude reduction in expected value from the pessimistic estimate, giving an estimate that over the next 10,000 years, in expectation, on average, x-risk work will very pessimistically save one life for every 1,000 hours of work or every $100,000 donated. *Because humanity goes extinct in a relatively short amount of time in this scenario, x-risk work has not technically sustainably prevented existential risk, but this estimate has the benefit of using other species to give an outside view.
  3. Digital minds are possible, but interstellar travel is impossible. This estimate is highly speculative. My understanding is that Bostrom estimated 15 additional orders of magnitude if digital minds are possible, given that we are able to inhabit other star systems. I have no idea whether anything like this holds up if we only inhabit Earth. But if it does, then assuming a 1/10 chance that digital minds are possible, the possibility of digital minds gives a 14-order-of-magnitude increase over the original pessimistic estimate, so that, over the next 10,000 years, in expectation, on average, x-risk work will moderately pessimistically save approximately one trillion lives per minute of work or per dollar donated.
  4. Interstellar travel is possible, but digital minds are impossible. Nick Bostrom estimates that if emulations are not possible and so humans must remain in biological form, there could be 10^37 biological human lives at 100 years per life, or 21 orders of magnitude greater than the original pessimistic estimate. Assuming a 1/10 chance interstellar travel is possible, this adds 20 orders of magnitude so that, over the next 10,000 years, in expectation, on average, x-risk work will moderately pessimistically save approximately a billion billion (10^18) lives per minute of work or per dollar donated.
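To make the order-of-magnitude bookkeeping above easier to check, here is a minimal sketch in Python. It only restates the numbers from the list (the baseline of one life saved per hour of work or per $100 donated, and the stated order-of-magnitude shifts with their 1/10 probability discounts); the scenario labels are just shorthand for the items above, not a separate model.

```python
# A rough sketch of the order-of-magnitude arithmetic behind scenarios 2-4.
# Baseline (original pessimistic estimate): 10^16 lives possible, and roughly
# one life saved per hour of x-risk work or per $100 donated.

BASELINE_LIVES_PER_HOUR = 1.0        # lives saved per hour of work
BASELINE_LIVES_PER_DOLLAR = 1 / 100  # lives saved per dollar donated

# Net order-of-magnitude (OOM) shifts relative to the baseline, as stated above:
#  - Scenario 2: humanity lasts ~1 million years -> -3 OOM.
#  - Scenario 3: digital minds possible (1/10 chance of +15 OOM) -> net +14 OOM.
#  - Scenario 4: interstellar travel possible (1/10 chance of +21 OOM) -> net +20 OOM.
scenarios = {
    "2. ~1 million year lifespan": -3,
    "3. digital minds, no interstellar travel": 14,
    "4. interstellar travel, no digital minds": 20,
}

for name, oom in scenarios.items():
    factor = 10.0 ** oom
    per_hour = BASELINE_LIVES_PER_HOUR * factor
    per_minute = per_hour / 60
    per_dollar = BASELINE_LIVES_PER_DOLLAR * factor
    print(f"{name}: ~{per_hour:.1e} lives/hour of work "
          f"(~{per_minute:.1e}/minute), ~{per_dollar:.1e} lives/dollar donated")
```

Running this reproduces the figures quoted above: scenario 2 comes out to one life per 1,000 hours of work or per $100,000 donated, scenario 3 to roughly 10^12 lives per minute or per dollar, and scenario 4 to roughly 10^18 lives per minute or per dollar.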