Jordan Arel

Researcher @ Independent
365 karma · Joined Mar 2022
jordanarel.com/

Bio


I am on a gap year from studying for a Master of Social Entrepreneurship degree at the University of Southern California.

I have thought along EA lines for as long as I can remember, and I recently wrote the first draft of a book, “Ways to Save The World,” about my top innovative ideas for broad approaches to reducing existential risk.

I am now doing more research on how these approaches interact with AI X-risk.

Comments
66


Good question. Like most numbers in this post, it is a very rough approximation: a round number that I estimate is relatively close (within roughly an order of magnitude) to the actual figure. My guess is that the number is somewhere between $50 and $200.

Thanks Mo! These estimates were very interesting.

As for discount rates, I was a bit confused reading William MacAskill's discount rate post; it wasn't clear to me whether he was talking about the moral value of lives in the future, or rather something to do with the value of resources. In "What We Owe The Future," which is much more recent, I think MacAskill argues quite strongly that we should apply a zero discount rate to the moral patienthood of future people.

In general, I tend to use a zero discount rate. I will add this to the background assumptions section, as I do think it is an important point. In my opinion, future people and their experiences are no more or less valuable than those of people alive today, though of course others may differ. I try to address this somewhat in the section titled "Inspiration."

Thank you so much for this reply! I'm glad to know there is already some work on this; it makes my job a lot easier. I will definitely look into the articles you mentioned, and perhaps just study AI risk / AI safety a lot more in general to get a better understanding of how people think about this. It sounds like what people call "deployment" may be very relevant, so I'll especially look into that.

Yes, I agree this is somewhat what Bostrom is arguing. As I mentioned in the post, I think there may be solutions which don't require totalitarianism, i.e. massive universal moral progress. I know this sounds intractable; I might address why I think this may be mistaken in a future post, but it is a moot point if a vulnerable-world-induced x-risk scenario is unlikely, hence why I am wondering if there has been any work on this.

Ah yes! I think I see what you mean.

I hope to research topics related to this in the near future, including in-depth research on anthropics, as well as on what the likely/desirable end-states of the universe are (including the possibility that we are already in an end-state simulation) and what that implies for our actions.

I think this could be a third reason for acting to create a high amount of well-being for those in close proximity to you, including yourself.

Hey Carl! Thanks for your comment. I am not sure I understand. Are you arguing something like "comparing x-risk interventions to other interventions such as bed nets is invalid because the universe may be infinite, or there may be a lot of simulations, or some other anthropic reason may make other interventions more valuable"?

Thanks Spencer, I really appreciated the variety of guests; this was a great podcast.

Is this contest still active after the FTX fiasco?
