Jordan Arel

I’m studying for a master’s in social entrepreneurship at the University of Southern California. I have thought along EA lines for as long as I can remember, and I recently wrote a book, “Ways to Save The World,” about my top innovative ideas for broad approaches to reducing existential risk.

Topic Contributions


Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter

I love this post! Solarpunk was my first intuition as well. I think there is good evidence that green, natural environments support happiness and productivity, so I don’t think it is actually out of alignment with utilitarianism or EA at all.

I have a theory of reality that makes aesthetics the fundamental force of the universe. To illustrate: if effective altruism is successful in colonizing space and ends up determining the shape of the future of the universe, then this “shape” will be whatever aesthetic shape we have determined creates maximum utility.

I think aesthetics is a much better fundamental for utilitarianism than pleasure, which intuitively seems quite base and simplistic. I therefore agree that aesthetics is exceedingly important in figuring out what future we want to create.

Is Our Universe A Newcomb’s Paradox Simulation?

Thank you! Yes, I’m pretty new here, and now that you mention it I think you’re right - anthropics makes more sense.

I am inclined to think the main thing required to be an observer is enough intelligence to ask whether one is likely to be the entity one is by pure chance. This doesn’t necessarily require consciousness, just the ability to factor the likelihood that one is in a simulation into one’s decision calculus.

I had not thought about the possibility that future beings are mostly conscious but very few are intelligent enough to ask the question. This is definitely a possibility. Though if the vast majority of future beings are unintelligent, you might expect far fewer simulations of intelligent beings like ourselves, somewhat cancelling this possibility out.

So yeah, since I think most future beings (or at least a very large number of them) will most likely be intelligent, I think the selection effects do likely apply.

Is Our Universe A Newcomb’s Paradox Simulation?

Thank you for this reply!

Yes, the resolution of other moral patients is something I left out. I appreciate you pointing this out because I think it is important. I was perhaps assuming that longtermists are simulated accurately and that everything else has much lower resolution, such as only being philosophical zombies, though as I articulate this I’m not sure it would work. We would have to know more about the physics of the simulation, though we could probably make some good guesses.

And yes, it becomes much stronger if I am the only being in the universe, simulated or otherwise. There are some other reasons I sometimes think the case for solipsism is very strong, but I never bother to argue for them, because if I’m right then there’s no one else to hear what I’m saying anyway! The other problem with solipsism is that, to some degree, everyone must evaluate it for themselves, since the case for it may vary quite a bit between individuals depending on who in the universe you find yourself as.

Perhaps you are right about AI creating simulations. I’m not sure they would create as many, but they may still create a lot. This is something I would have to think about more.

I think the argument regarding aliens is that perhaps there is a very strong filter such that any set of beings who evaluate the decision will conclude that they are in a simulation. Anything with the level of intelligence required to become spacefaring would also be intelligent enough to realize it is probably in a simulation, and so decide it’s not worth it. Perhaps this could even apply to AI.

It is, I admit, quite an extreme statement that no set of beings would ever conclude that they might not be in a simulation, or would not pursue longtermism on the off-chance that they are not. But on the other hand, it would be equally extreme not to allow the possibility that we are in a simulation to affect our decision calculus at all, since it does seem quite possible - though perhaps the expected value of the simulation is too small to have much of an effect, except in the case where the universe is tiled with meaning-maximizing hedonium of the most important time in history and we are it.

I really appreciate your comment on CDT and EDT as well. I felt like they might give the same answer here, even though the situation also “feels” somewhat similar to Newcomb’s Paradox. I think I will have to study decision theory quite a bit more to really get a handle on this.
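For what it’s worth, the way the two theories come apart in the classic Newcomb setup can be sketched with a few lines of arithmetic. The payoffs below are the standard textbook ones ($1,000,000 in the opaque box iff one-boxing was predicted, $1,000 always in the transparent box); the function names and the 0.99 predictor accuracy are purely illustrative assumptions.

```python
# Standard Newcomb payoffs: the opaque box holds $1,000,000 iff the
# predictor foresaw one-boxing; the transparent box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000

def edt_value(action, predictor_accuracy=0.99):
    # EDT conditions on the action taken: choosing to one-box is strong
    # evidence that the predictor filled the opaque box.
    if action == "one-box":
        return predictor_accuracy * BIG
    return predictor_accuracy * SMALL + (1 - predictor_accuracy) * (BIG + SMALL)

def cdt_value(action, p_box_filled):
    # CDT treats the box contents as causally fixed: the choice cannot
    # influence the prediction, so it uses an unconditional probability.
    expected_big = p_box_filled * BIG
    if action == "one-box":
        return expected_big
    return expected_big + SMALL

# EDT favors one-boxing; CDT favors two-boxing for ANY fixed p_box_filled.
assert edt_value("one-box") > edt_value("two-box")
assert cdt_value("two-box", 0.5) > cdt_value("one-box", 0.5)
```

The point of the sketch is just that EDT one-boxes because the choice is evidence about the box, while CDT two-boxes because it holds the box contents fixed - which is why the two can diverge here even if they agree in the simulation scenario.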

Help Me Choose A High Impact Career!!!

Thank you so much Michelle, this reflection is really useful. It mostly mirrors what I already know, and yet having it reflected back from the outside is very helpful; it makes things feel more real and clear somehow. Much appreciated!!

Help Me Choose A High Impact Career!!!

Thanks, I don’t think I had fully appreciated the importance of that. I just updated it above and will share that version with others!

An easy win for hard decisions.

Thank you so much for this!! It was incredibly helpful and inspired This Post - feedback appreciated!!

Which Post Idea Is Most Effective?

Dang, yeah, I did a quick search on creatine and the IQ number right before writing this post, but now it looks like that source was not credible. I would have to research more to see if I can find a reliable measure of creatine’s cognitive effects; it seems it at least has a significant impact on memory. Anecdotally, I noticed quite a difference when I took a number of supplements while vegan, and I know there is some research on various nutrients vegans lack that relate to cognitive function. I will do a short post on it sometime!

I think human alignment is incredibly difficult, but too important to ignore. I have thought about it for a very long time, so I do have some very ambitious ideas that could feasibly start small and scale up.

Yes! I have been very surprised since joining by how narrowly longtermism is focused. If the community is right about AGI arriving within a few decades with a fast takeoff, then broad longtermism may be less appealing, but if there is any doubt about this then we are massively underinvested in broad longtermism and putting all our eggs in one basket, so to speak. I will definitely write more about this!

Right, it definitely wouldn’t be exactly analogous to GiveWell, but I think it is nonetheless important to have SOME way of comparing longtermist projects so we know what a good investment looks like.

Thanks again for all the feedback Aman! Really appreciate it (and everything else you do for the USC group!!) and really excited to write more on some of these topics :)

Which Post Idea Is Most Effective?

Yes! I think the main threats are hard to predict, but mostly involve terrorism with advanced technology - for example weaponized black holes, intentional grey goo, super-coordinated nuclear attacks, and probably many, many other hyper-advanced technologies we can’t even conceive of yet. I think if technology continues to accelerate, things could get pretty bad pretty fast, and even if we’re wrong about AI somehow, human malevolence will be a massive challenge.

Which Post Idea Is Most Effective?

Thanks William! This feedback is super valuable. Yes, I think the massive scalable community-building project would be novel, and it actually ties in with the donor contest as well. Glad to know this would be useful! And good thought, I think writing about my own story will be easiest as well. And I will definitely write about broad longtermism; it is one of my main areas of interest.

This innovative finance concept might go a long way to solving the world's biggest problems

Thanks for writing up this idea! I think the risk management aspect of ESG is important, and this could definitely be a step in the right direction.

My main concern is that I am not sure there is a clear path to getting investors to adopt Universal Ownership; it is not something I had heard of before. It seems to me the risk reduction in a single investor’s portfolio from their individual marginal divestment from, or shareholder activism toward, a company with negative externalities would be quite small, so it would really only work if at least a majority of investors adopted a Universal Ownership model. Are many investors already adopting this or taking it seriously?

Also, to price externalities accurately enough to maximize public/social good, each investor would ideally model and internalize the effects of externalities on ALL of society, not just their own portfolio; internalizing only portfolio effects incentivizes them to consider just a small fraction of the actual value investors and companies could provide to society. I realize this would be an even bigger ask of investors, but my hope is that there could be an alternative social stock market or public-goods market that systemically, financially rewards positive externalities and taxes negative externalities by design.
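A toy calculation makes the dilution concrete. All of the numbers below are hypothetical, chosen only to illustrate how small a slice of a firm’s total social cost a single universal owner actually internalizes through its own portfolio:

```python
# Toy numbers (all hypothetical): how much of a firm's externality a
# single "universal owner" internalizes versus the society-wide cost.
externality_cost = 100.0           # total social damage caused by the firm
share_hitting_listed_firms = 0.5   # half the damage lands on public markets
investor_market_share = 0.02       # this investor owns 2% of the whole market

# The universal owner only feels the damage flowing through firms it owns,
# so its financial incentive to act is a small slice of the true social cost.
internalized = externality_cost * share_hitting_listed_firms * investor_market_share
society_wide = externality_cost

print(f"internalized: {internalized}, society-wide: {society_wide}")
```

Under these made-up numbers the investor internalizes 1.0 out of a 100.0 social cost - about 1% - which is the gap a market that rewards society-wide externalities by design would be trying to close.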

That said, I could be wrong, and I would definitely be excited to see something like this gain more traction, as it would be much better than what we currently have. I think it is possible something like this could gradually become more popular, especially if better accounting for risk provided at least a small but reliable increase in value for investors - one that outweighs the costs of modeling and is not countered by displacement effects.
