All of Jordan Arel's Comments + Replies

Mmm yeah, I really like this compromise, it leaves room for being human, but indeed, I’m thinking more about career currently. Since I’ve struggled to find a career that is impactful and that I am good at, I’m thinking I might actually choose a relatively stable, normal job that I like (like therapist for enlightened people/people who meditate), and then use my free time to work on projects that could be massively impactful.

Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative; it seems like there is a very small chance that I might make a huge difference on an issue that ends up being absurdly important, and it is hard to use my intuition on this kind of thing, whereas my intuitions very clearly help me with things that are close by, where it is easier to see I am doing some good but more difficult to make wild speculations that I might be having a hugely po... (read more)

Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?

Like emotionally I’m like “save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human animal utopia, yay big tent EA…”

But logically I’m like “AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from ... (read more)

I don't know if it helps, but your "logical" conclusions are far more likely to be wildly wrong than your "emotional" responses. Your logical views depend heavily on speculative factors like how likely AI tech is, or how impactful it will be, or what the best philosophy of utility is. Whereas the view on animals depends on comparatively few assumptions, like "hey, these creatures that are similar to me are suffering, and that sucks!"

Perhaps the dissonance is less irrational than it seems...

8
saulius
1mo
I relate to that a lot, and I want to share how I resolved some of this tension. You currently allow your heart to only say “I want to reduce suffering and increase happiness” and then your brain takes over and optimizes, ignoring everything else your heart is saying. But it’s an arbitrary choice to only listen to the most abstract version of what the heart is saying. You could also allow your heart to be more specific like “I want to help all the animals!”, or even “I want to help this specific animal!” and then let your brain figure out the best way to do that. The way I see it, there is no objectively correct choice here. So I alternate on how specific I allow my heart to be.  In practice, it can look like splitting your donations between charities that give you a warm, fuzzy feeling, and charities that seem most cost-effective when you coldly calculate, as advised in Purchase Fuzzies and Utilons Separately. Here is an example of someone doing this. Unfortunately, it can be much more difficult to do this when you contribute with work rather than donations.

Good question. Like most numbers in this post, it is just a very rough approximation used because it is a round number that I estimate is relatively close (~within an order of magnitude) to the actual number. I would guess that the number is somewhere between $50 and $200.

Thanks Mo! These estimates were very interesting.

As to discount rates, I was a bit confused reading William MacAskill's discount rate post; it wasn't clear to me that he was talking about the moral value of lives in the future, and it seemed like it might have something to do with the value of resources instead. In "What We Owe the Future", which is much more recent, I think MacAskill argues quite strongly that we should have a zero discount rate for the moral patienthood of future people.

In general, I tend to use a zero discount rate; I will add this to the backgroun... (read more)

Thank you so much for this reply! I’m glad to know there is already some work on this; it makes my job a lot easier. I will definitely look into the articles you mentioned and perhaps just study AI risk / AI safety a lot more in general to get a better understanding of how people think about this. It sounds like what people call “deployment” may be very relevant, so I will especially look into this.

Yes, I agree this is somewhat what Bostrom is arguing. As I mentioned in the post, I think there may be solutions which don’t require totalitarianism, i.e. massive universal moral progress. I know this sounds intractable; I might address why I think this may be mistaken in a future post, but it is a moot point if a vulnerable-world-induced x-risk scenario is unlikely, hence why I am wondering if there has been any work on this.

Ah yes! I think I see what you mean.

I hope to research topics related to this in the near future, including in-depth research on anthropics, as well as on what likely/desirable end-states of the universe are (including that we may already be in an end-state simulation) and what that implies for our actions.

I think this could be a third reason for acting to create a high amount of well-being for those in close proximity to you, including yourself.

Hey Carl! Thanks for your comment. I am not sure I understand. Are you arguing something like “comparing x-risk interventions to other interventions such as bed nets is invalid because the universe may be infinite, or there may be a lot of simulations, or some other anthropic reason may make other interventions more valuable”?

That there are particular arguments for decisions like bednets or eating sandwiches to have expected impacts that scale with the scope of the universes or galactic civilizations. E.g. the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of sims in situations like ours (who will act  the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to... (read more)

Highly Pessimistic to Pessimistic-Moderate Estimates of Lives Saved by X-Risk Work

This short-form supplements a post estimating how many lives x-risk work saves on average.

Following are four alternative pessimistic scenarios, two of which are highly pessimistic, and two of which fall between pessimistic and moderate.

Except where stated, each has the same assumptions as the original pessimistic estimate, and is adjusted from the baseline estimates of 10^16 lives possible and one life saved per hour of work or $100 donated.

  1. It is 100% impossible to prevent ex
... (read more)

Thanks Spencer, really appreciated the variety of guests, this was a great podcast.

Is this contest still active after the FTX fiasco?

4
Writer
1y
Yes, it is still active

Fantastic news!!! My main question:

The Future Fund AI Worldview Prize had specific, very bold criteria, such as raising or lowering past certain thresholds the probability estimates of transformative AI timelines, or of an AI-related catastrophe given certain timelines.

Will this AI Worldview Prize have very similar criteria, or do you have any intuitions what these criteria might be?

This would be very helpful for researchers like myself deciding whether to continue on a particular line of research!

How Happy Could Future People Be?

This short-form is a minimally edited outtake from a piece estimating how many lives x-risk work saves on average. It conservatively estimates how much better future lives might be with pessimistic, moderate, and optimistic assumptions.

A large amount of text is in footnotes because it was highly peripheral to the original post.

Pessimistic

(humans remain on Earth, digital minds are impossible)

We could pessimistically assume that future lives can only be approximately as good as present lives for some unknown reasons ... (read more)

Thanks! Changed it to reflect this. As I said, not knowledgeable about ethics.

It would be nice if others would say something like this rather than just down-voting haha.. In any case, your comment is exactly the type of guidance I appreciate!

😢 so incredibly sad. I’m definitely in shock as well.

I’m more optimistic on crypto generally, but this is so much what the crypto ethic was meant to stand against: non-transparency and too-big-to-fail mega-billionaires ruining people’s lives.

I agree, I hope we can be much more cautious in the future.

Interesting points.

Yes, as I said, for me altruism and selfishness have some convergence. I try to always act altruistically, and enlightened self-interest and open individualism are tools (which I actually do think have some truth to them) that help me tame the selfish part of myself that would otherwise demand much more. They may also be useful in persuading people to be more altruistic.

While I think there is likely only one correct ethical system, I think it is most likely consequentialist, and therefore these conceptual tools are useful for helping me ... (read more)

1
Noah Scales
1y
You wrote: "Sorry if I wasn’t clear! I don’t understand what do you mean by the term “personally obliged”. I looked it up on Google and could not find anything related to it. Could you precisely defined the term and how it differs from ethically obliged? As I said, I don’t really think in terms of obligations, and so maybe this is why I don’t understand it."

OK, a literal interpretation could work for you. So, while your ethics might oblige you to an action X, you yourself are not personally obliged to perform action X. Why are you not personally obliged? Because of how you consider your ethics. Your ethics are subject to limitations due to self-care, enlightened self-interest, or the proximity principle. You also use them as guidelines, is that right? Your ethics, as you describe them, are not a literal description of how you live or a do-or-die set of rules. Instead, they're more like a perspective, maybe a valuable one incorporating information about how to get along in the world, or how to treat people better, but only a description of what actions you can take in terms of their consequences. You then go on to choose actions however you do and can evaluate your actions from your ethical perspective at any time.

I understand that you do not directly say this but it is what I conclude based on what you have written. Your ethics as rules for action appear to me to be aspirational. I wouldn't choose consequentialism as an aspirational ethic.

I have not shared my ethical rules or heuristics on this forum for a reason. They are somewhat opaque to me. That said, I do follow a lot of personal rules, simple ones, and they align with what you would typically expect from a good person in my current circumstances. But am I a consequentialist? No, but a consequentialist perspective is informative about consequences of my actions, and those concern me in general, whatever my goals.

In a submission to the Red Team Contest a few months back, I wrote up my thoughts on belief

Wow Noah! I think this is the longest comment I’ve had on any post, despite it being my shortest post haha!

First of all, some context. The reason I wrote this shortform was actually just so I could link to it in a post I’m finishing which estimates how many lives longtermists save per minute. Here is the current version of the section in which I link to it; I think it may answer some of your questions:

 

“The Proximity Principle

The take-away from this post is not that you should agonize over the trillions of trillions of trillions of men, women, and child... (read more)

1
Noah Scales
1y
Well, your formulation, as stated, would lead me to multiple conceptual difficulties, but the most practical one for me is how to conceptualize altruism. How do you know when you are being altruistic? When you throw in the concepts of "enlightened self-interest" and "open individualism" to justify longtermism, it appears as though you have partial credence in fundamental beliefs that support your choice of ethical system. But you claim that there is only one correct ethical system. Would you clarify for me?

You wrote:
* "On your other comments, I do not tend to think in terms of character traits much except that I may seek to develop good character traits in myself and support them in others, in order to achieve good consequences."
* "If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this."
* "I choose to do good, in-so-far as I am able and encourage others to do the same as the massively positive-sum world that results is best for all, including myself."

From how you write, you seem like a kind, well-meaning, and thoughtful person. Your efforts to develop good character traits seem to be paying off for you.

You wrote:
* "If I can be ethically but not personally obliged, then what makes an ethical action obligatory? ANSWER: I do not understand your distinction between personally obliged and ethically obliged, could you clarify? I take a moral realism stance in which there is only one correct ethical system, whether or not we know what it is. If you are obliged, you can shirk your obligations, but others (and possibly yourself) will be harmed by this."

To clarify, if I apply your proximity principle, or enlightened self-interest, or your recommendations for self-care, but simultaneously hold myself ethically accountable for what I do not do (as your ethic recommends), then it appears as though I am not personally obliged in situations where I am ethically obliged. If you hold yourself ethically acco

Opportunity Cost Ethics

“Every man is guilty of all the good he did not do.”

~Voltaire

Opportunity Cost Ethics is a term I invented to capture the ethical view that failing to do good ought to carry the same moral weight as doing harm.

You could say that in Opportunity Cost Ethics, sins of omission are equivalent to sins of commission.

In this view, if you walk by a child drowning in a pond, and there is zero cost to you to saving the child, it would be equally morally inexcusable for you not to save the child from drowning as it would be to take a non-drowning... (read more)

5
Noah Scales
1y
Hi, Jordan. I thought over opportunity cost ethics in the context of longtermism, and formed the questions:

1. how do I decide what I am obliged to do when I am considering ethical vs selfish interests of mine?
2. if I can be ethically but not personally obliged, then what makes an ethical action obligatory?
3. am I morally accountable for the absence of consequences of actions that I could have taken?
4. who decides what the consequences of my absent actions would have been?
5. how do I compare the altruism of consequences of one choice of action versus another?
6. do intentions carry ethical weight, that is, when I intend well, do harmful consequences matter?
7. what do choices that emphasize either selfish or ethical interest mean about my character?

I came up with examples and thought experiments to satisfy my own questions, but since you're taking such a radically different direction, I recommend the same questions to you, and wonder what your answers will be.

I will offer that shaming or rewarding of behavior in terms of what it means about supposed traits of character (for example, selfishness, kind-heartedness, cruelness) has impact in society, even in our post-modern era of subjectivity-assuming and meta-perspective seeking. We don't like to be thought of in terms of unfavorable character traits. Beyond personality, if character traits show through that others admire or trust (or dislike or distrust), that makes a big difference to one's perceived sense of one's own ethics, regardless of how fair, rational, or applicable the ethics actually are.

As an exercise in meta-cognition, I can see that my own proposals for ethical systems that I comfortably employ will plausibly lack value to others. I take a safe route, equating altruism with the service of other's interests, and selfishness with the service of my own. Conceptually consistent, lacking in discussion of character traits, avoiding discussion of ethical obligations of any sort. While I enj

The Proximity Principle

Excerpt from my upcoming book, "Ways To Save The World"

As was noted earlier, this does not mean we should necessarily pursue absolute perfect selflessness, if such a thing is even possible. We might conceive that this would include such activities as not taking any medicine so that those who are more sick can have it, not eating food and giving all of your food away to those who are starving, never sleeping but instead continually working for those who are less fortunate than yourself. As is obvious, all of these would lead imminentl... (read more)

Hmm.. I’d have to think more carefully about it; that was very much off-the-cuff. I mostly agree with your criticism. I think I was mainly thinking that bio-risk makes the most sense as a near-termist priority and so would get most of the x-risk funding until solved, since it is much more tractable than AI risk.

Maybe this is the main point I’m trying to make, and so the spirit of the post seems off, since near-termist x-risky stuff would mostly fund bio-risk and long-termist x-risky stuff would mostly go to AI.

This is so cool! Giving me a lot of good ideas for when I am able to hire a PA in the future.. Thanks Vaidehi!

4
Vaidehi Agarwalla
1y
It was all Holly + co's hard work, but I'm glad you found it useful :)  

I think this post is mistaken. If I remember correctly (I am not an expert), estimates that AI will kill us all are put at only around 5-10% by AI experts and attendees at an x-risk conference, in a paper from Katja Grace. Only AI safety researchers think AI doom is a highly likely default (presumably due to selection effects). So from a near-termist perspective, AI deserves relatively less attention.

Bio-risk, climate change, and maybe nuclear war, on the other hand, I think are all highly concerning from a near-termist perspective, but unlikely to kill EVERYONE, and so relatively low priority for long-termists.

3
Linch
1y
"only" 5-10% of ~8 billion people dying this century is still 400-800 million deaths! Certainly higher than e.g. estimates of malarial deaths within this century!  What's the case for climate change being highly concerning from a near-termist perspective? It seems unlikely to me that marginal $s in fighting climate change are a better investment in global health than marginal $s spent directly on global health. And also particularly unlikely to be killing >400 million people.  I agree some biosecurity spending may be more cost-effective on neartermist grounds. 

Hi! I didn’t realize this thread existed until just now. Just wanted to make sure you were aware of my feature suggestion, “Fine-Grained Karma Voting”

How many highly engaged EAs are there? In 2020, Ben Todd estimated there were about 2,666 (7:00 into the video). I can’t find any info on how many there are now; where would I find this, and/or how many highly engaged EAs are there now?

3
Lorenzo Buonanno
2y
I don't think there's any single good definition of "highly engaged EAs". Giving What We Can lists 8,771 people that have signed their pledge https://www.givingwhatwecan.org/about-us/members 

Mm good point! I seem to remember something.. do you remember what chapter/s by chance?

2
Howie_Lempel
2y
My guess is that Part II, trajectory changes will have a bunch of relevant stuff. Maybe also a bit of part 5. But unfortunately I don't remember too clearly.

I really want to learn more about broad longtermism. In 2019, Ben Todd said that in a survey, EAs rated it the most underinvested cause area, by something like a factor of 5. Where can I learn more about broad longtermism? What are the best resources, organizations, and advocates on ideas and projects related to broad longtermism?

2
Howie_Lempel
2y
The 80k podcast also has some potentially relevant episodes though they're prob not directly what you most want.
* https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/
* https://80000hours.org/podcast/episodes/will-macaskill-ambition-longtermism-mental-health/
  * Maybe especially the section on patient philanthropy.
* https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/
* https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/
  * Some bits of this. E.g. some of the bits on political donations.
3
Howie_Lempel
2y
I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.

Sorry, stupid question, but just to clarify: should questions be posted in this thread, or in the general “questions” section on the forum?

2
Lizka
2y
For the purpose of trying this thread, it would be nice to post questions as "Answers" to this post.[1] Although you're welcome to post a question on the Forum if you think that's better: you can see a selection of those here. Not a stupid question!

1. ^ The post is formatted as a "Question" post, which might have been a mistake on my part, as it means that I'm asking people to post questions in the form of "Answers" to the Question-post, and the terminology is super confusing as a result.

Thanks Kyle! Hm, this confirms my suspicion that, according to what he said, he should have gotten into EAGx, though maybe it was because he applied to conferences in a different region. I will investigate further and have him email them. Thanks again!! 🙂

Thanks to you for writing this. This is an incredibly good idea for a cost-effective cause, in my opinion. I need about 9 1/2 hours of sleep every night, and it is possibly the single greatest obstacle to achievement I face. I think for long-sleeper people like myself, further investigation could have an especially high upside if successful.

Thanks Christian! This was a well-written initial foray. I need about 9 1/2 hours of sleep every night and it is possibly the single greatest obstacle to achievement I face. I think for long-sleeper people like myself this could have an especially high upside if effective. I will definitely look into it more.

Thanks for sharing these learnings and for running this group, Ninell! I would have been moderately less likely to start studying AIS on my own if it were not for this group, and I appreciate how thoughtful you were and how much work you put into this group and this post.

2
nell
2y
Thank you, Jordan! It was great having you there.

Very good point. Considering the last dollar spent or the marginal dollar spent lowers these numbers by quite a lot, though I think even an order of magnitude still gives you quite high numbers.

Yes, I think it is a very difficult and perhaps necessarily uneasy balancing act, at least for those whose main or sole priority is to maximize impact. Minimum viable self-care is quite problematic, but it is not plausible that we can maximize impact without any sacrifice whatsoever either.

In the GiveWell article I quoted, they estimate an under-5-year-old life saved costs $7,000, for 37 DALYs, which equals about $189 per life-year saved. But if it was actually $4,500 per life as you suggest, that would be closer to $121 per life-year saved, or about 5 months of life for $50 instead of 3 months as I said, but I would rather err on the conservative side.
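For what it's worth, here is a minimal sketch of the arithmetic behind those figures (not part of the original comment; the function name and the $50 donation size are just illustrative choices):

```python
# Rough sketch of the cost-per-life-year arithmetic above.
# The $7,000 and 37-DALY figures come from the GiveWell article quoted in the
# post; $4,500 is the commenter's suggested alternative figure.

def months_of_life_bought(cost_per_life: float,
                          dalys_per_life: float = 37,
                          donation: float = 50) -> float:
    """Months of life a donation buys, given a cost per life saved."""
    cost_per_life_year = cost_per_life / dalys_per_life
    return donation / cost_per_life_year * 12

print(int(7000 / 37))                         # ~189 dollars per life-year
print(round(months_of_life_bought(7000), 1))  # ~3.2 months of life per $50

print(int(4500 / 37))                         # ~121 dollars per life-year
print(round(months_of_life_bought(4500), 1))  # ~4.9 months of life per $50
```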

Damn. Yeah, I guess I implicitly think like this a lot. I feel very torn between telling you it’s okay, don’t worry about it, each person has their own comfort level, versus: yeah, it’s real, those are real people, and we’re really sacrificing their lives for petty pleasures.

I think a few things that help me:

  1. Personally I feel I have much higher leverage with direct work rather than donations, so while money is a consideration it isn’t as important as time and focus on what’s highest leverage. Also, with direct work you can sometimes get sharply increasing

... (read more)

This was amazing. As a professional dropout, I would like to join your organization so that I can immediately quit it.

I have dropped out of college 3 1/2 times now; the 1/2 time was during Covid, when I didn’t quite start the school year before dropping out and deciding to become a homeless vagabond.

I always wanted to become an Ivy League dropout. USC isn’t Ivy League but close enough; it’s way more expensive than most Ivy League schools at least. And now I feel much more confident in whatever I do next. Lots of great entrepreneurs were dropouts from fancy ... (read more)

5
Yonatan Cale
2y
I would love to learn the art of dropping out from you. (Just, if possible, could you give me a certificate when I finish learning from you?) You're officially a member

Wow. This is a really great concrete story of the benefit of signaling. Yeah, I find it so fascinating how Effective Altruism has evolved, and I really love all parts of it and think it is a very natural progression which somewhat mirrors my own. It is really unfortunate that not everyone sees this whole context, and I agree it is worth putting some effort into managing impressions, even if it is in a sense a sort of marketing “pacing” which gradually introduces more advanced concepts rather than throwing out some of the crazier-sounding bits of EA right off the bat.

Great point. I thought I could go into more detail, but I actually originally intended to make this post a fraction of the length it ended up being. That could be a great companion post: maybe a list of specific ways that we could be frugal and a detailed analysis of when it would make sense to spend money in order to save more valuable time and resources. And I appreciate those links; I will definitely check them out!

I like this! It’s a very clean solution that saves a lot of time and hassle. Maybe the downside is that it takes away some autonomy and feels a little paternalistic and onerous to have a list of rules, but I think it could be simple enough and is not an unreasonable ask such that the benefits may outweigh the downsides.

6
pete
2y
For personal spending, it should just be guidelines / recommendations that people can follow at will. For EA orgs’ money, it can be more like “we usually don’t comp first class plane tickets, talk to us if your situation is different”

Hm, yeah, I thought about that but was thinking that, the way the grammar worked out, it wouldn’t really make sense to interpret it as the EA Funds project. But after getting this feedback, I think the cost is low enough that it makes sense to change the name, so I did!

Ah yes, I think this was what was referred to in the book. Thank you!

I agree this seems like a huge problem. I noticed that even though I am extremely committed to longtermism and have been for many years, the fact that I am skeptical of AI risk seems to significantly decrease my status in longtermist circles. There is significant insularity and resistance to EA gospel being criticized, and little support for those seeking to do so.

Thank you!

Yes, I agree teaching meditation in schools could be a very good idea; I think the tools are very powerful. Apparently Robert Wright, who wrote the excellent book “Why Buddhism Is True” among other books, has started a project called the Apocalypse Aversion Project, which he talked about with Rob Wiblin on an episode of 80,000 Hours. One of the main ideas is that if we systematically encourage mindfulness practice, we could broadly reduce existential risk.

I think you’re right: EA can be a bit inscrutable, and there are definitely some benefits to appealing to a wider popular audience, though there may also be downsides to not focusing on the EA audience.

3
JakubK
2y
Here's the most up-to-date version of the AGI Safety Fundamentals curriculum. Be sure to check out Richard Ngo's "AGI safety from first principles" report. There's also a "Further resources" section at the bottom linking to pages like "Lots of links" from AI Safety Support.

Thank you Dony, Denis, and Matt! I really enjoyed reading this post and am excited about the idea. Looking forward to seeing what posts are submitted!
