Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative; it seems like there is a very small chance that I might make a huge difference on an issue that ends up being absurdly important, so it is hard to use my intuition on this kind of thing. My intuitions very clearly help me with things that are close by, where it is easier to see that I am doing some good but more difficult to make wild speculations that I might be having a hugely po...
Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?
Like emotionally I’m like “save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human animal utopia, yay big tent EA…”
But logically I’m like “AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from ...
I don't know if it helps, but your "logical" conclusions are far more likely to be wildly wrong than your "emotional" responses. Your logical views depend heavily on speculative factors like how likely AI tech is, or how impactful it will be, or what the best philosophy of utility is. Whereas the view on animals depends on comparatively few assumptions, like "hey, these creatures that are similar to me are suffering, and that sucks!".
Perhaps the dissonance is less irrational than it seems...
Good question. Like most numbers in this post, it is just a very rough approximation used because it is a round number that I estimate is relatively close (~within an order of magnitude) to the actual number. I would guess that the number is somewhere between $50 and $200.
Thanks Mo! These estimates were very interesting.
As to discount rates, I was a bit confused reading William MacAskill's discount rate post; it wasn't clear to me that he was talking about the moral value of lives in the future, and it seemed like it might have something to do with the value of resources instead. In "What We Owe The Future," which is much more recent, I think MacAskill argues quite strongly that we should have a zero discount rate for the moral patienthood of future people.
In general, I tend to use a zero discount rate, I will add this to the backgroun...
Thank you so much for this reply! I'm glad to know there is already some work on this; it makes my job a lot easier. I will definitely look into the articles you mentioned and perhaps just study AI risk / AI safety a lot more in general to get a better understanding of how people think about this. It sounds like what people call "deployment" may be very relevant, so I'll especially look into this.
Yes, I agree this is somewhat what Bostrom is arguing. As I mentioned in the post, I think there may be solutions which don't require totalitarianism, i.e. massive universal moral progress. I know this sounds intractable, and I might address why I think this may be mistaken in a future post, but it is a moot point if a vulnerable world induced x-risk scenario is unlikely, hence why I am wondering if there has been any work on this.
Ah yes! I think I see what you mean.
I hope to research topics related to this in the near future, including in-depth research on anthropics, as well as on what likely/desirable end-states of the universe are (including that we may already be in an end-state simulation) and what that implies for our actions.
I think this could be a 3rd reason for acting to create a high amount of well-being for those close to you in proximity, including yourself.
Hey Carl! Thanks for your comment. I am not sure I understand. Are you arguing something like "comparing x-risk interventions to other interventions such as bed nets is invalid because the universe may be infinite, or there may be a lot of simulations, or some other anthropic reason may make other interventions more valuable"?
That there are particular arguments for decisions like bednets or eating sandwiches to have expected impacts that scale with the scope of the universes or galactic civilizations. E.g. the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of sims in situations like ours (who will act the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to...
This short-form supplements a post estimating how many lives x-risk work saves on average.
Following are four alternative pessimistic scenarios, two of which are highly pessimistic, and two of which fall between pessimistic and moderate.
Except where stated, each has the same assumptions as the original pessimistic estimate, and is adjusted from the baseline estimates of 10^16 lives possible and one life saved per hour of work or $100 donated.
Fantastic news!!! My main question:
The Future Fund AI Worldview Prize had specific, very bold criteria, such as raising or lowering to certain thresholds the probability estimates of transformative AI timelines or probabilities of an AI related catastrophe, given certain timelines;
Will this AI Worldview Prize have very similar criteria, or do you have any intuitions what these criteria might be?
This would be very helpful for researchers like myself deciding whether to continue on a particular line of research!
This short-form is a minimally edited outtake from a piece estimating how many lives x-risk work saves on average. It conservatively estimates how much better future lives might be with pessimistic, moderate, and optimistic assumptions.
A large amount of text is in footnotes because it was highly peripheral to the original post.
(humans remain on Earth, digital minds are impossible)
We could pessimistically assume that future lives can only be approximately as good as present lives for some unknown reasons ...
Thanks! Changed it to reflect this. As I said, not knowledgeable about ethics.
It would be nice if others would say something like this rather than just downvoting, haha. In any case, your comment is exactly the type of guidance I appreciate!
😢 so incredibly sad. I’m definitely in shock as well.
I'm more optimistic on crypto generally, but this is exactly what the crypto ethic was meant to stand against: non-transparency and too-big-to-fail mega-billionaires ruining people's lives.
I agree, I hope we can be much more cautious in the future.
Interesting points.
Yes, as I said, for me altruism and selfishness have some convergence. I try to always act altruistically, and enlightened self-interest and open individualism are tools (which I actually do think have some truth to them) that help me tame the selfish part of myself that would otherwise demand much more. They may also be useful in persuading people to be more altruistic.
While I think there is likely only one correct ethical system, I think it is most likely consequentialist, and therefore these conceptual tools are useful for helping me ...
Wow Noah! I think this is the longest comment I’ve had on any post, despite it being my shortest post haha!
First of all, some context. The reason I wrote this short-form was actually just so I could link to it in a post I'm finishing which estimates how many lives longtermists save per minute. Here is the current version of the section in which I link to it; I think it may answer some of your questions:
The take-away from this post is not that you should agonize over the trillions of trillions of trillions of men, women, and child...
Opportunity Cost Ethics
“Every man is guilty of all the good he did not do.”
~Voltaire
Opportunity Cost Ethics is a term I invented to capture the ethical view that failing to do good ought to carry the same moral weight as doing harm.
You could say that in Opportunity Cost Ethics, sins of omission are equivalent to sins of commission.
In this view, if you walk by a child drowning in a pond, and there is zero cost to you to saving the child, it would be equally morally inexcusable for you not to save the child from drowning as it would be to take a non-drowning...
The Proximity Principle
Excerpt from my upcoming book, "Ways To Save The World"
As was noted earlier, this does not mean we should necessarily pursue absolute perfect selflessness, if such a thing is even possible. We might conceive that this would include such activities as not taking any medicine so that those who are more sick can have it, not eating food and giving all of your food away to those who are starving, never sleeping but instead continually working for those who are less fortunate than yourself. As is obvious, all of these would lead imminentl...
Hmm.. I’d have to think more carefully about it. Was very much off-the-cuff. I mostly agree with your criticism, I think I was mainly thinking bio-risk makes most sense as a near-termist priority and so would get most of x-risk funding until solved, since it is much more tractable than AI Risk.
Maybe this is the main point I’m trying to make, and so the spirit of the post seems off, since near-termist x-risky stuff would mostly fund bio-risk and long-termist x-risky stuff would mostly go to AI.
This is so cool! Giving me a lot of good ideas for when I am able to hire a PA in the future.. Thanks Vaidehi!
I think this post is mistaken. If I remember correctly (I'm not an expert), estimates that AI will kill us all are put at only around 5-10% by AI experts and attendees at an x-risk conference, in a paper from Katja Grace. Only AI Safety researchers think AI doom is a highly likely default (presumably due to selection effects). So from a near-termist perspective, AI deserves relatively less attention.
Bio-risk and climate change, and maybe nuclear war, on the other hand, I think are all highly concerning from a near-termist perspective, but unlikely to kill EVERYONE, and so relatively low priority for long-termists.
Hi! I didn't realize this thread existed until just now. Just wanted to make sure you were aware of my feature suggestion, "Fine-Grained Karma Voting."
How many highly engaged EAs are there? In 2020, Ben Todd estimated there were about 2,666 (7:00 into the video). I can't find any info on how many there are now. Where would I find this, and/or how many highly engaged EAs are there now?
I really want to learn more about broad longtermism. In 2019, Ben Todd said that in a survey EAs said that it was the most underinvested cause area by something like a factor of 5. Where can I learn more about broad longtermism, what are the best resources, organizations, and advocates on ideas and projects related to broad longtermism?
Sorry, stupid question, but just to clarify, questions should be posted in this thread, or in the general “questions” section on the forum?
Thanks Kyle! Hm, this confirms my suspicion that according to what he said he should have gotten into EAGx, though maybe it was because he applied to conferences in a different region. I will investigate further and have him email them. Thanks again!! 🙂
Thank you for writing this. This is an incredibly good idea for a cost-effective cause in my opinion. I need about 9 1/2 hours of sleep every night and it is possibly the single greatest obstacle to achievement I face. I think for long-sleeper people like myself, further investigation could have an especially high upside if successful.
Thanks Christian! This was a well-written initial foray. I need about 9 1/2 hours of sleep every night and it is possibly the single greatest obstacle to achievement I face. I think for long-sleeper people like myself this could have an especially high upside if effective. I will definitely look into it more.
Thanks for sharing these learnings and for running this group Ninell! I would have been moderately less likely to start studying AIS on my own if it were not for this group, and I appreciate how thoughtful you were and how much work you put into this group and this post.
Very good point. Considering last dollar spent or marginal dollar spent lowers these numbers by quite a lot, though I think even an order of magnitude still gives you quite high numbers.
Yes, I think it is a very difficult and perhaps necessarily uneasy balancing act, at least for those whose main or sole priority is to maximize impact. Minimum viable self-care is quite problematic, but it is not plausible we can maximize impact without any sacrifice whatsoever either.
In the GiveWell article I quoted, they estimate that saving an under-5-year-old's life costs $7,000, for 37 DALYs, which equals about $189 per life year saved. But if it were actually $4,500 per life as you suggest, that would be closer to $121 per life year saved, or about 5 months of life for $50 instead of the 3 months I said; still, I would rather err on the conservative side.
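For what it's worth, the arithmetic behind these figures can be sketched as follows (the $7,000 and 37-DALY figures are from the GiveWell estimate quoted above; $4,500 is the alternative figure suggested; everything is a rough approximation):

```python
# Rough cost-per-life-year arithmetic, assuming 37 DALYs per under-5 life
# saved applies under both cost-per-life estimates.
dalys_per_life = 37

cost_per_life_givewell = 7000  # GiveWell's estimate
cost_per_life_alt = 4500       # alternative figure suggested

per_year_givewell = cost_per_life_givewell / dalys_per_life  # ~$189
per_year_alt = cost_per_life_alt / dalys_per_life            # ~$121-122

# How many months of life does a $50 donation buy under each estimate?
months_givewell = 50 / per_year_givewell * 12  # ~3.2 months
months_alt = 50 / per_year_alt * 12            # ~4.9 months

print(round(per_year_givewell), round(per_year_alt))    # 189 122
print(round(months_givewell, 1), round(months_alt, 1))  # 3.2 4.9
```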
Damn. Yeah, I guess I implicitly think like this a lot. I feel very torn between telling you "it's okay, don't worry about it, each person has their own comfort level," versus "yeah, it's real, and those are real people, and we're really sacrificing their lives for petty pleasures."
I think a few things that help me:
Personally I feel I have much higher leverage with direct work rather than donations, so while money is a consideration it isn’t as important as time and focus on what’s highest leverage. Also, with direct work you can sometimes get sharply increasing
This was amazing. As a professional dropout, I would like to join your organization so that I can immediately quit it.
I have dropped out of college 3 1/2 times now, the 1/2 time was during Covid when I didn’t quite start the school year before dropping out and deciding to become a homeless vagabond.
I always wanted to become an Ivy League dropout. USC isn't Ivy League, but close enough; it's way more expensive than most Ivy League schools, at least. And now I feel much more confident in whatever I do next. Lots of great entrepreneurs were dropouts from fancy ...
Wow. This is a really great concrete story of the benefit of signaling. Yeah, I find it so fascinating how Effective Altruism has evolved, and I really love all parts of it and think it is a very natural progression which somewhat mirrors my own. It is really unfortunate that not everyone sees this whole context, and I agree it is worth putting some effort into managing impressions, even if it is in a sense a sort of marketing "pacing" which gradually introduces more advanced concepts rather than throwing out some of the crazier-sounding bits of EA right off the bat.
Great point. I thought I could go into more detail, but I actually originally intended to make this post a fraction of the length it ended up being already. That could be a great companion post, though: maybe a list of specific ways that we could be frugal and a detailed analysis of when it would make sense to spend money in order to save more valuable time and resources. And I appreciate those links; I will definitely check them out!
I like this! It’s a very clean solution that saves a lot of time and hassle. Maybe the downside is that it takes away some autonomy and feels a little paternalistic and onerous to have a list of rules, but I think it could be simple enough and is not an unreasonable ask such that the benefits may outweigh the downsides.
Hm, yeah I thought about that but was thinking the way the grammar worked out it wouldn’t really make sense to interpret it as the EA Funds project. But after getting this feedback I think there is a low enough cost it makes sense to change the name, so I did!
I agree this seems like a huge problem. I noticed that even though I am extremely committed to longtermism and have been for many years, the fact that I am skeptical of AI risk seems to significantly decrease my status in longtermist circles. There is a significant insularity and resistance to EA gospel being criticized, and little support for those seeking to do so.
Thank you!
Yes, I agree teaching meditation in schools could be a very good idea; I think the tools are very powerful. Apparently Robert Wright, who wrote the excellent book "Why Buddhism Is True" among other books, has started a project called the Apocalypse Aversion Project, which he discussed with Rob Wiblin on an episode of the 80,000 Hours podcast. One of the main ideas is that if we systematically encourage mindfulness practice, we could broadly reduce existential risk.
I think you're right, EA can be a bit inscrutable and there are definitely some benefits to appealing to a wider popular audience, though there may also be downsides to not focusing on the EA audience.
Thank you Dony, Denis, and Matt! I really enjoyed reading this post, and I'm excited about the idea. Looking forward to seeing what posts are submitted!
Mmm yeah, I really like this compromise; it leaves room for being human. But indeed, I'm thinking more about career currently. Since I've struggled to find a career that is impactful and that I am good at, I'm thinking I might actually choose a relatively stable, normal job that I like (like therapist for enlightened people/people who meditate), and then use my free time to work on projects that could be massively impactful.