As a newcomer to EA, and a person with a fair amount of experience of cults and cult-like groups (and I'm 78 years old), I would like to report my experience.
I am very attracted to the ideas expressed in Doing Good Better. Having a science background, the idea of analyzing how effective my philanthropy may be is something I have pursued for many years, leading to many of the same conclusions.
On the other hand, many of the ideas of longtermism (trying to save human beings thousands of years in the future, being concerned about spreading to other planets, seeing malevolent AGI as among the most critical issues to address) strike me as similar to Scientology and other cults, whose visions and ideas run contrary to common sense (if not downright wacky) yet seem to be common currency among, if not required of, "members."
In What We Owe the Future, MacAskill often expresses reservations about his ideas, points out alternatives or potential flaws, and in general shows somewhat more humility than I encounter on this Forum, for example. I certainly disagree with some of his conclusions and approaches, which I have begun to attempt to express in my few posts here to date, but I do respect his and others' efforts to think long-term when accompanied by express recognition of our limitations in trying to impact the future (except in set-in-stone certainties) more than a couple of decades out. Without those ongoing acknowledgments of our limitations, our uncertainty, and the weirdness of our perspectives (from a "normal" viewpoint), we are bound to come across as potentially cult-like.
I have often been fascinated watching young children expressing great confidence, even though, from my adult point of view, they have no basis for confidence other than their internal sense (i.e. they don't understand the domain about which they are speaking in any adult way).
It is also my experience and belief that adults carry this same unwarranted sense of confidence in their opinions. Just to pick one example, 73% of Americans (including 80% of American men) believe they are better than average drivers.
Our culture selects for confidence, especially in men. This leads to overconfidence, especially among successful men. Successful people have often made at least one successful prediction (which may have led to their success), which may have simply been luck, but which reinforces their self-confidence.
I therefore strongly agree that longtermist predictions carry huge uncertainty despite expressions of confidence by those promoting them. I argue that in evaluating effective action, we should lower our expected value of any intervention based on how far in the future we are predicting, with a discount rate of 3.87% annually [/joke].
Thank you for your thoughts, and welcome to the forum! I recently published my first post as well, which was on the same issue (although approached somewhat differently). It is at https://forum.effectivealtruism.org/posts/xvsmRLS998PpHffHE/concerns-with-longtermism. I have other concerns with longtermism as well, even though, like you, I certainly recognize that we need to take future outcomes into account. I hope to post one of those soon. I would welcome your thoughts.
Thank you so much for this post. Your example of capitalism points the way. I plan to write a post soon suggesting that individuals following their personal passions in choosing where to do good could produce just this kind of distribution of altruistic energy: globally optimized even though apparently less efficient at the local level.
Could you please point me to some? Thanks!
It's not that a life 2000 years ago was more important than everyone living today, but rather that someone 2000 years ago trying to do good would be more effective trying to save one life that year than trying to save humanity 2000 years in their future (i.e. today).
Hi, folks! I'm Wahhab Baldwin, a 78-year-old retired software developer, manager, and minister. I have donated at least 10% of my income for decades, strongly favoring effectiveness. I ran into EA through the podcast interview with William MacAskill on "People I (Mostly) Admire."
I strongly affirm much of EA, but I disagree with certain elements, and hope I am able to have some enlightening conversations here. I hope tomorrow to write a post on longtermism. As a preview, I will argue that we must discount a future good compared to a present good. It is better to save a life this year than to save a life next year. If we discount at the conservative rate of 2% per year, then a life 1000 years from now should be valued at roughly 1/400,000,000 of a life today (since 1.02^1000 ≈ 4×10^8), meaning (imo) that we should really focus only on the next century. But before you argue, read my more detailed post! I look forward to our conversation. (Now at https://forum.effectivealtruism.org/posts/xvsmRLS998PpHffHE/concerns-with-longtermism).
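For anyone who wants to check the arithmetic, here is a minimal sketch of standard exponential discounting (the function name and the 2% rate are just illustrative choices, not anything from a particular source):

```python
# Exponential discounting: a good delivered n years from now is divided
# by (1 + r)^n, where r is the annual discount rate.
def discounted_value(value_today: float, annual_rate: float, years: int) -> float:
    """Present value of a future good under a constant annual discount rate."""
    return value_today / (1 + annual_rate) ** years

# At 2% per year, a life saved 1000 years from now is worth roughly
# 1/400,000,000 of a life saved today.
factor = 1 / discounted_value(1.0, 0.02, 1000)
print(f"one life 1000 years out = 1/{factor:,.0f} of a present life")
```

The same function shows how sensitive the conclusion is to the rate: at 1% the factor is only about 21,000, while at 3% it balloons to roughly 6.9 trillion.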