
Findings from psychology and related fields suggest that some aspects of human morality may have evolved and/or serve a social purpose among human groups (e.g., Awad et al., 2020). Thus, what is "good" depends to some extent on one's cultural background, to a greater extent on being human, and perhaps to an even greater extent on being a primate/mammal/vertebrate/etc. (i.e., in terms of how features of our minds have been shaped over an even longer evolutionary time span). What happens if we try to think outside these evolutionary and social pressures? Many EAs already try to think outside of social pressures, for instance when they espouse utilitarianism. If so, shouldn't we also try to "escape the shackles of evolution"? I'm hoping this question will lead to recommendations for readings that discuss what dimensions of "good" would be relevant from non-human points of view (including from other animal species, from ecosystems, from AIs, etc.).

4 Answers

You might check out this SEP article: https://plato.stanford.edu/entries/morality-biology/. Haven't read it myself, but looking at the table of contents it seems like it might be helpful for you (SEP is generally pretty high-quality). People have made a lot of different arguments that start from the observation that human morality has likely been shaped by evolutionary pressures, and it's pretty complicated to try to figure out what conclusions to draw from this observation. It's not at all obvious that it implies we should try to "escape the shackles of evolution" as you put it. It may imply that, but it also may not. (In particular, "selective evolutionary debunking arguments" seem to have implications along these lines, but "general evolutionary debunking arguments" seem to lead to almost the opposite conclusion.)

You might also check out this post by Eliezer.

Whether or not we think or feel we are following in the footsteps of evolution, one way or another we are indeed following the drives given to us by the combination of direct (genetic) nature and indirect (culturally accumulated) nature. Obviously the chain of delegation of evolutionary will is going to be complicated, with various genetic and cultural intermingling between lineages. For example, the mitochondrion, the powerhouse of each human cell, may well have derived from an external, exogenous microorganism. And humans may further borrow behaviours and design patterns from other lifeforms. Still, all roads lead back to nature and its inherent tendencies.

Interesting! The philosophical debate about the nature of morality in light of evolution is a rich literature, which I very much recommend exploring further. However, the main point of contention in those debates is whether studies of the kind you allude to in fact show anything about morality itself. Indeed, the mainstream view in metaethics is that the conclusion included in your title question, that morality emerges from evolutionary and functional pressures, is false. What usually happens in those studies is that an evolutionary psychologist identifies morality with some trait they can easily measure, and then draws a raft of conclusions from it. The SEP entry that Ikaxas mentions is an excellent introduction to these debates. I can also recommend the podcast 'Very Bad Wizards', which features a philosopher and a psychologist discussing issues such as these.

To get things started, I imagine one non-human perspective is that represented by "big history", where "good" is greater complexity, and "bad" results from failures to maintain or increase complexity.
