
The following is a short supplement to an upcoming post on the implications of conscious significance (a framework for understanding the free will and determinism debate), but it stands as a more general observation.

Paradigm Shifts vs Cultural Evolution

The world has undergone many paradigm shifts: moments when a profound truth is revealed about the nature of the universe and our place within it. Individuals also go through their own personal paradigm shifts when they change their beliefs, which can be a frightening prospect. But I would argue it doesn’t need to be, because profound paradigm shifts seldom change as much as we expect.

This is because, if there is a significant practical benefit to behaving in accordance with a fact about nature, cultural evolution will often find that behaviour before we discover the fact.
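To make this dynamic concrete, here is a minimal toy simulation (an illustrative sketch of my own; the practice names and hidden payoffs are invented for the example, not drawn from the post). Agents imitate whoever is currently doing best, so a practice with a hidden payoff spreads even though no agent knows why it works:

```python
import random

# Toy cultural evolution: each practice has a hidden payoff that no agent
# understands. Agents simply imitate whoever is doing best this round, so
# the beneficial practice can spread long before anyone discovers the fact
# behind it. The spread is probabilistic, but it usually takes over.

HIDDEN_PAYOFF = {"wash_hands": 2.0, "ignore_hygiene": 1.0}  # invisible to agents

def step(population, rng=random):
    # Noisy outcomes: success only correlates with the hidden benefit.
    payoffs = [HIDDEN_PAYOFF[practice] * rng.random() for practice in population]
    best = population[payoffs.index(max(payoffs))]
    # One randomly chosen agent copies the most successful agent's practice.
    population[rng.randrange(len(population))] = best

population = ["ignore_hygiene"] * 17 + ["wash_hands"] * 3
for _ in range(500):
    step(population)

print(population.count("wash_hands"), "of", len(population), "agents wash hands")
```

A few examples of this pattern, without leaving the letter ‘G’: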

Gravity

Discovering gravity did not inform us about how we could move around by applying pressure with our limbs to the large gravitational body upon which we’re hurtling through space. We’d already worked out how to react to gravity without knowing exactly what it was (in fact, we still don’t know exactly what it is). It’s even feasible that we could have learned to fly while still maintaining a flat-earth perspective, and even with our incomplete understanding of gravity today, we are capable of space travel.

God is Dead

The 19th century saw a wave of scientific discoveries, such as Darwin’s theory of evolution, as well as increasingly secular forms of government born out of the Enlightenment. The associated atrophy of religious belief in Western philosophy during this period was encapsulated in Nietzsche’s phrase

“God is dead. God remains dead. And we have killed him.”

… leading philosophers to grapple with the idea, often paraphrased from Dostoevsky, that…

“If God is dead, all is permitted.”

Both Dostoevsky and Nietzsche independently assumed that God’s death leaves a moral vacuum.

In reality, a materialist worldview demands interpersonal ethics similar to those of a religious one. So when belief declined, moral behaviour persisted, not by divine coincidence, but because many religious morals had differentially survived, primarily because they fostered social cohesion.

Germs

From a modern perspective, the germ theory of disease seems a perfect counter-example: a profound truth that made a tremendous difference to everyday people; the imperative to wash one’s hands has itself saved billions of lives. However, even this theory, a version of which was proposed by Girolamo Fracastoro in 1546, failed to make a splash, partly because pseudoscientific theories had already lucked upon some effective practices. The prevailing miasma (or “bad air”) theory of the time at least warned people away from rotting food and flesh, despite offering no sound scientific explanation for why they should.

Even the paradigm of spiritual possession and witchcraft had developed practices that informed behaviour consistent with germ theory: the idea of quarantine, animistic gods providing treatment via plant leaves, and concepts of impurity. This is not to say there was any merit to these beliefs; they are better viewed as rationalisations to justify practices born of utility. But over time, such practices evolved until germaphobic tendencies were in full swing, well before the work of Louis Pasteur and others led to the germ theory of disease being fully accepted in the late 19th century.

Genes

Genes are the most recent of these paradigm shifts, and the most profound. The discovery of DNA and the genetic code has revolutionised our understanding of life itself. But even this discovery has not changed as much as we might expect. The idea of heredity was already well established, and selective breeding was already in practice. And how much does the fact that you’re built from genes change your day-to-day life? Not much, unless you’re a geneticist.

But…

Over time, paradigm shifts do change everything, in the sense that they make what was previously impossible possible. Science and technology have enabled us to fly to the moon, build universal ethical frameworks, save lives and even edit genes: feats that would not have been possible without gaining an accurate picture of the world. But the discoveries that enabled these feats were not bolts from the blue: before we knew about them, we had already developed practices that were consistent with them. Importantly, the anticipated consequences of these profound discoveries didn’t eventuate, and the feats they enabled did not arrive immediately, but required a continued process of cultural evolution to reveal their utility.

So…

The lesson I take from this is not to be afraid of paradigm shifts, and to recognise that new ideas don’t destroy the world to make it anew; rather, they reframe our understanding of the world to reveal new possibilities. Humans have an (often maligned) capacity to rationalise their behaviour, to understand new information in terms of information they already have, sometimes with bizarre results. However, I believe this capacity serves us well in the case of paradigm shifts, enabling us to accept new information without abandoning all the hard-won lessons of our personal and civilisational history. This approach is key to understanding the implications of ‘conscious significance’.

  • This post is a supplement to Implications, where determinism is a profound paradigm shift that is somewhat rationalised in my concept of conscious significance.
  • Another related post is It’s Subjective ~ the end of the conversation?, which tackles the centrality of conscious experience in ethical considerations while distinguishing it from subjective relativism.
  • On the topic of rationalising beliefs, my contagious beliefs simulation uses belief adoption via alignment with prior beliefs as a core mechanism, essentially enshrining cognitive bias as our primary conduit for learning; a minimal sketch of this rule follows below.
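
For readers who prefer code, here is a minimal sketch of that adoption rule (an illustrative reconstruction, not the simulation’s actual code; the stance-vector representation and the alignment function are my own assumptions):

```python
import random

# Belief adoption via alignment with priors: a belief set is a vector of
# +/-1 stances, and an agent adopts a candidate belief set with probability
# equal to the fraction of stances agreeing with what it already holds --
# cognitive bias acting as the conduit for learning.

def alignment(priors, candidate):
    """Fraction of stances on which the candidate agrees with the priors."""
    matches = sum(1 for p, c in zip(priors, candidate) if p == c)
    return matches / len(candidate)

def maybe_adopt(beliefs, candidate, rng=random):
    """Return the agent's beliefs after exposure to a candidate belief set."""
    if rng.random() < alignment(beliefs, candidate):
        return candidate  # adopted: it aligned well with existing beliefs
    return beliefs        # rejected: too dissonant with prior beliefs

beliefs = [1, 1, -1, 1]
candidate = [1, 1, -1, -1]  # agrees on 3 of 4 stances -> 75% adoption chance
print(maybe_adopt(beliefs, candidate))
```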