
Broadly speaking, the evolution of any species is driven by chance, by choice, or both. The evolution of weapons, however, is often misread as the outcome of chance events alone. In the previous parts, we saw factors that point to choices varying across species. Both forces are feasible, often working in tandem, provided the effective cost of change is not prohibitively high.

Once an arms race is triggered, weapons grow larger and more sophisticated. In animals, as long as selection pressure is applied, natural or artificial, the rate of development varies with the demands placed on the weapon. In the book, the author studies the evolution of weapons in rhinoceros beetles in an artificial setting to measure the variation in cost and size. It is important to note that for an animal species, the primary cost is bodily changes affecting its life cycle. Because the life cycle of beetles is short, salient evolutionary changes can be observed in a short span of time. The author ran his research for two and a half years, and the results were clear: there was a significant increase in the size of the horns. However, since the availability of internal resources was limited, this deficit spending produced unsustainable growth. The beetles' eyes were stunted, and their wings and genitalia were affected as well; horn growth proved roughly three times as sensitive as that of wings and limbs.

Reliable Signals and Deterrence

The affordability of a weapon depends on the resource pool of a species. As with humans, family and bloodline play a major role in how capable the next generation can be. Weapons grow biggest once the rest of the body is fully grown. What propels the race is opportunity cost: whenever investment in weapon development yields viable returns, a species can build a discretionary pool of resources to push size further. This works because weapons act as reliable signals to all parties, females as well as rival males. For both battle and protection, weapons advertise honest proof of investment along with information such as health status and fighting ability. This lets rivals assess each other before engaging in dangerous battles. For those not near the top of the ladder, avoidance is the best option; they live to fight another day.

For the stronger ones, deterrence alone is enough. Weapons in animals are vastly more variable than other body parts. The dominant individuals in a species must reserve their fights, and their full attention, for rivals of comparable strength. Even small battles can cause minor injuries, which later bring risks of distraction and exposure to predation. This is visible in fiddler crabs with their huge claws. Most of the time, fiddlers employ the claw as a warning rather than an instrument of battle, an agent of deterrence against weaker crabs. They do use it in intense fights, but only for a few minutes, after spending hours waving it up and down. The waving also acts as a welcoming signal to female crabs, who stay far away from fights, since not all fights end well. Normally, crabs are well protected from one another, since their exoskeletons act as armor, but in the heat of battle crabs get distracted and become easy targets for gulls and grackles.[1]

Deterrence is an integral stage of an arms race. As weapons evolve toward larger sizes, the most extreme possessors stand out ever more clearly, so weapons become more honest as signals, pushing the evolution of deterrence and allowing deadly confrontations to be avoided. The fight costs saved by deterrence compound the gains already enjoyed by males with the largest weapons. For example, the total horn length of male ibexes, a wild goat species, is ~20 cm up to 3 years of age, whereas for individuals ≥10 years the average lies between 60 and 80 cm.[2] In a typical challenge, ibex rams size each other up, comparing weapon sizes; most confrontations end without escalating to battle.

This cycle fuels the race and keeps it accelerating. To quote the author: "Arms races and deterrence push each other forward, escalating in an evolutionary spiral." Nowadays the odds of war at sea are extremely low, but it was different not so long ago. The size of a fleet was the measure of a country's fighting ability, the perfect signal for deterrence. Research underscores that certainty plays a more significant role in deterrence than severity.[3] Battleships would chase down rivals of comparable size, while medium-sized ships focused on escape and smaller ships were either destroyed or shied away. These prolonged battles were too expensive for states to afford, and for naval battles, state-of-the-art weapons remain too expensive. As a result, battleships are all but extinct; aircraft carriers serve as the new agents of deterrence.

 

Sneaks and Cheats/End of the Race

When fighting doesn't ensure victory, plan B is needed. Strong monopolies incentivize individuals to evolve alternative ways of fighting, and many animals choose to infiltrate rival territories for the sake of mating. In dung beetles, a big-horned male is often deceived when a weaker rival reaches the guarded female through a tunnel dug from the other end, avoiding confrontation altogether. That is why guarding males are observed patrolling in rounds, yet the infiltrators, with far smaller weapons, are quick and agile. Sneaky males are found in almost every species, and in animals like bighorn sheep the strategy of disguise is fairly common. Since the most dominant males are much bigger than females in body and weapons, smaller males can effectively sneak past the guards and camouflage themselves in the herd.

Similar courses of action are visible in human conflict and warfare. The definition of cheating becomes irrelevant in an active war when one military force stands no chance against a larger one. Non-combatants and spies blend into rival warring states as sneak forces, staying in the game simply by surviving in disguise. The element of surprise can dismantle large tanks through planted IEDs (improvised explosive devices) and disable deadly weapons, turning the bulk of conventional forces into a liability. Guerrilla warfare, a paragon of sneak tactics, is the foremost mode of battle for most small countries with poor economies.[4] Even conflicts between large nations depend heavily on stealthy submarines capable of sinking aircraft carriers positioned in relatively safe waters.

The biggest cheat of the current era comes in the form of cyberattacks. Gaining access to a rival's weapons can cripple an entire military force to a frightening extent, and a compromised security system is a warring nation's worst nightmare, as it may pose an existential risk to the population. In these zero-day attacks, malicious code lies deeply embedded and dormant until the day it is needed. Once active, it can give hackers control of everything from missile guidance systems to the navigation and handling of submarines, aircraft, and aircraft carriers.

With such high risks attached to handling the deadliest of weapons, ending the arms race becomes the only option.

In animals, the evolution of weapons is bound to reach an equilibrium. Bigger weapons start losing their advantage, the arms race stalls, and the population settles on a new size. The relative benefit of weapon growth often fails to keep up with the associated cost, especially when the resources needed to sustain it become depleted. Such circumstances can be exploited by others, sometimes leading to the extinction of the entire species (e.g., the sabertooth cat and the mammoth).

Human civilization operates in a realm of costs, resources, expenses, and payoffs at every point. The enrichment of cheats is backed primarily by innovation, the trigger for change. While mounted knights were investing in shiny, bulky armor and engraved swords, foot soldiers adopted crossbows and longbows that pierced through centuries of investment in battle gear. As long-range weapons became more sophisticated and effective, the birth of guns collapsed the race for melee weapons. Even after such a race ends, the old weapons linger in various parts of society because of their low cost and the considerable payoffs they offered at the time.

 

In the final part, we'll extend the parallels drawn in the first three parts to the current state of human technology, how it affects war in our time, and some important distinctions...

  

Comments

It was really interesting to read that evaluating whether or not you have a chance of winning a confrontation could lead to avoiding conflict altogether. It is obviously common sense, but I have the feeling that in the human world there are a lot of exceptions, haha. Ego and foolish ambition have often led people to poorly evaluate their chances and die prematurely.

I really liked the part about sneaking! I believe such strategies fall under intelligent fighting, if I'm not wrong. One may not have physical strength, but they outsmart their enemy. Speaking of which, I believe intelligent fighting is quite common in Asian combat styles. For instance, aged fighters who go up against younger ones: the veterans don't move much and use their blows efficiently. They don't hit often, but their hits are impactful. Also, female fighters can act deceptively and make a deadly weapon out of a simple pin or a fan.
