Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3160 karma · Joined · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (325)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

David Mathers makes a similar comment, and I respond, here.  Seems like there are multiple definitions of the word, and EA folks are using the narrower definition that's preferred by smart philosophers.  Whereas I had just picked up the word based on vibes, and assumed the definition by analogy to racism and sexism, which does indeed seem to be a common real-world usage of the term (eg, supported by top google results in dictionaries, wikipedia, etc).  It's unclear to me whether the original intended meaning of the word was closer to what modern smart philosophers prefer (and everybody else has been misinterpreting it since then), or closer to the definition preferred by activists and dictionaries (and it's since been somewhat "sanewashed" by philosophers), or if (as I suspect) it was mushy and unclear from the very start -- invented by savvy people who maybe deliberately intended to link the two possible interpretations of the word.

Good to know!  I haven't actually read "Animal Liberation" or etc; I've just seen the word a lot and assumed (by the seemingly intentional analogy to racism, sexism, etc) that it meant "thinking humans are superior to animals (which is bad and wrong)", in the same way that racism is often used to mean "thinking Europeans are superior to other groups (which is bad and wrong)", and sexism about men > women. Thus it always felt to me like a weird, unlikely attempt to shoehorn a niche philosophical position (Are nonhuman animals' lives of equal worth to humans?) into the same kind of socially-enforced consensus whereby things like racism are near-universally condemned.

I guess your definition of speciesism means that it's fine to think humans matter more than other animals, but only if there's a reason for it (like that we have special quality X, or we have Y percent greater capacity for something, therefore we're Y percent more valuable, or because the strong are destined to rule, or whatever).  Versus it would be speciesist to say that humans matter more than other animals "because they're human, and I'm human, and I'm sticking with my tribe".

Wikipedia's page on "speciesism" (first result when I googled the word) is kind of confusing and suggests that people use the word in different ways, with some people using it the way I assumed, and others the way you outlined, or perhaps in yet other ways:

The term has several different definitions.[1] Some specifically define speciesism as discrimination or unjustified treatment based on an individual's species membership,[2][3][4] while others define it as differential treatment without regard to whether the treatment is justified or not.[5][6] Richard D. Ryder, who coined the term, defined it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species".[7] Speciesism results in the belief that humans have the right to use non-human animals in exploitative ways which is pervasive in the modern society.[8][9][10] Studies from 2015 and 2019 suggest that people who support animal exploitation also tend to have intersectional bias that encapsulates and endorses racist, sexist, and other prejudicial views, which furthers the beliefs in human supremacy and group dominance to justify systems of inequality and oppression.

The 2nd result on a google search for the word, this Britannica article, sounds to me like it is supporting "my" definition:

Speciesism, in applied ethics and the philosophy of animal rights, the practice of treating members of one species as morally more important than members of other species; also, the belief that this practice is justified.

That makes it sound like anybody who thinks a human is more morally important than a shrimp is, by definition, speciesist, regardless of their reasons.  (Later on the article talks about something called Singer's "principle of equal consideration of interests".  It's unclear to me if this principle is supposed to imply humans == shrimps, or if it's supposed to be saying the IMO much more plausible idea that a given amount of pain-qualia is of equal badness whether it's in a human or a shrimp.  So you could say something like -- humans might have much more capacity for pain, making them morally more important overall, but every individual teaspoon of pain is the same badness, regardless of where it is.)

Third google result: this 2019 philosophy paper debating different definitions of the term -- I'm not gonna read the whole thing, but its existence certainly suggests that people disagree.  Looks like it ends up preferring to use your definition of speciesism, and uses the term "species-egalitarianists" for the hardline humans == shrimp position.

Fourth: Merriam-Webster, which has no time for all this philosophical BS (lol) -- speciesism is simply "prejudice or discrimination based on species", and that's that, apparently!

Fifth: this animal-ethics.org website -- long page, and maybe it's written in a sneaky way that actually permits multiple definitions?  But at least based on skimming it, it seems to endorse the hardline position that not giving equal consideration to animals is like sexism or racism: "How can we oppose racism and sexism but accept speciesism?" -- "A common form of speciesism that often goes unnoticed is the discrimination against very small animals." -- "But if intelligence cannot be a reason to justify treating some humans worse than others, it cannot be a reason to justify treating nonhuman animals worse than humans either."

Sixth google result is PETA, who says "Speciesism is the human-held belief that all other animal species are inferior... It’s a bias rooted in denying others their own agency, interests, and self-worth, often for personal gain."  I actually expected PETA to be the most zealously hard-line here, but this page definitely seems to be written in a sneaky way that makes it sound like they are endorsing the humans == shrimp position, while actually being compatible with your more philosophically well-grounded definition.  Eg, the website quickly backs off from the topic of humans-vs-animals moral worth, moving on to make IMO much more sympathetic points, like that it's ridiculous to think farmed animals like pigs are less deserving of moral concern than pet animals like dogs.  And they talk about how animals aren't ours to simply do absolutely whatever we please with zero moral consideration of their interests (which is compatible with thinking that animals deserve some-but-not-equal consideration).

Anyways.  Overall it seems like philosophers and other careful thinkers (such as the editors of the EA Forum wiki) would like a minimal definition, whereas perhaps the more common real-world usage is the ill-considered maximal definition that I initially assumed it had.  It's unclear to me what the intention was behind the original meaning of the term -- were early users of the word speciesism trying to imply that humans == shrimp and you're a bad person if you disagree?  Or were they making a more careful philosophical distinction, and then, presumably for activist purposes, just deliberately chose a word that was destined to lead to this confusion?

No offense meant to you, or to any of these (non-EA) animal activist sources that I just googled, but something about this messy situation is not giving me the best "truthseeking" vibes...

Excerpting from and expanding on a bit of point 1 of my reply to akash above.  Here are four philosophical areas where I feel like total hedonic utilitarianism (as reflected in common animal-welfare calculations) might be missing the mark:

  1. Something akin to "experience size" (very well-described by that recent blog post!)
  2. The importance of sapience -- if an experience of suffering is happening "all on its own", floating adrift in the universe with nobody to think "I am suffering", "I hope this will end soon", etc, does this make the suffering experience worse-than, or not-as-bad-as, human suffering where the experience is tied together with a rich tapestry of other conscious experiences?  Maybe it's incoherent to ask questions like this, or I am thinking about this in totally the wrong way?  But it seems like an important question to me.  The similarities between layers of "neurons" in image-classifying AIs, and the actual layouts of literal neurons in the human retina + visual cortex (both humans and AIs have a layer for initial inputs, then for edge-detection, then for corners and curves, then simple shapes and textures, then eventually for higher concepts and whole objects) make me think that possibly image-classifiers are having a genuine "experience of vision" (ie qualia), but an experience that is disconnected (of course) from any sense of self or sense of wellbeing-vs-suffering or wider understanding of its situation.  I think many animals might have experiences that are intermediate in various ways between humans and this hypothetical isolated-experience-of-vision that might be happening in an AI image classifier.
  3. How good of an approximation is it to linearly "add up" positive experiences when the experiences are near-identical?  ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation?  what about a single simulation on a computer with double-thick wires?  what about a simulation identical in every respect except one?  I haven't thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you'd be "double-counting".  (For a toy numerical illustration of what sublinear aggregation might look like, see the sketch after this list.)
  4. Something about "higher pleasures", or Nietzscheanism, or the complexity of value, that maybe there's more to life than just adding up positive and negative valence??  Personally, if I got to decide right now what happens to the future of human civilization, I would definitely want to try and end suffering (insomuch as this is feasible), but I wouldn't want to try and max out happiness, and certainly not via any kind of rats-on-heroin style approach.  I would rather take the opposite tack, and construct a smaller number of god-like superhuman minds, who might not even be very "happy" in any of the usual senses (ie, perhaps they are meditating on the nature of existence with great equanimity), but who in some sense are able to like... maximize the potential of the universe to know itself and explore the possibilities of consciousness.  Or something...
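To make point 3 a bit more concrete, here is a minimal toy sketch.  Everything in it (the functions `linear_total` and `sublinear_total`, the exponent `alpha`) is my own illustrative assumption, not any established welfare-range methodology -- it just contrasts ordinary linear addition with one hypothetical form of "double-counting discount" where total welfare scales sublinearly in the number of near-identical experiences:

```python
# Toy model (hypothetical): how much does the N-th near-identical experience add?
# 'alpha' is a made-up parameter; alpha = 1 recovers ordinary linear addition,
# while alpha < 1 means experiences increasingly overlap and saturate.

def linear_total(n_creatures: int, value_each: float) -> float:
    """Standard total-hedonic-utilitarian aggregation: every experience counts fully."""
    return n_creatures * value_each

def sublinear_total(n_creatures: int, value_each: float, alpha: float = 0.8) -> float:
    """Hypothetical aggregation where total welfare scales as n**alpha."""
    return value_each * (n_creatures ** alpha)

if __name__ == "__main__":
    n = 10**12   # a trillion very similar simple creatures (eg blackflies)
    v = 1.0      # arbitrary welfare units per creature
    print(f"linear sum:        {linear_total(n, v):.3e}")    # 1.000e+12
    print(f"sublinear (a=0.8): {sublinear_total(n, v):.3e}") # ~4.0e+09
```

Depending on where something like `alpha` landed (if this is even the right functional form, which I'm not at all sure of), the implied priority of enormous populations of very similar creatures could shift by orders of magnitude.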

Yeah, I wish they had clarified how many years the $100m is spread out over.  See my point 3 in reply to akash above.

  1. Yup, agreed that the arguments for animal welfare should be judged by their best proponents, and that probably the top EA animal-welfare organizations have much better views than the median random person I've talked to about this stuff.  However:
    1. I don't have a great sense of the space, though (for better or worse, I most enjoy learning about weird stuff like stable totalitarianism, charter cities, prediction markets, etc, which doesn't overlap much with animal welfare), so to some extent I am forced to just go off the vibes of what I've run into personally.
    2. In my complaint about truthseekingness, I was kinda confusedly mashing together two distinct complaints -- one is "animal-welfare EA sometimes seems too 'activist' in a non-truthseeking way", and another is more like "I disagree with these folks about philosophical questions".  That sounds really dumb since those are two very different complaints, but from the outside they can kinda shade into each other... who's tossing around wacky (IMO) welfare-range numbers because they just want an argument-as-soldier to use in favor of veganism, versus who's doing it because they disagree with me about something akin to "experience size", or the importance  of sapience, or how good of an approximation it is to linearly "add up" positive experiences when the experiences are near-identical[1].  Among those who disagree with me about those philosophical questions, who is really being a True Philosopher and following their reason wherever it leads (and just ended up in a different place than me), versus whose philosophical reasoning is a little biased by their activist commitments?  (Of course one could also accuse me of being subconsciously biased in the opposite direction!  Philosophy is hard...)
      1. All that is to say, that I would probably consider the top EA animal-welfare orgs to be pretty truthseeking (although it's hard for me to tell for sure from the outside), but I would probably still have important philosophical disagreements with them.
  2. Maybe I am making a slightly different point than most commenters -- I wasn't primarily thinking "man, this animal-welfare stuff is gonna tank EA's reputation", but rather "hey, an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility; it would be a shame to lose that if we converted all the global-health money to animal-welfare, or even if the EA movement just became primarily known for nothing but 'weird' causes like AI safety and chicken wellbeing."
    1. I get that the question is only asking about $100m, which seems like it wouldn't shift the overall balance much.  But see section 3 below.
    2. To directly answer your question about social perception: I wish we could completely discount broader social perception when allocating funding (and indeed, I'm glad that the EA movement can pull off as much disregarding-of-broader-social-perception as it already manages to do!), but I think in practice this is an important constraint that we should take seriously.  Eg, personally I think that funding research into human intelligence augmentation (via iterated embryo selection or germline engineering) seems like it should perhaps be a very high-priority cause area... if it weren't for the pesky problem that it's massively taboo and would risk doing lots of damage to the rest of the EA movement.  I also feel like there are a lot of explicitly political topics that might otherwise be worth some EA funding (for example, advocating Georgist land value taxes), but which would pose similar risk of politicizing the movement or whatever.
    3. I'm not sure whether the public would look positively or negatively on the EA farmed-animal-welfare movement.  As you said, veganism seems to be perceived negatively and treating animals well seems to be perceived positively.  Some political campaigns (eg for cage-free ballot propositions), admittedly designed to optimize positive perception, have passed with big margins.  (But other movements, like for improving the lives of broiler chickens, have been less successful?)  My impression is that the public would be pretty hostile to anything in the wild-animal-welfare space (which is a shame because I, a lover of weird niche EA stuff, am a big fan of wild animal welfare).  Alternative proteins have become politicized enough that Florida was trying to ban cultured meat?  It seems like a mixed bag overall; roughly neutral or maybe slightly negative, but definitely not like intelligence augmentation which is guaranteed-hugely-negative perception.  But if you're trading off against global health, then you're losing something strongly positive.
  3. "Could you elaborate on why you think if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)?" -- well, the question was about shifting $100m from animal welfare to GHD, so it does quite literally come at the expense (namely, a $100m expense) of GHD!  As for whether this is a big shift or a tiny drop in the bucket, depends on a couple things:
    - Does this hypothetical $100m get spent all at once, and then we hold another vote next year?  Or do we spend like $5m per year over the next 20 years?
    - Is this the one-and-only final vote on redistributing the EA portfolio?  Or maybe there is an emerging "pro-animal-welfare, anti-GHD" coalition who will return for next year's question, "Should we shift $500m from GHD to animal welfare?", and the question the year after that...
    I would probably endorse a moderate shift of funding, but not an extreme one that left GHD hollowed out.  Based on this chart from 2020 (idk what the situation looks like now in 2024), taking $100m per year from GHD would probably be pretty devastating to GHD, and AW might not even have the capacity to absorb the flood of money.  But moving $10m each year over 10 years would be a big boost to AW without changing the overall portfolio hugely, so I'd be more amenable to it.
    [Chart: Total Funding by Cause Area — EA Forum]
  1. ^ (ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation?  what about a single simulation on a computer with double-thick wires?  what about a simulation identical in every respect except one?  I haven't thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you'd be "double-counting".)

The animal welfare side of things feels less truthseeking, more activist, than other parts of EA.  Talk of "speciesism" that implies animals' and humans' lives are of ~equal value seems farfetched to me.  People frequently do things like taking Rethink's moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism which I think is useful but not ultimately correct), and just treat the numbers as if they are unvarnished truth.

If I considered only the immediate, direct effects of $100m spent on animal welfare versus global health, I would probably side with animal welfare despite the concerns above.  But I'm also worried about the relative lack of ripple / flow-through effects from animal welfare work versus global health interventions -- both positive longer-term effects on the future of civilization generally, and more near-term effects on the sustainability of the EA movement and social perceptions of EA.  Going all-in on animal welfare at the expense of global development seems bad for the movement.

I’d especially welcome criticism from folks not interested in human longevity. If your priority as a human being isn’t to improve healthcare or to reduce catastrophic/existential risks, what is it? Why?

Personally, I am interested in longevity and I think governments (and other groups, although perhaps not EA grantmakers) should be funding more aging research.  Nevertheless, some criticism!

  • I think there are a lot of reasonable life goals other than improving healthcare or reducing x-risks.  These things are indeed big, underrated threats to human life.  But the reason why human life is so worthwhile and in need of protection, is because life is full of good experiences.  So, trying to create more good experiences (and conversely, minimize suffering / pain / sorrow / boredom etc) is also clearly a good thing to do.  "Create good experiences" covers a lot of things, from mundane stuff like running a restaurant that makes tasty food or developing a fun videogame, to political crusades to reduce animal suffering or make things better in developing countries or prevent wars and recessions or etc, to anti-aging-like moonshot tech projects like eliminating suffering using genetic engineering or trying to build Neuralink-style brain-computer interfaces or etc.  Basically, I think the Bryan Johnson style "the zeroth rule is don't-die" messaging where antiaging becomes effectively the only thing worth caring about, is reductive and will probably seem off-putting to many people.  (Even though, personally, I totally see where you are coming from and consider longevity/health a key personal priority.)
  • This post bounces around somewhat confusingly among a few different justifications for / defenses of aging research.  I think this post (or future posts) would be more helpful if it had a more explicit structure, acknowledging that there are many reasons one could be skeptical of aging research.  Here is an example outline:
    • Some people don't understand transhumanist values at all, and think that death is essentially good because "death gives life meaning" or similar silliness.
      • Other people will kinda-sorta agree that death is bad, but also feel uncomfortable about the idea of extending lifespans -- people are often kinda confused about their own feelings/opinions here simply because they haven't thought much about it.
    • Some people totally get that death is bad, insofar as they personally would enjoy living much longer, but they don't think that solving aging would be good from an overall societal perspective.
      • Some people think that a world of extended longevity would have various bad qualities that would mean the cure for aging is worse than the disease -- overpopulation, or stagnant governments/culture (including perpetually stable dictatorships), or just a bunch of dependent old people putting an unsustainable burden on a small number of young workers, or conversely that if people never got to retire this would literally be a fate worse than death.  (I think these ideas are mostly silly, but they are common objections.  Also, I do think it would be valuable to try and explore/predict what a world of enhanced longevity would look like in more detail, in terms of the impact on culture / economy / governance / geopolitics / etc.  Yes, the common objections are dumb, and minor drawbacks like overpopulation shouldn't overshadow the immense win of curing aging.  But I would still be very curious to know what a world of extended longevity would look like -- which problems would indeed get worse, and which would actually get better?)
        • Most of this category of objections is just vague vibes, but a subcategory here is people actually running the numbers and worrying that an increase in elderly people will bankrupt Medicare, or whatever -- this is why, when trying to influence policy and public research funding decisions, I think it's helpful to address this by pointing out that slowing aging (rather than treating disease) would actually be positive for government budgets and the economy, as you do in the post.  (Even though in the grand scheme of things, it's a little absurd to be worried about whether triumphing over death will have a positive or negative effect on some CBO score, as if that should be the deciding factor of whether to cure aging!!)
      • Other people seem to think that curing death would be morally neutral from an external top-down perspective -- if in 2024 there are 8 billion happy people, and in  2100 there are 8 billion happy people, does it really matter whether it's the same people or new ones?  Maybe the happiness is all that counts.  (I have a hard time understanding where people are coming from when they seem to sincerely believe this 100%, but lots of philosophically-minded people feel this way, including many utilitarian EA types.)  More plausibly, people won't be 100% committed to this viewpoint, but they'll still feel that aging and death is, in some sense, less of an ongoing catastrophe from a top-down civilization-wide perspective than it is for the individuals making up that civilization.  (I understand and share this view.)
    • Some people agree that solving aging would be great for both individuals and society, but they just don't think that it's tractable to work on aging.  IMO this has been the correct opinion for the vast majority of human history, from 10,000 B.C. up until, idk, 2005 or something?  So I don't blame people for failing to notice that maybe, possibly, we are finally starting to make some progress on aging after all.  (Imagine if I wrote a post arguing for human expansion to other star systems, and eventually throughout the galaxy, and made lots of soaring rhetorical points about how this is basically the ultimate purpose of human civilization.  In a certain sense this is true, but also we obviously lack the technology to send colony-ships to even the nearest stars, so what's the point of trying to convince people who think civilization should stay centered on the Earth?)
      • I really like the idea of ending aging, so I get excited about various bits of supposed progress (rapamycin?  senescent cell therapy?  idk).  Many people don't even know about these small promising signs (eg the ongoing mouse longevity study).
      • Some people know about those small promising signs, but still feel uncertain whether these current ideas will pan out into real benefits for healthy human lifespans.  Reasonable IMO.
      • Even supposing that something like rapamycin, or some other random drug, indeed extends lifespan by 15% or something -- that would be great, but what does that tell me about the likelihood that humanity will be able to consistently come up with OTHER, bigger longevity wins?  It is a small positive update, but IMO there is potentially a lot of space between "we tried 10,000 random drugs and found one that slows the progression of alzheimers!" and "we now understand how alzheimers works and have developed a cure".  Might be the same situation with aging.  So, even getting some small wins doesn't necessarily mean that the idea of "curing aging" is tractable, especially if we are operating without much of a theory of how aging works.  (Seems plausible to me that humanity might be able to solve, like, 3 of the 5 major causes of aging, and lifespan goes up 25%, but then the other 2 are either impossible to fix for fundamental biological reasons, or we never manage to figure them out.)
      • A lot of people who appear to be in the "death is good" / "death isn't a societal problem, just an individual problem" categories above, would actually change their tune pretty quickly if they started believing that making progress on longevity was actually tractable.  So I think the tractability objections are actually more important to address than it seems, and the earlier stuff about changing hearts and minds on the philosophical questions is actually less important.

Probably instead of one giant comprehensive mega-post addressing all possible objections, you should tackle each area in its own more bite-sized post -- to be fancy, maybe you could explicitly link these together in a structured way, like Holden Karnofsky's "Most Important Century" blog posts.


I don't really know anything about medicine or drug development, so I can't give a very detailed breakdown of potential tractability objections, and indeed I personally don't know how to feel about the tractability of anti-aging.

Of course, to the extent that your post is just arguing "governments should fund this area more, it seems obviously under-resourced", then that's a pretty low bar, and your graph of the NIH's painfully skewed funding priorities basically makes the entire argument for you.  (Although I note that the graph seems incorrect??  Shouldn't $500M be much larger than one row of pixels??  Compare to the nearby "$7B" figures; the $500M should of course be 1/14th as tall...)  For this purpose, it's fine IMO to argue "aging is objectively very important, it doesn't even matter how non-tractable it is, SURELY we ought to be spending more than $500m/year on this, at the very least we should be spending more than we do on Alzheimers which we also don't understand but is an objectively smaller problem."

But if you are trying to convince venture-capitalists to invest in anti-aging with the expectation of maybe actually turning a profit, or win over philanthropists who have other pressing funding priorities, then going into more detail on tractability is probably necessary.

You might be interested in some of the discussion that you can find at this tag: https://forum.effectivealtruism.org/topics/refuges

People have indeed imagined creating something like a partially-underground town, which people would already live in during daily life, precisely to address the kinds of problems you describe (working out various kinks, building governance institutions ahead of time, etc).  But on the other hand, it sounds expensive to build a whole city (and would you or I really want to uproot our lives and move to a random tiny town in the middle of nowhere just to help be the backup plan in case of nuclear war?), and it's so comparatively cheap to just dig a deep hole somewhere and stuff a nuclear reactor + lots of food + whatever else inside, which after all will probably be helpful in a catastrophe.

In reality, if the planet was to be destroyed by nuclear holocaust, a rogue comet, a lethal outbreak none of these bunkers would provide the sanctity that is promised or the capability to ‘rebuild’ society. 

I think your essay does a pretty good job of pointing out flaws with the concept of bunkers in the Fallout TV + videogame universe.  But I think that in real life, most actual bunkers (eg constructed by militaries, the occasional billionaire, cities like Seoul which live in fear of enemy attack or natural disasters, etc) aren't intended to operate indefinitely as self-contained societies that could eventually restart civilization, so naturally they would fail at that task.  Instead, they are just supposed to keep people alive through an acute danger period of a few hours to weeks (ie, while a hurricane is happening, or while an artillery barrage is ongoing, or while the local government is experiencing a temporary period of anarchy / gang rule / rioting, or while radiation and fires from a nearby nuclear strike dissipate).  Then, in 9 out of 10 cases, probably the danger passes and some kind of normal society resumes (FEMA shows up after the hurricane, or a new stable government eventually comes to power, etc -- even most nuclear wars probably wouldn't result in the comically barren and devastated world of the Fallout videogames).  I don't think militaries or billionaires are necessarily wasting their money; they're just buying insurance against medium-scale catastrophes, and admitting that there's nothing they can do about the absolute worst-case largest-scale catastrophes.

Few people have thought of creating Fallout-style indefinite-civilizational-preservation bunkers in real life, and to my knowledge nobody has actually built one.  But presumably if anyone did try this in real life (which would involve spending many millions of dollars, lots of detailed planning, etc), they would think a little harder and produce something that makes a bit more sense than the bunkers from the Fallout comedy videogames, and indeed do something like the partially-underground-city concept.
