I might elaborate on this at some point, but I thought I'd write down some general reasons why I'm more optimistic than many EAs on the risk of human extinction from AI. I'm not defending these reasons here; I'm mostly just stating them.
Skepticism of foom: I think it's unlikely that a single AI will take over the whole world and impose its will on everyone else. I think it's more likely that millions of AIs will be competing for control over the world, much as millions of humans currently compete for control over the world. Power or wealth might be very unequally distributed in the future, but I find it unlikely that it will be distributed so unequally that there will be only one relevant entity with power. In a non-foomy world, AIs will be constrained by norms and laws. Absent severe misalignment among almost all the AIs, I think these norms and laws will likely include a general prohibition on murdering humans, and there won't be a particularly strong motive for AIs to murder every human either.
Skepticism that value alignment is super-hard: I haven't seen any strong arguments that value alignment is very hard, in contrast to the straightforward empirical evidence that e.g. GPT-4 seems to be honest, kind, and helpful after relatively little effort. Most conceptual arguments I've seen for why we should expect value alignment to be super-hard rely on strong theoretical assumptions that I am highly skeptical of. I have yet to see significant empirical successes from these arguments. I feel like many of these conceptual arguments would, in theory, apply to humans, and yet human children are generally value aligned by the time they reach young adulthood (at least, value aligned enough to avoid killing all the old people). Unlike humans, AIs will be explicitly trained to be benevolent, and we will have essentially full control over their training process. This provides much reason for optimism.
Belief in a strong endogenous response to AI: I think most people will generally be quite fearful of AI and will demand that we are very cautious while deploying the systems widely. I don't see a strong reason to expect companies to remain unregulated and rush to cut corners on safety, absent something like a world war that presses people to develop AI as quickly as possible at all costs.
Not being a perfectionist: I don't think we need our AIs to be perfectly aligned with human values, or perfectly honest, similar to how we don't need humans to be perfectly aligned and honest. Individual humans are usually quite selfish, frequently lie to each other, and are often cruel, and yet the world mostly gets along despite this. This is true even when there are vast differences in power and wealth between humans. For example some groups in the world have almost no power relative to the United States, and residents in the US don't particularly care about them either, and yet they survive anyway.
Skepticism of the analogy to other species: it's generally agreed that humans dominate the world at the expense of other species. But that's not surprising, since humans evolved independently of other animal species. And we can't really communicate with other animal species, since they lack language. I don't think AI is analogous to this situation. AIs will mostly be born into our society, rather than being created outside of it. (Moreover, even in this very pessimistic analogy, humans still spend >0.01% of our GDP on preserving wild animal species, and the vast majority of animal species have not gone extinct despite our giant influence on the natural world.)
ETA: feel free to ignore the below, given your caveat, though you may find it helpful to have some early objections on record if you later choose to write an expanded form of any of these arguments.
Correct me if I'm wrong, but it seems like most of these reasons boil down to not expecting AI to be superhuman in any relevant sense (since if it is, effectively all of them break down as reasons for optimism)? To wit:
Resource allocation is relatively equal (and relatively free of violence) among humans because even humans that don't very much value the well-being of others don't have the power to actually expropriate everyone else's resources by force. (We have evidence of what happens when those conditions break down to any meaningful degree; it isn't super pretty.)
I do not think GPT-4 is meaningful evidence about the difficulty of value alignment. In particular, the claim that "GPT-4 seems to be honest, kind, and helpful after relatively little effort" seems to be treating GPT-4's behavior as meaningfully reflecting its internal preferences or motivations, which I think is "not even wrong". I think it's extremely unlikely that GPT-4 has preferences over world states in a way that most humans would consider meaningful, and in the very unlikely event that it does, those preferences almost certainly aren't centrally pointed at being honest, kind, and helpful.
re: endogenous response to AI - I don't see how this is relevant once you have ASI. To the extent that it might be relevant, it's basically conceding the argument: that the reason we'll be safe is that we'll manage to avoid killing ourselves by moving too quickly. (Note that we are currently moving at pretty close to max speed, so this is a prediction that the future will be different from the past. One that some people are actively optimizing for, but also one that other people are optimizing against.)
re: perfectionism - I would not be surprised if many current humans, given superhuman intelligence and power, created a pretty terrible future. Current power differentials do not meaningfully let individual players flip every single other player the bird at the same time. Assuming that this will continue to be true is again assuming the conclusion (that AI will not be superhuman in any relevant sense). I also feel like there's an implicit argument here about how value isn't fragile that I disagree with, but I might be reading into it.
I'm not totally sure what analogy you're trying to rebut, but I think that human treatment of animal species, as a piece of evidence for how we might be treated by future AI systems that are analogously more powerful than we are, is extremely negative, not positive. Human efforts to preserve animal species are a drop in the bucket compared to the casual disregard with which we optimize over them and their environments for our benefit. I'm sure animals sometimes attempt to defend their territory against human encroachment. Has the human response to this been to shrug and back off? Of course, there are some humans who do care about animals having fulfilled lives by their own values. But even most of those humans do not spend their lives tirelessly optimizing for their best understanding of the values of animals.
Correct me if I'm wrong, but it seems like most of these reasons boil down to not expecting AI to be superhuman in any relevant sense
No, I certainly expect AIs will eventually be superhuman in virtually all relevant respects.
Resource allocation is relatively equal (and relatively free of violence) among humans because even humans that don't very much value the well-being of others don't have the power to actually expropriate everyone else's resources by force.
Can you clarify what you are saying here? If I understand you correctly, you're saying that humans have relatively little wealth inequality because there's relatively little inequality in power between humans. What does that imply about AI?
I think there will probably be big inequalities in power among AIs, but I am skeptical of the view that there will be only one (or even a few) AIs that dominate over everything else.
I do not think GPT-4 is meaningful evidence about the difficulty of value alignment.
I'm curious: does that mean you also think that alignment research performed on GPT-4 is essentially worthless? If not, why?
I think it's extremely unlikely that GPT-4 has preferences over world states in a way that most humans would consider meaningful, and in the very unlikely event that it does, those preferences almost certainly aren't centrally pointed at being honest, kind, and helpful.
I agree that GPT-4 probably doesn't have preferences in the same way humans do, but it sure appears to be a limited form of general intelligence, and I think future AGI systems will likely share many underlying features with GPT-4, including, to some extent, cognitive representations inside the system.
I think our best guess of future AI systems should be that they'll be similar to current systems, but scaled up dramatically, trained on more modalities, with some tweaks and post-training enhancements, at least if AGI arrives soon. Are you simply skeptical of short timelines?
re: endogenous response to AI - I don't see how this is relevant once you have ASI.
To be clear, I expect we'll get AI regulations before we get to ASI. I predict that regulations will increase in intensity as AI systems get more capable and start having a greater impact on the world.
Note that we are currently moving at pretty close to max speed, so this is a prediction that the future will be different from the past.
Every industry in history initially experienced little to no regulation. However, after people became more acquainted with the industry, regulations on the industry increased. I expect AI will follow a similar trajectory. I think this is in line with historical evidence, rather than contradicting it.
re: perfectionism - I would not be surprised if many current humans, given superhuman intelligence and power, created a pretty terrible future. Current power differentials do not meaningfully let individual players flip every single other player the bird at the same time.
I agree. If you turned a random human into a god, or a random small group of humans into gods, then I would be pretty worried. However, in my scenario, there aren't going to be single AIs that suddenly become gods. Instead, there will be millions of different AIs, and the AIs will smoothly increase in power over time. During this time, we will be able to experiment and do alignment research to see what works and what doesn't for making AIs safe. I expect AI takeoff will be fairly diffuse, and AIs will probably be respectful of norms and laws because no single AI can take over the world on its own. Of course, the way I think about the future could be wrong on a lot of specific details, but I don't see a strong reason to doubt the basic picture I'm presenting, as of now.
My guess is that your main objection here is that you think foom will happen, i.e. there will be a single AI that takes over the world and imposes its will on everyone else. Can you elaborate more on why you think that will happen? I don't think it's a straightforward consequence of AIs being smarter than humans.
I'm not totally sure what analogy you're trying to rebut, but I think that human treatment of animal species, as a piece of evidence for how we might be treated by future AI systems that are analogously more powerful than we are, is extremely negative, not positive.
My main argument is that we should reject the analogy itself. I'm not really arguing that the analogy provides evidence for optimism, except in a very weak sense. I'm just saying: AIs will be born into and shaped by our culture; that's quite different than what happened between animals and humans.
Individual humans are usually quite selfish, frequently lie to each other, and are often cruel, and yet the world mostly gets along despite this. This is true even when there are vast differences in power and wealth between humans. For example some groups in the world have almost no power relative to the United States, and residents in the US don't particularly care about them either, and yet they survive anyway.
Okay so these are two analogies: individual humans & groups/countries.
First off, "surviving" doesn't seem like the right thing to evaluate, more like "significant harm"/"being exploited "
Can you give some examples where individual humans have a clear decisive strategic advantage (i.e. very low risk of punishment) and the low-power individual isn't at a high risk of serious harm? Because the examples I can think of are all pretty bad: dictators, slaveholders, husbands in highly patriarchal societies... Sexual violence is extremely prevalent and pretty much always occurs in a context with a large power difference.
I find the US example unconvincing, because I find it hard to imagine the US benefiting more from aggressive use of force than from trade and soft economic exploitation. The US doesn't have the power to successfully occupy countries anymore. When there were bigger power differences due to technology, we had the age of colonialism.
Can you give some examples where individual humans have a clear decisive strategic advantage (i.e. very low risk of punishment) and the low-power individual isn't at a high risk of serious harm?
Why are we assuming a low risk of punishment? Risk of punishment depends largely on social norms and laws, and I'm saying that AIs will likely adhere to a set of social norms.
I think the central question is whether these social norms will include the norm "don't murder humans". I think such a norm will probably exist, unless almost all AIs are severely misaligned. I think severe misalignment is possible; one can certainly imagine it happening. But I don't find it likely, since people will care a lot about making AIs ethical, and I'm not yet aware of any strong reasons to think alignment will be super-hard.
(Clarification about my views in the context of the AI pause debate)
I'm finding it hard to communicate my views on AI risk. I feel like some people are responding to the general vibe they think I'm giving off rather than the actual content. Other times, it seems like people will focus on a narrow snippet of my comments/post and respond to it without recognizing the context. For example, one person interpreted me as saying that I'm against literally any AI safety regulation. I'm not.
For full disclosure, my views on AI risk can be loosely summarized as follows:
I think AI will probably be very beneficial for humanity.
Nonetheless, I think that there are credible, foreseeable risks from AI that could do vast harm, and we should invest heavily to ensure these outcomes don't happen.
I also don't think technology is uniformly harmless. Plenty of technologies have caused net harm. Factory farming is a giant net harm that might have even made our entire industrial civilization a mistake!
I'm not blindly against regulation. I think all laws can and should be viewed as forms of regulations, and I don't think it's feasible for society to exist without laws.
That said, I'm also not blindly in favor of regulation, even for AI risk. You have to show me that the benefits outweigh the harm.
I am generally in favor of thoughtful, targeted AI regulations that align incentives well, and reduce downside risks without completely stifling innovation.
I'm open to extreme regulations and policies if or when an AI catastrophe seems imminent, but I don't think we're in such a world right now. I'm not persuaded by the arguments that people have given for this thesis, such as Eliezer Yudkowsky's AGI ruin post.
I find it slightly strange that EAs aren't emphasizing semiconductor investments more given our views about AI.
(Maybe this is because of a norm against giving investment advice? This would make sense to me, except that there's also a cultural norm against criticizing charities that people donate to, and EAs seemed to blow right through that one.)
I commented on this topic last year. Later, I was informed that some people have been thinking about this and acting on it to some extent, but overall my impression is that there's still a lot of potential value left on the table. I'm really not sure though.
Since I might be wrong and I don't really know what the situation is with EAs and semiconductor investments, I thought I'd just spell out the basic argument, and see what people say:
Credible models of economic growth predict that, if AI can substitute for human labor, then we should expect the year-over-year world economic growth rate to dramatically accelerate, probably to at least 30% and maybe to rates as high as 300% or 3000%.
This rate of growth should be sustainable for a while before crashing, since physical limits appear to permit far more economic value than we're currently generating. For example, at our current rate of approximately 5.6 megajoules per dollar, capturing the yearly energy output of the sun would allow us to generate an economy worth $6.8*10^25, more than 100 billion times the size of our current economy. (A rough sketch of this arithmetic appears below the list.)
If AI drives this economic productivity explosion, it seems likely that the companies manufacturing computer hardware (i.e. semiconductor companies) will benefit greatly in the midst of all of this. Very little of this seems priced in right now, although I admit I haven't done any rigorous calculations to prove that.
I agree it's hard to know who will capture most of the value from the AI revolution, but semiconductor companies, and in particular the companies responsible for designing and manufacturing GPUs, seem like a safer bet than almost anyone else.
I agree it's possible that the existing public companies will be unseated by private competitors and so investing in the public companies risks losing everything, but my understanding is that semiconductor companies have a large moat and are hard to unseat.
I agree it's possible that the government will nationalize semiconductor production, but they won't necessarily steal all the profits from investors before doing so.
I agree that EAs should avoid being too heavily invested in one single asset (e.g. crypto) but how much is EA actually invested in semiconductor stocks? Is this actually a concern right now, or is it just a hypothetical concern? Also, investing in Anthropic seems like a riskier bet since it's less diversified than a broad semiconductor portfolio, and could easily go down in flames.
I agree that AI might hasten the arrival of some sort of post-property-rights system of governance in which investments don't have any meaning anymore, but I haven't seen any strong arguments for this. It seems more likely that e.g. tax rates will go way up, but people still own property.
In general, I agree that there are many uncertainties that this question rides on, but the same is true of anything else EA does. Any particular donation to AI safety research, for example, is always uncertain and might be a waste of time.
Investing in semiconductor companies plausibly accelerates AI a little bit, which is bad to the extent you think acceleration increases x-risk, but if EA gets a huge payout by investing in these companies, then that might cancel out the downsides from accelerating AI?
Another thing I just thought of is that maybe there are good tax reasons to not switch EA investments to semiconductor stocks, which I think would be fair, and I'm not an expert in any of that stuff.
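To make the growth-rate and energy points above concrete, here is a minimal back-of-the-envelope sketch in Python. The 5.6 MJ-per-dollar conversion comes from the list; the solar luminosity, the length of a year, the roughly $100 trillion figure for current gross world product, and the 30-year horizon are rough values I'm supplying for illustration, not figures from the original argument.

```python
# Back-of-the-envelope sketch: how large an economy a given amount of captured
# energy could support, at today's energy intensity of GDP (5.6 MJ per dollar).
# All constants besides the MJ-per-dollar rate are my own rough assumptions.

MJ_PER_DOLLAR = 5.6          # energy intensity of output, figure from the post
SOLAR_LUMINOSITY_W = 3.8e26  # watts, standard astrophysical value (assumed)
SECONDS_PER_YEAR = 3.15e7
CURRENT_GWP_DOLLARS = 1e14   # ~$100 trillion, rough figure for comparison

def economy_supported(captured_energy_joules: float) -> float:
    """Dollars of output supportable by the given energy at 5.6 MJ per dollar."""
    return captured_energy_joules / (MJ_PER_DOLLAR * 1e6)

# One year of the Sun's total output, as a loose upper bound on captured energy.
yearly_solar_output_j = SOLAR_LUMINOSITY_W * SECONDS_PER_YEAR
max_economy = economy_supported(yearly_solar_output_j)

print(f"One year of total solar output:          {yearly_solar_output_j:.2e} J")
print(f"Implied economy size at 5.6 MJ/$:        ${max_economy:.2e}")
print(f"Multiple of current gross world product: {max_economy / CURRENT_GWP_DOLLARS:.2e}x")

# The growth-rate point: compounding at 30%/yr versus the historical ~3%/yr.
for rate in (0.03, 0.30):
    print(f"Growth at {rate:.0%}/yr for 30 years: {(1 + rate) ** 30:,.1f}x")
```

The printed multiple is very sensitive to how much of the Sun's output one assumes is actually capturable, so treat it as an order-of-magnitude illustration rather than a check on the specific figures above.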
I mostly agree with this (and did also buy some semiconductor stock last winter).
Besides plausibly accelerating AI a bit (which I think is a tiny effect at most unless one plans to invest millions), a possible drawback is motivated reasoning (e.g., one may feel less inclined to think critically of the semi industry, and/or less inclined to favor approaches to AI governance that reduce these companies' revenue). This may only matter for people who work in AI governance, and especially compute governance.
I hold a few core ethical ideas that are extremely unpopular: the idea that we should treat the natural suffering of animals as a grave moral catastrophe, the idea that old age and involuntary death is the number one enemy of humanity, and the idea that we should treat so-called farm animals with a very high level of compassion.
Given the unpopularity of these ideas, you might be tempted to think that the reason they are unpopular is that they are exceptionally counterintuitive. But is that the case? Do you really need a modern education and philosophical training to understand them? Perhaps I shouldn't blame people for not taking seriously things they lack the background to understand.
Yet, I claim that these ideas are not actually counterintuitive: they are the kind of thing you would come up with on your own if you had not been conditioned by society to treat them as abnormal. A thoughtful 15-year-old who was somehow educated without human culture would have no trouble taking them seriously. Do you disagree? Let's put my theory to the test.
To test my theory that caring about wild animal suffering, aging, and animal mistreatment is exactly what you would care about if you were uncorrupted by our culture, we need look no further than the Bible.
It is known that the book of Genesis was written in ancient times, before anyone knew anything of modern philosophy, contemporary norms of debate, science, or advanced mathematics. The writers of Genesis wrote of a perfect paradise, the one that we fell from after we were corrupted. They didn't know what really happened, of course, so they made stuff up. What is that perfect paradise that they made up?
From Answers in Genesis, a creationist website,
Death is a sad reality that is ever present in our world, leaving behind tremendous pain and suffering. Tragically, many people shake a fist at God when faced with the loss of a loved one and are left without adequate answers from the church as to death’s existence. Unfortunately, an assumption has crept into the church which sees death as a natural part of our existence and as something that we have to put up with as opposed to it being an enemy.
Since creationists believe that humans are responsible for all the evil in the world, they do not make the usual excuse for evil that it is natural and therefore necessary. They openly call death an enemy, that which is to be destroyed.
Later,
[If] both humans and animals were originally vegetarian, then death could not have been a part of God’s Creation. Even after the Fall the diet of Adam and Eve was vegetarian (Genesis 3:17–19). It was not until after the Flood that man was permitted to eat animals for food (Genesis 9:3). The Fall in Genesis 3 would best explain the origin of carnivorous animal behavior.
So in the garden, animals did not hurt one another. Humans did not hurt animals. But this article even goes further, and debunks the infamous "plants tho" objection to vegetarianism,
Plants neither feel pain nor die in the sense that animals and humans do as “Plants are never the subject of חָיָה ” (Gerleman 1997, p. 414). Plants are not described as “living creatures” as humans, land animals, and sea creature are (Genesis 1:20–21, 24 and 30; Genesis 2:7; Genesis 6:19–20 and Genesis 9:10–17), and the words that are used to describe their termination are more descriptive such as “wither” or “fade” (Psalm 37:2; 102:11; Isaiah 64:6).
In God's perfect creation, the one invented by uneducated folks thousands of years ago, we can see that wild animal suffering did not exist, nor did death from old age, or mistreatment of animals.
In this article, I find something so close to my own morality that it's striking a creationist, of all people, would write something so elegant,
Most animal rights groups start with an evolutionary view of mankind. They view us as the last to evolve (so far), as a blight on the earth, and the destroyers of pristine nature. Nature, they believe, is much better off without us, and we have no right to interfere with it. This is nature worship, which is a further fulfillment of the prophecy in Romans 1 in which the hearts of sinful man have traded worship of God for the worship of God’s creation.
But as people have noted for years, nature is “red in tooth and claw.” Nature is not some kind of perfect, pristine place.
Unfortunately, it continues
And why is this? Because mankind chose to sin against a holy God.
I contend it doesn't really take a modern education to invent these ethical notions. The truly hard step is accepting that evil is bad even if you aren't personally responsible.
I have now posted, as a comment on LessWrong, my summary of some recent economic forecasts and whether they are underestimating the impact of the coronavirus. You can help me by critiquing my analysis.
A trip to Mars that brought back human passengers also has the chance of bringing back microbial Martian passengers. This could be an existential risk if microbes from Mars harm our biosphere in a severe and irreparable manner.
From Carl Sagan in 1973, "Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage - a Martian plague, the twist in the plot of H. G. Wells' War of the Worlds, but in reverse."
Note that the microbes would not need to have independently arisen on Mars. It could be that they were transported to Mars from Earth billions of years ago (or the reverse occurred). While this issue has been studied by some, my impression is that effective altruists have not looked into this issue as a potential source of existential risk.
One line of inquiry could be to determine whether there are any historical parallels on Earth that could give us insight into whether Mars-to-Earth contamination would be harmful. The introduction of an invasive species into some region loosely mirrors this scenario, but much tighter parallels might still exist.
Since Mars missions are planned for the 2030s, this risk could arrive earlier than essentially all the other existential risks that EAs normally talk about.
See this Wikipedia page for more information: https://en.wikipedia.org/wiki/Planetary_protection