Neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, multiplied by the relative populations in those situations.
I took the argument to mean that artificial sentience will outweigh natural sentience (e.g. animals). You seem to be implying that the relevant question is whether there will be more human sentience or more animal sentience, but I'm not quite sure why. I would predict that most of the sentience that will exist will be neither human nor animal.
I also expect artificial sentience to vastly outweigh natural sentience in the long run, though it's worth pointing out that we might still expect focusing on animals to be worthwhile if it widens people's moral circles.
I think we were both confused. But based on what Greg Colbourn said, my point still stands, albeit to a weaker extent.
I don't think this is a good summary for an important reason: I think the Wuhan Coronavirus is a few orders of magnitude more deadly than a normal seasonal flu. The mortality estimates for the Wuhan Coronavirus are in the single digit percentages, whereas this source tells me that the seasonal flu mortality rate is about 0.014%. [ETA: Sorry, it's closer to 0.1%, see Greg Colbourn's comment].
Current case-fatality estimates are likely to underestimate the eventual mortality rate, since the disease has not yet run its course in most of the people currently infected.
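A small sketch of why this lag matters (the numbers below are invented for illustration, not actual outbreak data): dividing deaths by all confirmed cases counts many infections whose outcome is still unknown, while dividing deaths by resolved cases ignores them. The eventual rate typically falls between the two, since severe cases tend to resolve (in death) faster than mild ones resolve (in recovery).

```python
# Hypothetical mid-outbreak figures (made up for illustration).
deaths = 80
recoveries = 120
active_cases = 2800  # infections whose outcome is not yet known
confirmed = deaths + recoveries + active_cases

# Naive CFR: divides by all confirmed cases, most still unresolved,
# so it understates the eventual rate.
naive_cfr = deaths / confirmed

# Resolved-case CFR: only cases with a known outcome. This tends to
# overstate the rate mid-outbreak, because deaths resolve faster
# than recoveries.
resolved_cfr = deaths / (deaths + recoveries)

print(f"naive CFR:    {naive_cfr:.1%}")     # 2.7%
print(f"resolved CFR: {resolved_cfr:.1%}")  # 40.0%
```

The two figures bracket the eventual case-fatality rate; with most cases still active, the gap between them can be enormous, which is why early death-rate estimates should be treated cautiously.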
I'll add information about incubation period to the post.
An s-risk could occur via a moral failure, which could happen even if we knew how to align our AIs.
But- you won't be able to copy our generator by doing that, the thing that created those novel predictions
I would think this might be our crux (other than perhaps the existence of qualia themselves). I imagine any predictions you produce can be adequately captured in a mathematical framework that makes no reference to qualia as ontologically primitive. And if I had such a framework, then I would have access to the generator, full stop. Adding qualia doesn't make the generator any better -- it just adds unnecessary mental stuff that isn't actually doing anything for the theory.
I am not super confident in anything I said here, although that's mostly because I have an outside view that tells me consciousness is hard to get right. My inside view tells me that I am probably correct, because I just don't see how positing mental stuff that's separate from mathematical law can add anything whatsoever to a physical theory.
I'm happy to talk more about this some day, perhaps in person. :)
Thanks Matthew! I agree issues of epistemology and metaphysics get very sticky very quickly when speaking of consciousness.
My basic approach is 'never argue metaphysics when you can argue physics'
My main claim was that by arguing physics alone, I will never come to agree with your theory, because your theory assumes the existence of elementary stuff that I don't believe in. So I don't understand how this approach really helps.
Would you be prepared to say the same about many worlds vs. consciousness-causes-collapse theories? (Let's assume that we have no experimental data which distinguishes the two theories.)
One way to frame this is that at various points in time, it was completely reasonable to be a skeptic about modeling things like lightning, static, magnetic lodestones, and such, mathematically.
The problem with the analogy to magnetism and electricity is that it fails to match the pattern of my argument. In order to incorporate magnetism into our mathematical theory of physics, we merely added more mathematical parts. In this, I see a fundamental difference between the approach you take and the approach taken by physicists when they admit the existence of new forces or particles.
In particular, your theory of consciousness does not just do the equivalent of add a new force, or mathematical law that governs matter, or re-orient the geometry of the universe. It also posits that there is a dualism in physical stuff: that is, that matter can be identified as having both mathematical and mental properties.
Even if your theory did result in new predictions, I fail to see why I can't just leave out the mental interpretation of it, and keep the mathematical bits for myself.
To put it another way, if you are saying that symmetry can be shown to be the same as valence, then I feel I can always provide an alternative explanation that leaves out valence as a first-class object in our ontology. If you are merely saying that symmetry is definitionally equivalent to valence, then your theory is vacuous because I can just delete that interpretation from my mathematical theory and emerge with equivalent predictions about the world.
And in practice, I would probably do so, because symmetry is not the kind of thing I think about when I worry about suffering.
I think metaphysical arguments change distressingly few people's minds. Experiments, and especially technology, change people's minds. So that's what our limited optimization energy is pointed at right now.
I agree that if you had made predictions that classical neuroscientists all agreed would never occur, and then proved them all wrong, then that would be striking evidence that I had made an error somewhere in my argument. But as it stands, I'm not convinced by your analogy to magnetism, or your strict approach towards talking about predictions rather than metaphysics.
(I may one day reply to your critique of FRI, as I see it as similarly flawed. But it is simply too long to get into right now.)
Mike, while I appreciate the empirical predictions of the symmetry theory of valence, I have a deeper problem with QRI philosophy, and it makes me skeptical even if the predictions bear out.
In physics, there are two kinds of disputes we can have about our theories: disputes over what a theory predicts, which experiment can in principle settle, and disputes between interpretations that make identical predictions.
The classic Many Worlds vs. Copenhagen is a dispute of the second kind, at least until someone can create an experiment which distinguishes the two. Another example of the second type of dispute is special relativity vs. Lorentz ether theory.
Typically, philosophers of science, and most people who follow LessWrong philosophy, will say that the way to resolve disputes of the second kind is to find out which interpretation is simplest. That's one reason why most people favor Einstein's special relativity over the Lorentz ether theory.
However, the simplicity of an interpretation is often hard to measure. It's made more complicated for two reasons: first, judgments of simplicity depend on the language we use to frame our theories; and second, any formal measure of simplicity builds ontological assumptions into the framework itself.
The first case is usually not a big deal because we mostly can agree on the right language to frame our theories. The second case, however, plays a deep role in why I consider QRI philosophy to be likely incorrect.
Take, for example, the old dispute over whether physics is discrete or continuous. If you apply standard Solomonoff induction, then you will axiomatically assign 0 probability to physics being continuous.
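To make this precise (the formalization here is my gloss on the standard definition of Solomonoff's universal prior, not anything stated in the comment itself): the prior weight assigned to an observation sequence $x$ is

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
```

where $U$ is a universal prefix Turing machine and $|p|$ is the length of program $p$ in bits. The sum ranges only over finite, discrete programs, so any hypothesis that cannot be generated by a computable process, such as a physics containing genuinely continuous, uncomputable structure, receives prior weight zero by construction rather than by evidence.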
It is in this sense that QRI philosophy takes an ontological step that I consider unjustified. In particular, QRI assumes that there simply is an ontologically primitive consciousness-stuff that exists. That is, it takes it as elementary that qualia exist, and then reasons about them as if they are first class objects in our ontology.
I have already talked to you in person about why I reject this line of reasoning. I think that an illusionist perspective is adequate to explain why we believe in consciousness, without making any reference to consciousness as an ontological primitive. Furthermore, my basic ontological assumption is that physical entities, such as electrons, have mathematical properties, but not mental properties.
The idea that electrons can have both mathematical and mental properties (i.e. panpsychism) is something I consider to be little more than property dualism, and it has the same known issues as every property dualist theory I have encountered.
I hope that clears some things up about why I disagree with QRI philosophy. However, I definitely wouldn't describe you as practicing crank philosophy, as that term is both loaded and empirically false. I know you care a lot about critical reflection, debate, and standard scientific virtues, which immediately disqualifies you as a "crank" in my opinion.
I see. I asked only because I was confused why you asked "before crunch time" rather than leaving that part out.