David Kinney

Even though you disagreed with my post, I was touched to see that it was one of the "top" posts that you disagreed with :). However, I'm really struggling to see the connection between my argument and Deutsch's views on AI and universal explainers. There's nothing in the piece that you link to about complexity classes or efficiency limits on algorithms. 

I do think the issues with Pascal's wager-type deals are compounded by the possibility that the positive probability you assign to the relevant outcome is inconsistent with other beliefs you have, while settling the question of consistency is computationally intractable. In the classic Pascal's wager, there's no worry about internal inconsistency in your credences.

Yes, I think you're spot on: my view is more externalist, and a lot of longtermist reasoning has a distinctly internalist flavor. But spelling all that out will take even more work!

Thanks for clarifying that! I think there are a few reasons to be wary of whole brain emulation as a route to super-intelligence (see this from Mandelbaum). Now I'm aware that if whole brain emulation isn't possible, then some of the computationalist assumptions in my post (namely, that the same limits on Turing machines apply to humans) seem less plausible. But I think there are at least two ways out. One is to suppose that computation in the human brain is sub-neural, so that brain emulation would still leave out important facets of human cognition. Another is to say that whole brain emulation may still be possible, but that there are speed limits on the computations the brain performs that prevent the kind of speeding up you imagine. Here, work on the thermodynamics of computation is relevant.

But, in any event (and I suspect this is a fundamental disagreement between me and many longtermists), I'm wary of the argumentative move from mere conceivability to physical possibility. We know so little about the physics of intelligence. The idea of emulating a brain and then speeding it up may turn out to be like the idea of getting something to move at the speed of light and then speeding it up a bit more: it sounds fine as a thought experiment, but it turns out to be physically incoherent. On the other hand, whole brain emulation plus speed-ups may be perfectly physically coherent. My sense is that we just don't know.

In light of my earlier comment about logical induction, I think this case is different from the classical use case for the principle of ignorance, where we have n possibilities that we know nothing about, and so we assign each probability 1/n. Here, we have a set of commitments that we know entails either a strictly positive or an extreme, delta-function-like distribution over some variable X, but we don't know which. So if we apply the principle of ignorance to those two possibilities, we end up assigning equal higher-order credence to the normative proposition that we ought to assign a strictly positive distribution over X and to the proposition that we ought to assign a delta-function distribution over X. If our final credal distribution over X is a blend of these two distributions, then we end up with a strictly positive credal distribution over X. But now we've arrived at a conclusion that, by stipulation, might be inconsistent with our other epistemic commitments! If nothing else, this shows that applying indifference reasoning here is much more involved than in the classic case. Garrabrant wants to say, I think, that this reasoning could be fine as long as the inconsistency it potentially leads to can't be exploited in polynomial time. But then see my other worries about this kind of reasoning in my response above.
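To make the mixture point concrete, here is a minimal numeric sketch (my illustration, using a hypothetical four-outcome variable X, not anything from the original discussion): any blend that puts nonzero weight on a strictly positive distribution is itself strictly positive, however much weight the delta-like distribution receives.

```python
# Toy illustration: a delta-like distribution vs. a strictly
# positive one over a discrete variable X with outcomes 0..3.
delta = [1.0, 0.0, 0.0, 0.0]         # all mass on X = 0
positive = [0.25, 0.25, 0.25, 0.25]  # strictly positive everywhere

# Equal higher-order credence in the two candidate distributions
# yields the 50/50 mixture:
blend = [0.5 * d + 0.5 * p for d, p in zip(delta, positive)]
print(blend)  # [0.625, 0.125, 0.125, 0.125]

# The blend is strictly positive, even though one of the two
# candidate distributions assigned probability zero to X > 0.
assert all(b > 0 for b in blend)
```

This is just the arithmetic behind the worry: indifference between the two candidate distributions forces a strictly positive final distribution, which is exactly the conclusion stipulated to be potentially inconsistent with the agent's other commitments.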

This is the issue I was trying to address in counterargument 2. 

I love that work! And I think this fits in nicely with another comment you make below about the principle of indifference. The problem, as I see it, is that you have an agent who adopts some credences and a belief structure that defines a full distribution over a set of propositions. Assigning some variable X a strictly positive probability is either consistent or inconsistent with that distribution. But, let's suppose, a Turing machine can't determine which in polynomial time. As I understand Garrabrant et al., I'm free to pick any credence I like, since logical inconsistencies are only a problem if they allow you to be Dutch booked in polynomial time. As a way of thinking about reasoning under logical uncertainty, it's ingenious. But once we start thinking about our personal probabilities as guides to what we ought to do, I get nervous. Note that just as I'm free to assign X a strictly positive probability distribution under Garrabrant's criterion, I'm also free to assign it a distribution that allows for probability zero (even if that ends up being inconsistent, by stipulation I can't be Dutch booked in polynomial time). One could imagine a precautionary principle that says, in these cases, to always pick a strictly positive probability distribution. But then I'm worried that once we allow all these conceivable events that we can't figure out much about to have positive probability, we're opening the floodgates for an ever-more-extreme apportionment of resources to lower-and-lower-probability catastrophes.

I don't have a fully formed opinion here, but for now I'll just note that the task the examined futurists are implicitly given is very different from assigning a probability distribution to a variable based on parameters. Rather, the implicit task is to say some things that you think will happen; we then judge whether those things happened. But I'm not sure how to translate the output of that task into action. (E.g., Asimov says X will happen, and so we should do Y.)

I think the contrast with elections is an important and interesting one. I'll start by saying that being able to coarse-grain the set of all possible worlds into two possibilities doesn't mean we should assign both possibilities positive probability. Consider the set of all possible infinite sequences of coin tosses. We can coarse-grain those sequences into two sets: those in which finitely many coins land heads, and those in which infinitely many coins land heads. But, assuming we're actually going to toss infinitely many coins, and assuming each coin is fair, the first set has probability zero and the second has probability one.
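The measure-zero claim can be made vivid with a quick calculation (my sketch, not part of the original comment): for a fair coin, the chance of seeing no heads in the next k tosses is (1/2)^k, and "only finitely many heads" is a countable union of such vanishing tail events, so it inherits probability zero.

```python
# Probability that a fair coin shows no heads in the next k tosses.
def p_no_heads(k: int) -> float:
    return 0.5 ** k

# The tail probability collapses fast; "heads stop forever after some
# point" is a countable union of events whose probabilities shrink
# like this, each with probability zero in the limit.
for k in (10, 50, 100):
    print(k, p_no_heads(k))

assert p_no_heads(100) < 1e-30
```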

In the election case, we have a good understanding of the mechanism by which elections are (hopefully) won. In this simple case with a plurality rule, we just want to know which candidate will get the most votes. So we can define probability distributions over the possible number of votes cast, and probability distributions over possible distributions of those votes to different candidates (where vote distributions are likely conditional on overall turnout), and coarse-grain those various vote distributions into the possibility of each candidate winning. This is a simple case, and no doubt real-world election models have many more parameters, but my point is that we understand the relevant possibility space and how it relates to our outcomes of interest fairly well. I don't think we have anything like this understanding in the AGI case.
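As a sketch of the kind of model described above (all parameters hypothetical, chosen purely for illustration): draw a turnout, draw candidate A's underlying support, simulate a vote count, and coarse-grain each simulated world into "A wins" or "B wins" under the plurality rule.

```python
import math
import random

def simulate_win_prob(trials: int = 20_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(candidate A wins) in a toy
    two-candidate plurality election. All parameters are made up."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(trials):
        turnout = rng.randint(900, 1100)               # uncertain turnout
        p_a = min(max(rng.gauss(0.52, 0.03), 0.0), 1.0)  # uncertain support
        # Normal approximation to the binomial vote count for speed.
        mean = turnout * p_a
        sd = math.sqrt(turnout * p_a * (1 - p_a))
        votes_a = rng.gauss(mean, sd)
        if votes_a > turnout / 2:                      # plurality rule
            a_wins += 1
    return a_wins / trials

print(simulate_win_prob())
```

Real election models have many more parameters, as noted above; the point of the sketch is only that each step, from turnout to vote shares to the coarse-grained win/lose outcome, corresponds to a mechanism we understand reasonably well, which is exactly what seems missing in the AGI case.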

I think so! At the societal level, we can certainly do a lot more to make our world resilient without making specific predictions. 
