This is a special post for quick takes by D0TheMath. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I saw this comment on LessWrong

This seems noncrazy on reflection.

Ten million dollars will probably have a very small impact on Terry Tao's decision to work on the problem.

OTOH, setting up an open invitation for all world-class mathematicians, physicists, and theoretical computer scientists to work on AGI safety through some sort of sabbatical system may be very impactful.

Many academics, especially in theoretical areas where funding for even the very best can be scarce, would jump at the opportunity of a no-strings-attached sabbatical. The no-strings-attached part is crucial, to my mind. Despite LW/rationalist dogma equating IQ with weirdo-points, the vast majority of brilliant (mathematical) minds are fairly conventional: see Tao, Euler, Gauss.

EA cause area?


I don't know what the standard approach would be. I haven't read any books on evolutionary biology. I did listen to a bit of this online lecture series, and it seems fun & informative.

During this discussion I’ve been modeling evolution with the frameworks I’ve been learning for understanding inner alignment: evolution is (roughly) a stochastic gradient descent process, so many of the arguments about properties trained models should have can also be applied to evolutionary processes.
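As an illustrative caricature of that analogy (my own sketch, not anything from the discussion): selection over random mutations behaves like a noisy hill-climb on a fitness function, which is why intuitions about optimizers trained by gradient descent can transfer. A minimal (1+1)-style version:

```python
import random

random.seed(0)  # for reproducibility of the toy run


def evolve(fitness, genome, steps=1000, sigma=0.1):
    """Toy evolution-as-noisy-optimization: mutate the genome with
    Gaussian noise and keep the mutant only if fitness improves."""
    for _ in range(steps):
        mutant = [g + random.gauss(0, sigma) for g in genome]
        if fitness(mutant) > fitness(genome):
            genome = mutant
    return genome


# Hill-climb a one-gene genome toward the fitness peak at 3.0.
best = evolve(lambda g: -(g[0] - 3.0) ** 2, [0.0])
```

This is not a claim about biology; it just shows the structural similarity (repeated noisy perturbation plus selection) that lets arguments about trained models carry over.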

So I guess you could start with Hubinger et al.’s Risks from Learned Optimization? But this seems like a nonstandard approach to learning evolutionary biology.

Do you feel it is possible for evolution to select for beings who care about their copies in Everett branches, over beings that don't? For the purposes of this question let's say we ignore the "simplicity" complication of the previous point, and assume both species have been created, if that is possible.

It likely depends on what it means for evolution to select for something, and for a species to care about its copies in other Everett branches. It's plausible to imagine a very low-amplitude Everett branch containing a species that uses quantum mechanical bits to make many of its decisions, which decreases its chances of reproducing in most Everett branches but increases its chances in very, very few.

But in order for something to care about its copies in other Everett branches, the species would need to be able to model how quantum mechanics works, as well as how acausal trade works, if you want it to be selected for caring how its decision-making process affects non-causally-reachable Everett branches. I can't think of any pathway by which a species could increase its inclusive genetic fitness by making acausal trades with its counterparts in non-causally-reachable Everett branches, but I also can't think of a proof that it's impossible. Thus, I only think it's unlikely.

For the case where we only care about selecting for caring about future Everett branches, note that if we find ourselves in the situation I described in the original post, and the proposal succeeds, then evolution has just made a minor update towards species which care about their future Everett selves.

Evolution doesn't select for that, but it's also important to note that such tendencies are not selected against, and the value "care about yourself, and others" is simpler than the value "care about yourself, and others, except those in other Everett branches". So we should expect people to generalize "others" as including those in other Everett branches, in the same way they generalize "others" as including those in the far future.

Also, while you cannot meaningfully influence Everett branches which have split off in the past, you can influence Everett branches that will split off some time in the future.

I’m not certain. I’m tempted to say I care about them in proportion to their “probabilities” of occurring, but if I knew I was on a very low-“probability” branch & there was a way to influence a higher “probability” branch at some cost to this branch, then I’m pretty sure I’d weight the two equally.

Are there any obvious reasons why this line of argument is wrong:

Suppose the Everett interpretation of QM is true, and an x-risk curtailing humanity's future is >99% certain, with no leads on a solution. Then, given a QM bit generator that generates some large number of bits, for any particular combination of bits there exists a universe in which that combination was generated. In particular, the combination of bits encoding actions one could take to solve the x-risk is generated in some world. Thus, one should use such a QM bit generator to generate a plan to stop the x-risk. Even though you will likely see a bunch of random letters, there will exist a version of you with a good plan, and that world will not end.
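To make the numbers concrete, here is a hypothetical sketch of the lottery (my own illustration; `os.urandom` is a classical stand-in for the quantum bit generator, and `quantum_plan_lottery` is a made-up name):

```python
import os


def quantum_plan_lottery(n_bytes=64):
    """Classical stand-in for the proposed QM bit generator. Under the
    Everett interpretation, every possible bit pattern is realized in
    some branch of the wavefunction."""
    bits = os.urandom(n_bytes)
    # In almost every branch this decodes to gibberish, but some branch
    # receives bytes spelling out any given 64-byte plan.
    return bits.decode("ascii", errors="replace")


# Amplitude weight ("probability") of the one branch that produces a
# specific 64-byte plan: 2^-(8 * 64) = 2^-512, on the order of 10^-154.
branch_weight = 2.0 ** -(8 * 64)
```

The branch weight is why, from the inside, you should overwhelmingly expect to see noise: the argument turns entirely on caring about the vanishingly low-weight branches where the decode happens to be a plan.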

One may argue that the chance of finding a plan which produces an s-risk is just as high as the chance of finding one which curtails the x-risk. This only seems plausible to me if the generated solution is, or induces, some optimization process. Such scenarios should not be discounted.
