Comments

Is EA compatible with technopessimism?

If it were the only thing we wanted, we could actually work to specify that explicitly as the AI's goal; that's CEV, and hence the problem is solved.

This is just an aside, but it might be informative. I actually think that

  • Single alignment: "This specific blob of meat here "is" an "agent". Figure out its utility function and do that"

is going to be simpler to program than

  • Hard-coded: "make a large number of "humanlike" things "experience" "happiness""

I think it's clear that there are more things in the hard-coded solution that we don't know how to formalize, and that we're much further from knowing how to formalize them (we're already pretty deep into agency). It also seems fairly likely that we'll arrive at a robust account of agency in the process of developing AGI, or stumble onto it along the way, since it appears to be one of the short paths there.

 

I agree about the, hmm, mercurial quality of human agency. There seem to be a lot of triggers that "change" the utility function. Note that I don't know if anyone knows what we mean when we say that a utility function was undermined, whether by a compelling speech, by acclimating to the cold water, by falling in love, or whatever. It's strange. And humans generally tend to be indifferent to these changes; they identify with the process of change itself. In a sense they all must be part of one utility function (~ "when I am in bed I want to stay in bed, but if I need to pee I would prefer to get out of bed, but once I'm out of bed I will not want to go back to bed. That is my will. That is what it means to be human. If you remove this then it's a dystopia."), but somehow we have to exclude changes like... being convinced by a superintelligence's rhetoric that maximizing paperclips is more important than preserving human life. Somehow we know that's a bad change, even though we don't have any intuitive aversion to reading superintelligences' rhetoric. Even though we know (well, I think I would be, at least) that we'd be convinced by it, somehow we know that we want to exclude that possibility.

Is EA compatible with technopessimism?

Hm. Well feel free to notify me if you ever write it up.

Is EA compatible with technopessimism?

Yeah, that objection does also apply to humans, which is why, despite it being so difficult to extract a coherent extrapolated volition from a mammal brain, we must find a way of doing it. And once we have it, although it might not produce an agenty utility function for things like spiders or bacteria, there's a decent chance it'll work on dogs or pigs.

Is EA compatible with technopessimism?

I'm referring to https://en.wikipedia.org/wiki/Uplift_(science_fiction), sort of under the assumption that if we truly value animals we will eventually give them voice, reason, and coherence. On reflection, I guess the most humane form of this would probably consist of just aligning an AI with the animals and letting it advocate for them. There's no guarantee that these beings, adapted to live without speech, will want it, but an advocate couldn't hurt.

Is EA compatible with technopessimism?

...assuming that particular example is a concern of such an impact primarily on humans, could that be articulated as anthropocentric technopessimism?

  1. Why would you want to describe it that way?
  2. On reflection, I don't think it can be called anthropocentric, no. There are four big groups of beings involved here: Humanity, Animals, Transhumanist post-humanity (hopefully without value-drift), and Unaligned AI. Three of those groups are non-human. Those concerned with AI alignment tend to be fighting in favor of more of those non-human groups than they are fighting against.

    (It's a bit hard to tell whether we would actually like animals once they could speak, wield guns, occupy vast portions of the accessible universe, etc. It might turn out there are fundamental, irreconcilable conflicts. None apparent yet, though.)

Reasons and Persons: Watch theories eat themselves

There’s no answer for this

Sure there is. Just implement the decision theory whose nature is that which would have been the optimal nature for it to have always had.

That is, implement Logical Decision Theory.

I'm only being a little bit facetious. Logical Decision Theory often seems to me more like a mostly formal statement of the (arguably) perfect policy on coordination, pre-commitment, and superrationality, rather than a method for actually unearthing that policy.

But pondering this statement does seem to have advanced my thinking a lot, and I would generally recommend it to others.
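
To make that a bit more concrete, here's a toy sketch (my own illustration, with the usual Newcomb payoffs and a made-up 90%-accurate predictor, not anything taken from the LDT literature) of what it looks like to score whole policies rather than individual acts:

```python
# Toy Newcomb's problem: evaluate whole policies, not individual acts.
# The 90% predictor accuracy and the payoffs are illustrative assumptions.

PREDICTOR_ACCURACY = 0.9  # probability the predictor correctly guesses your policy

def expected_payoff(policy: str) -> float:
    """Expected payoff for being the kind of agent that follows `policy`."""
    # The predictor fills the opaque box iff it predicts one-boxing.
    p_box_filled = PREDICTOR_ACCURACY if policy == "one-box" else 1 - PREDICTOR_ACCURACY
    opaque = 1_000_000 * p_box_filled                  # expected contents of the opaque box
    transparent = 1_000 if policy == "two-box" else 0  # the transparent box, if taken
    return opaque + transparent

for policy in ("one-box", "two-box"):
    print(policy, expected_payoff(policy))  # one-box: ~900000, two-box: ~101000
```

Picking whichever policy scores higher here, and then actually acting like that kind of agent, is the (very rough) spirit of implementing the decision theory it would have been optimal to have always had.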

Response to Phil Torres’ ‘The Case Against Longtermism’

We actually do have a good estimate of the probability of a large asteroid striking the earth within the next 100 years, btw. It was the product of a major investigation; I believe it was 1/150,000,000.

Probabilities don't have to be the product of a legible, objective, or formal process. It can be useful to state our subjective beliefs as probabilities so that we can use them as inputs to a process like that, but more generally it's just a good mental habit to try to maintain a sense of your level of confidence about uncertain events.
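
For instance, a crude sketch of what I mean by feeding a stated probability into a further process (the population figure and the "everyone dies" simplification are rough assumptions of mine, just for illustration):

```python
# Feeding a quoted probability into a simple expected-value calculation.
# Assumes, very crudely, that a large impact kills everyone; population is a round figure.

p_large_impact_this_century = 1 / 150_000_000  # the estimate quoted above
world_population = 8_000_000_000               # rough round number

expected_deaths = p_large_impact_this_century * world_population
print(f"Expected deaths this century under these assumptions: ~{expected_deaths:.0f}")  # ~53
```

Whether or not you trust the inputs, having them stated as numbers is what lets you make comparisons like this at all.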

What posts do you want someone to write?

Regarding "change from within", I have since found confirmation from the excellent growth economist Mushtaq Kahn https://80000hours.org/podcast/episodes/mushtaq-khan-institutional-economics/ people within an industry are generally the best at policing others in the industry, they have the most energy for it, they know how to measure adherence, and they often have inside access. Without them, policing corruption often fails to happen.

How does Amazon deforestation actually work? It's not about soy.

Maybe a moratorium concerning soy and beef from the Amazon region would be enough to settle this issue; even so, given that the first driver of deforestation is speculation with land prices (besides illegal timber and mining), I'm afraid such a ban wouldn't be enough to stop it.

The question then is: where is the value of the land coming from, and how much of it comes from each possible use (loggers, soy farmers, or cattle ranchers)? If you stop those uses, won't speculation stop?

All Possible Views About Humanity's Future Are Wild

Crazyism about a topic is the view that something crazy must be among the core truths about that topic. Crazyism can be justified when we have good reason to believe that one among several crazy views must be true, but where the balance of evidence supports none of the candidates strongly over the others.

Eric Schwitzgebel, Crazyism
