Sharmake


I'm going to bite the bullet of absurdity, and say this already happened.

Imagine a noble or priest from 500-1,000 years ago trying to understand modern Western society: they would likely find it absurd as well. Some norms have survived, mainly because the human baseline hasn't been altered by genetic engineering, but overall our society would look weird and worrying to them.

For example, the idea that people are relatively equal would be absurd to a medieval noble, to say nothing of our tolerance for outgroups and dissent.

The idea that religion isn't an all-powerful panacea, and is even optional, would be absurd to a priest.

The idea that most trades are positive-sum, rather than zero-sum or negative-sum, would again be absurd to the noble.

Science would be worrying to the noble.

And much more. In general I think people underestimate just how absurd things can get, so I'm not surprised.

Admittedly, that is a good argument against the idea that moral realism matters too much, though I would say that the expected value of your actions can differ a great deal depending on your perspective (if moral realism is false).

Also, this is a case where non-consequentialist moralities fail badly at probability: they demand an infinite amount of evidence before updating away from the ordering, which is equivalent to demanding a mathematical proof that you're wrong.
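To spell out the probability point with a toy Bayesian sketch: if you treat the ordering as a hypothesis H and assign it prior probability 1, then for any evidence E,

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H)}{P(E \mid H)} = 1 \quad \text{when } P(H) = 1,
\]

so no finite observation can ever move the posterior; only something with the force of a proof could.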

The basic answer is that computational complexity matters less than you think, primarily because the argument from complexity makes very strong assumptions, and even one of those assumptions failing weakens its power.

The assumptions are:

  1. Worst case scenarios. In this setting, everything matters, so anything that scales badly will impact the overall problem.

  2. Exactly optimal, deterministic solutions are required.

  3. You have only one shot to solve the problem.

  4. Small advantages do not compound into big advantages.

  5. Linear returns are the best you can do.

This is a conjunctive argument: if even one of the premises is wrong, the entire argument gets weaker.

And given the conjunction fallacy, we should be wary of accepting such a story.
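As a toy illustration of assumptions 1 and 2 (a rough sketch of my own, not anything rigorous): on random instances of an NP-hard problem like 0/1 knapsack, a cheap greedy heuristic usually lands close to the exact optimum, which is why dropping worst-case guarantees and exact optimality takes much of the sting out of complexity-based arguments.

```python
# Sketch: compare an exact (exponential-time) knapsack solver against a
# fast greedy heuristic on small random instances. The exact problem is
# NP-hard in the worst case, but the average-case gap is usually small.
import random
from itertools import combinations

def exact_knapsack(items, capacity):
    # Brute force over all subsets: exponential time, exactly optimal.
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            weight = sum(w for w, _ in combo)
            value = sum(v for _, v in combo)
            if weight <= capacity:
                best = max(best, value)
    return best

def greedy_knapsack(items, capacity):
    # Greedy by value density: fast, but only approximate.
    total = 0
    for w, v in sorted(items, key=lambda x: x[1] / x[0], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

random.seed(0)
ratios = []
for _ in range(50):
    items = [(random.randint(1, 20), random.randint(1, 20)) for _ in range(12)]
    opt = exact_knapsack(items, 40)
    approx = greedy_knapsack(items, 40)
    ratios.append(approx / opt if opt else 1.0)

print(f"greedy/optimal, averaged over random instances: {sum(ratios)/len(ratios):.3f}")
```

The exact ratio will vary with the random instances, but the point is just that the typical-case gap tends to be small even when the worst-case complexity is dire.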

Link to more resources here:

https://www.gwern.net/Complexity-vs-AI#complexity-caveats

If I had to make a criticism, it's that EA's idea of improving morality only makes sense if moral realism is true.

To define moral realism: I'm going to define it as the claim that moral rules are mind-independent, in the way that physical laws are mind-independent.

If it isn't true (which I put at roughly 50% probability), then EA has no special claim to morality, though neither does anyone else. But moral realism is a big crux here, at least for universal EA.

My point is that, contra the narrative in this post, cults are vastly less bad than the general public believes, so much so that the post is responding to a straw problem. I don't necessarily agree with the beliefs of New Religious Movements/cults, but the cult literature shows they are vastly less bad than the general public thinks.

I know it's a counterintuitive truth, but I want people to understand that the general public believing something is bad does not make it bad.

I am not concerned too much with EA turning into a cult, for one reason:

Cults/New Religious Movements are vastly less bad than most people think, and the literature on cults repudiates many of the claims the general population believes about them, especially anything to do with harm.

Link to it here:

https://www.lesswrong.com/posts/TiG8cLkBRW4QgsfrR/notes-on-brainwashing-and-cults

I agree, but with a caveat: EA should be willing to ditch any group that makes AI safety a partisan issue rather than a bipartisan consensus, because I can easily see a version of this where it gets politicized and "AI safety" starts to be treated as a curse word, similar to words like "globalist" or "anti-racist".

For a contrasting opinion by Kat Woods and Amber Dawn, see their post Two reasons we might be closer to solving alignment than it seems.

Link below:

https://forum.effectivealtruism.org/posts/RkpdA8763yGtEovj9/two-reasons-we-might-be-closer-to-solving-alignment-than-it

Sorry, but downvoted because of what Noah Scales said. This work could be prize-worthy, but as it stands it isn't good.

Counterfactual Impact and Power-Seeking

It worries me that many of the most promising theories of impact for alignment end up with the structure “acquire power, then use it for good”.

This seems to be a result of the counterfactual impact framing and a bias towards simple plans. You are a tiny agent in an unfathomably large world, trying to intervene on what may be the biggest event in human history. If you try to generate stories where you have a clear, simple counterfactual impact, most of them will involve power-seeking for the usual instrumental convergence reasons. Power-seeking might be necessary sometimes, but it seems extremely dangerous as a general attitude; ironically human power-seeking is one of the key drivers of AI x-risk to begin with. Benjamin Ross Hoffman writes beautifully about this problem in Against responsibility.

I don’t have any good solutions, other than a general bias away from power-seeking strategies and towards strategies involving cooperation, dealism, and reducing transaction costs. I think the pivotal act framing is particularly dangerous, and aiming to delay existential catastrophe rather than preventing it completely is a better policy for most actors.

This is why AI risk is so high, in a nutshell.

Yet unlike this post (or Benjamin Ross Hoffman's post), I think this was a sad but crucially necessary decision. I think the option you propose is at least partially a fabricated option, and a lot of its appeal is that people dearly want there to be a better option, even when it isn't there.

Link to fabricated options:

https://www.lesswrong.com/posts/gNodQGNoPDjztasbh/lies-damn-lies-and-fabricated-options
