Jim Buhler

Joined Sep 2020

Comments (14)

The Future Fund’s Project Ideas Competition

Oh interesting! Ok so I guess there are two possibilities.

1) Either by “superrationalists”, you mean something stronger than “agents taking acausal dependencies into account in PD-like situations”, which I thought was roughly Caspar’s definition in his paper. If so, I'd be even more confused.

2) Or you really think that taking acausal dependencies into account is, by itself, sufficient to create a significant correlation between two decision algorithms. In that case, how do you explain that I would defect against you and exploit you in a one-shot PD (very sorry, I just don’t believe we correlate ^^), despite being completely on board with superrationality? How is that not a proof that common superrationality is insufficient?

(Btw, happy to jump on a call to talk about this if you’d prefer that over writing.)

The Future Fund’s Project Ideas Competition

Thanks for the reply! :)

By "copies", I meant "agents which action-correlate with you" (i.e., those which will cooperate if you cooperate), not "agents sharing your values". Sorry for the confusion.

Do you think all agents who reason superrationally action-correlate? That seems like a very strong claim to me. My impression is that the set of agents with a decision algorithm similar enough to mine to (significantly) action-correlate with me is a very small subset of all superrationalists. As your post suggests, even your past self doesn't fully action-correlate with you (although you don't need "full correlation" for cooperation to be worthwhile, of course; see the sketch below).
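To make the "you don't need full correlation" point concrete, here's a quick sketch in my own notation (an illustration on my part, not something taken from Caspar's paper), using standard PD payoffs $T > R > P > S$:

$$EV(C) = q_C R + (1 - q_C) S, \qquad EV(D) = q_D T + (1 - q_D) P,$$

where $q_C$ is the probability that the other agent cooperates given that I cooperate, and $q_D$ the probability that they cooperate given that I defect. Cooperating is worthwhile whenever $EV(C) > EV(D)$. With, say, $T = 3$, $R = 2$, $P = 1$, $S = 0$ and $q_D \approx 0$, that only requires $q_C > 1/2$, i.e., a substantial but far-from-perfect correlation.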

In a one-shot prisoner's dilemma, would you cooperate with anyone who agrees that superrationality is the way to go?

In his paper on ECL, Caspar Oesterheld says (section 2, p. 9): “I will tend to make arguments from similarity of decision algorithms rather than from common rationality, because I hold these to be more rigorous and more applicable whenever there is no authority to tell my collaborators and me about our common rationality.”
However, he also often uses "the agents with a decision algorithm similar enough to mine to (significantly) action-correlate with me" and "all superrationalists" interchangeably, which confuses me a lot.

The Future Fund’s Project Ideas Competition

Caspar Oesterheld’s work on Evidential Cooperation in Large Worlds (ECL) shows that some fairly weak assumptions about the shape of the universe are enough to arrive at the conclusion that there is one optimal system of ethics: the compromise between all the preferences of all agents who cooperate with each other acausally. That would solve ethics for all practical purposes. It would therefore have enormous effects on a wide variety of fields because of how foundational ethics is.

ECL recommends that agents maximize a compromise utility function averaging their own utility function and those of the agents that action-correlate with them (their "copies"). The compromise between me and my copies would look different from the compromise between you and your copies, right? So I could "solve ethics" for myself, but not for you, and vice versa. Ethics could only be "solved" for everyone if all agents in the multiverse were action-correlated with each other to the exact same degree, which appears exceedingly unlikely. Am I missing something?
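To illustrate what I have in mind (my own toy notation, not necessarily Caspar's formalism): if $C_i$ is the set of agents that action-correlate with agent $i$, the compromise utility function might look like a weighted sum

$$U_i^{comp} = \sum_{j \in C_i} w_{ij} U_j,$$

and since $C_i$ and the weights $w_{ij}$ generally differ from one agent to another, the compromise $U_i^{comp}$ differs too.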

(Not a criticism of your proposal; I'm just trying to refine my understanding of ECL.) :)

Apply for CEA event support

For EA group retreats, is it better to apply for the CEA event support you introduced, or for CEA's group support funding?

FTX EA Fellowships

I haven't received anything on my end. I think a confirmation email would be nice, yes. Otherwise, I'll send the application a second time just in case.

Prioritization Questions for Artificial Sentience

Thanks for writing this Jamie!

Concerning the "SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?" question, I think something like the following sub-question is also relevant: will MCE lead to a "near miss" of the values we want to spread?

Magnus Vinding (2018) argues that someone who cares about a given sentient being is by no means guaranteed to want what we think is best for that being. While he argues from a suffering-focused perspective, the problem is the same under any ethical framework.
For instance, future people who "care" about wild animals and artificial sentience will likely care about things that have nothing to do with their subjective experiences (e.g., their "freedom" or their "right to life"), which might lead them to do things that are arguably bad (e.g., creating a lot of faithful simulations of the Amazon rainforest), however well-intentioned.
Even in a scenario where most people genuinely care about the welfare of non-humans, their standards for considering such welfare positive might be incredibly low.

A longtermist critique of “The expected value of extinction risk reduction is positive”

I completely agree with 3, and it's indeed worth clarifying. Even ignoring this, the possibility of humans being more compassionate than pro-life grabby aliens might actually be an argument against human-driven space colonization, since compassion -- especially when combined with scope sensitivity -- seems to increase agential s-risks related to potential catastrophic cooperation failures between AIs (see, e.g., Baumann and Harris 2021, 46:24), which are the most worrying s-risks according to Jesse Clifton's preface to CLR's agenda. A space filled with life-maximizing aliens who don't give a crap about welfare and suffering might be better than one filled with compassionate humans who create AGIs that might do the exact opposite of what they want (because of escalating conflicts, strategic threats, …). Obviously, the uncertainty here remains huge.

Besides, 1 and 2 seem to be good counter-considerations, thanks! :)

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though. Can you -- or anyone else reading this -- point me to a reference that would help me understand this?

A longtermist critique of “The expected value of extinction risk reduction is positive”

Interesting! Thank you for writing this up. :)

It does seem plausible that, by evolutionary forces, biological nonhumans would care about the proliferation of sentient life about as much as humans do, with all the risks of great suffering that entails.

What about the grabby aliens, more specifically? Do they not, in expectation, care about proliferation (even) more than humans do?

All else being equal, it seems -- at least to me -- that civilizations with very strong pro-life values (i.e., that think perpetuating life is good and necessary, regardless of its quality) colonize, in expectation, more space than compassionate civilizations willing to do the same only under certain conditions regarding others' subjective experiences.

Then, unless we believe that the emergence of dominant pro-life values in any random civilization is significantly unlikely in the first place (I see, a priori, more reasons to assume the exact opposite), shouldn't we assume that space is mainly being colonized by "life-maximizing aliens" who care about nothing but perpetuating life (including sentient life) as much as possible?

Since I've never read such an argument anywhere else (and am far from being an expert in this field), I guess it has a problem that I don't see.

EDIT: Just to be clear, I'm just trying to understand what the grabby aliens are doing, not to come to any conclusion about what we should do vis-à-vis the possibility of human-driven space colonization. :) 
