Jim Buhler

Comments

Prioritization Questions for Artificial Sentience

Thanks for writing this, Jamie!

Concerning the "SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?" question, I think something like the following sub-question is also relevant: Will MCE lead to a "near miss" of the values we want to spread?

Magnus Vinding (2018) argues that someone who cares about a given sentient being is by no means guaranteed to want what we think is best for that being. While he argues from a suffering-focused perspective, the problem is the same under any ethical framework.
For instance, future people who "care" about wild animals and artificial sentience will likely care about things that have nothing to do with their subjective experiences (e.g., their "freedom" or their "right to life"), which might lead them, however well intentioned, to do things that are arguably bad (e.g., creating many faithful simulations of the Amazon rainforest).
Even in a scenario where most people genuinely care about the welfare of non-humans, their standards for considering that welfare positive might be incredibly low.

A longtermist critique of “The expected value of extinction risk reduction is positive”

I completely agree with 3, and it's indeed worth clarifying. Even ignoring this, the possibility of humans being more compassionate than pro-life grabby aliens might actually be an argument against human-driven space colonization, since compassion -- especially when combined with scope sensitivity -- seems to increase agential s-risks related to potential catastrophic cooperation failures between AIs (see e.g., Baumann and Harris 2021, 46:24), which are the most worrying s-risks according to Jesse Clifton's preface to CLR's agenda. A space filled with life-maximizing aliens who don't give a crap about welfare and suffering might be better than one filled with compassionate humans who create AGIs that might do the exact opposite of what they want (because of escalating conflicts, strategic threats, …). Obviously, the uncertainty here remains huge.

Besides, 1 and 2 seem to be good counter-considerations, thanks! :)

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though. Can you -- or anyone else reading this -- point me to any reference that would help me understand this?

A longtermist critique of “The expected value of extinction risk reduction is positive”

Interesting! Thank you for writing this up. :)

It does seem plausible that, by evolutionary forces, biological nonhumans would care about the proliferation of sentient life about as much as humans do, with all the risks of great suffering that entails.

What about the grabby aliens, more specifically? Do they not, in expectation, care about proliferation (even) more than humans do?

All else being equal, it seems -- at least to me -- that civilizations with very strong pro-life values (i.e., that think perpetuating life is good and necessary, regardless of its quality) colonize, in expectation, more space than compassionate civilizations willing to do the same only under certain conditions regarding others' subjective experiences.

Then, unless we believe that the emergence of dominant pro-life values in any random civilization is significantly unlikely in the first place (I see a priori more reasons to assume the exact opposite), shouldn't we assume that space is mainly being colonized by "life-maximizing aliens" who care about nothing but perpetuating life (including sentient life) as much as possible?

Since I've never read such an argument anywhere else (and am far from being an expert in this field), I guess it has a problem that I don't see.

EDIT: Just to be clear, I'm just trying to understand what the grabby aliens are doing, not to come to any conclusion about what we should do vis-à-vis the possibility of human-driven space colonization. :) 

"Disappointing Futures" Might Be As Important As Existential Risks

Thank you for writing this.

  • According to a survey of quantitative predictions, disappointing futures appear roughly as likely as existential catastrophes. [More]

It looks like Bostrom and Ord included risks of disappointing futures in their estimates of x-risks, which might make this conclusion a bit skewed, don't you think?

"Disappointing Futures" Might Be As Important As Existential Risks

Michael's definition of risks of disappointing futures doesn't include s-risks though, right? 

a disappointing future is when humans do not go extinct and civilization does not collapse or fall into a dystopia, but civilization[1] nonetheless never realizes its potential.

I guess adding up the two types gives us something like "risks of a negative (or nearly negative) future".

How can we reduce s-risks?

Great piece, thanks!

Since you devoted a subsection to moral circle expansion as a way of reducing s-risks, I guess you consider its beneficial effects to outweigh the backfire risks you mention (at least if MCE is done "in the right way"). CRS's 2020 End-of-Year Fundraiser post also expresses optimism regarding the impact of increasing moral consideration for artificial minds (the only remaining doubts seem to be about when and how to do it).

I wonder how confident we should be, at this point, that MCE is net positive for reducing s-risks. Have you – or other researchers – made estimates confirming this, for instance? :)

EDIT: Your piece Arguments for and against moral advocacy (2017) already raises relevant considerations but perhaps your view on this issue is clearer now.

Incentivizing forecasting via social media

Thanks for writing this! :)

Another potential outcome that comes to mind regarding such projects is a self-fulfilling prophecy effect (provided the predictions are not secret). I have no idea how much of a (positive or negative) impact that would have, though.