All of Jim Buhler's Comments + Replies

FTX EA Fellowships

I haven't received anything on my side. I think a confirmation by email would be nice, yes. Otherwise, I'll send the application a second time just in case.

Prioritization Questions for Artificial Sentience

Thanks for writing this, Jamie!

Concerning the "SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?"  question, I think something like the following sub-question is also relevant: Will MCE lead to a "near miss" of the values we want to spread? 

Magnus Vinding (2018) argues that someone who cares about a given sentient being is by no means guaranteed to want what we think is best for that being. While he argues from a suffering-focused perspective, the problem is the same under any ethical framework.
For instance, future people who ... (read more)

A longtermist critique of “The expected value of extinction risk reduction is positive”

I completely agree with #3 and it's indeed worth clarifying. Even ignoring this, the possibility of humans being more compassionate than pro-life grabby aliens might actually be an argument against human-driven space colonization, since compassion -- especially when combined with scope sensitivity -- seems to increase agential s-risks related to potential catastrophic cooperation failures between AIs (see e.g., Baumann and Harris 2021, 46:24), which are the most worrying s-risks according to Jesse Clifton's preface to CLR's agenda. A space filled with ... (read more)

antimonyanthony (5mo): Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should be "Singletons about non-life-maximizing values could also be convergent." I think that if some technologically advanced species doesn't go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are the best example, I guess), combined with goal-preserving AIs, would make the emergence of a singleton fairly likely - not very confident in this, though, and I think #2 is the weakest argument. Bostrom's "The Future of Human Evolution" (https://www.nickbostrom.com/fut/evolution.html) touches on similar points.

A longtermist critique of “The expected value of extinction risk reduction is positive”

Interesting! Thank you for writing this up. :)

It does seem plausible that, by evolutionary forces, biological nonhumans would care about the proliferation of sentient life about as much as humans do, with all the risks of great suffering that entails.

What about the grabby aliens, more specifically? Do they not, in expectation, care about proliferation (even) more than humans do?

All else being equal, it seems -- at least to me -- that civilizations with very strong pro-life values (i.e., that think that perpetuating life is good and necessary, ... (read more)

antimonyanthony (5mo): That sounds reasonable to me, and I'm also surprised I haven't seen that argument elsewhere. The most plausible counterarguments off the top of my head are:

1) Maybe evolution just can't produce beings with that strong of a proximal objective of life-maximization, so the emergence of values that aren't proximally about life-maximization (as with humans) is convergent.
2) Singletons about non-life-maximizing values are also convergent, perhaps because intelligence produces optimization power, so it's easier for such values to gain sway even though they aren't life-maximizing.
3) Even if your conclusion is correct, this might not speak in favor of human space colonization anyway, for the reason Michael St. Jules mentions in another comment: that more suffering would result from fighting those aliens.

"Disappointing Futures" Might Be As Important As Existential Risks

Thank you for writing this.

  • According to a survey of quantitative predictions, disappointing futures appear roughly as likely as existential catastrophes. [More]

It looks like Bostrom and Ord included risks of disappointing futures in their estimates of x-risks, which might make this conclusion a bit skewed, don't you think?

"Disappointing Futures" Might Be As Important As Existential Risks

Michael's definition of risks of disappointing futures doesn't include s-risks though, right? 

a disappointing future is when humans do not go extinct and civilization does not collapse or fall into a dystopia, but civilization[1] nonetheless never realizes its potential.

I guess adding up the two types gives us something like "risks of a negative (or nearly negative) future".

Kaj_Sotala (5mo): Depends on exactly which definition of s-risks you're using; one of the milder definitions is just "a future in which a lot of suffering exists", such as humanity settling most of the galaxy but each of those worlds having about as much suffering as the Earth has today. Which is arguably not a dystopian outcome or necessarily terrible in terms of how much suffering there is relative to happiness, but still an outcome in which there is an astronomically large absolute amount of suffering.

How can we reduce s-risks?

Great piece, thanks!

Since you devoted a subsection to moral circle expansion as a way of reducing s-risks, I guess you consider its beneficial effects to outweigh the backfire risks you mention (at least if MCE is done "in the right way"). CRS' 2020 End-of-Year Fundraiser post also conveys optimism regarding the impact of increasing moral consideration for artificial minds (the only remaining doubts seem to be about when and how to do it).

I wonder how confident we should be about this (MCE being positive for reducing s-risks) at this point? Have yo... (read more)

Tobias_Baumann (10mo): Thanks for the comment, this raises a very important point. I am indeed fairly optimistic that thoughtful forms of MCE are positive regarding s-risks, although this qualifier of "in the right way" should be taken very seriously - I'm much less sure whether, say, funding PETA is positive. I also prefer to think in terms of how MCE could be made robustly positive, and distinguishing between different possible forms of it, rather than trying to make a generalised statement for or against MCE. This is, however, not a very strongly held view (despite having thought a lot about it), in light of great uncertainty and also some degree of peer disagreement (other researchers being less sanguine about MCE).

Incentivizing forecasting via social media

Thanks for writing this! :)

Another potential outcome that comes to mind regarding such projects is a self-fulfilling prophecy effect (provided the predictions are not secret). I have no idea how large a (positive or negative) impact it would have, though.

David_Althaus (1y): Thanks. :) That's true though this is also an issue for other forecasting platforms—perhaps even more so for prediction markets where you could potentially earn millions by making your prediction come true. From what I can tell, this doesn't seem to be a problem for other forecasting platforms, probably because most forecasted events are very difficult to affect by small groups of individuals. One exception that comes to mind is match fixing (https://en.wikipedia.org/wiki/Match_fixing_related_to_gambling). However, our proposal might be more vulnerable to this problem because there will (ideally) be many more forecasted events, so some of them might be easier to affect by a few individuals wishing to make their forecasts come true.