Thanks for writing this. I think about these sorts of things a lot. Given the title, do you know of examples of movements that did not start academic disciplines and appear to have suffered as a result?

The Global Priorities Institute and clusters of work around that do work in economics, including welfare economics. I'd also be curious to hear what you think they should do differently.

I'm toying with a project to gather reference classes for AGI-induced extinction and AGI takeover. If someone would like to collaborate, please get in touch.

(I'm aware of and giving thought to reference class tennis concerns but still think something like this is neglected.)

I don't think it's right that the broad project of alignment would look the same with and without considering religion. I'm curious what your reasoning is here and if I'm mistaken.

One way of reading this comment is that it's a semantic disagreement about what alignment means. The OP seems to be talking about the problem of getting an AI to do the right thing, writ large, which may encompass a broader set of topics than alignment research as you define it.

Two other ways of reading it are that (a) solving the problem the OP is addressing (getting an AI to do the right thing, writ large) does not depend on values, or (b) solving the alignment problem will necessarily solve the value problem. I don't entirely see how you can justify (a) without a claim like (b), though I'm curious if there's a way.

You might justify (b) via the argument that solving alignment involves coming up with a way to extrapolate values. Perhaps it is irrelevant which particular person you start with, because the extrapolation process will end up at the same point. To me this seems quite dubious. We have no such method, and we observe deep disagreement in the world. The methods we use to resolve disagreement and determine whose values we include seem to involve questions of values themselves. And from my lay sense, the currently most-discussed approaches to alignment involve aligning an AI with specific people's preferences.

One thing that's sad and perhaps not obvious to people is that, as I understand it, Nathan Robinson was initially sympathetic to EA (and this played a role in his at-times vocal advocacy for animals). I don't know that there's much to be done about this, and the course of events was perhaps inevitable, but that's relevant context for other Forum readers who see this.

And it's worth noting that Ben Franklin was involved in the Constitution, so at least some of his longtermist time seems to have been well spent.

I don't have a strong view on the original setup, but I can clarify the argument. On the first point, that we maximize P(E + S): the idea is that we want to maximize the probability that the organism chooses the action being selected for, the one that leads to enjoyment. That probability is a function of how much better it is to choose that action than the alternative. So if you get E from choosing that action and lose S from choosing the alternative, the benefit of choosing that action is E - (-S) = E + S. However, you only pay to produce the experience of the action you actually take. That is why the costs are weighted by probability, while the benefits, which concern only the anticipated experience you would get conditional on each action, are not.

It occurs to me that a fuller model might endogenize n, i.e. be something like max P(E(C_E) + S(C_S)) s.t. P(.) C_E + (1 - P(.)) C_S = M. (Replacing n with 1 - P here so it's a rate, not a level. Also, perhaps this reduces to the same thing via the envelope theorem.)
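To make the endogenized version concrete, here is a minimal numerical sketch. The logistic choice probability, the square-root "production functions" for E and S, and the budget level M are all my assumptions for illustration; only the structure (maximize P subject to the expected-cost constraint P·C_E + (1 − P)·C_S = M) comes from the model above.

```python
import math

# Assumed functional forms (not from the original model):
#   E, S: concave returns to the cost spent producing each experience
#   P: logistic in the benefit gap E + S
def E(c):  # enjoyment produced by spending c on the rewarded action
    return math.sqrt(c)

def S(c):  # suffering produced by spending c on the alternative
    return math.sqrt(c)

def P(c_e, c_s):  # probability of choosing the rewarded action
    return 1.0 / (1.0 + math.exp(-(E(c_e) + S(c_s))))

M = 1.0  # assumed total expected metabolic budget
best = None
steps = 200
for i in range(steps + 1):
    c_e = 4.0 * i / steps
    for j in range(steps + 1):
        c_s = 4.0 * j / steps
        p = P(c_e, c_s)
        # keep allocations that approximately satisfy the expected-cost
        # budget P*C_E + (1 - P)*C_S = M, since P appears on both sides
        if abs(p * c_e + (1 - p) * c_s - M) < 0.01:
            if best is None or p > best[0]:
                best = (p, c_e, c_s)

p, c_e, c_s = best
print(f"P* = {p:.3f}, C_E = {c_e:.2f}, C_S = {c_s:.2f}")
```

One feature the grid search surfaces: because the cost of suffering is only paid with probability 1 − P, which the optimum drives close to zero, the solution tolerates a large C_S while keeping C_E near the budget, consistent with the "you only pay for the experience you actually have" intuition.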

And on the last point: it is relevant to the interpretation of the model (e.g. choosing the value of n), but it is not an assumption of the model.

Like others, I really appreciate these thoughts, and they resonate with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. The "EA needs megaprojects" idea has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there's a good chance EA will be around for a while and become quite large and powerful. We need to make sure we put these tools to good use and retain the right values.

EA spending is often perceived as wasteful and self-serving

It's interesting how far this is from the original version of EA and its early criticisms, e.g. that EA set an unrealistic standard that involved sacrificing one's identity and sense of companionship for an ascetic universalism.

I think the old perception is likely still more common, but it's probably a matter of time (which means there's likely still time to change it). And I think you described the tensions brilliantly.

Yes, that's an accurate characterization of my suggestion. Re: digital sentience, intuitively something in the 80-90% range?

Yes, all those first points make sense. I just wanted to point to where I see the most likely cruxes.

Re: neuron count, the idea would be to use various transformations of neuron counts, or counts of a particular type of neuron. Whether to leave the judgment to readers is itself a judgment call; I would prefer giving what one thinks is the most plausible benchmark way of counting and then providing the tools to adjust from there, but your approach is sensible too.

Thanks for writing this post. I have similar concerns and am glad to see this composed. I particularly like the note about the initial design of space colonies. A couple things:

  • My sense is that the dominance of digital minds (which you mention as a possible issue) is actually the main reason many longtermists think factory farming is likely to be small relative to the size of the future. You're right to note that this means future human welfare is also relatively unimportant, and my sense is that most would admit that. Humanity is instrumentally important, however, since it will create those digital minds. I do think it's an issue that a lot of discussion of the future treats it as the future "of humanity" when that's not really what it's about. I suspect that part of this is just a matter of avoiding overly weird messaging.
  • It would be good to explore how your argument changes when you weight animals in different ways, e.g. by neuron count, since that [does appear to change things](https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive). I think we should probably take a variety of approaches and place some weight on each, although there's a sort of Pascalian problem with the possibility that each animal mind has equal weight: it feels somewhat plausible but leads to wild and seemingly wrong conclusions (e.g. that it's all about insect larvae). But in general, this seems like a central issue worth adjusting for.
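As a toy illustration of how much the choice of transformation matters, the sketch below compares relative moral weights under an equal-weight, linear, square-root, and log transformation of neuron counts. The neuron counts are rough published estimates, and the particular set of transformations is my assumption, not drawn from the linked post.

```python
import math

# Rough published neuron-count estimates (orders of magnitude)
neurons = {
    "human":    8.6e10,
    "chicken":  2.2e8,
    "honeybee": 1.0e6,
}

# Candidate transformations of neuron count into moral weight
transforms = {
    "equal":  lambda n: 1.0,           # every mind counts the same
    "linear": lambda n: n,             # weight proportional to neurons
    "sqrt":   lambda n: math.sqrt(n),  # diminishing returns to neurons
    "log":    lambda n: math.log(n),   # strongly diminishing returns
}

for name, f in transforms.items():
    human = f(neurons["human"])
    # each species' weight expressed relative to a human
    weights = {sp: f(n) / human for sp, n in neurons.items()}
    print(name, {sp: f"{w:.2e}" for sp, w in weights.items()})
```

Under the linear transform a honeybee is about eight orders of magnitude below a human, while under the equal-weight transform sheer animal numbers dominate, which is exactly the spread the Pascalian worry above is about.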