Just a bundle of subroutines sans free will, aka flawed post-monkey #~100,000,000,000. Peace and love xo
(Note: All my posts, except those where I state otherwise, reflect my views and not those of my employer(s).)
Attendees should focus on getting as much EA-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And thinking about which hot participant you can get with later can lead to decisions far more costly than just lost opportunities to provide more value.
Strongly agree. Moreover, I think it's worth us all keeping in mind that the only real purpose of the EA community is to do the most good. An EA community in which members view, for example, EA Globals as facilitating afterparties at which to find hook ups, is an EA community which is likely to spend more {time, money, attention} on EAGs and other events than achieves the most good.
If the current resource level going toward EA community events does the most good,
I desire to believe that the current resource level going toward EA community events does the most good;
If less {time, money, attention} spent on EA community events does the most good,
I desire to believe that less {time, money, attention} spent on EA community events does the most good;
Let me not become attached to beliefs I may not want.
Many thanks to all who contributed toward this post. I agree with many of your points, and I appreciate the roundedness and nuance you bring to the topic.
To add to your “If you’re considering poly” section, I’m excerpting below a Clearer Thinking podcast episode (timestamp: 54:03–73:12) which I think does a great job of discussing polyamory in a balanced way. (Spencer Greenberg is the host, and Sam Rosen is the guest – in the episode, Sam talks about his experience with being poly; I’ve bolded the parts which speak to me the most.)
SPENCER: So let's switch topics to polyamory.
...
SPENCER: I have to say, the only examples of polyamorous couples that have lasted a really long time (like five plus years) that I know of, have been the hierarchical form where they have a primary that they're very committed to, and then they have secondary partners. But that being said, I'm sure there are examples of the more flexible kind lasting a long time, I just, I'm not as aware of it.
...
SAM: I've actually found that when people try to get other people to become poly that aren't already poly that tends to — it's very hard to get someone who's not already comfortable with that dynamic to become comfortable with it. I don't know what's going on with that. But yeah, all of us were already poly and already pretty chill people and there's not much to fight about.
SPENCER: Okay, what about jealousy, though? Because that's the natural question is like, “Okay, you're spending the night with your girlfriend and your wife feels like seeing you that night.” You can imagine there's a lot of opportunities for jealousy to flare up. And I think to a lot of people, just the idea that their partner might be having sexual relations with another person might make them insanely jealous, just that concept by itself. So what are your thoughts on jealousy?
SAM: So I think that in the same way that if you're like, in a room with an annoying noise, or bad smell long enough, you don't smell it anymore. I think jealousy has a similar thing where you — if you are poly for long enough, you just kind of get used to that feeling, and it doesn't even feel bad. I don't even really feel jealousy like I used to anymore just because I've been poly for so long.
SPENCER: So at the beginning, did you feel significant jealousy?
SAM: Yeah, I felt a lot of jealousy at the beginning.
SPENCER: And so why did you keep pushing through that? Why did you continue being poly?
SAM: I just felt like the benefits of having fun, new partners outweighed the costs of jealousy and it was a simple cost-benefit analysis.
…
SPENCER: I agree with you. I don't think it's [love is] zero-sum. I think someone can genuinely, deeply love two people and it doesn't necessarily — loving one does not necessarily interfere with loving the other just like loving one sibling doesn't make you love the other sibling less.
SAM: And to push back against some polyamory rhetoric, a lot of poly people say it's infinite, it's not zero-sum at all. Like I don't think that I could love (romantically) four people at the same time. Like, I think that would just be — I don't think I would really deeply feel the same way about them because I could just not have the emotional energy. Like, maybe if we all lived together, I could see them all the time then I could do that. But there's something about — I don't have enough emotional energy in the day to think about all four people in a very positive way. There's something that feels like it's not truly non-zero-sum.
SPENCER: Right? And clearly, time is zero-sum. But you only have so much time. So the more partners you have you essentially are taking away time from another partner at some point, right?
SAM: Yeah, absolutely. And that's, I've found that having two partners is optimal for me, like a wife and a girlfriend. When I start having more I find that the relationships start degrading in quality because I don't pay enough attention to each individual partner. And that just is a fact about my time and how I'm able to divvy up my affection.
…
SPENCER: Well, you know, another factor I think that comes in with the idea of polyamory is stability. I think it's probably true (I'm curious if you agree) that polyamory, all else being equal, might be less stable than monogamy because you have a situation where, you know, there's just more parties involved, there's more people that could get upset about things, there's more likelihood of shifting dynamics.
SAM: There's just more moving parts that could get a monkey wrench into them.
SPENCER: Exactly. And also more possibility of emotional flare-ups because one person is like, “I don't get enough time and the other person's getting more time,” or jealousy, or your secondary suddenly wants to be your primary, right? And then it's like, well, what is that dynamic like?
SAM: Yeah, I think polyamory is a bit high-risk/high-reward in that sense that like, I think they're slightly less stable, but I think they’re kind of more fun. So I think it is true that it's…there's more risks of like flare-ups, as you say. And I think if you don't have the skill of handling interpersonal conflicts well, you just shouldn't be poly. I think that you should know yourself and think, “Am I the sort of person that can comfortably handle/communicate my needs without it being a shouting match?”
SPENCER: Yeah, it seems like really clear, honest communication is just absolutely essential if you're going to navigate the complexity of multiple people's emotions simultaneously, including your own. What other traits would you say are really important if you're going to try polyamory?
SAM: I think innate low jealousy is probably really, really helpful. Even though I got over my jealousy, I think if you just start out kind of low, it's probably easier.
SPENCER: Well, I would just add that I think some people are just naturally more monogamous and that some are more naturally polyamorous. Like, I know people that once they have a partner, they actually just seem to have no attraction to anyone else, and the idea of being with anyone else is just odious to them. Whereas other people, it seems like when they're with one partner, they still actually feel a lot of attraction to others. And you know, if they're ethical, they're not going to cheat, but they still have those feelings. And then I actually think there might be a third type, where it's something like, when they're in a monogamous relationship, they're just attracted to that one person, but as soon as they're in a non-monogamous relationship, they’re actually attracted to multiple people, so is there something like a switch that can flip based on what the rules in the relationship are?
…
SAM: Yeah, I have two thoughts. One is, I don't think polyamory will work for everyone.
SPENCER: What percentage of people do you think would be happiest in a polyamorous situation?
SAM: My guess is 10% of people.
SPENCER: Okay, yeah.
SAM: It's a pretty small number. Now, obviously, I could change my mind on this. But I don't think that — if you don't have good communication techniques, and have already low jealousy and things like that — that you can do it without it being a disaster for everyone. And also people get into polyamory under duress sometimes where a person's like, "I want to break up with you," and the other person's like, "Well, let's be poly instead?" And then that's a kind of a tragic, unhappy situation because you're like…almost being…you're not happy with the arrangement, you're kind of just agreeing to it.
SPENCER: Right, right. And I think that, you know, people can certainly get really badly hurt if they're pushed into a polyamorous situation that they don't feel good about. And one has to be really careful about that.
Hey (I just met you), I appreciate this post, both the content and (this is crazy) the outstanding title :)
I don't think alignment is a problem that can be solved. I think we can do better and better. But to have it be existentially safe, the bar seems really, really high and I don't think we're going to get there. So we're going to need to have some ability to coordinate and say let's not pursue this development path or let's not deploy these kinds of systems right now.
Holden makes a similar point in "Nearcast-based 'deployment problem' analysis" (2022):
I don’t like the framing of “solving” “the” alignment problem. I picture something like “Taking as many measures as we can (see previous post) to make catastrophic misalignment as unlikely as we can for the specific systems we’re deploying in the specific contexts we’re deploying them in, then using those systems as part of an ongoing effort to further improve alignment measures that can be applied to more-capable systems.” In other words, I don’t think there is a single point where the alignment problem is “solved”; instead I think we will face a number of “alignment problems” for systems with different capabilities. (And I think there could be some systems that are very easy to align, but just not very powerful.) So I tend to talk about whether we have “systems that are both aligned and transformative” rather than whether the “alignment problem is solved.”
How we might make big improvements to decisionmaking via mechanisms like futarchy
On this point, see 'Issues with Futarchy' (Vaintrob, 2021). Vaintrob writes the following, which I take to be her bottom line: "The EA community should not spend resources on interventions that try to promote futarchy."
(I stumbled onto this post while exploring the existing literature adjacent to Bob Fischer's/Rethink Priorities' The Moral Weight Project Sequence.)
I found this post very interesting. Here are some pros and cons I've noted down on your factors, scores, and metric criteria scores:
Pros
Cons
See also the following posts, published a few months after this one, which discuss AGI race dynamics (in the context of a fictional AI lab named Magma):
Thanks for this very illuminating post.
One thing:
I am concerned that at some point in the next few decades, well-meaning and smart people who work on AGI research and development, alignment and governance will become convinced they are in an existential race with an unsafe and misuse-prone opponent [emphasis added].
Most people who've thought about AI risk, I think, would agree that most of the risk comes not from misuse, but from accidents (i.e., not realizing the prepotent AI one is deploying is misaligned).[1] Therefore, I don't think being convinced the opponent is misuse-prone is actually necessary for believing one is in an existential race. All that's necessary is believing there is an opponent at all.
I'd define a prepotent AI system (or cooperating collection of systems) as one that cannot be controlled by humanity, and which is at least as powerful as humanity as a whole with respect to shaping the world. (By this definition, such an AI system need not be superintelligent, or even generally intelligent or economically transformative. It may have powerful capabilities in a narrow domain that enable prepotence, such as technological autonomy, replication speed, or social manipulation.)
I encourage those considering applying to note the following advice:
(From the post “Don’t think, just apply! (usually)”; I think this advice is especially appropriate for early career individuals, hence my signal boosting it here.)