
Muireall

534 karma · Joined Jun 2022
muireall.space

Comments (38)

A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.

I think this is meant to be the fantasy version of Closing Notes on Nonlinear Investigation, where Ben writes,

I don't really want to do more of this kind of work. Our civilization is hurtling toward extinction by building increasingly capable, general, and unalignable ML systems, and I hope to do something about that. Still, I'm open to trades, and my guess is that if you wanted to pay Lightcone around $800k/year, it would be worth it to continue having someone (e.g. me) do this kind of work full-time. I guess if anyone thinks that that's a good trade, they should email me.

I understand that you're taking liberties for your allegory in imagining a "triumphant follow-up", but I think it's worth being clear that the actual follow-up all but states that this was a miserable experience and not worth the time and effort:

I did not work on this post because it was easy. I worked on it because I thought it would be easy. I kept wanting to just share what I'd learned. I ended up spending about ~320 hours (two months of work), over the span of six calendar months, to get to a place where I was personally confident of the basic dynamics (even though I expect I have some of the details wrong), and that Alice and Chloe felt comfortable with my publishing.

On June 15th I completed the first draft of the post, which I'd roughly say had ~40% overlap in terms of content with the final post. On Wednesday August 30th, after several more edits, I received private written consent from both Alice and Chloe to publish. A week later I published.

I worked on this for far too long. Had I been correctly calibrated about how much work this was at the beginning, I likely wouldn't have pursued it. But once I got started I couldn't see a way to share what I knew without finishing, and I didn't want to let down Alice and Chloe.

The high price communicates how little Ben wants to do this again, not that he thinks he did something that others should value at that price.

I feel like I'm confused by what you would find more convincing here given that there was no evidence in the first place that they did say something like that?

Like would them saying "No we didn't" actually be more persuasive than showing an example of how they did the opposite?

Or like... if we take for granted that words that someone might interpret that way left their mouth, at what point do we stop default trusting the person who clearly feels aggrieved by them and seems willing to exaggerate or lie when they then share those words to others?

I'm not sure if you meant to reply to a different comment, but yes, exactly.

I think what you're asking is, supposing Nonlinear has after all done nothing remarkable with respect to anyone's romantic partners, how do I come to believe that? How does Nonlinear present counterevidence or discredit Chloe in exactly the right way such that I'm swayed towards the true conclusion? If they deny it, it's just their word. If they show me a text conversation, well, no one actually said that they didn't have that text conversation, so it's not responsive to the complaint. There's basically no winning. It's genuinely, upsettingly unfair.

I mean, in some sense, there has to be such a way, or else I'm hopelessly irrational. Which is, yes, exactly, I think a professional, considerate Nonlinear would not have made this post. They would have done something else.

There are plenty of contexts in which the thing alleged is not at all abusive, and plenty of contexts where it is. Without reason to believe Nonlinear was actually keeping people isolated, I'm not sure how much weight to put on it.

This is another thought feeding into my wondering how much this kind of "spot checking" really matters. While I'm glad people seem to have appreciated working forward from a particular claim, it would feel way more valuable to work backward from a decision. For me, at least, I don't think the question "did they keep people isolated in an abusive way" is on any back-chained path, which is good, because I don't expect to be able to answer that question.

But others are going to want to be convinced or not on different questions. This is why I tried to separate out the parent from my more high-feeling and reactive takes in these other comments. Maybe they can figure out how it fits into the judgments they need to make.

Yeah. Still, I think there's something I'm groping towards here, which is, like, maybe they should do something else? Sure, you don't get to be a power broker if you're in exile. But I don't see how they were ever going to be able to argue their way back. Even with the perfectly worded response, it won't suddenly make sense to trust them as mentors again; it's always going to take time and concrete actions to regain confidence. If that means they have to do something other than connecting people with ideas, funding, and mentorship, maybe they should just get started on that other thing.

Isn't Emerson independently wealthy and Nonlinear mostly self-funded? It's not totally clear to me how that limbo keeps them from getting things done. I guess I don't fully understand what Nonlinear does—I suppose they "incubate" projects, mostly remotely helping with mentoring and networking? I find the idea a little bewildering given how they describe their activities, but being on the outs with the EA/AI safety community would be a pretty central obstacle.

So that's fair and I was probably venting a bit intemperately. I think something like what Stephen Clare outlines is probably better.

Maybe! I'm hoping it at least saves people some energy. It's too late for me, but I confess I'm ambivalent myself about the point of all this. Spot-checking some high level claims is at least tractable, but are there decisions that depend on the outcome? What I care about isn't whether Nonlinear accurately represented what happened or what Ben said. I was unlikely to ever cross paths with Nonlinear or even Ben beforehand. I want people to get healthy professional experience, and I want the EA community to have healthy responses to internal controversy and bad actors.

Something went wrong long before I started looking at any particular claim. Did they discourage Chloe from spending time with her boyfriend? Was it maybe an unreasonable amount of time, though? Are they being sincere in saying they were happy to see her happy? Is it toxic passive-aggressive behavior to emphasize that they felt that way even though she was distracted and unproductive with him around? Did they fail to invite him on all-expenses-paid world travel? Is Ben Pace a good person?

Like, huh? How did we even get here? Don't ask your employees to live with you. Don't engage in social experiments with your employees. Don't make their romantic partnerships your business. Don't put people in situations where these are the questions they're asking. My own suspicion is that everyone, even Nonlinear, would have been better off if Nonlinear had just let this lie and instead gone about earning trust by doing good work with normal working relationships.


I drew a random number for spot checking the short summary table. (I don't think spot checking will do justice here, but I'd like to start with something concrete.)

Chloe claimed: they told me not to spend time with my romantic partner 

- Also a strange, false accusation: we invited her boyfriend to live with us for 2 of the 5 months. We even covered his rent and groceries.

- We were just about to invite him to travel with us indefinitely because it would make Chloe happy, but then Chloe quit.

Evidence/read more

This seems to be about this paragraph from the original post:

Alice and Chloe report that they were advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited. Alice and Chloe report this made them very socially dependent on Kat/Emerson/Drew and otherwise very isolated.

There aren't any other details in the original post specifically from Chloe or specifically about her partner, including in the comment in Chloe's words below the post. The only specific detail about romantic partners I see in the original post is about Alice, and it plausibly fits the "romantic partners" piece of this summary. ("Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn't mind polyamory "on the other side of the world”, but couldn't stand it right next to her, and probably either Alice would need to become monogamous or Alice should leave the organization.")

Both links in the summary table go to the same place, which says:

Chloe says: “They told me not to spend time with my boyfriend so I was kept socially dependent”, yet we invited her boyfriend to live with us (rent-free) for 2 of the 5 months, discrediting her as a reliable source of truth

False, Questionable, or Misleading Claim: Alice and Chloe report that they were advised not to spend time with ‘low value people’, including their families, romantic partners…

The Other Side: Simply a sad, unbelievable lie given how easily refutable it is.

In direct contradiction to this, Chloe’s boyfriend traveled with us 40% of the time! (~2 of the 5 months she was with us) We thought he was wonderful, had high potential, and we even considered inviting him to join us indefinitely! Emerson also helped her boyfriend financially for part of the time by not charging him for rent/groceries

[cropped phone screenshots of text conversations from February and May 2022, respectively, with the description "LEFT: Kat and Emerson discuss whether they should let Chloe’s boyfriend stay with them longer. They’re leaning towards yes. Bear in mind, this is while Chloe’s boyfriend is already staying with us. She’s distracted and not getting much work done but we were happy to see her happy! RIGHT: Kat encourages Chloe to see her boyfriend in person, which she’s held off on because she got Covid."]

Kat regularly helped Chloe figure out ways to feel more connected to her boyfriend while they were long distance, such as encouraging her to have more frequent and longer calls.

Also note that dozens of EAs traveled with us and Emerson generously never charged anyone. He just wanted the highest quality people exchanging ideas to maximize impact in beautiful places.

As far as I can tell, "They told me not to spend time with my boyfriend so I was kept socially dependent" is not a direct quote or paraphrase. (Maybe I'd use ~tildes~ for a device like this.) Rather, it picks out one possible contributor to Ben's summary paragraph.

Taking the screenshots at face value, they establish that as of February, Kat and Emerson thought Chloe's boyfriend might have high enough potential for an extended invite (Emerson: "i'm open to it" "he seems eager to help build shit"), and as of May, Kat was not telling Chloe not to spend time with him. (Chloe was with Nonlinear from January to July.)

From my perspective, this is between "not responsive to the complaint" and "evidence for the spirit of the complaint". It seems an overreach to call "They told me not to spend time with my boyfriend..." a "sad, unbelievable lie" "discrediting [Chloe] as a reliable source of truth" when it is not something anyone has cited Chloe as saying. It seems incorrect to describe "advised not to spend time with 'low value people'" as in "direct contradiction" with any of this, which instead seems to affirm that traveling with Nonlinear was conditioned on "high potential" or being among the "highest quality people". Finally, having initially considered inviting Chloe's boyfriend to travel with them would still be entirely consistent with later deciding not to; encouraging a visit in May would still be consistent with an overall expectation that Chloe not spend too much time with her boyfriend in general for reasons related to his perceived "quality".

Edit: the summary table also says “We were just about to invite him to travel with us indefinitely because it would make Chloe happy, but then Chloe quit [in July].” This would fit with not inviting him in February and later being reluctant for “quality” reasons.

Congrats to the winners! It's interesting to see how surprised people are. Of these six, I think only David Wheaton on deceptive alignment was really on my radar. Some other highlights that didn't get much discussion:

  • Marius Hobbhahn's Disagreements with Bio Anchors that lead to shorter timelines (1 comment)
    • In the author's words: 'I, therefore, think of the following post as “if bio anchors influence your timelines, then you should really consider these arguments and, as a consequence, put more weight on short timelines if you agree with them”. I think there are important considerations that are hard to model with bio anchors and therefore also added my personal timelines in the table below for reference.'
    • Even though Bio Anchors doesn't particularly influence my timelines, I find Hobbhahn's thoughtful and systematic engagement worthwhile.
  • Kiel Brennan-Marquez's Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI (1 comment)
    • Maybe this question is seen as elementary or settled by "you are made of atoms", but even so, I think other equilibria could be better explored. This essay is clear and concise, has some novel (to me) points, and could serve as a signpost for further exploration.
  • Matt Beard's AI Doom and David Hume: A Defence of Empiricism in AI Safety (6 comments)
    • On reasoning described by Tom Davidson: "This is significantly closer to Descartes’ meditative contemplation than Hume’s empiricist critique of the limits of reason. Davidson literally describes someone thinking in isolation based on limited data. The assumption is that knowledge of future AI capabilities can be usefully derived through reason, which I think we should challenge."
    • Beard doesn't mean to pick on Davidson, but I really think his methods deserve more skepticism. Even before specific critiques, I'm generally pessimistic about how informative models like Davidson's can be. I was also very surprised by some of his comments on the 80,000 Hours podcast (including those highlighted by Beard). Otherwise, Beard's recommendations are pretty vague but agreeable.

I see, thanks! Section 8.2, "Gray Dust":

The requirement for elements that are relatively rare in the atmosphere greatly constrains the potential nanomass and growth rate of airborne replicators. However, note that at least one of the classical designs exceeds 91% CHON by weight. Although it would be very difficult, it is at least theoretically possible that replicators could be constructed almost solely of CHON, in which case such devices could replicate relatively rapidly using only atmospheric resources, powered by sunlight.

I do think "diamondoid bacteria, that replicate with solar power and atmospheric CHON" from List of Lethalities is original to Eliezer. He's previously cited Nanomedicine in this context, but the parts published online so far don't describe self-replicating systems.

Edit: This is wrong—see Lumpyproletariat below.

With all materials available, my credence is very likely (above 95%) that something self-replicating that is more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.

Much of the (purported) advantage of diamondoid mechanisms is that they're (meant to be) stiff enough to operate deterministically with atomic precision. Without that, you're likely to end up much closer to biological systems—transport is more diffusive, the success of any step is probabilistic, and you need a whole ecosystem of mechanisms for repair and recycling (meaning the design problem isn't necessarily easier). For anything that doesn't specifically need self-replication for some reason, it'll be hard to beat (e.g.) flow reactors.
