1918 karma · Joined Dec 2020 · Working (6–15 years) · Berlin


Though I'm employed by Rethink Priorities, anything I write here is written purely on my own behalf (unless otherwise noted).


I would be excited to see a debate series on the meat eater problem. It is weird to me that there's not more discussion around this in EA, since it (a) seems far from settled, and (b) could plausibly imply that one of the core strands of EA -- global health and development -- is ultimately net negative.

Great post! I think this is a valuable introduction to an uncertain and rapidly developing situation.

Taiwan Semiconductor Manufacturing Company (TSMC) is responsible for over 90% of the manufacturing of the most advanced chips; South Korea makes the rest. ASML (based in the Netherlands) is the only company that makes the machinery needed for this process. And Japan controls photolithography, which draws circuit patterns on layers of silicon used in a chip. 

This seems a bit off. It seems to imply that ASML makes all or most of the machinery in the manufacturing process, and/or that ASML is the only company in the space. I think it would be more correct to say that ASML is the only company that makes the most advanced photolithography machines, and that photolithography is a key and necessary part of the chip fabrication process. (Other photolithography manufacturers -- Nikon (Japan), Canon (Japan), and SMEE (China) -- cannot produce EUV photolithography machines, and also seem to produce substantially worse machines than ASML of older types. So it is true that ASML is broadly dominant in photolithography as a whole.)

Similarly, it is misleading/ambiguous to say that Japan controls photolithography -- perhaps the sentence is meant to say that Japan controls some photolithography materials, like photoresists?

Some weak (not necessarily endorsed) suggestions:

  • Jonathan Baron about rational thinking and decision making
  • James Gleick about the history of information and information theory
  • Victoria Krakovna about AI alignment
  • Melanie Mitchell about complex systems and AI
  • Harold James about economic crises
  • Gregory Allen about US export controls
  • David Reich about population genetics
  • Kathryn Paige Harden about behaviour genetics
  • Larry Temkin about global aid and EA skepticism
  • Johan Norberg about global capitalism and liberalism
  • Elizabeth Barnes about critical disability theory and the use of DALYs
  • Chris Miller about semiconductors
  • Nate Silver about forecasting and media
  • Julia Wise about early EA and community health

Using "preoccupied" feels a bit strawmanny here. People using this situation as a way to enforce general conservatism in a naive way was one of the top concerns that kept coming up when I talked to Ben about the post and investigation.

The post has a lot of details that should allow people to make a more detailed model than "weird is bad", but I don't think it would be better for it to take a stronger stance on the causes of the problems that it's providing evidence for, since getting the facts out is IMO more important.

Still, the most upvoted comment on this post does seem to push in the direction of "weird is bad":

This situation reminded me of this post, EA's weirdness makes it unusually susceptible to bad behavior. Regardless of whether you believe Chloe and Alice's allegations (which I do), it's hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don't live together, travel the world together, and become romantically entangled).

Maybe a slightly better title for this post would be "There is little evidence on question decomposition"? Because the evidence against question decomposition seems just as weak as the evidence for it (based on your source).

I think a nice (maybe better) heuristic is "Do you want to see more/less of this type of post/comment on the Forum?"

Thanks! I've edited my original comment to point to your responses.

I’ve actually been working on a more complete list of all the projects we’ve funded and incubated! But have been very unproductive the last two months due to a combination of an extremely painful RSI and chronic nausea/gut issues. We changed our name from the Nonlinear Fund to Nonlinear. Kat made a basic list here:

Does this mean that what used to be the AI safety fund is no longer focused on AI safety? I am asking because the list on the Nonlinear website seems to have mostly assorted EA meta type projects, and you mention a name change.

Here is some additional possibly relevant information:

  • There was a New Yorker profile on Emerson in 2014, when he was working at Dose. Now, that was 9 years ago, and I think these things often paint an inaccurate picture of a person (though Emerson's website does lead with "Named 'The Virologist' by The New Yorker, Emerson Spartz is one of the world's leading experts on internet virality ...", so I guess he does not think the article was too bad). At any rate, the profile paints the picture of someone who seems to prioritise other things over epistemics and a healthy information ecosystem.
  • Nonlinear is or used to be a project of Spartz Philanthropies. According to the IRS website, Spartz Philanthropies had its 501(c)(3) status revoked in 2021 since it had not filed the necessary paperwork for three years straight. Now the Nonlinear website no longer mentions Spartz Philanthropies, and I am unsure whether Nonlinear is a tax-exempt nonprofit or what legal status it has. (ETA: Nonlinear, Inc. is a new 501(c)(3) -- see Drew's response below.)
  • Back in 2021, Nonlinear launched its AI safety fund with an announcement post which got some pushback/skepticism in the comments section. Does anyone know whether this fund has made any grants or seeded any new organisations? I have not managed to find any information about it on the Nonlinear website. (ETA: See Drew's response below.)

[Sunak] being from a different racial background doesn't influence at all the way he uses his power.

Not sure if I understand your point correctly, but I reckon you don't need to think Sunak has a "take me in, but don't let anyone enter after me" mindset to understand his policies. He is a conservative, and enacts conservative policies -- it seems like that is enough to explain it?

Or do you think that, even though he is a conservative, he as a person of Indian descent should understand that conservative policies are in fact harmful to minorities, and therefore not enact them?

I recently learned of this effort to model AI x-risk, which may be similar to the sort of thing you're looking for, though I don't think they actually put numbers on the parameters in their model, and they don't use any well-known formal method. Otherwise I suppose the closest thing is the Carlsmith report, which is a probabilistic risk assessment, but again not using any formal method.
