This is a special post for quick takes by Ben Stewart. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

On the recent post on Manifest, there's been another instance of a large voting group (30-40ish) arriving and downvoting any progressive-valenced comments (there were upvotes and downvotes prior to this, but in a more stochastic pattern). This is similar to what occurred with the eugenics-related posts last year. Wanted to flag it to give later readers a picture of the dynamics at play.

Manifold openly offered to fund voting rings in their Discord:

I would be surprised if it's 30-40 people. My guess is it's more like 5-6 people with reasonably high vote-strengths. Also, I highly doubt that the overall bias of the conversation here leans towards progressive-valenced comments being suppressed. EA is overwhelmingly progressive and has a pretty obvious anti-right bias (which, I don't know, I am sympathetic to, but I feel like a warning in the opposite direction would be more appropriate).

My wording was imprecise - I meant 30-40ish in terms of karma. I agree the number of people is more likely to be 5-15. And my point is less about overall bias than about a particular voting dynamic: at first, upvotes and downvotes arriving in a fairly typical pattern, then a large and sudden influx of downvotes on everything from a particular camp.

I really enjoyed this 2022 paper by Rose Cao ("Multiple realizability and the spirit of functionalism"). A common intuition is that the brain is basically a big network of neurons with input on one side and all-or-nothing output on the other, and the rest of it (glia, metabolism, blood) is mainly keeping that network running. 
The paper's helpful for articulating how that model's impoverished, and argues that the right level for explaining brain activity (and resulting psychological states) might rely on the messy, complex, biological details, such that non-biological substrates for consciousness are implausible. (Some of those details: spatial and temporal determinants of activity, chemical transducers and signals beyond excitation/inhibition, self-modification, plasticity, glia, functional meshing with the physical body, multiplexed functions, generative entrenchment.)
The argument doesn't necessarily oppose functionalism, but I think it's a healthy challenge to my previous confidence in multiple realisability within plausible limits of size, speed, and substrate. It also usefully points to just how different artificial neural networks are from biological brains. This strengthens my feeling of the alien-ness of AI models, and updates me towards greater scepticism of digital sentience.
I think the paper's a wonderful example of marrying deeply engaged philosophy with empirical reality.
