Quick takes

Some women on the Facebook support group "Cluster Headache Patients" comparing labor pain to cluster headache pain:

* "Honestly, I had a natural childbirth and a cesarean and cluster headaches are 10 times worse than both."
* "2 unmedicated births for me. Would rather do that every day than have another cluster"
* "every day though, really?"
* "yes. I'd rather go through childbirth without pain relief than CH."
* "tenfold worse than popping a baby out"
* "Nah, labour/giving birth is a walk in the park compared to ch […] I was in labour with my son for nearly 3 days, then the midwife had to break my cervix with her hands, but I'd still rather do that again than have another CH"
* "Labor pain doesn’t even come close to CH! I’d choose labor pain ANY day over suffering from another CH"
* "CH is a million times worse"
* "I had 4 children, 3 were natural. CH is worse."
* "I'd rather have a baby. And my placenta tore during all natural childbirth."
* "I gave birth to 4 different babies. The smallest being 8lbs 14oz. The biggest being 10lbs 15oz. I would much rather give natural birth all over again than a CH."
* "I've had three babies—one was overdue and born with his arm over his head. Having a baby is still cake compared to clusters."

These are just a few. They go on and on. (So far only one woman claiming childbirth was worse, who "nearly bled out in childbirth, got an episiotomy with zero freezing/drugs.")
Just noticed that I tend to up/downvote and agree/disagree vote more or less depending on the current vote count. Standard herding bias at work. Hoping that saying it out loud will weaken it, and maybe other people can relate.
Linch · 18h
Many people hold up 'AI As Normal Technology' as a reasonable "normal-people" case against the doomer position. I actually think it's wrong in a number of ways and falls flat on its own terms. I believe this for reasons mostly orthogonal to being a doomer (except insofar as being a doomer makes me more interested in thinking about AI). If anybody here is interested in fighting the good fight, it might be valuable to do an Andy Masley-style annihilation of the AI As Normal Technology position, sticking to minimally controversial arguments and demolishing their case with reference to obvious empirical and logical points. I suspect it won't be very hard. E.g., here are a few obvious reasons they fail:

1. Their central empirical mechanism is already wrong: their story is that AI diffusion will be slow because this was the path of previous technologies like electricity, but consumer and developer adoption of LLMs has been faster than essentially any technology in history.
2. They completely ignore that AI will obviously do a ton to assist in its own diffusion: even if I take their argument that diffusion is what matters and rule out a software-only singularity by fiat, I still don't think I or anybody else should buy their causal mechanisms. The single most obvious way in which AI diffusion might differ from previous technological changes is, afaict, unaccounted for in their arguments, even if I presume a diffusion-first model.
3. The reference class is unargued and load-bearing: the whole thesis rests on AI being like electricity or the internet (decades of diffusion) rather than like smartphones, SaaS, or cloud (years).
4. They have no framework that can engage software-only-singularity-style arguments. Their entire ontology is built around physical-world deployment friction. This practically assumes the conclusion!
5. The position is self-undermining for their vibes if you take it literally. 1) If AI really is like electricity, the
Sharing an overview of the CEA Online Team's Q2 OKRs (we run this Forum)
Here are some bullet points of reflection topics around lifestyle and priorities for EAs that I shared with some fellow EAs some months ago. I am sharing this text here in case it interests anyone. I will elaborate and expand on them later if I have the opportunity.

"""
Support systems: Seriously. I didn't even know this term until after all this happened, and it would have changed everything. There's something about how people are instructed in STEM institutions (and, as a consequence, many EA institutions) that makes it all about careers, as if one's impact were fully captured by their public professional life. And then it turns out that in reality a lot of the most publicly impactful people have incredibly beautiful family and fraternity systems that were at the core of everything they've done, and that never get talked about. Too many yang, public, external, wikipedia-worthy archetypes of impact. It would be really awesome if every youngling EA-in-training knew that having strong and abundant support systems, investing in true family and friends, investing in intimacy, figuring out relationships, being connected to non-EAs... that this sort of thing might be not a distraction from impact but a foundation for it.

Something something about impact theory: I don't know, there's something about EA theory where it wants to be really convincing that being an EA is the most important thing to do, but somewhere in all the moral arguments it takes way too many shortcuts. By taking shortcuts to force it to be the case that being an EA is the right moral thing to do, you are forced to ignore and push under the rug all forms of impact that don't currently fit well into EA career stories and don't have a legible trace of impact connecting them to an EA. I don't really know how to solve this. If I were to give any pointers, here's what first comes to mind:

-- Legibility: there's a serious expectation that impact has to be legible. This is baked into the EA fo