Quick takes

Cool news: Jesse Eisenberg donated a kidney to a stranger, and said he did so after hearing a podcast on 'effective altruism' that discussed kidney donations. He mentioned it in this podcast. I assume this traces back to Dylan Matthews.
I just loved this from @Kelsey Piper on Twitter 🥺🥺 it's so true and I never appreciated it before EA. I really appreciate you all 🙏🏻 https://x.com/KelseyTuoc/status/2031989126522945761?s=20

My ancestors buried half their children. All mine are alive. My ancestors' house had a dirt floor. Mine is wood. I have indoor plumbing, I have hot water, I have never in my life hauled a full bucket half a mile and I probably never will. Do you know how rare it is, in human history, for small children to wear shoes? Mine have multiple pairs. I can speak to my relatives who live thousands of miles away, for free, at any time. Video, if we want video. With machine translation, if we speak different languages. The original Library of Congress had 740 books in it. I have more than that. If I run out of books in my home my local public library has 350,000. If I want to take a hundred books with me on vacation, they all fit on a device that fits in my purse.

I have heat in the winter and AC in the summer and a washing machine and I have never, ever, ever had to scrub a dress clean by hand in the stream. I can look up recipes from more than a hundred different countries and I've tried dozens of them. I ride a clean and modern train across my city for $4, or take a robot taxi if I'm out too late for the train. I donate $40,000 every year to the cause of getting healthcare to the world's poorest people and even after the donations I never have to think about whether I can afford a book, or a pair of shoes, or a cup of coffee.

There is a great deal more to fight for, of course. I hope that our descendants will look back on our lives and list a thousand ways they're richer. Maybe we ourselves will do that, if some of the crazier stuff comes true.

But the abundance is all around you and to a significant degree you aren't feeling it only because fish don't notice water.
Linch
How I orient towards thinking about AI persuasion and superpersuasion: Most people I talk to about superpersuasion from advanced AI seem quite confident that it either will or won't be a major problem. My guess is that this confidence is significantly misplaced, and comes down to a failure of imagination.

Skeptics on AI persuasion argue that humans have long evolved both to persuade and to be resilient to external influence, that we've long had propaganda, that we've long had ads, that challenges from persuasive AI won't be qualitatively different from other technological transitions (broadcast television, the internet, social media, and so forth), and that people are stubborn and aren't really liable to be persuaded by arguments anyway.

These abstract arguments may well all be true, but I think there's a missing mood: skeptics implicitly tend to have a very specific picture in mind when they think about "AI persuasion." They imagine a chatbot making a single argument in a single session for a specific viewpoint, or a single AI-generated ad on TV, and correctly note that this doesn't seem very scary. You can just close the tab. People just aren't that gullible.

Or sometimes, when anchored on the term "superpersuasion", people imagine heroic powers ascribed to AI in a specific situation, and assume that specific situation is implausible. E.g., they point out that in a few sentences of text, an AI probably can't convince you to kill your family, or otherwise take actions that immediately and "obviously" betray your well-defined interests. But real-world persuasion doesn't follow our narrowly carved categories, and AI-powered persuasion will follow them even less.

The r/ChangeMyView experiments using GPT-4o were instructive for me. The bots were ~98th percentile in persuasiveness, but that's the least interesting update for me: a bigger update is how easily they lied, including "AI pretending to be a victim of rape, AI acting as a trauma counselor specializing in abuse…
I'm pledging[1] to stop[2] saving[3] additional[4] money[5] & donate instead. Fine print:

[1] This pledge is only good until 2030 unless renewed, and becomes invalid if I start working at a nonprofit.
[2] I'm still allowed to max out my 401k, partly since I have a 50% match there.
[3] Spending money is fine. I only spend 5% of my gross, so that isn't the problem.
[4] I'm allowed to keep up with inflation, should the stock market not already do so.
[5] I'm allowed to keep saving illiquid equity, although I am encouraged to liquidate to the extent feasible to align with the spirit of the pledge.
Linch
Excerpts from research notes on AI persuasion/AI superpersuasion

Are cognitive exploits a thing? Cognitive exploits are an as-yet theoretical mechanism whereby relatively short strings or sensory inputs can one-shot someone and cause them to take almost arbitrary actions. In the earlier taxonomy, this is like "content-agnostic persuasion" on steroids, since it really doesn't care about the content of the message at all.

Put another way, cognitive exploits are specific attacks on human neurology akin to adversarial examples or jailbreaks in ML. In yet another sense, they're meaningfully qualitatively rather than quantitatively different from all prior examples of human persuasion[1], since they skip directly past usual cognitive and emotional defenses. It's hard to know what to defend against, since we have not (afaik) ever experienced one to date.

Do humans actually have such cognitive exploits, and if so, are we likely to find them before ASI? I hope not. It seems bad if they're real! I also think not, but I don't know for sure. My best guess is that we probably can't "stumble" onto a cognitive exploit via normal human thinking, experimentation, and exploration, including "normal" persuasion-style exploration and experimentation. My guess is that this remains true even with AI making human-level cognitive labor substantially cheaper (in a way that's not true of other persuasion-related worries). So it's probably safe.

My best guess for how to find a cognitive exploit (if they're real) before ASI is doing something similar to the whitebox/gradient attacks we use to find AI adversarial examples, but on human neurology, likely in simulation. AFAIK this is not doable with current science, but I find it plausible that it could be achieved with dedicated effort before full ASI. But testing this ("capability elicitations"/"gain of function") seems like a bad idea, since by default I don't think companies or (probably) governments are incentivized…
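For readers unfamiliar with the ML side of the analogy, here is a minimal sketch of what a whitebox gradient attack looks like in the setting it comes from (an FGSM-style attack on a toy linear model). Everything in it is illustrative and not from the notes: the model, weights, and epsilon budget are made up.

```python
# Toy FGSM-style whitebox attack: with gradient access, an attacker can
# craft a tiny, bounded input perturbation that flips a model's decision.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def logit(w, b, x):
    """Linear model output: w . x + b (decision = sign of logit)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_attack(w, x, epsilon):
    """For a linear model, d(logit)/dx_i = w_i, so stepping each input
    coordinate by epsilon * sign(w_i) maximally increases the logit
    under an L-infinity budget of epsilon."""
    return [xi + epsilon * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -1.0, 2.0]   # made-up weights
b = -0.1
x = [0.2, 0.3, -0.05]  # benign input

before = logit(w, b, x)          # -0.4: classified "negative"
x_adv = fgsm_attack(w, x, 0.3)   # each coordinate moved by at most 0.3
after = logit(w, b, x_adv)       # 0.65: decision flips to "positive"
```

The key requirement is whitebox (gradient) access to the model, which is what makes the analogy in the note require direct access to neurology, "likely in simulation", rather than trial-and-error persuasion attempts.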