All of Linch's Comments + Replies

This seems like a pretty unlikely fallacy, but I agree it's theoretically possible (and occasionally happens in practice).

The difference between 0 and 1 is significant! And it's very valuable to figure out when the transition point happens, if you can.

PSA: regression to the mean/mean reversion is a statistical artifact, not a causal mechanism.

So mean regression says that children of tall parents are likely to be shorter than their parents, but it also says parents of tall children are likely to be shorter than their children.

Put in a different way, mean regression goes in both directions. 

This is well-understood enough here in principle, but imo enough people get this wrong in practice that the PSA is worthwhile nonetheless.
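To make the "both directions" point concrete, here is a minimal simulation sketch (heights, spreads, and the 180 cm cutoff are all made-up numbers; parent and child simply share a component, with no causal arrow in either direction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Toy model: parent and child heights share one component plus independent noise,
# so the parent-child correlation is below 1.
shared = rng.normal(0, 6, n)
parent = 170 + shared + rng.normal(0, 4, n)
child = 170 + shared + rng.normal(0, 4, n)

tall_parents = parent > 180
tall_children = child > 180
# Children of tall parents are, on average, shorter than those parents...
print(child[tall_parents].mean(), "<", parent[tall_parents].mean())
# ...and parents of tall children are, on average, shorter than those children.
print(parent[tall_children].mean(), "<", child[tall_children].mean())
```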

8
Mo Putera
Nice post on this, with code: https://acastroaraujo.github.io/blog/posts/2022-01-01-regression-to-the-mean/index.html  Andres pointed out a sad corollary downstream of people's misinterpretation of regression to the mean as indicating causality when there might be none. From Tversky & Kahneman (1982) via Andrew Gelman:

I think something a lot of people miss about the “short-term chartist position” (these trends have continued until time t, so I should expect them to continue to time t+1) for an exponential that’s actually a sigmoid is that if you keep holding it, you’ll eventually be wrong exactly once.

Whereas if someone is a “short-term chartist hater” (these trends always break, so I predict they’re going to break at time t+1) for an exponential that’s actually a sigmoid, then if they keep holding that position, they’ll eventually be correct exactly once.
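To make the counting concrete, here is a toy sketch in which a series that doubles for ten periods and then plateaus stands in for "an exponential that's actually a sigmoid" (all numbers made up):

```python
# The chartist always predicts the most recent growth pattern continues;
# the chartist hater always predicts it breaks at the next step.
series = [2 ** t for t in range(10)] + [2 ** 9] * 10

chartist_wrong = 0
hater_right = 0
for t in range(1, len(series) - 1):
    continues = series[t + 1] / series[t] == series[t] / series[t - 1]
    chartist_wrong += not continues
    hater_right += not continues
print(chartist_wrong, hater_right)  # 1 1: each is wrong/right exactly once, at the bend
```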

Now of course most chartists (my... (read more)

2
titotal
A fallacy that can come out of this dynamic is for someone to notice that the "trend continues" people have been right almost all the time, and the "trend is going to stop soon" people are continuously wrong, and to therefore conclude that the trend will continue forever. 

Also seems a bit misleading to count something like "one afternoon in Vietnam" or "first day at a new job" as a single data point when it's hundreds of them bundled together?

From an information-theoretic perspective, people almost never refer to a single data point as strictly just one bit, so whether you are counting only one float in a database, a whole row in a structured database, or even a whole conversation, we're sort of negotiating price.

I think the "alien seeing a car" example makes the case somewhat clearer. If you already have a deep model of ... (read more)

"Most people make the mistake of generalizing from a single data point. Or at least, I do." - SA

When can you learn a lot from one data point? People, especially stats- or science-brained people, are often confused about this, and frequently give answers that (imo) are the opposite of useful. Eg they say that usually you can’t know much, but if you know a lot about the meta-structure of your distribution (eg you’re interested in the mean of a distribution with low variance), sometimes a single data point can be a significant update.
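For concreteness, here is a minimal sketch of the "stats-brained" answer described above (whatever one makes of its usefulness): a standard normal-normal conjugate update with a broad prior on the mean and a single observation from a known low-variance process. All numbers are made up.

```python
prior_mean, prior_sd = 0.0, 10.0   # broad prior on the unknown mean
obs, obs_sd = 3.2, 0.5             # one data point from a low-variance process

# Conjugate normal-normal update for the mean.
post_var = 1 / (1 / prior_sd**2 + 1 / obs_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + obs / obs_sd**2)
print(round(post_mean, 3), round(post_var**0.5, 3))
# ~3.192 and ~0.499: one observation moved the estimate nearly all the way to 3.2
# and shrank the uncertainty from 10 to about 0.5.
```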

This type of limited conc... (read more)

4
Mo Putera
Seems you and Spencer Greenberg (whose piece you linked to) are talking past each other because you both disagree on what the interesting epistemic question is and/or are just writing for different audiences?
* Spencer is asking "When can a single observation justify a strong inference about a general claim?" which is about de-risking overgeneralisation, a fair thing to focus on since many people generalise too readily
* You're asking "When does a single observation maximally reduce your uncertainty?" which is about information-theoretic value, which (like you said) is moreso aimed towards the "stats-brained"
Also seems a bit misleading to count something like "one afternoon in Vietnam" or "first day at a new job" as a single data point when it's hundreds of them bundled together? Spencer's examples seem to lean more towards actual single data points (if not all the way). And Spencer's 4th example on how one data point can sometimes unlock a whole bunch of other data points by triggering a figure-ground inversion that then causes a reconsideration of your view seems perfectly aligned with Hubbard's point. That said I do think the point you're making is the more practically useful one, I guess I'm just nitpicking.

The significance, as I read it, is that you can now trust Claude roughly like a reasonable colleague for spotting such mistakes, both in your own drafts and in texts you rely on at work or in life.

I wouldn't go quite this far, at least from my comment. There's a saying in startups, "never outsource your core competency", and unfortunately reading blog posts and spotting conceptual errors of a certain form is a core competency of mine. Nonetheless I'd encourage other Forum users less good at spotting errors (which is most people) to try to do something like... (read more)

Recent generations of Claude seem better at understanding blog posts and making fairly subtle judgment calls than most smart humans. These days when I read an article that presumably sounds reasonable to most people but has what seems to me to be a glaring conceptual mistake, I can put it in Claude, ask it to identify the mistake, and more likely than not Claude will land on the same mistake as the one I identified.

I think before Opus 4 this was essentially impossible, Claude 3.xs can sometimes identify small errors but it’s a crapshoot on whether it ca... (read more)

2
Thomas Kwa🔹
what prompt did you use?
9
Linch
EDIT: I noticed that in my examples I primed Claude a little, and when unprimed Claude does not reliably (or usually) get to the answer. However Claude 4.xs are still noticeable in how little handholding they need for this class of conceptual errors, Geminis often takes like 5 hints where Claude usually gets it with one. And my impression was that Claude 3.xs were kinda hopeless (they often don't get it even with short explanations by me, and when they do, I'm not confident they actually got it vs just wanted to agree).
2
Benevolent_Rain
This resonates a lot. I’m keen to connect with others who are actively thinking about when it becomes justified to hand off specific parts of their work to AI. Reading this, it seems like the key discovery wasn’t “Claude is good at critique in general,” but that a particular epistemic function — identifying important conceptual mistakes in a text — crossed a reliability threshold. The significance, as I read it, is that you can now trust Claude roughly like a reasonable colleague for spotting such mistakes, both in your own drafts and in texts you rely on at work or in life. I’m interested in concrete ways people are structuring this kind of exploration in practice: choosing which tasks to stress-test for delegation, running those tests cheaply and repeatably, and deciding when a workflow change is actually warranted rather than premature. My aim is simple: produce higher-quality output more quickly without giving up epistemic control. If others are running similar experiments, have heuristics for this, or want to collaborate on lightweight evaluation approaches, I’d be keen to compare notes.

The dynamics you discuss here follow pretty intuitively from the basic conflict/mistake paradigm.

I think it's very easy to believe that the natural extension of the conflict/mistake paradigm is that policy fights are composed of a linear combination of the two. Schelling's "rudimentary/obvious" idea, for example, that conflict and cooperation are often structurally inseparable, is a more subtle and powerful reorientation than it first seems.

But this is a hard point to discuss (because it's in the structure of an "unknown known"), and I didn't interview... (read more)

I like Scott's Mistake Theory vs Conflict Theory framing, but I don't think this is a complete model of disagreements about policy, nor do I think the complete models of disagreement will look like more advanced versions of Mistake Theory + Conflict Theory. 

To recap, here are my short summaries of the two theories:

Mistake Theory: I disagree with you because one or both of us are wrong about what we want, or about how to achieve what we want.

Conflict Theory: I disagree with you because ultimately I want different things from you. The Marxists, who Scott was or... (read more)

4
Mjreard
I'll need to reread Scott's post to see how reductive it is,[1] but negotiation and motivated cognition here do feel like a slightly lower level of abstraction in the sense that they are composed of different kinds of (and proportions of) conflicts and mistakes. The dynamics you discuss here follow pretty intuitively from the basic conflict/mistake paradigm. This is still great analysis and a useful addendum to Scott's post. 1. ^ actually pretty reductive on a skim, but he does have a savings clause at the end: "But obviously both can be true in parts and reality can be way more complicated than either."

Good idea, I reposted the article itself here: https://forum.effectivealtruism.org/posts/GyenLpfzRKK3wBPyA/the-simple-case-for-ai-catastrophe-in-four-steps 

I've been trying to keep the "meta" and the main posts mostly separate so hopefully the discussions for the metas and the main posts aren't as close together.

I've now written it here, thanks for all the feedback! :) https://linch.substack.com/p/simplest-case-ai-catastrophe

The bets I've seen you post seem rather disadvantageous to the other side, and I believed so at the time. Which is fine/good business from your perspective given that you managed to find takers. But it means I'm more pessimistic on finding good deals by both of our lights.

Hmm right now this seems wrong to me, and also not worth going into in an introductory post. Do you have a sense that your view is commonplace? (eg from talking to many people not involved in AI)

Here's my current four-point argument for AI risk/danger from misaligned AIs. 

  • We are on the path of creating intelligences capable of being better than humans at almost all economically and militarily relevant tasks.
  • There are strong selection pressures and trends to make these intelligences into goal-seeking minds acting in the real world, rather than disembodied high-IQ pattern-matchers.
  • Unlike traditional software, we have little ability to know or control what these goal-seeking minds will do, only directional input.
  • Minds much better than humans at
... (read more)
2
Vasco Grilo🔸
Hi Linch. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that is good for both of us under our own views, considering we could invest our money and that you could take loans?
2
Michael St Jules 🔸
Here are a few things you might need to address to convince a skeptic:
1. Humans currently have access to, maintain and can shut down or destroy the hardware and infrastructure AI depends on. This is an important advantage.
2. Ending us all can be risky from an AI's perspective, because of the risk of shutdown (or losing humans to maintain, extract resources for, and build infrastructure AI depends on without an adequate replacement).
3. I'd guess we can make AIs risk-averse (or difference-making risk averse) for whatever goals they do end up with, even if we can't align them.
4. Ending us all sounds hard and unlikely. There are many ways we are resilient and ways governments and militaries could respond to a threat of this level.
1
Ulf Graf 🔹
I think that your list is really great! As a person who tries to understand misaligned AI better, these are my arguments:
* The difference between a human and an AGI might be greater than the difference between a human and a mushroom.
* If the difference is that great, it will probably not see much difference between a cow and a human. The way humans treat other animals, the planet and each other makes it hard to see how we could possibly create AI alignment that is willing to save a creature like us.
* If AGI has self-preservation, we are the only creatures that can threaten their existence. Which means that they might want to make sure that we didn't exist anymore, just to be safe.
* AGI is a thing that we know nothing about. If it came in a spaceship with aliens, we would probably use enormous resources to make sure it would not threaten our planet. But now we are creating this alien creature ourselves and don't do very much to make sure it isn't a threat to our planet.
I hope my list helps!
2
Charlie_Guthmann
2 thoughts here, just thinking about persuasiveness. I'm not quite sure what you mean by normal people, and also whether you still want your arguments to be actual arguments or just persuasion-max.
* Show don't tell for 1-3
  * For anyone who hasn't intimately used frontier models but is willing to with an open mind, I'd guess you should just push them to use and actually engage mentally with them and their thought traces, even better if you can convince them to use something agentic like CC.
* Ask and/or tell stories for 4
  * What can history tell us about what happens when a significantly more tech-savvy/powerful nation finds another one?
    * No "right" answer here, though the general arc of history is that significantly more powerful nations capture/kill/etc.
  * What would it be like to be a native during various European conquests in the New World (esp ignoring effects of smallpox/disease to the extent you can)?
    * Incan perspective? Mayan?
    * I especially like Orellana's first expedition down the Amazon. As far as I can tell, Orellana was not especially bloodthirsty, and had some interest/respect for natives. Though he is certainly misaligned with the natives.
    * Even if Orellana is “less bloodthirsty,” you still don’t want to be a native on that river. You hear fragmented rumors—trade, disease, violence—with no shared narrative; you don’t know what these outsiders want or what their weapons do; you don’t know whether letting them land changes the local equilibrium by enabling alliances with your enemies; and you don’t know whether the boat carries Orellana or someone worse.
    * Do you trade? Attack? Flee? Coordinate? Any move could be fatal, and the entire situation destabilizes before anyone has to decide “we should exterminate them.”
  * And for all of these situations you can actually see what happened (approximately), and usually it doesn't end well.
  * Why is AI different?
    * Not rhetorical, and gives them space to think in

I have many disagreements, but I'll focus on one: I think point 2 is in contradiction with points 3 and 4. To put it plainly: the "selection pressures" go away pretty quickly if we don't have reliable methods of knowing or controlling what the AI will do, or preventing it from doing noticeably bad stuff. That applies to the obvious stuff like if AI tries to prematurely go skynet, but it also applies to more mundane stuff like getting an AI to act reliably more than 99% of the time.

I believe that if we manage to control AI enough to make widespread rollout feasible, then it's pretty likely we've already solved alignment well enough to prevent extinction. 

What are people's favorite arguments/articles/essays trying to lay out the simplest possible case for AI risk/danger?

Every single argument for AI danger/risk/safety I’ve seen seems to overcomplicate things. Either they have too many extraneous details, or they appeal to overly complex analogies, or they seem to spend much of their time responding to insider debates.

I might want to try my hand at writing the simplest possible argument that is still rigorous and clear, without being trapped by common pitfalls. To do that, I want to quickly survey the field so I can learn from the best existing work as well as avoid the mistakes they make.

2
Linch
I've now written it here, thanks for all the feedback! :) https://linch.substack.com/p/simplest-case-ai-catastrophe
2
Will Aldred
my fave is @Duncan Sabien’s ‘Deadly by Default’
1
Jordan Arel
Max Tegmark explains it best, I think. Very clear and compelling, and you don’t need any technical background to understand what he’s saying. I believe it was his third (or maybe second) appearance on Lex Fridman’s podcast where I first heard his strongest arguments; although those episodes are quite long with extraneous content, here is a version that is just the arguments. His solutions are somewhat specific, but overall his explanation is very good I think:

I often see people advocate others sacrifice their souls. People often justify lying, political violence, coverups of “your side’s” crimes and misdeeds, or professional misconduct of government officials and journalists, because their cause is sufficiently True and Just. I’m overall skeptical of this entire class of arguments.

This is not because I intrinsically value “clean hands” or seeming good over actual good outcomes. Nor is it because I have a sort of magical thinking common in movies, where things miraculously work out well if you just ignore tradeo... (read more)

Thanks! I agree the math isn't exactly right. The point about x^2 on the rationals is especially sharp.

The problem with calling it "the paradox of the heap" is that it makes it sound like an actual paradox, instead of a trivially easy connection re: tipping points. I wish I had a better term/phrase for the connection I want to make.

Happy holidays to you too.

I think your comment largely addresses a version of the post that doesn't exist. 

In brief:

I don't think I claimed novelty; the post is explicitly about existing concepts that seem obvious once you have them. I even used specific commonly known terms for them. 

Theory of mind, mentalization, cognitive empathy, and perspective taking are, of course, not actually "rare" but are what almost all people are doing almost all the time. The interesting question is what kinds of failures you think are common. The more opinionated y

... (read more)
-19
Yarrow Bouchard 🔸

I had the same initial reaction! I'd guess others would have the same misreading too, so it's worth rewriting. fyi @Yulia Chekhovska 

1
Yulia Chekhovska
Thank you! I will correct it.

For Inkhaven, I wrote 30 posts in 30 days. Most of them are not particularly related to EA, though a few of them were. I recently wrote some reflections. @Vasco Grilo🔸 thought it might be a good idea to share on the EA Forum; I don't want to be too self-promotional so I'm splitting the difference and posting just a shortform link here:

https://linch.substack.com/p/30-posts-in-30-days 

The most EA-relevant posts are probably

https://inchpin.substack.com/p/skip-phase-3

https://inchpin.substack.com/p/aging-has-no-root-cause

https://inchpin.substack.com/p/leg... (read more)

There are a number of implicit concepts I have in my head that seem so obvious that I don't even bother verbalizing them. At least, until it's brought to my attention other people don't share these concepts.

It didn't feel like a big revelation at the time I learned the concept, just a formalization of something that's extremely obvious. And yet other people don't have those intuitions, so perhaps this is pretty non-obvious in reality.

Here’s a short, non-exhaustive list:

  • Intermediate Value Theorem
  • Net Present Value
  • Differentiable functions are locally linear
  • Th
... (read more)
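As an illustration of the Net Present Value item in the list above, here is a minimal sketch (the 5% discount rate and the cash-flow stream are made-up numbers):

```python
# Net Present Value: discount each future cash flow back to today and sum.
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Pay 1000 now, receive 300 at the end of each of the next four years.
print(round(npv(0.05, [-1000, 300, 300, 300, 300]), 1))  # ~63.8, so it beats the 5% alternative
```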

Thanks, I find the polls to be much stronger evidence than the other things you've said.

2
Yarrow Bouchard 🔸
I recommend looking at the Morning Consult PDF and checking the different variations of the question to get a fuller picture. People also gave surprisingly high answers for other viruses like Ebola and Zika, but not nearly as high as for covid.

My overall objection/argument is that you appear to selectively portray data points that show one side, and selectively dismiss data points that show the opposite view. This makes your bottom-line conclusion pretty suspicious. 

I also think the rationalist community overreached and their epistemics and speed in early COVID were worse compared to, say, internet people, government officials, and perhaps even the general public in Taiwan. But I don't think the case for them being slower than Western officials or the general public in either the US or Europe is credible, and your evidence here does not update me much.

4
Yarrow Bouchard 🔸
Let's look at the data a bit more thoroughly. It's clear that in late January 2020, many people in North America were at least moderately concerned about covid-19. I already gave the example of some stores in a few cities selling out of face masks. That's anecdotal, but a sign of enough fear among enough people to be noteworthy.

What about the U.S. government's reaction? The CDC issued a warning about travelling to China on January 28, and on January 31, the U.S. federal government declared a public health emergency, implemented a mandatory 14-day quarantine for travelers returning from China, and implemented other travel restrictions. Both the CDC warning and the travel restrictions were covered in the press, so many people knew about it, but even before that happened, a lot of people said they were worried. Here's a Morning Consult poll from January 24-26, 2020:

An Ipsos poll of Canadians from January 27-28 found similar results:

Were significantly more than 37% of LessWrong users very concerned about covid-19 around this time? Did significantly more than 16% think covid-19 posed a threat to themselves and their family? It's hard to make direct, apples-to-apples comparisons between the general public and the LessWrong community. We don't have polls of the LessWrong community to compare to. But those examples you gave from January 24-January 27, 2020 don't seem different from what we'd expect if the LessWrong community was at about the same level of concern at about the same time as the general public. Even if the examples you gave represented the worries of ~15-40% of the LessWrong community, that wouldn't be evidence that LessWrong users were doing better than average.

I'm not claiming that the LessWrong community was clearly significantly behind. If it was behind at all, it was only by a few days or maybe a week tops (not much in the grand scheme of things), and the evidence isn't clear or rigorous enough to definitively draw a conclusion like that. My cla

Why does this not apply to your original point citing a single NYT article?

0
Yarrow Bouchard 🔸
It might, but I cited a number of data points to try to give an overall picture. What's your specific objection/argument?

See eg traviswfisher's prediction on Jan 24:

https://x.com/metaculus/status/1248966351508692992 

Or this post on this very forum from Jan 26:

https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-2019-novel-coronavirus-outbreak 

I wrote this comment on Jan 27, indicating that it's not just a few people worried at the time. I think most "normal" people weren't tracking covid in January. 

I think the thing to realize/people easily forget is that everything was really confusing and there was just a ton of contentious deba... (read more)

6
Yarrow Bouchard 🔸
It would be easy to find a few examples like this from any large sample of people. As I mentioned in the quick take, in late January, people were clearing out stores of surgical masks in cities like New York. 

I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own!

I'm not going to pretend it has direct EA implications. But one thing that I've updated more towards in the last few years is how surprisingly limited and inefficient the information environment is. Like obvious concepts known to humanity for decades or centuries don't have clear explanations online, obvious and very important trends have very few people drawing attention to them, you can just write the best book review... (read more)

presupposes that EAs are wrong, or at least, merely luckily right

Right, to be clear I'm far from certain that the stereotypical "EA view" is right here. 

I guess really I was saying that "conditional on a sociological explanation being appropriate, I don't think it's as LW-driven as Yarrow thinks", although LW is undoubtedly important.

Sure that makes a lot of sense! I was mostly just using your comment to riff on a related concept. 

I think reality is often complicated and confusing, and it's hard to separate out contingency vs inevitable stories f... (read more)

2
Yarrow Bouchard 🔸
How many angels can dance on the head of a pin? An infinite number because angels have no spatial extension? Or maybe if we assume angels have a diameter of ~1 nanometre plus ~1 additional nanometre of diameter for clearance for dancing we can come up with a ballpark figure? Or, wait, are angels closer to human-sized? When bugs die do they turn into angels? What about bacteria? Can bacteria dance? Are angels beings who were formerly mortal, or were they "born” angels?[1]

Well, some of the graphs are just made-up, like those in "Situational Awareness", and some of the graphs are woefully misinterpreted to be about AGI when they’re clearly not, like the famous METR time horizon graph.[2] I imagine that a non-trivial amount of EA misjudgment around AGI results from a failure to correctly read and interpret graphs. And, of course, when people like titotal examine the math behind some of these graphs, like those in AI 2027, they are sometimes found to be riddled with major mistakes.

What I said elsewhere about AGI discourse in general is true about graphs in particular: the scientifically defensible claims are generally quite narrow, caveated, and conservative. The claims that are broad, unqualified, and bold are generally not scientifically defensible. People at METR themselves caveat the time horizons graph and note its narrow scope (I cited examples of this elsewhere in the comments on this post). Conversely, graphs that attempt to make a broad, unqualified, bold claim about AGI tend to be complete nonsense.

Out of curiosity, roughly what probability would you assign to there being an AI financial bubble that pops sometime within the next five years or so? If there is an AI bubble and if it popped, how would that affect your beliefs around near-term AGI?

1. ^ How is correctness physically instantiated in space and time and how does it physically cause physical events in the world, such as speaking, writing, brain activity, and so on? Is this an importan

eh, I think the main reason EAs believe AGI stuff is reasonably likely is because this opinion is correct, given the best available evidence[1]

Having a genealogical explanation here is sort of answering the question on the wrong meta-level, like giving a historical explanation for "why do evolutionists believe in genes" or telling a touching story about somebody's pet pig for "why do EAs care more about farmed animal welfare than tree welfare." 

Or upon hearing "why does Google use ads instead of subscriptions?" answering with the history of the... (read more)

2
Denkenberger🔸
Yeah, and there are lots of influences. I got into X risk in large part due to Ray Kurzweil's The Age of Spiritual Machines (1999) as it said "My own view is that a planet approaching its pivotal century of computational growth - as the Earth is today - has a better than even chance of making it through. But then I have always been accused of being an optimist."
3
David Mathers🔸
Yeah, it's a fair objection that even answering the why question like I did presupposes that EAs are wrong, or at least, merely luckily right. (I think this is a matter of degree, and that EAs overrated the imminence of AGI and the risk of takeover on average, but it's still at least reasonable to believe AI safety and governance work can have very high expected value for roughly the reasons EAs do.) But I was responding to Yarrow, who does think that EAs are just totally wrong, so I guess really I was saying that "conditional on a sociological explanation being appropriate, I don't think it's as LW-driven as Yarrow thinks", although LW is undoubtedly important.

  • Near-term AGI is highly unlikely, much less than a 0.05% chance in the next decade.

Is this something you're willing to bet on? 

6
Yarrow Bouchard 🔸
In principle, of course, but how? There are various practical obstacles such as:
* Are such bets legal?
* How do you compel people to pay up?
* Why would someone on the other side of the bet want to take it?
* I don’t have spare money to be throwing at Internet stunts where there’s a decent chance that, e.g. someone will just abscond with my money and I’ll have no recourse (or at least nothing cost-effective)
If it’s a bet that takes a form where if AGI isn’t invented by January 1, 2036, people have to pay me a bunch of money (and vice versa), of course I’ll accept such bets gladly in large sums. I would also be willing to take bets of that form for good intermediate proxies for AGI, which would take a bit of effort to figure out, but that seems doable. The harder part is figuring out how to actually structure the bet and ensure payment (if this is even legal in the first place). From my perspective, it’s free money, and I’ll gladly take free money (at least from someone wealthy enough to have money to spare — I would feel bad taking it from someone who isn’t financially secure). But even though similar bets have been made before, people still don’t have good solutions to the practical obstacles. I wouldn’t want to accept an arrangement that would be financially irrational (or illegal, or not legally enforceable), though, and that would amount to essentially burning money to prove a point. That would be silly, I don’t have that kind of money to burn.

crossposted from https://inchpin.substack.com/p/legible-ai-safety-problems-that-dont

Epistemic status: Think there’s something real here but drafted quickly and imprecisely

I really appreciated reading Legible vs. Illegible AI Safety Problems by Wei Dai. I enjoyed it as an impressively sharp crystallization of an important idea:

  1. Some AI safety problems are “legible” (obvious/understandable to leaders/policymakers) and some are “illegible” (obscure/hard to understand)
  2. Legible problems are likely to block deployment because leaders won’t deploy until they’re sol
... (read more)

Some of the negative comments here gesture at the problem you're referring to, but less precisely than you did.

I wrote a quick draft on reasons you might want to skip pre-deployment Phase 3 drug trials (and instead do an experimental rollout with post-deployment trials, with the option of recall) for vaccines for diseases with high mortality burden, or for novel pandemics. https://inchpin.substack.com/p/skip-phase-3

It's written in a pretty rushed way, but I know this idea has been bouncing around for a while and I haven't seen a clearer writeup elsewhere, so I hope it can start a conversation!

Are the abundance ideas actually new to EA folks? They feel like rehashes of arguments we've had ~ a decade ago, often presented in less technical language and ignoring the major cruxes.

Not saying they're bad ideas, just not new.

2
lilly
I think you’re right that some of the abundance ideas aren’t exactly new to EA folks, but I also think it’s true that: (1) packaging a diverse set of ideas/policies (re: housing, science, transportation) under the heading of abundance is smart and innovative, (2) there is newfound momentum around designing and implementing an abundance-related agenda (eg), and (3) the implementation of this agenda will create opportunities for further academic research (enabling people to, for instance, study some of those cruxes). All of this to say, if were a smart, ambitious, EA-oriented grad student, I think I would find the intellectual opportunities in this space exciting and appealing to work on.

This post had 169 views on the EA forum, 3K on substack, 17K on reddit, 31K on twitter.

Link appears to be broken.

4
NunoSempere
<https://forum.effectivealtruism.org/posts/4DeWPdPeBmJsEGJJn/interview-with-a-drone-expert-on-the-future-of-ai-warfare>

This is great news; I'm so glad to hear that!!!

I wrote a field guide on writing styles. Not directly applicable to the EA Forum but I used some EA Forum-style writing (including/especially my own) as examples. 

https://linch.substack.com/p/on-writing-styles

I hope the article can increase the quality of online intellectual writing in general and EAF writing in particular!
 

x-posted from Substack

Now, of course, being vegan won’t kill you, right away or ever. But the same goes for eating a diet of purely McDonald’s or essentially just potatoes (like many peasants did). The human body is remarkably resilient and can survive on a wide variety of diets. However, we don’t thrive on all diets. 

Vegans often show up as healthier in studies than other groups, but correlation is not causation. For example, famously Adventists are vegetarians and live longer than the average population. However, vegetarian is importantly diffe

... (read more)
3
mal_graham🔸
Not relevant to the main text here, but based on this I suspect at least part of the reason white folks in the UK have lower life expectancy is rates of alcohol consumption. See figure 1, for example. I haven't dug into the report methodology so my confidence is low, but it at least tracks with my experience living there. These data on cause of death are interesting as well. 

I have a lot of sympathy towards being frustrated at knee-jerk bias against AI usage. I was recently banned from r/philosophy on first offense because I linked a post that contained an AI-generated image and a (clearly-labelled) AI summary of someone else's argument[1]. (I saw that the subreddit had rules against AI usage but I foolishly assumed that it only applied to posts in the subreddit itself). I think their choice to ban me was wrong, and deprived them of valuable philosophical arguments that I was able to make[2] in other subreddits like r/Phi... (read more)

2
Midtermist12
In the case of the author with the history of fraud, you are applying prejudice, albeit perhaps appropriately so.    You raise stronger points than I've yet heard on this subject, though I still think that if you read some kind of content and find it compelling on its merits, there is still a strong case to apply at least similar scrutiny regardless of whether there are signs of AI use. Although I still think there is too much knee-jerk sentiment on the matter, you've given me something to think about. 

I compiled a list of my favorite jokes, which some forum users might enjoy. https://linch.substack.com/p/intellectual-jokes

Yeah I think these two claims are essentially the same argument, framed in different ways. 

I appreciate this article and find the core point compelling. However, I notice signs of heavy AI editing that somewhat diminish its impact for me.

Several supporting arguments come across as flimsy/obvious/grating/"fake" as a result. For example, the "Addressing the Predictable Objections" section reads more like someone who hasn't actually considered the objections but just gave the simplest answers to surface-level questions, rather than someone who deeply brainstormed or crowdsourced the objections to the framework. Additionally, the article's tendency to... (read more)

8
Midtermist12
Thanks for reading and engaging, Linch. You're correct that I used AI as an editor - with limited time, it was that or no post at all. That resource allocation choice (ship something imperfect but provocative vs. nothing) exemplifies the framework itself. I think more people should use AI to help develop and present their ideas rather than letting perfectionism or time constraints prevent them from contributing to important discussions.

The post was meant to provoke people to examine their own sacrifice allocations, not to be a comprehensive treatise. The objections section covers the predictable first-order pushbacks that stop people from even considering the framework. Deeper counterarguments about replaceability, offset quality, and norm-setting are important but would require their own posts.

The binary framings you note are intentional - they make the core tension vivid. Most people's actual optimization should be marginal reallocation within their portfolio, but "consider shifting 20% of your sacrifice budget" doesn't create the same useful discomfort as "Bob does more good than Alice."

The core point is that we should recognize how individual particularities - income, skills, psychological makeup, social context - dramatically affect how each person can maximize their impact. What's high-RoS for one person may be terrible for another. When we evaluate both our own choices and others' contributions, we need to account for these differences rather than applying uniform standards of virtue. The framework makes these personal tradeoffs explicit rather than hidden.

Thanks, appreciate the empirical note and graph on trendlines!

Preventing an AI takeover is a great way for countries to help their own people!

Tbh, my honest if somewhat flippant response is that these trials should update us somewhat against marginal improvements in the welfare state in rich countries, and more towards investments in global health, animal welfare, and reductions in existential risk. 

I'm sure this analysis will go over well to The Argument subscribers!

2
NickLaing
Ha, that's interesting. I feel like that might be technically true (and would be the same for any internal spending), but the realistic question here is how rich countries figure out the best way to help their own people with their tax dollars.

It's funny (and I guess unsurprising) that Will's Gemini instance and your Claude instance both reflected what I would have previously expected both of your ex ante views to be! 

lmao when I commented 3 years ago I said 

As is often the case with social science research, we should be skeptical of out-of-country and out-of-distribution generalizability. 

and then I just did an out-of-country and out-of-distribution generalization with no caveats! I could be really silly sometimes lol.

Re the popular post on UBI by Kelsey going around, and related studies:

I think it helped less than I “thought” it would if I was just modeling this with words. But the observed effects (or lack thereof) in the trials appear consistent with standard theoretical models of welfare economics. So I’m skeptical of people using this as an update against cash transfers, in favor of a welfare state, or anything substantial like that.

If you previously modeled utility as linear or logarithmic with income (or somewhere in between), these studies should be an update ag... (read more)
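A minimal numeric sketch of the log-utility framing above (the incomes and the $1,000 transfer are made-up numbers): under log utility, the marginal value of the same transfer falls roughly in proportion to baseline income.

```python
import math

# Utility gain from a $1,000 transfer under u(income) = log(income).
for income in [10_000, 30_000, 60_000]:
    gain = math.log(income + 1_000) - math.log(income)
    print(income, round(gain, 4))
# ~0.0953 at $10k vs ~0.0165 at $60k: the marginal value shrinks as baseline income rises.
```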

4
Sharmake
Another story is that this is a standard diminishing returns case: once we have removed all the very big blockers like non-functional rule of law, property rights, untreated food and water, as well as disease, it's very hard to make the people who would still remain poor actually improve their lives, because all the easy wins have been taken, so what we are left with are the harder/near-impossible poverty cases.
2
NickLaing
I feel like this does update me towards a welfare state to some degree. The correlation between welfare states and poorer people doing better (in rich countries) seems strong to overwhelming. Then the idea of UBI came in, which might have been better than a welfare state (I was a believer :( ). Now the evidence shows that it's clearly not. So I'm back to thinking that the welfare state is the best option for rich countries.
5
huw
It does seem like Kelsey is actually defending the idea that the US should prioritise welfare state programmes instead of cash transfers, not just talking about UBI. I think Matt Bruenig makes some good points about how income supplements are important and work in rich countries for people who can’t or don’t participate in the labour force (which seems tangential to UBI, which would be supplementing for those that do). But the whole argument is a bit confusing and I think they’re talking past each other a bit. I don’t like Piper’s appeal to political will, given that the US is much richer than the Nordic countries yet in her conception can’t seem to spare a little bit of extra money to directly give to labour force non-participants.