All of David Mathers🔸's Comments + Replies

I think my basic reaction here is that longtermism is importantly correct about the central goal of EA if there are longtermist interventions that are actionable, promising, and genuinely longtermist in the weak sense of "better than other causes because of long-term effects", even if there are zero examples of LT interventions that meet the "novelty" criterion, or that lack some significant near-term benefits.

Firstly, I'd distinguish here between longtermism as a research program, and longtermism as a position about what causes should be prioritized ri... (read more)

2
Yarrow Bouchard 🔸
Whether society ends up spending, in the end, more money on asteroid defense or, possibly, more money on monitoring large volcanoes, is orders of magnitude more important than whether people in the EA community (or outside of it) understand the intellectual lineage of these ideas and how novel or non-novel they are. I don't know if that's exactly what you were saying, but I'm happy to concede that point anyway. To be clear, NASA's NEO Surveyor mission is one of the things I'm most excited about in the world. It makes me feel so happy thinking about it. And exposure to Bostrom's arguments from the early 2000s to the early 2010s is a major part of what convinced me that we, as a society, were underrating low-probability, high-impact risks. (The Canadian journalist Dan Gardner's book Risk also helped convince me of that, as did other people I'm probably forgetting right now.) Even so, I still think it's important to point out ideas are not novel or not that novel if they aren't, for all the sorts of reasons you would normally give to sweat the small stuff, and not let something slide that, on its own, seems like an error or a bit of a problem, just because it might plausibly benefit the world in some way. It's a slippery slope, for one... I may not have made this clear enough in the post, but I completely agree that if, for example, asteroid defense is not a novel idea, but a novel idea, X, tells you that you should spend 2x more money on asteroid defense, then spending 2x more on asteroid defense counts as a novel X-ist intervention. That's an important point, I'm glad you made it, and I probably wasn't clear enough about it. However, I am making the case that all the compelling arguments to do anything differently, including spend more on asteroid defense, or re-prioritize different interventions, were already made long before "longtermism" was coined. If you want to argue that "longtermism" was a successful re-branding of "existential risk", with some mistakes

Year of AGI

25 years seems about right to me, but with huge uncertainty. 

I think on the racism front, Yarrow is referring to the perception that the reason Moskovitz won't fund rationalist stuff is that either he thinks a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and he doesn't want the hassle. I think that belief genuinely is quite common among rationalists, no? Although there are clearly rationalists who don't believe it, and most rationalists are not right-wing extremists as far as I can tell.

-7
Yarrow Bouchard 🔸

What have EA funders done that's upset you? 

Not everything being funded here even IS alignment techniques, but also, insofar as you just want a generally better understanding of AI as a domain through science, why wouldn't you learn useful stuff from applying techniques to current models? If the claim is that current models are too different from any possible AGI for this info to be useful, why do you think "do science" would help prepare for AGI at all? Assuming you do think that, which still seems unclear to me.

2
Yarrow Bouchard 🔸
You might learn useful stuff about current models from research on current models, but not necessarily anything useful about AGI (except maybe in the slightest, most indirect way). For example, I don't know if anyone thinks that if we had invested 100x or 1,000x more into research on symbolic AI systems 30 years ago, we would know meaningfully more about AGI today. So, as you anticipated, the relevance of this research to AGI depends on an assumption about the similarity between a hypothetical future AGI and current models. However, even if you think AGI will be similar to current models, or it might be similar, there might be no cost to delaying research related to alignment, safety, control, preparedness, value lock-in, governance, and so on until more fundamental research progress on capabilities has been made. If in five or ten or fifteen years or whatever we understand much better how AGI will be built, then a single $1 million grant to a few researchers might produce more useful knowledge about alignment, safety, etc. than Dustin Moskovitz's entire net worth would produce today if it were spent on research into the same topics. My argument about "doing basic science" vs. "mitigating existential risk" is that these collapse into the same thing unless you make very specific assumptions about which theory of AGI is correct. I don't think those assumptions are justifiable. Put it this way: let's say we are concerned that, for reasons due to fundamental physics, the universe might spontaneously end. But we also suspect that, if this is true, there may be something we can do to prevent it. What we want to know is a) if the universe is in danger in the first place, b) if so, how soon, and c) if so, what we can do about it. To know any of these three things, (a), (b), or (c), we need to know which fundamental theory of physics is correct, and what the fundamental physical properties of our universe are. Problem is, there are half a dozen competing versions of string

I asked about genuine research creativity, not AGI, but I don't think this conversation is going anywhere at this point. It seems obvious to me that "does stuff mathematicians say makes up the building blocks of real research" is meaningful evidence that the chance that models will do research-level maths in the near future is not ultra-low, given that capabilities do increase with time. I don't think this is analogous to IQ tests or the bar exam, and for other benchmarks, I would really need to see what you're claiming is the equivalent of the transfer from FrontierMath 4 to real math that was intuitive but failed.

2
Yarrow Bouchard 🔸
What percentage probability would you assign to your ability to accurately forecast this particular question? I'm not sure why you're interested in getting me to forecast this. I haven't ever made any forecasts about AI systems' ability to do math research. I haven't made any statements about AI systems' current math capabilities. I haven't said that evidence of AI systems' ability to do math research would affect how I think about AGI. So, what's the relevance? Does it have a deeper significance, or is it just a random tangent? If there is a connection to the broader topic of AGI or AI capabilities, I already gave a bunch of examples of evidence I would consider to be relevant and that would change my mind. Math wasn't one of them. I would be happy to think of more examples as well. I think a potentially good counterexample to your argument about FrontierMath → original math research is natural language processing → replacing human translators. Surely you would agree that LLMs have mastered the basic building blocks of translation? So, 2-3 years after GPT-4, why is demand for human translators still growing? One analysis claims that growth is counterfactually less than it would have been without the increase in the usage of machine translation, but demand is still growing. I think this points to the difficulty in making these sorts of predictions. If back in 2015, someone had described to you the capabilities and benchmark performance of GPT-4 in 2023, as well as the rate of scaling of new models and progress on benchmarks, would you have thought that demand for human translators would continue to grow for at least the next 2-3 years? I don't have any particular point other than that what seems intuitively obvious in the realm of AI capabilities forecasting may in fact be false, and I am skeptical of hazy extrapolations. The most famous example of a failed prediction of this sort is Geoffrey Hinton's prediction in 2016 that radiologists' jobs would be fully au

The forum is kind of a bit dead generally, for one thing. 

I don't really get on what grounds you are saying that the Coefficient grants are not to people to do science, apart from the governance ones. I also think you are switching back and forth between "No one knows when AGI will arrive, so the best way to prepare just in case is more normal AI science" and "we know that AGI is far, so there's no point doing normal science to prepare against AGI now, although there might be other reasons to do normal science."

2
Yarrow Bouchard 🔸
If we don’t know which of infinite or astronomically many possible theories about AGI are more likely to be correct than the others, how can we prepare? Maybe alignment techniques conceived based on our current wrong theory make otherwise benevolent and safe AGIs murderous and evil on the correct theory. Or maybe they’re just inapplicable. Who knows?

I guess I still just want to ask: If models hit 80% on FrontierMath by like June 2027, how much does that change your opinion on whether models will be capable of "genuine creativity" in at least one domain by 2033? I'm not asking for an exact figure, just a ballpark guess. If the answer is "hardly at all", is there anything short of a 100% clear example of a novel publishable research insight in some domain that would change your opinion on when "real creativity" will arrive?

2
Yarrow Bouchard 🔸
What I just said: AI systems acting like a toddler or a cat would make me think AGI might be developed soon. I’m not sure FrontierMath is any more meaningful than any other benchmark, including those on which LLMs have already gotten high scores. But I don’t know.

I think what you are saying here is mostly reasonable, even if I am not sure how much I agree: it seems to turn on very complicated issues in the philosophy of probability/decision theory, and what you should do when accurate prediction is hard, and exactly how bad predictions have to be to be valueless. Having said that, I don't think you're going to succeed in steering the conversation away from forecasts if you keep writing about how unlikely it is that AGI will arrive in the near term. Which you have done a lot, right?

I'm genuinely not sure how much EA funding... (read more)

2
Yarrow Bouchard 🔸
I don’t really know all the specifics of all the different projects and grants, but my general impression is that very little (if any) of the current funding makes sense or can be justified if the goal is to do something useful about AGI (as opposed to, say, make sure Claude doesn’t give risky medical advice). Absent concerns about AGI, I don’t know if Coefficient Giving would be funding any of this stuff. To make it a bit concrete, there are at least five different proposed pathways to AGI, and I imagine the research Coefficient Giving is funding is only relevant to one of the five pathways, if it’s even relevant to that one. But the number five is arbitrary here. The actual decision-relevant number might be a hundred, or a thousand, or a million, or infinity. It just doesn’t feel meaningful or practical to try to map out the full space of possible theories of how the mind works and apply the precautionary principle against the whole possibility space. Why not just do science instead? By word count, I think I’ve written significantly more about object-level technical issues relevant to AGI than directly about AGI forecasts or my subjective guesses of timelines or probabilities. The object-level technical issues are what I’ve tried to emphasize. Unfortunately, commenters seem fixated on surveys, forecasts, and bets, and don’t seem to be as interested in the object-level technical topics. I keep trying to steer the conversation in a technical direction. But people keep wanting to steer it back toward forecasting, subjective guesses, and bets. For example, I wrote a 2,000-word post called "Unsolved research problems on the road to AGI". There are two top-level comments. The one with the most karma proposes a bet. My post "Frozen skills aren’t general intelligence" mainly focuses on object-level technical issues, including some of the research problems discussed in the other post. You have the top comment on that post (besides SummaryBot) and your comment is about a forecasting s

I guess I feel like if being able to solve mathematical problems designed by research mathematicians to be similar to the kind of problems they solve in their actual work is not decent evidence that AIs are on track to be able to do original research in mathematics in less than, say, 8 years, then what would you EVER accept as empirical evidence that we are on track for that, but not there yet?

Note that I am not saying this should push your overall confidence to over 50% or anything, just that it ought to move you up by a non-trivial amount relative to... (read more)

2
Yarrow Bouchard 🔸
I am not breaking new ground by saying it would be far more interesting to see an AI system behave like a playful, curious toddler or a playful, curious cat than a mathematician. That would be a sign of fundamental, paradigm-shifting capabilities improvement and would make me think maybe AGI is coming soon. I agree that IQ tests were designed for humans, not machines, and that’s a reason to think it’s a poor test for machines, but what about all the other tests that were designed for machines? GPT-4 scored quite high on a number of LLM benchmarks in March 2023. Has enough time passed that we can say LLM benchmark performance doesn’t meaningfully translate into real world capabilities? Or do we have to reserve judgment for some number of years still? If your argument is that math as a domain is uniquely well-suited to the talents of LLMs, that could be true. I don’t know. Maybe LLMs will become an amazing AI tool for math, similar to AlphaFold for protein structure prediction. That would certainly be interesting, and would be exciting progress for AI. I would say this argument is highly irreducibly uncertain and approaches the level of uncertainty of something like guessing whether the fundamental structure of physical reality matches the fundamental mathematical structure of string theory. I’m not sure it’s meaningful to assign probabilities to that. It also doesn’t seem like it would be particularly consequential outside of mathematics, or outside of things that mathematical research directly affects. If benchmark performance in other domains doesn’t generalize to research, but benchmark performance in math does generalize to math research, well, then, that affects math research and only math research. Which is really interesting, but would be a breakthrough akin to AlphaFold — consequential for one domain and not others. You said that my argument against accepting FrontierMath performance as evidence for AIs soon being able to perform original math research i

"Rob Wiblin opines that the fertility crash would be a global priority if not for AI likely replacing human labor soon and obviating the need for countries to have large human populations"

This is a case where it really matters whether you are giving an extremely high chance that AGI is coming within 20-30 years, or merely a decently high chance. If you think the chance is like 75%, and the claim that conditional on no AGI, low fertility would be a big problem is correct, then the problem is only cut by 4x, which is compatible with it still being large and ... (read more)
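To make the arithmetic explicit (a minimal sketch, using the illustrative 75% figure above and assuming the fertility problem only matters in the no-AGI worlds):

$$
\mathbb{E}[\text{importance}] = P(\text{no AGI}) \times I_{\text{no AGI}} = 0.25 \times I_{\text{no AGI}} = \tfrac{1}{4} I_{\text{no AGI}}
$$

So a 75% credence in AGI discounts the problem by a factor of 4, rather than eliminating it.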

I'm not actually that interested in defending:

  1. The personal honor of Yudkowsky, who I've barely read and don't much like, or his influence on other people's intellectual style. I am not a rationalist, though I've met some impressive people who probably are.
  2. The specific judgment calls and arguments made in AI 2027.
  3. Using the METR graph to forecast superhuman coders (even if I probably do think this is MORE reasonable than you do; but I'm not super-confident about its validity as a measure of real-world coding. But I was not trying to describe how I personally
... (read more)
8
Yarrow Bouchard 🔸
Strong upvoted. Thank you for clarifying your views. That’s helpful. We might be getting somewhere. With regard to AI 2027, I get the impression that a lot of people in EA and in the wider world were not initially aware that AI 2027 was an exercise in judgmental forecasting. The AI 2027 authors did not sufficiently foreground this in the presentation of their "results". I would guess there are still a lot of people in EA and outside it who think AI 2027 is something more rigorous, empirical, quantitative, and/or scientific than a judgmental forecasting exercise. I think this was a case of some people in EA being fooled or tricked (even if that was not the authors’ intention). They didn’t evaluate the evidence they were looking at properly. You were quick to agree with my characterization of AI 2027 as a forecast based on subjective intuitions. However, in one previous instance on the EA Forum, I also cited nostalgebraist’s eloquent post and made essentially the same argument I just made, and someone strongly disagreed. So, I think people are just getting fooled, thinking that evidence exists that really doesn’t. What does the forecasting literature say about long-term technology forecasting? I’ve only looked into it a little bit, but generally technology forecasting seems really inaccurate, and the questions forecasters/experts are being asked in those studies seem way easier than forecasting something like AGI. So, I’m not sure there is a credible scientific basis for the idea of AGI forecasting. I have been saying from the beginning and I’ll say once again that my forecast of the probability and timeline of AGI is just a subjective guess and there’s a high level of irreducible uncertainty here. I wish that people would stop talking so much about forecasting and their subjective guesses. This eats up an inordinate portion of the conversation, despite its low epistemic value and credibility. For months, I have been trying to steer the conversation away from fore

My thought process didn't go beyond "Yarrow seems committed to a very low chance of AI having real, creative research insights in the next few years, here is something that puts some pressure on that". Obviously I agree that when AGI will arrive is a different question from when models will have real insights in research mathematics. Nonetheless, I got the feeling, maybe incorrectly, that your strength of conviction about AGI is partly based on things like "models in the current paradigm can't have 'real insight'", so it seemed relevant, even though "real ins... (read more)

2
Yarrow Bouchard 🔸
I have no idea when AI systems will be able to do math research and generate original, creative ideas autonomously, but it will certainly be very interesting if/when they do. It seems like there’s not much of a connection between the FrontierMath benchmark and this, though. LLMs have been scoring well on question-and-answer benchmarks in multiple domains for years and haven’t produced any original, correct ideas yet, as far as I’m aware. So, why would this be different? LLMs have been scoring above 100 on IQ tests for years and yet can’t do most of the things humans who score above 100 on IQ tests can do. If an LLM does well on math problems that are hard for mathematicians or math grad students or whatever, that doesn’t necessarily imply it will be able to do the other things, even within the domain of math, that mathematicians or math grad students do. We have good evidence for this because LLMs as far back as GPT-4 nearly 3 years ago have done well on a bunch of written tests. Despite there being probably over 1 billion regular users of LLMs and trillions of queries put to LLMs, there’s no indication I’m aware of an LLM coming up with a novel, correct idea of any note in any academic or technical field. Is there a reason to think performance on the FrontierMath benchmark would be different than the trend we’ve already seen with other benchmarks over the last few years? The FrontierMath problems may indeed require creativity from humans to solve them, but that doesn’t necessarily mean solving them is a sign of creativity from LLMs. By analogy, playing grandmaster-level chess may require creativity from humans, but not from computers. This is related to an old idea in AI called Moravec’s paradox, which warns us not to assume what is hard for humans is hard for computers, or what is easy for humans is easy for computers.

Working on AI isn't the same as doing EA work on AI to reduce X-risk. Most people working in AI are just trying to make the AI more capable and reliable. There probably is a case for saying that "more reliable" is actually EA X-risk work in disguise, even if unintentionally, but it's definitely not obvious this is true. 

4
Denkenberger🔸
I agree, though I think the large reduction in EA funding for non-AI GCR work is not optimal (but I'm biased with my ALLFED association).

"Any sort of significant credible evidence of a major increase in AI capabilities, such as LLMs being able to autonomously and independently come up with new correct ideas in science, technology, engineering, medicine, philosophy, economics, psychology, etc"

Just in the spirit of pinning people to concrete claims: would you count progress on FrontierMath 4, like, say, models hitting 40%*, as being evidence that this is not so far off for mathematics specifically? (To be clear, I think it is very easy to imagine models that are doing genuinely significant re... (read more)

2
Yarrow Bouchard 🔸
I wonder if you noticed that you changed the question? Did you not notice or did you change the question deliberately? What I brought up as a potential form of important evidence for near-term AGI was: You turned the question into: Now, rather than asking me about the evidence I use to forecast near-term AGI, you’re asking me to forecast the arrival of the evidence I would use for forecasting near-term AGI? Why?

Yeah, it's a fair objection that even answering the why question like I did presupposes that EAs are wrong, or at least merely luckily right. (I think this is a matter of degree, and that EAs overrated the imminence of AGI and the risk of takeover on average, but it's still at least reasonable to believe AI safety and governance work can have very high expected value for roughly the reasons EAs do.) But I was responding to Yarrow, who does think that EAs are just totally wrong, so I guess really I was saying that "conditional on a sociological explanation being appropriate, I don't think it's as LW-driven as Yarrow thinks", although LW is undoubtedly important.

4
Linch
Right, to be clear I'm far from certain that the stereotypical "EA view" is right here.  Sure that makes a lot of sense! I was mostly just using your comment to riff on a related concept.  I think reality is often complicated and confusing, and it's hard to separate out contingency vs inevitable stories for why people believe what they believe. But I think the correct view is that EAs' belief on AGI probability and risk (within an order of magnitude or so)  is mostly not contingent (as of the year 2025) even if it turns out to be ultimately wrong. The Google ads example was the best example I could think of to illustrate this. I'm far from certain that Google's decision to use ads was actually the best source of long-term revenue (never mind being morally good lol). But it still seemed like the internet as we understand it meant it was implausible that Google ads was counterfactually due to their specific acquisitions. Similarly, even if EAs ignored AI before for some reason, and never interacted with LW or Bostrom, it's implausible that, as of 2025, people who are concerned with ambitious, large-scale altruistic impact (and have other epistemic, cultural, and maybe demographic properties characteristic of the movement) would not think of AI as a big deal. AI is just a big thing in the world that's growing fast. Anybody capable of reading graphs can see that. That said, specific micro-level beliefs (and maybe macro ones) within EA and AI risk might be different without influence from either LW or the Oxford crowd. For example there might be a stronger accelerationist arm. Alternatively, people might be more queasy with the closeness with the major AI companies, and there will be a stronger and more well-funded contingent of folks interested in public messaging on pausing or stopping AI. And in general if the movement didn't "wake up" to AI concerns at all pre-ChatGPT I think we'd be in a more confused spot.

Can you say more about what makes something "a subjective guess" for you? When you say well under 0.05% chance of AGI in 10 years, is that a subjective guess? 

Like, suppose I am asked, as a pro-forecaster, to say whether the US will invade Syria, after a US military build-up involving aircraft carriers in the Eastern Med, and I look for newspaper reports of signs of this, look up the base rate of how often the US bluffs with a military build-up rather than invading, and then make a guess as to how likely an invasion is. Is that "a subjective guess"? ... (read more)

2
Yarrow Bouchard 🔸
What does the research literature say about the accuracy of short-term (e.g. 1-year timescales) geopolitical forecasting? And what does the research literature say about the accuracy of long-term (e.g. longer than 5-year timescales) forecasting about technological progress? (Should you even bother to check the literature to find out, or should you just guess how accurate you think each one probably is and leave it at that?) Of course. And I'll add that I think such guesses, including my own, have very little meaning or value. It may even be worse to make them than to not make them at all. This seems like a huge understatement. My impression is that the construct validity and criterion validity of the benchmarks METR uses, i.e. how much benchmark performance translates into real-world performance, is much worse than you describe. I think it would be closer to the truth to say that if you're trying to predict when AI systems will replace human coders, the benchmarks are meaningless and should be completely ignored. I'm not saying that's the absolute truth, just that it's closer to the truth than saying benchmark performance is "generally a bit misleading in terms of real-world competence". Probably there's some loose correlation between benchmark performance and real-world competence, but it's not nearly one-to-one. Definitely making a subjective guess. For example, what if performance on benchmarks simply never generalizes to real-world performance? Never, ever, ever, not in a million years never? By analogy, what level of performance on go would AlphaGo need to achieve before you would guess it would be capable of baking a delicious croissant? Maybe these systems just can't do what you're expecting them to do. And a chart can't tell you whether that's true or not. AI 2027 admits the role that gut intuition plays in their forecast. For example: An example intuition: Okay, and what if it is hard? What if this kind of generalization is beyond the capabilities o

I don't think EAs' AI focus is a product only of interaction with Less Wrong (not claiming you said otherwise), but I do think people outside the Less Wrong bubble tend to be less confident AGI is imminent, and in that sense less "cautious".

I think EAs' AI focus is largely a product of the fact that Nick Bostrom knew Will and Toby when they were founding EA, and was a big influence on their ideas. Of course, to some degree this might be indirect influence from Yudkowsky since he was always interacting with Nick Bostrom, but it's hard to know in what dir... (read more)

eh, I think the main reason EAs believe AGI stuff is reasonably likely is because this opinion is correct, given the best available evidence[1]

Having a genealogical explanation here is sort of answering the question on the wrong meta-level, like giving a historical explanation for "why do evolutionists believe in genes" or telling a touching story about somebody's pet pig for "why do EAs care more about farmed animal welfare than tree welfare." 

Or upon hearing "why does Google use ads instead of subscriptions?" answering with the history of the... (read more)

-4
Yarrow Bouchard 🔸
Upvoted because I think this is interesting historical/intellectual context, but I think you might have misunderstood what I was trying to say in the comment you replied to. (I joined Giving What We Can in 2009 and got heavily involved in my university EA group from 2015-2018, so I’m aware that AI has been a big topic in EA for a very long time, but I’ve never had any involvement with Oxford University or had any personal connections with Toby Ord or Will MacAskill, besides a few passing online interactions.) In my comment above, I wasn’t saying that EA’s interpenetration with LessWrong is largely to blame for the level of importance that the ideas of near-term AGI and AGI risk currently have in EA. (I also think that is largely true, but that wasn’t the point of my previous comment.) I was saying that the influence of LessWrong and EA’s embrace of the LessWrong subculture is largely to blame for the EA community accepting ridiculous stuff like "Situational Awareness", AI 2027, and so on, despite it having glaring flaws. Focus on AGI risk at the current level EA gives it could be rational, or it might not be. What is definitely true is that the EA community accepts a lot of completely irrational stuff related to AGI risk. LessWrong doesn’t believe in academia, institutional science, academic philosophy, journalism, scientific skepticism, common sense, and so on. LessWrong believes in Eliezer Yudkowsky, the Sequences, and LessWrong. So, members of the LessWrong community go completely off the rails and create or join cults at seemingly a much, much higher rate than the general population. Because they’ve been coached to reject the foundations of sanity that most people have, and to put their trust and belief in this small, fringe community. The EA community is not nearly as bad as LessWrong. If I thought it was as bad, I wouldn’t bother trying to convince anyone in EA of anything, because I would think they were beyond rational persuasion. But EA has been infect

People vary a lot in how they interpret terms like "unlikely" or "very unlikely" in % terms, so I think >10% is not all that obvious. But I agree that it is evidence they don't think the whole idea is totally stupid, and that a relatively low probability of near-term AGI is still extremely worth worrying about. 

4
Yarrow Bouchard 🔸
I should link the survey directly here: https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-FINAL.pdf The relevant question is described on page 66: I frequently shorthand this to a belief that LLMs won’t scale to AGI, but the question is actually broader and encompasses all current AI approaches. Also relevant for this discussion: pages 64 and 65 of the report describe some of the fundamental research challenges that currently exist in AI capabilities. I can’t emphasize the importance of this enough. It is easy to think a problem like AGI is closer to being solved than it really is when you haven’t explored the subproblems involved or the long history of AI researchers trying and failing to solve those subproblems. In my observation, people in EA greatly overestimate progress on AI capabilities. For example, many people seem to believe that autonomous driving is a solved problem, when this isn’t close to being true. Natural language processing has made leaps and bounds over the last seven years, but the progress in computer vision has been quite anemic by comparison. Many fundamental research problems have seen basically no progress, or very little. I also think many people in EA overestimate the abilities of LLMs, anthropomorphizing the LLM and interpreting its outputs as evidence of deeper cognition, while also making excuses and hand-waving away the mistakes and failures — which, when it’s possible to do so, are often manually fixed using a lot of human labour by annotators. I think people in EA need to update on: * Current AI capabilities being significantly less than they thought (e.g. with regard to autonomous driving and LLMs) * Progress in AI capabilities being significantly less than they thought, especially outside of natural language processing (e.g. computer vision, reinforcement learning) and especially on fundamental research problems * The number of fundamental research problems and how thorny they are, how much time, eff

I don't think it's clear, absent further argument, that there has to be a 10% chance of full AGI in the relatively near future to justify the currently high valuations of tech stocks. New, more powerful models could be super-valuable without being able to do all human labour. (For example, if they weren't so useful working alone, but they made human workers in most white-collar occupations much more productive.) And you haven't actually provided evidence that most experts think there's a 10% chance the current paradigm will lead to AGI. Though the latter point... (read more)

8
Yarrow Bouchard 🔸
Thank you for pointing this out, David. The situation here is asymmetric. Consider the analogy of chess. If computers can’t play chess competently, that is strong evidence against imminent AGI. If computers can play chess competently — as IBM’s Deep Blue could in 1996 — that is not strong evidence for imminent AGI. It’s been about 30 years since Deep Blue and we still don’t have anything close to AGI.  AI investment is similar. The market isn’t pricing in AGI. I’ve looked at every analyst report I can find, and whatever other information I can get my hands on about how AI is being valued. The optimists are expecting AI to be a fairly normal, prosaic extension of computers and the Internet, enabling office workers to manipulate spreadsheets more efficiently, making it easier for consumers to shop online, they foresee social media platforms having chatbots that are somehow popular and profitable, LLMs playing some role in education, and chatbots doing customer support — which seems like one of the two areas, along with coding, where generative AI has some practical usefulness and financial value, although this is a fairly incremental step up from the pre-LLM chatbots and decision trees that were already widely used in customer support. I haven’t seen AGI mentioned as a serious consideration in any of the stuff I’ve seen from the financial world.
2
Mjreard
I agree there's logical space for something less than full AGI making the investments rational, but I think the gap between that and full AGI is pretty small. Peculiarity of my own world model though, so not something to bank on. My interpretation of the survey responses is that selecting "unlikely" when there are also "not sure" and "very unlikely" options suggests substantial probability (i.e. > 10%) on the part of the respondents who say "unlikely" or "don't know." Reasonable uncertainty is all you need to justify work on something so important-if-true, and the cited survey seems to provide that.

"i don't believe very small animals feel pain, and if they do my best guess would be it would be thousands to millions orders of magnitude less pain than larger animals."

I'll repeat what regular readers of the forum are bored of me saying about this. As a philosophy of consciousness PhD, I barely ever heard the idea that small animals are conscious, but their experiences are way less intense. At most, it might be a consequence of integrated information theory, but not one I ever saw discussed and most people in the field don't endorse that one theory anywa... (read more)

3
NickLaing
I don't appreciate the comment "It seems very suspiciously like it is just something EAs say to avoid commitments to prioritizing tiny animals that seem a bit mad." Why not instead assume that people like me are actually thinking and reasoning in good faith, and make objective arguments, rather than deriding them in the middle of your otherwise good argument? If we were just intellectually dodging, I doubt we would be here on the forum trying to discuss and figure this thing out... I could be wrong, but I think my kind of response here probably is common sense-ish (right or wrong). If you polled 100 people on the street with "assuming worms feel pain at all, do you think worms feel a lot less pain than humans?", I think the response would be an overwhelming yes. The dog question is a bit of a strawman because that's not the question being discussed, but I would guess most people would believe that their dog feels less pain than they do (or it appears) as well, although the response here, I agree, would be quite different from the worm question. I have looked a little at theories of the mind and I must confess I find it hard to get my head around them well. I have seen, though, that they are many and varied and there seems to be little consensus. To have a go at explaining my thoughts: I don't think that pain is like a substance, but I do think that the way tiny creatures are likely to experience the world is so wildly different from ours that, even if there is something like sentience there, experiences for them might be so different to ours (including pain) that their experience, memory and integration of that "pain" (if we can even call it by that same word) is likely to be so much smaller/blunted/muffled/different? compared with us (if it can even be compared), even if their responses to painful stimuli are similar-ish. Pain responses are often binary-ish options (aversion or naught), but when it comes to felt experience the options are almost endless. I conside

I think when people say it is rapidly decreasing they may often mean that the the % of the world's population living in extreme poverty is declining over time, rather than that the total number of people living in extreme poverty is going down?

I think when people say it is rapidly decreasing they may often mean that the the % of the world's population living in extreme poverty is declining over time, rather than that the total number of people living in extreme poverty is going down?

2
Yarrow Bouchard 🔸
Accidental duplicate comment. :)

Yes, please do not downvote Yarrow's post just because its style is a bit abrasive, and it goes against EA consensus. She has changed my mind quite a lot, as the person who kicked off the dispute, and Connacher, who worked on the survey, is clearly taking her criticisms seriously.

2
Yarrow Bouchard 🔸
God bless!

Yeah, the error here was mine, sorry. I didn't actually work on the survey, and I missed that it was actually estimating the % of the panel agreeing we are in a scenario, not the chance that that scenario will win a plurality of the panel. This is my fault, not Connacher's. I was not one of the survey designers, so please do not assume from this that the people at the FRI who designed the survey didn't understand their own questions or anything like that.

For what it's worth, I think this is decent evidence that the question is too confusing to be usefu... (read more)

2
Yarrow Bouchard 🔸
I really appreciate that, but the report itself made the same mistake! Here is what the report says on page 38: And the same mistake is repeated again on the Forecasting Research Institute's Substack in a post which is cross-posted on the EA Forum: There are two distinct issues here: 1) the "best matching" qualifier, which contradicts these unqualified statements about probability and 2) the intersubjective resolution/metaprediction framing of the question, which I still find confusing but I'm waiting to see if I can ultimately wrap my head around. (See my comment here.) I give huge credit to Connacher Murphy for acknowledging that the probability should not be stated without the qualifier that this is only the respondents' "best matching" scenario, and for promising to revise the report with that qualifier added. Kudos, a million times kudos. My gratitude and relief is immense. (I hope that the Forecasting Research Institute will also update the wording of the Substack post and the EA Forum post to clarify this.) Conversely, it bothers me that Benjamin Tereick said that it's only "slightly inaccurate" and not "a big issue" to present this survey response as the experts' unqualified probabilities. Benjamin doesn't work for the Forecasting Research Institute, so his statements don't affect your organization's reputation in my books, but I find that frustrating. In case it's in doubt: making mistakes is absolutely fine and no problem. (Lord knows I make mistakes!) Acknowledging mistakes increases your credibility. (I think a lot of people have this backwards. I guess blame the culture we live in for that.)   You're right!   It would be very expensive and maybe just not feasible, but in my opinion the most interesting and valuable data could be obtained from long-form, open-ended, semi-unstructured, qualitative research interviews.  Here's why I say that. You know what the amazing part of this report is? The rationale examples! Specifically, the ones on page 1

I haven't done the sums myself, but do we know for sure that they can't make money without being all that useful, so long as a lot of people interact with them every day?

 Is Facebook "useful"? Not THAT much. Do people pay for it? No, it's free. Instagram is even less useful than Facebook which at least used to actually be good for organizing parties and pub nights.  Does META make money? Yes. Does equally useless TikTok make money? I presume so, yes. I think tech companies are pretty expert in monetizing things that have no user fee, and are... (read more)

4
Yarrow Bouchard 🔸
This is an important point to consider. OpenAI is indeed exploring how to put ads on ChatGPT.  My main source of skepticism about this is that the marginal revenue from an online ad is extremely low, but that’s fine because the marginal cost of serving a webpage or loading a photo in an app or whatever is also extremely low. I don’t have a good sense of the actual numbers here, but since a GPT-5 query is considerably more expensive than serving a webpage, this could be a problem. (Also, that’s just the marginal cost. OpenAI, like other companies, also has to amortize all its fixed costs over all its sales, whether they’re ad sales or sales directly to consumers.) It’s been rumoured/reported (not sure which) that OpenAI is planning to get ChatGPT to sell things to you directly. So, if you ask, "Hey, ChatGPT, what is the healthiest type of soda?", it will respond, "Why, a nice refreshing Coca‑Cola® Zero Sugar of course!" This seems horrible. That would probably drive some people off the platform, but, who knows, it might be a net financial gain. There are other "useless" ways companies like OpenAI could try to drive usage and try to monetize either via ads or paid subscriptions. Maybe if OpenAI leaned heavily into the whole AI "boyfriends/girlfriends" thing that would somehow pay off — I’m skeptical, but we’ve got to consider all the possibilities here.

Ok, there's a lot here, and I'm not sure I can respond to all of it, but I will respond to some of it. 

-I think you should be moved just by my telling you about the survey. Unless you are super confident either that I am lying/mistaken about it, or that the FRI was totally incompetent in assembling an expert panel, the mere fact that I'm telling you the median expert credence in the rapid scenario is 23% in the survey ought to make you think there is at least a pretty decent chance that you are giving it several orders of magnitude less credence than ... (read more)

2
Yarrow Bouchard 🔸
I think your suspicion toward my epistemic practices is based simply on the fact that you disagree very strongly, you don’t understand my views or arguments very deeply, you don’t know my background or history, and you’re mentalizing incorrectly. [Edited on 2025-11-18 at 05:10 UTC to add fancy formatting.] AI bubble For example, I have a detailed collection of thoughts about why I think AI investment is most likely in a bubble, but I haven’t posted about that in much detail on the EA Forum yet — maybe I will, or maybe it’s not particularly central to these debates or on-topic for the forum. I’m not sure to what extent an AI bubble popping would even change the minds of people in EA about the prospects of near-term AGI. How relevant is it? I asked on here about to what extent the AI bubble popping would change people’s views on near-term AGI and the only answer I got is that it wouldn’t move the needle. So, I’m not sure if that’s where the argument needs to go. Just because I briefly mention this topic in passing doesn’t mean my full thoughts about the topic are really only that brief. It is hard to talk about these things and treat every topic mentioned, even off-handedly, in full detail without writing the whole Encyclopedia Britannica. Also, I am much, much less sure about the AI bubble conclusion than I am about AGI or about the rapid scenario. It is extremely, trivially obvious that sub-AGI/pre-AGI/non-AGI systems could potentially generate a huge amount of profit and justify huge company valuations, and indeed I’ve written something like 100 articles about that topic over the last 8 years. I used to have a whole blog/newsletter solely about that topic and I made a very small amount of money doing freelance writing primarily about the financial prospects of AI. I actually find it a little insulting that you would think I have never considered that AI could be a big financial opportunity without AGI coming to fruition in the near term. [Edited on 2025-11-1

That's pretty incomprehensible to me, even as a considerable skeptic of the rapid scenario. Firstly, you have experts giving a 23% chance and it's not moving you up even to, let's say, over 1 in 100,000, although probably the JFK scenario is a hell of a lot less likely than that: even if his assassination was faked, despite there literally being a huge crowd who saw his head get blown off in public, he would still have to be 108 to be alive. Secondly, in 2018, AI could, to a first approximation, do basically nothing outside of highly specialized uses li... (read more)

2
Yarrow Bouchard 🔸
I didn’t update my views on the survey because I haven’t seen the survey. I did ask for the survey so I could see it. I haven’t seen it yet. I couldn’t find it on the website. I might change my mind after I see it. Who knows.  I agree the JFK scenario is extremely outlandish and would basically be impossible. I just think the rapid scenario is more outlandish and would also basically be impossible.  Everything you said about AI I just don’t think is true at all. LLMs are just another narrow AI, similar to AlphaGo, AlphaStar, AlphaFold, and so on, and not a fundamental improvement in generality that gets us closer to AGI. You shouldn’t have updated your AGI timelines based on LLMs. That’s just a mistake. Whatever you thought in 2018 about the probability of the rapid scenario, you should think the same now, or actually even less because more time has elapsed and the necessary breakthroughs have still not been made. So, what was your probability for the rapid scenario in 2018? And what would your probability have been if someone told you to imagine there would be very little progress toward the rapid scenario between 2018 and 2025? That’s what I think your probability should be.  To say that AI’s capabilities were basically nothing in 2018 is ahistorical. The baseline from which you are measuring progress is not correct, so that will lead you to overestimate progress. I also get the impression you greatly overestimate Claude’s capabilities relative to the cognitive challenges of generating the behaviours described in the rapid scenario. AI being able to do AI research doesn’t affect the timeline. Here’s why. AI doing AI research requires fundamental advancements in AI to a degree that would make something akin to AGI or something akin to the rapid scenario happen anyway. So, whether AI does AI research can’t accelerate the point at which we reach the rapid scenario. There are no credible arguments to the contrary. The vast majority of benchmarks are not just som

There is an ambiguity about "capabilities" versus deployment here to be fair. Your "that will not happen" seems somewhat more reasonable to me if we are requiring that the AIs are actually deployed and doing all this stuff versus merely that models capable of doing this stuff have been created. I think it was the latter we were forecasting, but I'm not 100% certain. 

2
Yarrow Bouchard 🔸
I think including widespread deployment in the rapid scenario makes it modestly more unlikely but not radically so. The fundamental issue is that these capabilities cannot be developed within 5 years. Is there a small chance? Yes, sure, but a very small one.

https://leap.forecastingresearch.org/  The stuff is all here somewhere, though it's a bit difficult to find all the pieces quickly and easily. 

For what it's worth, I think the chance of the rapid scenario is considerably less than 23%, but a lot more than under 0.1%. I can't remember the number I gave when I did the survey as a superforecaster, but maybe 2-3%? But I do think chances are getting rather higher by 2040, and it's good we are preparing now. 

 ". I think an outlandish scenario like we find out JFK is actually still alive is mo... (read more)

2
Yarrow Bouchard 🔸
No, I mean it literally. I literally think something as bizarre as it turning out somehow JFK has really been alive all this time and his assassination was a hoax is more likely than the rapid scenario. I don’t think that’s obviously false. I think it’s obviously correct. For instance, Toby Ord has calculated it’s physically impossible to continue the scaling trend of RL training for LLMs. Bizarre and outlandish things are more likely than physically impossible things. That’s not all there is to say about the subject, but it’s a good start.

For what it's worth, I think "less than 0.1% likely by 2032" is PROBABLY also not in line with expert opinion. The Forecasting Research Institute, where I currently work, has just published a survey of AI experts and superforecasters on the future of AI, as part of our project LEAP, the Longitudinal Expert Panel on AI. In it, the experts' and supers' median estimate was a 23% chance that the survey's "rapid scenario" for AI progress would occur by 2030. Here's how the survey described the rapid scenario:

"By the end of 2030, in the rapid-progress world, AI sys... (read more)

2
Yarrow Bouchard 🔸
Very interested to look at the survey. Can you link to it?  I think there’s no chance of the rapid scenario, as in, much less than a 1 in 10,000 chance. I think an outlandish scenario like we find out JFK is actually still alive is more likely than that. Simply put, that will not happen. (99%+ confidence.)

My guess is you might find it hard to find EA people in global development stuff who are particularly interested in preserving/expanding cultural diversity. Generally the people who work on that stuff want to prioritize health, income and economic growth. 

I think most, though no doubt not all, people you'd think of as EA leaders think AI is the most important cause area to work in and have thought that for a long time. AI is also more fun to argue about on the internet than global poverty or animal welfare, which drives discussion of it.

 

But having said all that, there is still plenty EA funding of global health and development stuff, including by Open Philanthropy, who in fact have a huge chunk of the EA money in the world. People do and fund animal stuff too, including Open Phil. If you want to,... (read more)

0
AndreuAndreu
Hi David, OK, this is the most enlightening and decision-orienting answer I could get. Thanks! Indeed, I came to the Forums through a workshop and had a completely inverted expectation: that the leaders of EA were conscious of the AI fad and used that galvanising attention to redirect people to more pressing matters. But your comment, especially this bit, "most, though no doubt not all people you'd think of as EA leaders think AI is the most important cause area to work", really concerns me that the direction of the movement is somehow deceived and will come crashing down a few years down the road. Still, I hope what is structurally achieved by then might be 'effective' enough to survive the encounter with reality. Disclaimer: I come from theoretical and computational cosmology, so I have insight into how over-bloated the topic is compared to realistic prospects -- not unlike holography in the 60s and everyday use of nuclear power in the 50s. Humans, how lovely we are. Second disclaimer: I now work on anthropology and cultural loss. So with that perspective I really need to weigh the advantages vs. the inconveniences. My end game is to preserve and expand cultural diversity, which is a rather unaddressed topic, so given the law of logarithmic returns that this movement professes, I do have some hope of exponential returns from focusing on cultural diversity as a theme. Inversely, focusing too heavily on AI-related topics seems logarithmically inefficient, especially in a fad-dominated environment -- I can cite plenty of research, plus personal experience, on the latter if anybody is interested.

"I think if you have enough control over your diet to be a vegan, you have enough control to do one of the other diets that has the weight effects without health side effects. "

Fair point, I was thinking of vegans as a random sample in terms of their capacity for deliberate weight-loss dieting, when of course they very much are not. 

"In fact, weight loss is a common side effect of a vegan diet, which could explain all or most of any health upsides, rather than being vegan itself."

This is more a point against your thesis than for it, I think. It doesn't matter if the ideal meat diet is better than the ideal vegan diet, because people won't ever actually eat either; this is just the point about how people won't actually eat 2 cups of sesame seeds a day or whatever. If going vegan in practice typically causes people to lose weight, and this is usually a benefit, that's a point in favour o... (read more)

5
Kat Woods 🔶 ⏸️
Ozempic seems potentially good. Also, many other diets also cause weight loss (e.g. Mediterranean, paleo, etc). As I understand it, most diets lead to weight loss as long as you can keep them up. So just pick a diet that you can maintain long term.  I think if you have enough control over your diet to be a vegan, you have enough control to do one of the other diets that has the weight effects without health side effects. 

There is plausibly some advantage from delay, yes. For one thing, even if you don't have any preference for which side wins the race, making the gap larger plausibly means the leading country can be more cautious because their lead is bigger, and right now the US is in the lead. For another thing, if you absolutely forced me to choose, I'd say I'd rather the US won the race than China did (undecided whether the US winning is better/worse than a multipolar world with 2 winners). It's true that the US has a much worse record in terms of invading other places a... (read more)

I don't remember the size, but I was thinking Dustin still has Facebook shares also, and probably still wants Facebook to do well on some level. EDIT: Although it's possible he has sold his Facebook shares since the last time I remember them being explicitly mentioned somewhere. 

Yeah, I don't think all Nat Sec stuff is bad. Competition and rivalry here is inevitable to some degree, and we really do need Nat Sec people to help manage the US end of it, especially as they are also the people who know most about making treaties and deals between rivals. 

We also want to avoid a conventional war between the US and China (plausibly over Taiwan and TSMC). Although I guess there is an argument that X-risk considerations should dominate that, but I think I go with commonsense over longtermist philosophical argument on that one, and pro... (read more)

2
Tristan W
Ah okay, I better understand what you mean when you say Natsec now. On the China front, do you think there's any advantage at all to delaying their capacity for developing AI? To put that another way, is there any degree of increased risk of a US-China conflict that you'd be willing to accept for delaying China's AI development?  As a small point, the world in which China has developed TAI before we have, and has taken back Taiwan, doesn't seem stable at all to me. There is a sense in which what will happen is less clear just by virtue of the world now having TAI, but it seems fairly clear to me that China taking Taiwan would raise tensions, and that it would be arguably worse in such a time because the US would already have to be contending with the fact it lost the AI race. So I don't think you can dismiss the risks such a situation would pose so quickly, and it's not clear to me that even if opposing war between the two powers was your only objective that it would be safer to not take any protective action now.  On the last point, I wonder if your definition of nat sec is more broad than mine? I think that most of the examples you cite seem to much more squarely fit into reducing the spread of communism, which I see as fair distinct from what I traditionally view as nat sec policy.  It feels like the inherently global and non-isolationist actions taken to prevent the spread of communism stand as a partially opposed approach even, because nat sec policy often seems to involve walking back the US's connection to other places in the world. At the very least, I think that current nat sec people would look fairly distinct from those pushing these past policies.  Thanks for the multiple sources though, I think some of the citations there do seem to paint a very negative picture of US actions in those places, though I think it's only properly clear in the case of the Indonesia mass murders and seems to be a bit more uncertain elsewhere. Do you know any good book or s

Yeah, obviously Moskovitz is not motivated by personal greed or anything like that, since he is giving most of his money away, and I doubt Karnofsky is primarily motivated by money in that sense either. And I think both of them have done great things, or I wouldn't be on this forum! But having your business do well is a point of pride for business people; it means more money *for your political and charitable goals*, and people also generally want their spouses' businesses to succeed for reasons beyond "I want money for myself to buy nice stuff". (For people who don't know: Karnofsky is married to Anthropic's President, which also means the CEO is his brother-in-law.) 

3
Tristan W
Ah okay, yeah: the idea is that the success of the business itself is something they'll be apt to really care about, and on top of that there's a huge upside for positive impact if there's financial success, because they can then deploy that towards further charitable ends. Do you know off the top of your head how big a stake Dustin has in Anthropic? I think the amount would play a significant role here. 

"Center for AI Safety Action Fund (CAIS AF): CAIS AF is well placed to focus on the national security angle of AIS, with a current focus on chip security, supporting multiple promising efforts towards e.g. location verification and increased BIS capacity."

I am a bit worried that "the national security angle of AIS" is sort of code for "screwing over China to advance US hegemony, with possible mild AI safety side effects", and that Open Phil has maybe been funding a bit too much of that sort of stuff lately, even if they haven't funded CAIS AF.... (read more)

2
Tristan W
I agree it's wading into tricky territory, because worries over China's competitiveness can fuel further development as well as safety, but it seems to me that choosing very particular interventions as follow-ups to that framing can reduce that risk. Location verification mechanisms are a great example, where the framing can even just be "This is a strategic technology and we know other countries are violating rules we've placed around exports. Our law should be respected and this is one way to achieve that." Again, I understand the worry, but I do think that with careful framing it can be safety-supporting. 
2
MichaelDickens
My rough impression is that there are indeed some "AI safety" orgs that operate in the way you describe, where they are focused more on promoting US hegemony and less on preventing AI from killing everyone.* But CAIS is more on the notkilleveryoneism side of things.

*From what I've seen, the biggest offenders are CSET, Horizon Institute, and Fathom.

Far-future effects are the most important determinant of what we ought to do

Weakly agree (at least with the caveat that I believe in some sort of deontic constraints on utility maximizing). I think it is unclear whether we can influence the far future in a predictable way, though I'd say it's slightly more likely than not that we can, and I think the expected number of people and other sentient beings in the far future is likely very, very large, as Greaves and MacAskill argue. 

"This disagreement leads to my disagreement with their recommendations—relatively incremental interventions seem much more promising to me."

What's the reasoning here? 

I should say that I don't actually think Open Phil's leadership are anything other than sincere in their beliefs and goals. The sort of bias I am talking about operates more subtly than that. (See also the claim often attributed to Chomsky's Manufacturing Consent that the US media functions as pro-US, pro-business propaganda, not because journalists are just responding to incentives in a narrow way, but because newspaper owners hire people who sincerely share their worldview, which is common at elite universities etc. anyway.) 

3
Tristan W
That's a really interesting example; it does seem plausible to me that there's some selection pressure not just for more researchers but for more AI-company-friendly views. What do you think would be other visible effects of a bias towards being friendly to the AI companies? 

That is shockingly little money for advocacy, if the 2% figure is correct. Maybe it's the right decision, since this stuff is complex, but it's hard to avoid being a bit suspicious that the fact that leading EAs (e.g. Moskovitz and Karnofsky, for starters) make money if certain AI companies do well has something to do with our reluctance to fund pressuring AI companies to do things they don't want to do. 

7
Tristan W
I'll flag that the actual amount is potentially a bit larger (the 2% is my quick estimate based on public, rather than private, reports), but yeah, either way it's likely quite small. FWIW I don't think it's likely that potential profit is playing a role per se; put slightly differently, some major players in the space are more bought into the idea that the AI companies can be responsible, and thus that we might be jumping the gun by lobbying for safety measures they don't see as productive. 
3
Matrice Jacobine🔸🏳️‍⚧️
I agree with you on the meta case for suspicion about Open Philanthropy leadership, but in this case, AFAICT, the Center for AI Policy was funded by the Survival and Flourishing Fund, which is aligned with the rationalist cluster and also funds PauseAI.

Yeah, I know what I described is not really relevant to Karthik's argument. But we don't want a situation where people hear "free trade is good, economists agree" and then decide to lobby for developing countries to drop their trade barriers in cases where that is actually harmful. 

3
David T
Fair. I agree with this. Plenty of entities who aren't EAs are doing that sort of lobbying already anyway.

"And one of the least contested arguments amongst people that have actually studied it. "

Is it really that far outside the economics mainstream to think that, in poor countries, some trade barriers can sometimes help protect nascent industries and hence speed industrialization, and that this can be worth the cost the trade barriers themselves impose on consumers in those countries? I kind of had a vague sense that some famous books/people argue this? 

7
David T
There are some good arguments that in some cases developing countries can benefit from protecting some of their own nascent industries. There are basically no arguments that the developed world putting tariffs (or anti-dumping duties) on imports helps the developing world, which is the harmful scenario Karthik discusses in his article as an example of Nunn's argument that rich countries should stop doing things that harm poorer countries. Developed countries know full well that these limit poorer countries' ability to export to them... but that's also why they impose them.

Do you actually oppose transhumanism or atheism? That would slightly surprise me for an evo psych prof, but maybe I am totally wrong. Unlike you, I am not, to put it mildly, a fan of National Conservatism (though I'm happy to see anyone tell anyone to care about AI takeover and mass unemployment from AI*), but it seems a bit disrespectful and manipulative towards them to talk like you share their fear of atheism and genetic engineering if you don't. 


*I actually think the point about how we can't just rely on big tech's benevolence to keep paying basic... (read more)

7
Geoffrey Miller
David -- I considered myself an atheist for several decades (partly in alignment with my work in evolutionary psychology), and would identify now as an agnostic (insofar as the Simulation Hypothesis has some slight chance of being true, and insofar as 'Simulation-Coders' aren't functionally any different from 'Gods', from our point of view). And I'm not opposed to various kinds of reproductive tech, regenerative medicine research, polygenic screening, etc.

However, IMHO, too many atheists in the EA/Rationalist/AI Safety subculture have been too hostile or dismissive of religion to be effective in sharing the AI risk message with religious people (as I alluded to in this post). And I think way too much overlap has developed between transhumanism and the e/acc cult that dismisses AI risk entirely, and/or that embraces human extinction and replacement by machine intelligences. Insofar as 'transhumanism' has morphed into contempt for humanity-as-it-is, and into a yearning for hypothetical-posthumanity-as-it-could-be, then I think it's very dangerous.

Modest, gradual genetic selection or modification of humans to make them a little healthier or smarter, generation by generation? That's fine with me. Radical replacement of humanity by ASIs in order to colonize the galaxy and the lightcone faster? Not fine with me.

I think you could have strengthened your argument here further by talking about how even in Dario's op-ed opposing the ban on state-level regulation of AI, he specifically says that regulation should be "narrowly focused on transparency and not overly prescriptive or burdensome". That seems to indicate opposition to virtually any regulations that would actually directly require doing anything at all to make models themselves safer. It's demanding that regulations be more minimal than even the watered-down version of SB 1047 that Anthropic publicly claimed to support. 

2
Remmelt
You’re right. I totally skipped over this. Let me try to integrate that quote into this post. 

"However the exclusive and calculated nature of EAs is a mismatch with the broad nature of politics and a spontaneity of the average voter. "

I think this is probably true to a large extent, but not all roles in politics are public-facing ones. 

1
Jessica Udeh
True, though I think the behind-the-scenes approach has its limits too. Policy still needs to be sold and implemented by people who face the public. Even if EAs stay in wonky roles, someone eventually has to translate those ideas into something voters actually care about.

It seems unlikely that developed-world EAs are in a particularly good position to get broad political support for developing-world interventions, though. Maybe EAs from and in developing countries could do something here. 

1
Jessica Udeh
Yeah, makes sense. Local EAs would definitely understand the political landscape better than outsiders designing policies from afar.