I think on the racism front, Yarrow is referring to the perception that the reason Moskovitz won't fund rationalist stuff is that either he thinks a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn't want the hassle. I think that belief genuinely is quite common among rationalists, no? Although there are clearly rationalists who don't believe it, and most rationalists are not right-wing extremists as far as I can tell.
Not everything being funded here even IS alignment techniques, but also, insofar as you just want a generally better understanding of AI as a domain through science, why wouldn't you learn useful stuff from applying techniques to current models? If the claim is that current models are too different from any possible AGI for this info to be useful, why do you think "do science" would help prepare for AGI at all? Assuming you do think that, which still seems unclear to me.
I asked about genuine research creativity, not AGI, but I don't think this conversation is going anywhere at this point. It seems obvious to me that "does stuff mathematicians say makes up the building blocks of real research" is meaningful evidence that the chance that models will do research-level maths in the near future is not ultra-low, given that capabilities do increase with time. I don't think this is analogous to IQ tests or the bar exam, and for other benchmarks, I would really need to see what you're claiming is the equivalent of the transfer from Frontier Math 4 to real maths that was intuitive but failed.
The forum is kind of a bit dead generally, for one thing.
I don't really get on what grounds you are saying that the Coefficient Grants are not funding people to do science, apart from the governance ones. I also think you are switching back and forth between "No one knows when AGI will arrive, so the best way to prepare just in case is more normal AI science" and "We know that AGI is far off, so there's no point doing normal science to prepare against AGI now, although there might be other reasons to do normal science."
I guess I still just want to ask: if models hit 80% on Frontier Math by, like, June 2027, how much does that change your opinion on whether models will be capable of "genuine creativity" in at least one domain by 2033? I'm not asking for an exact figure, just a ballpark guess. If the answer is "hardly at all", is there anything short of a 100% clear example of a novel publishable research insight in some domain that would change your opinion on when "real creativity" will arrive?
I think what you are saying here is mostly reasonable, even if I am not sure how much I agree: it seems to turn on very complicated issues in the philosophy of probability/decision theory, what you should do when accurate prediction is hard, and exactly how bad predictions have to be to be valueless. Having said that, I don't think you're going to succeed in steering conversation away from forecasts if you keep writing about how unlikely it is that AGI will arrive near term. Which you have done a lot, right?
I'm genuinely not sure how much EA funding...
I guess I feel like if being able to solve mathematical problems designed by research mathematicians to be similar to the kind of problems they solve in their actual work is not decent evidence that AIs are on track to be able to do original research in mathematics in less than, say, 8 years, then what would you EVER accept as empirical evidence that we are on track for that, but not there yet?
Note that I am not saying this should push your overall confidence to over 50% or anything, just that it ought to move you up by a non-trivial amount relative to...
"Rob Wiblin opines that the fertility crash would be a global priority if not for AI likely replacing human labor soon and obviating the need for countries to have large human populations"
This is a case where it really matters whether you are giving an extremely high chance that AGI is coming within 20-30 years, or merely a decently high chance. If you think the chance is like 75%, and the claim that, conditional on no AGI, low fertility would be a big problem is correct, then the expected size of the problem is only cut by 4x, which is compatible with it still being large and ...
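To spell out the arithmetic behind the "cut by 4x" figure (a back-of-the-envelope sketch, assuming the fertility problem only matters in worlds where AGI does not arrive):

$$
\text{Expected severity} = P(\text{no AGI}) \times \text{Severity}_{\text{no AGI}} = (1 - 0.75) \times \text{Severity}_{\text{no AGI}} = \tfrac{1}{4} \times \text{Severity}_{\text{no AGI}}
$$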
I'm not actually that interested in defending:
My thought process didn't go beyond "Yarrow seems committed to a very low chance of AI having real, creative research insights in the next few years, here is something that puts some pressure on that". Obviously I agree that when AGI will arrive is a different question from when models will have real insights in research mathematics. Nonetheless I got the feeling, maybe incorrectly, that your strength of conviction that AGI is far off is partly based on things like "models in the current paradigm can't have 'real insight'", so it seemed relevant, even though "real ins...
Working on AI isn't the same as doing EA work on AI to reduce X-risk. Most people working in AI are just trying to make the AI more capable and reliable. There probably is a case for saying that "more reliable" is actually EA X-risk work in disguise, even if unintentionally, but it's definitely not obvious this is true.
"Any sort of significant credible evidence of a major increase in AI capabilities, such as LLMs being able to autonomously and independently come up with new correct ideas in science, technology, engineering, medicine, philosophy, economics, psychology, etc"
Just in the spirit of pinning people to concrete claims: would you count progress on Frontier Math 4, like say, models hitting 40%*, as being evidence that this is not so far off for mathematics specifically? (To be clear, I think it is very easy to imagine models that are doing genuinely significant re...
Yeah, it's a fair objection that even answering the why question the way I did presupposes that EAs are wrong, or at least merely luckily right. (I think this is a matter of degree, and that EAs overrated the imminence of AGI and the risk of takeover on average, but it's still at least reasonable to believe AI safety and governance work can have very high expected value for roughly the reasons EAs do.) But I was responding to Yarrow, who does think that EAs are just totally wrong, so I guess really I was saying "conditional on a sociological explanation being appropriate, I don't think it's as LW-driven as Yarrow thinks", although LW is undoubtedly important.
Can you say more about what makes something "a subjective guess" for you? When you say well under 0.05% chance of AGI in 10 years, is that a subjective guess?
Like, suppose I am asked, as a pro forecaster, to say whether the US will invade Syria after a US military build-up involving aircraft carriers in the Eastern Med, and I look for newspaper reports of signs of this, look up the base rate of how often the US bluffs with a military build-up rather than invading, and then make a guess as to how likely an invasion is. Is that "a subjective guess"? ...
I don't think EAs' AI focus is a product only of interaction with Less Wrong (not claiming you said otherwise), but I do think people outside the Less Wrong bubble tend to be less confident AGI is imminent, and in that sense less "cautious".
I think EAs' AI focus is largely a product of the fact that Nick Bostrom knew Will and Toby when they were founding EA, and was a big influence on their ideas. Of course, to some degree this might be indirect influence from Yudkowsky, since he was always interacting with Nick Bostrom, but it's hard to know in what dir...
eh, I think the main reason EAs believe AGI stuff is reasonably likely is because this opinion is correct, given the best available evidence[1].
Having a genealogical explanation here is sort of answering the question on the wrong meta-level, like giving a historical explanation for "why do evolutionists believe in genes" or telling a touching story about somebody's pet pig for "why do EAs care more about farmed animal welfare than tree welfare."
Or upon hearing "why does Google use ads instead of subscriptions?" answering with the history of the...
People vary a lot in how they interpret terms like "unlikely" or "very unlikely" in % terms, so I think >10% is not all that obvious. But I agree that it is evidence they don't think the whole idea is totally stupid, and that a relatively low probability of near-term AGI is still extremely worth worrying about.
I don't think it's clear, absent further argument, that there has to be a 10% chance of full AGI in the relatively near future to justify the currently high valuations of tech stocks. New, more powerful models could be super-valuable without being able to do all human labour. (For example, even if they weren't that useful working alone, they might make human workers in most white-collar occupations much more productive.) And you haven't actually provided evidence that most experts think there's a 10% chance the current paradigm will lead to AGI. Though the latter point...
"i don't believe very small animals feel pain, and if they do my best guess would be it would be thousands to millions orders of magnitude less pain than larger animals."
I'll repeat what regular readers of the forum are bored of me saying about this. As a philosophy of consciousness PhD, I barely ever heard the idea that small animals are conscious but their experiences are way less intense. At most, it might be a consequence of integrated information theory, but not one I ever saw discussed, and most people in the field don't endorse that one theory anywa...
Yeah, the error here was mine, sorry. I didn't actually work on the survey, and I missed that it was actually estimating the % of the panel agreeing we are in a scenario, not the chance that that scenario will win a plurality of the panel. This is my fault, not Connacher's. I was not one of the survey designers, so please don't assume from this that the people at the FRI who designed the survey didn't understand their own questions or anything like that.
For what it's worth, I think this is decent evidence that the question is too confusing to be usefu...
I haven't done the sums myself, but do we know for sure that they can't make money without being all that useful, so long as a lot of people interact with them every day?
Is Facebook "useful"? Not THAT much. Do people pay for it? No, it's free. Instagram is even less useful than Facebook, which at least used to actually be good for organizing parties and pub nights. Does Meta make money? Yes. Does equally useless TikTok make money? I presume so, yes. I think tech companies are pretty expert at monetizing things that have no user fee, and are...
Ok, there's a lot here, and I'm not sure I can respond to all of it, but I will respond to some of it.
-I think you should be moved just by my telling you about the survey. Unless you are super confident either that I am lying/mistaken about it, or that the FRI was totally incompetent in assembling an expert panel, the mere fact that I'm telling you that the survey's median expert credence in the rapid scenario is 23% ought to make you think there is at least a pretty decent chance that you are giving it several orders of magnitude less credence than ...
That's pretty incomprehensible to me even as a considerable skeptic of the rapid scenario. Firstly, you have experts giving a 23% chance and it's not moving you up even to over 1 in 100,000, let's say, although the JFK scenario is probably a hell of a lot less likely than that: even if his assassination was faked, despite there literally being a huge crowd who saw his head get blown off in public, he would still have to be 108 to be alive today. Secondly, in 2018, AI could do, to a first approximation, basically nothing outside of highly specialized uses li...
There is an ambiguity about "capabilities" versus deployment here to be fair. Your "that will not happen" seems somewhat more reasonable to me if we are requiring that the AIs are actually deployed and doing all this stuff versus merely that models capable of doing this stuff have been created. I think it was the latter we were forecasting, but I'm not 100% certain.
https://leap.forecastingresearch.org/ The stuff is all here somewhere, though it's a bit difficult to find all the pieces quickly and easily.
For what it's worth, I think the chance of the rapid scenario is considerably less than 23%, but a lot more than 0.1%. I can't remember the number I gave when I did the survey as a superforecaster, but maybe 2-3%? But I do think the chances get rather higher by 2040, and it's good we are preparing now.
". I think an outlandish scenario like we find out JFK is actually still alive is mo...
For what it's worth, I think "less than 0.1% likely by 2032" is PROBABLY also not in line with expert opinion. The Forecasting Research Institute, where I currently work has just published a survey of AI experts and superforecasters on the future of AI, as part of our project LEAP, the Lognitudinal Expert Panel on AI. In it, experts and supers median estimate was a 23% chance of the survey's "rapid scenario" for AI progress by 2030 would occur. Here's how the survey described the rapid scenario:
"By the end of 2030, in the rapid-progress world, AI sys...
I think most, though no doubt not all, people you'd think of as EA leaders think AI is the most important cause area to work in, and have thought that for a long time. AI is also more fun to argue about on the internet than global poverty or animal welfare, which drives discussion of it.
But having said all that, there is still plenty of EA funding of global health and development stuff, including by Open Philanthropy, who in fact have a huge chunk of the EA money in the world. People do and fund animal stuff too, including Open Phil. If you want to,...
"I think if you have enough control over your diet to be a vegan, you have enough control to do one of the other diets that has the weight effects without health side effects. "
Fair point, I was thinking of vegans as a random sample in terms of their capacity for deliberate weight-loss dieting, when of course they very much are not.
"In fact, weight loss is a common side effect of a vegan diet, which could explain all or most of any health upsides, rather than being vegan itself."
This is more a point against your thesis than for it, I think. It doesn't matter if the ideal meat diet is better than the ideal vegan diet, because people won't ever actually eat either; this is just the point about how people won't actually eat 2 cups of sesame seeds a day or whatever. If going vegan in practice typically causes people to lose weight, and this is usually a benefit, that's a point in favour o...
There is plausibly some advantage from delay, yes. For one thing, even if you don't have any preference for which side wins the race, making the gap larger plausibly means the leading country can be more cautious because their lead is bigger, and right now the US is in the lead. For another thing, if you absolutely forced me to choose, I'd say I'd rather the US won the race than China did (undecided whether the US winning is better/worse than a multipolar world with 2 winners). It's true that the US has a much worse record in terms of invading other places a...
Yeah, I don't think all Nat Sec stuff is bad. Competition and rivalry here are inevitable to some degree, and we really do need Nat Sec people to help manage the US end of it, especially as they are also the people who know most about making treaties and deals between rivals.
We also want to avoid a conventional war between the US and China (plausibly over Taiwan and TSMC). I guess there is an argument that X-risk considerations should dominate that, but I think I go with common sense over longtermist philosophical argument on that one, and pro...
Yeah, obviously Moskovitz is not motivated by personal greed or anything like that, since he is giving most of his money away, and I doubt Karnofsky is primarily motivated by money in that sense either. And I think both of them have done great things, or I wouldn't be on this forum! But having your business do well is a point of pride for business people, it means more money *for your political and charitable goals*, and people also generally want their spouses' business to succeed for reasons beyond "I want money for myself to buy nice stuff". (For people who don't know: Karnofsky is married to Anthropic's President, which also means the CEO is his brother-in-law.)
"Center for AI Safety Action Fund (CAIS AF): CAIS AF is well placed to focus on the national security angle of AIS, with a current focus on chip security, supporting multiple promising efforts towards e.g. location verification and increased BIS capacity."
I am a bit worried that "the national security angle of AIS" is sort of code for "screwing over China to advance US hegemony with possible mild AI safety side effects", and I worry that Open Phil. has maybe been funding a bit too much of that sort of stuff lately, even if they haven't funded CAIS AF....
Far-future effects are the most important determinant of what we ought to do
Weakly agree (at least with the caveat that I believe in some sort of deontic constraints on utility maximizing). I think it is unclear whether we can influence the far future in a predictable way, though I'd say it's slightly more likely than not that we can, and I think the expected number of people and other sentient beings in the far future is likely very, very large, as Greaves and MacAskill argue.
I should say that I don't actually think Open Phil's leadership are anything other than sincere in their beliefs and goals. The sort of bias I am talking about operates more subtly than that. (See also the claim often attributed to Chomsky's Manufacturing Consent that the US media functions as pro-US, pro-business propaganda, not because journalists are just responding to incentives in a narrow way, but because newspaper owners hire people who sincerely share their worldview, which is common at elite universities (etc.) anyway.)
That is shockingly little money for advocacy if the 2% figure is correct. Maybe it's the right decision (this stuff is complex), but it's hard to avoid being a bit suspicious that the fact that leading EAs (e.g. Moskovitz and Karnofsky, for starters) make money if certain AI companies do well has something to do with our reluctance to fund pressuring AI companies to do things they don't want to do.
"And one of the least contested arguments amongst people that have actually studied it. "
Is it really that far from the mainstream in economics to think that sometimes, in poor countries, some trade barriers can help protect nascent industries and hence speed industrialization, and that this can be worth the cost to consumers in those countries of the trade barriers themselves? I had a vague sense some famous books/people argue this?
Do you actually oppose transhumanism or atheism? That would slightly surprise me for an evo psych prof, but maybe I am totally wrong. Unlike you, I am not, to put it mildly, a fan of National Conservatism (happy to see anyone tell anyone to care about AI takeover and mass unemployment from AI*, though), but it seems a bit disrespectful and manipulative towards them to talk like you share their fear of atheism and genetic engineering if you don't.
*I actually think the point about how we can't just rely on big tech's benevolence to keep paying basic...
I think you could have strengthened your argument here further by talking about how even in Dario's op-ed opposing the ban on state-level regulation of AI, he specifically says that regulation should be "narrowly focused on transparency and not overly prescriptive or burdensome". That seems to indicate opposition to virtually any regulations that would actually directly require doing anything at all to make models themselves safer. It's demanding that regulations be more minimal than even the watered-down version of SB 1047 that Anthropic publicly claimed to support.
I think my basic reaction here is that longtermism is importantly correct about the central goal of EA if there are longtermist interventions that are actionable, promising and genuinely longtermist in the weak sense of "better than any other causes because of long-term effects", even if there are zero examples of LT interventions that meet the "novelty" criteria, or lack some significant near-term benefits.
Firstly, I'd distinguish here between longtermism as a research program, and longtermism as a position about what causes should be prioritized ri...