All of Linch's Comments + Replies

A relevant reframing here is whether having a PhD provides a high Bayes factor update to being hired. Eg, if people with and without PhDs have a 2% chance of being hired, but ">50% of successful applicants had a PhD" because most applicants have a PhD, then you should probably not include this, but if 1 in 50 applicants are hired, but it rises to 1 in 10 people if you have a PhD and falls to 1 in 100 if you don't, then the PhD is a massive evidential update even if there is no causal effect.
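A minimal sketch of the arithmetic behind this framing, using only the numbers in the comment (the share of applicants with a PhD is backed out from consistency with those numbers; it is not stated in the original):

```python
# Hire rates from the comment: 1 in 50 overall, 1 in 10 with a PhD,
# 1 in 100 without one.
p_hire = 1 / 50
p_hire_phd = 1 / 10
p_hire_nophd = 1 / 100

# Share of applicants with a PhD implied by consistency:
# p_hire = p_phd * p_hire_phd + (1 - p_phd) * p_hire_nophd
p_phd = (p_hire - p_hire_nophd) / (p_hire_phd - p_hire_nophd)  # ~0.11

# Bayes' rule: how common is a PhD among hired vs. rejected applicants?
p_phd_given_hired = p_phd * p_hire_phd / p_hire                 # ~0.56
p_phd_given_rejected = p_phd * (1 - p_hire_phd) / (1 - p_hire)  # ~0.10

# Bayes factor: how much observing "has a PhD" shifts the odds of being hired
bayes_factor = p_phd_given_hired / p_phd_given_rejected         # ~5.4

print(f"Implied share of applicants with a PhD: {p_phd:.2f}")
print(f"P(PhD | hired) = {p_phd_given_hired:.2f}")
print(f"Bayes factor of 'has a PhD' for being hired: {bayes_factor:.1f}")
```

So in this scenario only ~11% of applicants hold a PhD, yet ~56% of hires do, and the PhD carries a Bayes factor of roughly 5 even with no causal effect assumed.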

1
ixex
1d
Exactly
2
David_Moss
1d
I think this is one piece of information you would need to include to stop such a statement from being misleading, but as I argue here, there are potentially lots of other pieces of information which would need to be included to make it non-misleading (i.e. information about any and all other confounders which explain the association). Otherwise, applicants will not know that, conditional on X, they are not less likely to be successful if they do not have a PhD (even though disproportionately many people with X have a PhD). Edit: TLDR, if you only condition on applying, and not also on satisfying the role requirements, then this information will still be misleading (e.g. causing people who meet the requirements but lack the confounded proxy to underestimate their chances).

@JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors?

I genuinely don't know if this is an interesting/relevant question that's unique to EA. To me, the obvious follow-up question here is whether EA is unique or special in having this (average) level of vitriol in critiques of us. Like, is the answer to "why is so much EA criticism hostile and lazy" the same as the answer to "why is so much criticism, period, hostile and lazy?" Or are there specific factors of EA that are relevant here?

I haven't been su... (read more)

Linch
3d

Sure, social aggression is a rather subjective call. I do think decoupling/locality norms are relevant here. "Garden variety incompetence" may not have been the best choice of words on Sean's part,[1] but it did seem like a) a locally scoped comment specifically answering a question that people on the forum understandably had, b) much of it empirically checkable (other people formerly at FHI, particularly ops staff, could present their perspectives re: relationship management), and c) Bostrom's capacity as director is very much relevant to the discussi... (read more)

It wasn't carefully chosen. It was the term used by the commenter I was replying to. I was a little frustrated, because it was another example of a truth-seeking enquiry by Milena getting pushed down the track of only-considering-answers-in-which-all-the-agency/wrongness-is-on-the-university's-side (including some pretty unpleasant options relating to people I'd worked with ('parasitic egregore/siphon money')).

>Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a bureaucratic parasitic e... (read more)

Interesting example! I don't know much about Tate, but I understand him as a) only "influential" in a very ephemeral way, in the way that e.g. pro wrestlers are, and b) only influential among people who themselves aren't influential.

It's possible we aren't using the word "influential" in the same way. E.g. implicit in my understanding of "influential" is something like "having influence on people who matter" whereas maybe you're just defining it as "having influence on (many) people, period?"

I claim that on net FHI would've brought more prestige to Oxford than the other way around, especially in the counterfactual world where it thrived/was allowed to thrive (which might be impractical for other reasons). 

I might think of FHI as having borrowed prestige from Oxford. I think it benefited significantly from that prestige. But in the longer run it gets paid back (with interest!).

That metaphor doesn't really work, because it's not that FHI loses prestige when it pays it back -- but I think the basic dynamic of it being a trade of prestige at different points in time is roughly accurate.

I might not be tracking all the exact nuances, but I'd have thought that prestige is ~just legible influence aged a bit, in the same way that old money is just new money aged a bit. I model institutions like Oxford as trying to play the "long game" here.

The point I’m trying to make is that there are many ways you can be influential (including towards people that matter) and only some of them increase prestige. People can talk about your ideas without ever mentioning or knowing your name, you can be a polarising figure who a lot of influential people like but who it’s taboo to mention, and so on.

I also do think you originally meant (or conveyed) a broader meaning of influential - as you mention economic output and the dustbins of history, which I would consider to be about broad influence.

Andrew Tate is very influential, but entirely lacking in prestige.

Erm, looking at the accomplishments of FHI, I'd be genuinely surprised if random philosophers from Oxford will be nearly as influential going forwards. "It's the man that honors the medal."

Influence =/= prestige

6
Ben Millwood
3d
This sounds like it's disagreeing with the parent comment but I'm not sure if it is?

The vast majority of academic philosophy at prestigious universities will be relegated to the dustbins of history; FHI's work is quite plausibly an exception.

To be clear, this is not a knock on philosophy; I'd guess that total funding for academic philosophy in the world is on the order of $1B. Most things that are 0.001% of the world economy won't be remembered much 100 years from now. I'd guess philosophy in general punches well above its weight here, but base rates are brutal.

You’re answering a somewhat different question to the one I’m bringing up

What a champ. If institutions can be heroes, FHI is surely one.

Linch
3d

...I do not consider myself to be under the obligation that all negative takes I share about an organization...

Fwiw I think part of the issue that I had[1] with your comment is that the comment came across as much more aggressive and personal, rather than as a critique of an organization. I do think the bar for critiquing individuals ought to be moderately higher than the bar for critiquing organizations, particularly when the critique comes from a different place/capacity[2] than strictly necessary for the conversation[3].

I expect some other pe... (read more)

-2
Habryka
3d
Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and a very harsh one at that (harsher than mine I think).  Like, Sean's comment basically said "I think it was directly Bostrom's fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain". My comment is more specific, but I don't really see it as harsher. I also have a prior against going into critiques of individual people, but that's what Sean did in this context (of course Bostrom's judgement is relevant, but I think in that case so is Sean's).

People who prefer short videos to longform text might enjoy SereneDesiree's interview of me talking about Open Asteroid Impact's work:

We covered: 

  • Our mission to have as much impact as possible and reshape the Earth
  • OAI's approach to safety
  • Competitors 
  • Our commitment to DEI
  • Our windfall clause

I'd expect there would be some details of some applications that wouldn't be appropriate to share on a public forum though.

Hopefully grantees can opt-in/out as appropriate! They don't need to share everything.

Grantees are obviously welcome to do this. That said, my guess is that this will make the forum less enjoyable/useful for the average reader, rather than more. 

2
Vasco Grilo
11d
Right, but they have not been doing it. So I assume EA Funds would have to at least encourage applicants to do it, or even make it a requirement for most applications. There can be confidential information in some applications, but, as you said below, applicants do not have to share everything in their public version. I guess the opposite, but I do not know. I am mostly in favour of experimenting with a few applications, and then deciding whether to stop or scale up.
3
David T
12d
I think a dedicated area would minimise the negative impact on people who aren't interested whilst potentially adding value (to prospective applicants in understanding what did and didn't get accepted, and possibly also to grant assessors if there was occasional additional insight offered by commenters). I'd expect there would be some details of some applications that wouldn't be appropriate to share on a public forum though.

This entire thread just demonstrates how confused and useless it is to argue "by definition", or argue about term definitions.

You keep inserting words into people's mouths lmao. Nobody said "by definition" before you did. (Control-F for "by definition" if you don't believe me). 

I did not miss your "if." I didn't think it was necessary to go into the semantics dive because I thought the analogy would be relatively clear. Let me try again:

In general, when someone says X group is Y, a reasonable interpretation is that members of X group are more likely t... (read more)

The comment you're replying to has somewhat sloppy language and reasoning. Unfortunately your comment managed to be even worse.

If white supremacists are by definition non-respectful to non-white people, and Hanania appears fairly respectful to non-white people, perhaps that allows us to conclude that Hanania does not, in fact, qualify for your definition of "white supremacist"?

This line of reasoning is implausible. If having a single nonwhite person over on a podcast without being rude is strong evidence against white supremacy, trusting nonwhite people en... (read more)

-6
Ebenezer Dukakis
15d

(Appreciate the upvote!)

At a high level, I'm of the opinion that we practice better reasoning transparency than ~all EA funding sources outside of global health, e.g. a) I'm responding to your thread here and other people have not, b) (I think) people can have a decent model of what we actually do rather than just an amorphous positive impression, and c) I make an effort to politely deliver messages that most grantmakers are aware of but don't say because they're worried about flak.

It's really not obvious that this is the best use of limited re... (read more)

2
Vasco Grilo
12d
Manifund already has quite a good infrastructure for sharing grants. However, have you considered asking applicants to post a public version of their applications on EA Forum? People who prefer to remain anonymous could use an anonymous account, and anonymise the public version of their grant. At a higher cost, there would be a new class of posts[1] which would mimic some of the features of Manifund, but this is not strictly necessary. The posts with the applications could simply be tagged appropriately (with new tags created for the purpose), and include a standardised section with some key information, like the requested amount of funding, and the status of the grant (which could be changed over time by editing the post). The idea above is inspired by some thoughts from Hauke Hillebrandt. 1. ^ As of now, there are 3 types: normal posts, question posts and linkposts/crossposts.
2
Vasco Grilo
15d
To be clear, the criticisms I make in the post and comments apply to all grantmakers I mentioned in the post except for CE. I have skimmed some, but the vast majority of my donations have been going to AI safety interventions (via LTFF). I may read CE's reports in more detail in the future, as I have been moving away from AI safety to animal welfare as the most promising cause area. I do not care about transparency per se[1], but I think there is usually a correlation between it and cost-effectiveness (for reasons like the ones you mentioned inside parentheses). So, a priori, lower transparency updates me towards lower cost-effectiveness. Cool! I can see this being the case, as people currently get to know about most accepted applications, but nothing about the rejected ones. 1. ^ I fully endorse expected total hedonistic utilitarianism, so I only intrinsically value/disvalue positive/negative conscious experiences.

Hmm, I still think your numbers are not internally consistent but I don't know if it's worth getting into.

Really late to respond to this! Just wanted to quickly say that I've been mulling over this question for a while and don't have clear/coherent answers; hope other people (at EAIF and elsewhere) can comment with either more well-thought-out responses or their initial thoughts!

Less importantly,

In any case, EA Funds' mean amount granted is 76.0 k$, so 52 words/grant would result in 0.684 word/k$ (= 52/(76.0*10^3)), which is lower than the 1.57 word/k$ I estimated above

You previously said:
> The mean length of the write-up of EA Funds' grants is 14.4 words

So I'm a bit confused here.

Also for both LTFF and EAIF, when I looked at mean amount granted in the past, it was under $40k rather than $76k. I'm not sure how you got $76k. I suspect at least some of the difference is skewed upwards by our Global Health and Development fund. Ou... (read more)

2
Vasco Grilo
16d
This is the mean number of words of the write-ups on EA Funds' database. 52 words in my last comment was supposed to be the words per grant regarding the payout report you mentioned. I see now that you said 40 words, so I have updated my comment above (the specific value does not affect the point I was making). Makes sense. For LTFF and the Effective Altruism Infrastructure Fund (EAIF), I get a mean amount granted of 42.9 k$. For these 2 funds plus the Animal Welfare Fund (AWF), 47.4 k$, so as you say the Global Health and Development Fund (GHDF) pushes up the mean across all 4 funds. I am calculating the mean amount granted based on the amounts provided in the database, without any adjustments for inflation. I agree. However, the 2nd point also means donating to GHDF has basically the same effect as donating to GiveWell's funds, so I think GHDF should be doing something else. To the extent Caleb Parikh seems to dispute this a little, I would say it would be worth having public writings about it.

Thanks for engaging as well. I think I disagree with much of the framing of your comment, but I'll try my best to only mention important cruxes.

  • I don't think wordcount is a good way to measure information shared
  • I don't think "per amount granted" is a particularly relevant denominator when different orgs have very different numbers of employees per amount granted.
  • I don't think grantmakers and incubators are a good like-for-like comparison. 
  • As a practical matter, I neither want to write 500-1000 pages/year of grants nor think it's the best use of my tim
... (read more)
4
Vasco Grilo
16d
Thanks for the detailed comment. I strongly upvoted it. I agree the number of words per grant is far from an ideal proxy. At the same time, the median length of the write-ups on the database of EA Funds is 15.0 words, and accounting for what you write elsewhere does not impact the median length because you only write longer write-ups for a small fraction of the grants, so the median information shared per grant is just a short sentence. So I assume donors do not have enough information to assess the median grant. On the other hand, donors do not necessarily need detailed information about all grants because they could infer how much to trust EA Funds based on longer write-ups for a minority of them, such as the ones in your posts. I think I have to recheck your longer write-ups, but I am not confident I can assess the quality of the grants with longer write-ups based on these alone. I suspect trusting the reasoning of EA Funds' fund managers is a major reason for supporting EA Funds. I guess me and others like longer write-ups because transparency is often a proxy for good reasoning, but we had better look into the longer write-ups, and assess EA Funds based on them rather than the median information shared per grant. At least a priori, I would expect the information shared about a grant to be proportional to the amount of effort put into assessing it, and this to be proportional to the amount granted, in which case the information shared about a grant would be proportional to the amount granted. The grants you assessed in LTFF's most recent report were of 200 and 71 k$, and you wrote a few paragraphs about each of them. In contrast, CE's seed funding per charity in 2023 ranged from 93 to 190 k$, but they wrote reports of dozens of pages for each of them. This illustrates CE shares much more information about the interventions they support than EA Funds' shares about the grants for which there are longer write-ups. So it is possible to have a better picture of CE

EDIT: I think there's a database issue: when I try to delete this comment, I think it might also delete a comment in a different thread. To be clear I still endorse this comment, just not its location.

This analysis can't be right. The most recent LTFF payout report alone is 13000 words, which covered 327 grantees, or an average of 40 words/grant (in addition to the other information in eg the database). 

EDIT: You say:

EA Funds' EA Forum posts only cover a tiny minority of their grants, and the number above would not be affected much if there were a few

... (read more)
[This comment is no longer endorsed by its author]
2
Vasco Grilo
17d
Thanks for following up, Linch! I have replaced "I am excluding EA Forum posts [to calculate the mean length of the write-up per amount granted for EA Funds]" by "I am only accounting for the write-ups in the database", which was what I meant. You say that report covered 327 grantees, but it is worth clarifying you only have write-ups of a few paragraphs for 15 grants, and of 1 sentence for the rest. In any case, EA Funds' mean amount granted is 76.0 k$, so the 40 words/grant you mentioned would result in 0.526 word/k$ (= 40/(76.0*10^3)), which is lower than the 1.57 word/k$ I estimated above. I do not think it would be fair to add both estimates, because I would be double counting information, as you reproduce in this section of the write-up you linked the 1 sentence write-ups which are also in the database. Here is an easy way of seeing the Long-Term Future Fund (LTFF) shares way less information than CE. The 2 grants you evaluated for which there is a "long" write-up have 1058 words (counting the titles in bold), i.e. 529 words/grant (= 1058/2). So, even if EA Funds had similarly "long" write-ups for all grants, the mean length of the write-up per amount granted would be 6.96 word/k$ (= 529/(76.0*10^3)), which is still just 8.51 % (= 6.96/81.8) of CE's 81.8 word/k$. Given this, I (once again) reiterate my suggestion of EA Funds having write-ups of "a few paragraphs to 1 page instead of 1 sentence for more grants, or a few retrospective impact evaluations".
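For reference, a short sketch reproducing the words-per-k$-granted arithmetic in this exchange (all figures are taken from the comments above):

```python
def words_per_k_dollar(words_per_grant: float, mean_grant_usd: float) -> float:
    """Mean write-up length divided by mean grant size, in words per k$ granted."""
    return words_per_grant / (mean_grant_usd / 1_000)

# Figures quoted in the thread above
ce = words_per_k_dollar(11_700, 143_000)         # CE reports: ~81.8 word/k$
ea_funds_db = words_per_k_dollar(40, 76_000)     # EA Funds database write-ups: ~0.53 word/k$
ea_funds_long = words_per_k_dollar(529, 76_000)  # if all grants had "long" write-ups: ~6.96 word/k$

print(f"CE: {ce:.1f} word/k$")
print(f"EA Funds (database write-ups): {ea_funds_db:.2f} word/k$")
print(f"EA Funds (hypothetical 'long' write-ups for all grants): {ea_funds_long:.2f} word/k$")
print(f"Latter as a share of CE's figure: {ea_funds_long / ce:.1%}")  # ~8.5%
```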

(I'm also not sure your list is comprehensive, eg Longview only has 12 writeups on their website and you say they "have write-ups roughly as long as Open Philanthropy," but I'm pretty sure they gave out more grants than that (and have not written about them at all).)

(I work at EA Funds)

These seem like pretty unreasonable comparisons unless I'm missing something. Like entirely different orders of magnitude. For context, Long-Term Future Fund (which is one of 4 EA Funds) gives out about 200 grants a year. 

If I understand your sources correctly, CE produces like 4 in-depth reports a cycle (a... (read more)

2
Elizabeth
15d
Comparing write-ups of incubatees (that CE has invested months in and would like to aid in fundraising) to grants seems completely out of left field to me.

Hi Linch,

I estimated CE shares 52.1 (= 81.8/1.57) times as much information per amount granted as EA Funds:

  • For CE's charities, the ratio between the mean length of the reports respecting their top ideas in 2023 in global health and development and mean seed funding per charity incubated in 2023 was 81.8 word/k$ (= 11.7*10^3/(143*10^3)).
    • The mean length of the reports respecting their top ideas in 2023 in global health and development was 11.7 k words[1] (= (12,294 + 9,652 + 14,382 + 10,385)/4). I got the number of words clicking on "Tools" and "Word co
... (read more)
2
Vasco Grilo
19d
Thanks for the comment, Linch! I said: I was not clear, but I did not mean to encourage all grantmakers I listed to have write-ups as long as CE's report, which I agree would make little sense. I just meant longer write-ups relative to their current length. For EA Funds, I guess a few paragraphs to 1 page instead of 1 sentence for more grants, or a few retrospective impact evaluations would still be worth it.
6
Austin
20d
For sure, I think a slightly more comprehensive comparison of grantmakers would include the stats for the number of grants, median check size, and amount of public info for each grant made. Also, perhaps # of employees, or ratio of grants per employee? Like, OpenPhil is ~120 FTE, Manifund/EA Funds are ~2; this naturally leads to differences in writeup-producing capabilities.
4
Linch
20d
(I'm also not sure your list is comprehensive, eg Longview only has 12 writeups on their website and you say they "have write-ups roughly as long as Open Philanthropy," but I'm pretty sure they gave out more grants than that (and have not written about them at all).)

We hope to impact humanity and even steer Earth's trajectory! 

Some 2nd edition book titles:

What We Owe to Shrimp
The Crustacean Precipice

The Most Good You Can Do (for Shrimp)
Supershrimp
Shrimping Good Better
Deep Sea Utopia

2
SofiaBalderson
22d
What We Owe to Shrimp +1 !! In case Will needs a title for his next book:) 

@Alexander_Berger Happy to explore a win-win-win opportunity! We are already in communications with VCs, but we'd love to get some philanthropic interest as well! Not donating to us will be an astronomical waste.

For $50M, we'd even consider giving you a permanent seat on our Board of Concurrers!

Working as a safety engineer at Lockheed Martin is a great idea! If for no other reason than career capital. Working for a few years as a junior safety engineer at Lockheed Martin can probably be a great skill-building opportunity that will later place you well for working at a high-impact startup like Open Asteroid Impact.

Yeah $20,000,000 here, $20,000,000 there, pretty soon we're talking about real money.

he got (and many white-collar criminals get) significantly less than his culpability level and harm caused would predict.

What do you think is the correct level of punishment for white-collar crimes based on harm? If we only look at first-order effects [1], even stealing $1B is just really bad, consequentially. Like if we use a simple VSL framework it's equivalent to ~100 murders.

But of course, this is very much not how the justice system currently operates, so overall I'm pretty confused.

[1] And not looking at second order effects of his crimes e.g. c... (read more)
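A rough sketch of the back-of-the-envelope VSL comparison above (the ~$10M value of a statistical life is an assumed, commonly cited US figure, not from the original comment):

```python
loss_usd = 1_000_000_000  # the ~$1B theft from the comment's hypothetical
vsl_usd = 10_000_000      # assumed value of a statistical life (~$10M, a common US figure)

equivalent_statistical_lives = loss_usd / vsl_usd
print(f"First-order harm is roughly {equivalent_statistical_lives:.0f} statistical lives")  # ~100
```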

2
Jason
1mo
As far as what they predict, 40-50 years as explained in the government's sentencing memo. As far as what the impact should be -- I would have to write a book on that. To start with, I see multiple, related harm-related measures here:

* The amount of expected harm the offender knew or should have known about (this is the culpability-flavored measure);
* The actual expected value of harm (this is more of a general deterrence-flavored measure); and
* The actual harm (more of a retribution-flavored measure).

I also don't see a unified measure of harm in economic-loss cases, as the harm associated with stealing $1,000 from a working class person is substantially higher than the harm of stealing it from me. Targeting vulnerable victims also gets you an enhancement for other reasons (e.g., it suggests a more extensive lack of moral compass that makes me value incapacitation more as a sentencing goal).

But most fundamentally, both harm and culpability go into the mix, filtered through the standard purposes of sentencing, to produce a sentence that I think is sufficient, but not greater than necessary, to accomplish those goals. So I can tell you that the relationship between harm and sentence in fraud cases shouldn't be -0-, both because there is little or no general deterrence against making your frauds bigger, and because there is some relationship between culpability and fraud size. It also shouldn't be linear, both because this is impractical given the wide variance in harms, and because the degree of culpability does not ordinarily vary in a linear manner.

Most people in the sentencing realm think the federal sentencing guidelines increase the sentence too much based on loss amount (~ 25% uplift for each doubling of amount, plus some other uplifts tend to scale with size) and give too much weight to loss size in general. I agree with both of those views. Roughly and after considering a fuller measure of harm than aggregate financial loss, I might consider su

I think this is a fair compromise between what the prosecutors wanted and what the defense wanted; I don't have an opinion on what's the "correct" level of punishment for this type of crime. My guess is that if I did a first-principles analysis his crime is either the type of thing that gets ~5 years or something that gets life imprisonment without parole, but I'm not confident and also I don't see much value in forming my own independent impression on optimal deterrence theory, given that it's not decision-relevant to me at all.

(speaking for myself)
I had an extended discussion with Scott (and to a lesser extent Rachel and Austin) about the original proposed market mechanism, which iiuc hasn't been changed much since. 

I'm not particularly worried about final funders losing out here, if anything I remember being paternalistically worried that the impact "investors" don't know what they're getting into, in that they appeared to be taking on more risks without getting a risk premium.

But if the investors, project founders, Manifold, etc, are happy to come to this arrangement with... (read more)

4
Zach Stein-Perlman
1mo
My current impression is that there is no mechanism and funders will do whatever they feel like and some investors will feel misled... I now agree funders won't really lose out, at least.

If I had more time and energy I'd probably make some more evidenced claims about Meta issues, and how things like SBF, sexual misconduct cases or Nonlinear could have been helped with more of #2 than #1 but don't have the time or energy (I'm also less sure about this claim).

At the risk of saying the obvious, literally every single person at Alameda and FTX's inner circle worked in large corporations in the for-profit sector out of college and before Alameda/FTX. (SBF: Jane Street, Gary Wang: Google, Caroline Ellison: Jane Street, Nishad Singh: Facebook/Met... (read more)

6
yanni kyriacos
1mo
I wasn't clear. I was actually pointing to an intuition I have that SBF 'got away with it' by taking advantage of EAs' unusually high levels of contentiousness.

I believe we changed the text a bunch in August/early September. I think there were a few places we didn't catch the first time, and we made more updates in ~the following month (September). AFAIK we no longer have any (implicit or explicit) commitments for response times anywhere; we only mention predictions and aspirations.

Eg here's the text near the beginning of the application form:

The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks. However,

... (read more)

I'd be a bit surprised if you could find people on this forum who (still) work at Cohere. Hard to see a stronger signal to interview elsewhere than your CEO explaining in a public memo why they hate you.

but making an internal statement about it to your company seems really odd to me? Like why do your engineers and project managers need to know about your anti-EA opinions to build their products?

I agree it's odd in the sense that most companies don't do it. I see it as an attempt to enforce a certain kind of culture (promoting conformity, discouragement of d... (read more)

Thank you for your detailed, well-informed, and clearly written post.

America has about five times more vegetarians than farmers — and many more omnivores who care about farm animals. Yet the farmers wield much more political power.

This probably doesn't address your core points, but the most plausible explanation for me is that vegetarians on average just care a lot less about animal welfare than farmers care about their livelihoods. Most people have many moral goals in their minds that compete with other moral goals as well as more mundane concerns (which ... (read more)

3
LewisBollard
2mo
Thanks Linch. Yeah I think you're spot on about the salience / enthusiasm gap. I should have emphasized this more in the piece.
4
Jason
2mo
There are also plenty of people whose economic or other interests are indirectly affected by agricultural interests. If you live in an agriculture-heavy district, anything that has a material negative effect on your community's economics will indirectly affect you. That may be through a reduction in the amount consumers have to spend in your local area, local tax revenue, farm job loss increasing competition for non-farm jobs, etc.

Minor, but: searching on the EA Forum, your post and Quentin Pope's post are the only posts with the exact phrase "no evidence" (EDIT: in the title, which weakens my point significantly, but it still holds). The closest other match on the first page is There is little (good) evidence that aid systematically harms political institutions, which to my eyes seems substantially more caveated.

Over on LessWrong, the phrase is more common, but the top hits are multiple posts that specifically argue against the phrase in the abstract. So overall I would not consider i... (read more)

The point is not that 1.5 is a large number, in terms of single variables -- it is -- the point is that 2.7x is a ridiculous number.

2.7x is almost exactly the amount world GDP per capita has changed in the last 30 years. Obviously some individual countries (e.g. China) have had bigger increases in that window.


30 years isn't that long in the grand scheme of things; it's far shorter than most lifetimes.

(EDIT: nvm this is false, the chart said "current dollars" which I thought meant inflation-adjusted, but it's actually not inflation adjusted)

[This comment is no longer endorsed by its author]

Makes sense! I agree that fast takeoff + short timelines makes my position outlined above much weaker. 

e.g. decisions and character traits of the CEO of an AI lab will explain more of the variance in outcomes than decisions and character traits of the US President.

I want to flag that if an AI lab and the US gov't are equally responsible for something, then the comparison will still favor the AI lab CEO, as lab CEOs have much greater control of their company than the president has over the USG. 

I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.

Thanks! I don't have much expertise or deep analysis here, just sharing/presenting my own intuitions. Definitely think this is an important question that analysis may shed some light on. If somebody with relevant experience (eg DC insider knowledge, or academic study of US political history) wants to cowork with me to analyze things more deeply, I'd be happy to collab. 

I can try, though I haven't pinned down the core cruxes behind my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals. Arguably the biggest deals around by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit at th... (read more)

4
kokotajlod
2mo
I agree that as time goes on states will take an increasing and eventually dominant role in AI stuff. My position is that timelines are short enough, and takeoff is fast enough, that e.g. decisions and character traits of the CEO of an AI lab will explain more of the variance in outcomes than decisions and character traits of the US President.
2
Stefan_Schubert
2mo
Thanks, this is great. You could consider publishing it as a regular post (either after or without further modification). I think it's an important take since many in EA/AI risk circles have expected governments to be less involved: https://twitter.com/StefanFSchubert/status/1719102746815508796?t=fTtL_f-FvHpiB6XbjUpu4w&s=19 It would be good to see more discussion on this crucial question. The main thing you could consider adding is more detail; e.g. maybe step-by-step analyses of how governments might get involved. For instance, this is a good question that it would be good to learn more about: "does it look more like much more regulations or international treaties with civil observers or more like almost-unprecedented nationalization of AI as an industry[?]" But of course that's hard.

I want to separate out:

  1. Actions designed to make gov'ts "do something" vs
  2. Actions designed to make gov'ts do specific things.

My comment was just suggesting that (1) might be superfluous (under some set of assumptions), without having a position on (2). 

I broadly agree that making sure gov'ts do the right things is really important. If only I knew what they are! One reasonably safe (though far from definitely robustly safe) action is better education and clearer communications: 

> Conversely, we may be underestimating the value of clear conversati... (read more)

2
jackva
2mo
Sorry for not being super clear in my comment; it was hastily written. Let me try to correct: I agree with your point that we might not need to invest in govt "do something" under your assumptions (your (1)). I think the point I disagree with is the implicit suggestion that we are doing much of what would be covered by (1). I think your view is already the default view.

* In my perception, when I look at what we as a community are funding and staffing, > 90% of this is only about (2) -- think tanks and other Beltway type work that is focused on making actors do the right thing, not just salience raising, or, alternatively, having these clear conversations.
* Somewhat casually but to make the point, I think your argument would change more if Pause AI sat on 100m to organize AI protests, but we would not fund CSET/FLI/GovAI etc.
* Note that even saying "AI risk is something we should think about as an existential risk" is more about "what to do" than "do something"; it is saying "now that there is this attention to AI driven by ChatGPT, let us make sure that AI policy is not only framed as, say, consumer protection or a misinformation in elections problem, but also as an existential risk issue of the highest importance."

This is more of an aside, but I think by default we err on the side of too much of "not getting involved deeply into policy, being afraid to make mistakes" and this itself seems very risky to me. Even if we have until 2030 until really critical decisions are to be made, the policy and relationships built now will shape what we can do then (this was laid out more eloquently by Ezra Klein in his AI risk 80k podcast).

One perspective that I (and I think many other people in the AI Safety space) have is that AI Safety people's "main job" so to speak is to safely hand off the reins to our value-aligned weakly superintelligent AI successors.


This involves:
a) Making sure the transition itself goes smoothly and

b) Making sure that the first few generations of our superhuman AI successors are value-aligned with goals that we broadly endorse. 

Importantly, this likely means that the details of the first superhuman AIs we make are critically important. We may not be able to, ... (read more)

My default story is one where government actors eventually take an increasing (likely dominant) role in the development of AGI. Some assumptions behind this default story:

1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI becomes percentage points of GDP.

2. Takeoff speeds (from the perspective of the State) are relatively slow.

3. Timelines are moderate to long (after 2030 say). 

If what I say is broadly correct, I think this may have some underrated downstream implications. For example, we may be currently... (read more)

1
Sharmake
2mo
I basically grant 2, sort of agree with 1, and drastically disagree with 3 (that timelines will be long). Which makes me a bit weird, since while I do have real confidence in the basic story that governments are likely to influence AI a lot, I do have my doubts that governments will try to regulate AI seriously, especially if timelines are short enough.
1
CAISID
2mo
A useful thing to explore more here is the socio-legal interactions between private industry and the state, particularly when collaborating on high-tech products or services. There is a lot more interaction between tech-leading industry and the state than many people realise. It's also useful to think of states not as singular entities but as bundles of often fragmented entities organised under a singular authority/leadership. So some parts of 'the state' may have a very good insight into AI development, and some may not have a very good idea at all. The dynamic of state to corporate regulation is complex and messy, and certainly could do with more AI-context research, but I'd also highlight the importance of government contracts to this idea. When the government builds something, it is often via a number of 'trusted' private entities (the more sensitive the project, the more trusted the entity - there is a license system for this in most developed countries) so the whole state/corporate role is likely to be quite mixed anyway and balanced mostly on contractual obligations. It may differ by industry, too.
6
NickLaing
2mo
Like you, I would prefer governments to take an increasing role, and hopefully even a dominant one. I find it hard to imagine how this would happen. Over the last 50 years, I think (not super high confidence) the movement in the Western world at least has been, through neoliberalism and other forces (in broad strokes), away from government control and towards private management and control. This includes areas such as...

* Healthcare
* Financial markets
* Power generation and distribution

In addition to this, government ambition both in terms of projects and new laws has I think reduced in the last 50 years. For example, things like the Manhattan project, large public transport infrastructure projects and power generation initiatives (nuclear, dams etc.) have dried up rather than increased. What makes you think that government will a) choose to take control and b) be able to take control? I think it's likely that there will be far more regulatory and taxation laws around AI in the next few years, but taking a "dominant role in the development of AI" is a whole different story. Wouldn't that mean something like launching whole 'AI departments' as part of the public service, and making really ambitious laws to hamstring private players? Also the markets right now seem to think this unlikely if AI company valuations are anything to go on. I might have missed an article/articles discussing why people think the government might actually spend the money and political capital to do this. Nice one.
3
jackva
2mo
This seems right to me on labs (conditional on your view being correct), but I am wondering about the government piece -- it is clear and unavoidable that government will intervene (indeed, already is) and that AI policy will emerge as a field between now and 2030 and that decisions early on likely have long-lasting effects. So wouldn't it be extremely important also on your view to now affect how government acts?
7
Stefan_Schubert
2mo
Thanks, I think this is interesting, and I would find an elaboration useful. In particular, I'd be interested in elaboration of the claim that "If (1, 2, 3), then government actors will eventually take an increasing/dominant role in the development of AGI".

Additional musing this made me think of: there's also the consideration that the next-best candidate also has a counterfactual, and if they're aligned, they'll probably themselves end up doing something else impactful if they don't take this job.

Agreed. If you or other people want to read about issues with naive counterfactuals, I briefly discuss this here.

Quick update: 

  • The dashboard includes all individual giving through Giving What We Can, which is a partner of EA Funds that offers the public frontend that most people who want to donate to EA Funds see.
  • Some of the individual donors to the different funds, particularly the largest ones, choose to donate to various funds within EA Funds through other ways, eg every.org or more idiosyncratic ways (which we're more willing to work with for larger donors).
  • We don't currently have a public dashboard to expose all of our donations, unfortunately.

(My own guesses only)

For what it's worth my guess is that a key reason people aren't giving as much to our GH&D fund is the influx of healthy competitors. Ie, it's not offering much of a differentiated product from what people could get elsewhere. I haven't interviewed our donors about this so I can't be sure, but my impression is that when the GH&D fund first launched, there weren't any plausible competitors for donors for the niche of "I want to figure out my own cause prioritization, but I want to defer to external experts to source and p... (read more)

2
Vasco Grilo
2mo
Nice point, Linch! I have a post somewhat related to that: However, GiveWell’s All Grants Fund was only launched in August 2022, GWWC's Global Health and Wellbeing Fund was only launched in late 2023, and there might be some lag between decreased donations and decreased grants, so I do not think new competitors alone could explain GHDF's 52 % (= 1 - 4.8/10) decrease in grants from 2021 to 2022 reported by Ricardo. Importantly, GHDF has now updated the amounts they granted in 2022 and 2023. On December 20, in agreement with Ricardo's analysis, the amounts granted in 2022 and 2023 were 4.8 and 0 M$, whereas now they are 11 and 3.4 M$. I wonder why the amount granted in 2022 took 1 year to be updated to the correct value.

(I work for EA Funds)

Re 2: Yes this is correct. It does not include institutional funds. I'm also not sure if it includes non-cash donations from individuals either; I think the dashboard was created back when there was only one way to donate to EA Funds. I'll check.

(Thanks for your hard work on the post btw!)

4
Linch
2mo
Quick update:
  • The dashboard includes all individual giving through Giving What We Can, which is a partner of EA Funds that offers the public frontend that most people who want to donate to EA Funds see.
  • Some of the individual donors to the different funds, particularly the largest ones, choose to donate to various funds within EA Funds through other ways, eg every.org or more idiosyncratic ways (which we're more willing to work with for larger donors).
  • We don't currently have a public dashboard to expose all of our donations, unfortunately.