
Linch

@ EA Funds
23733 karma · Joined Dec 2015 · Working (6-15 years)

Bio

"To see the world as it is, rather than as I wish it to be."

Currently I work for EA Funds. My job title is still TBD, but I'm responsible for a lot of the communications on behalf of EA Funds and its constituent funds. I also work on grantmaking, fundraising, hiring, and some strategy setting.

I used to be a Senior Researcher on the General Longtermism team at Rethink Priorities. Concurrently, I also volunteered* as a fund manager for EA Funds' Long-Term Future Fund.

*Volunteering was by choice. LTFF offers payment to fund managers, but I was unsure whether it made sense to be paid for a second job while I was a salaried employee at RP with a lot of in-practice independence to do what I thought was best for the world.

Posts
66

Sorted by New

Comments
2607

Thank you for your detailed, well-informed, and clearly written post.

> America has about five times more vegetarians than farmers — and many more omnivores who care about farm animals. Yet the farmers wield much more political power.

This probably doesn't address your core points, but the most plausible explanation for me is that vegetarians on average just care a lot less about animal welfare than farmers care about their livelihoods. Most people have many moral goals that compete with other moral goals as well as with more mundane concerns (which, by revealed preferences, they usually care about more), while someone's job is plausibly in the top 1-3 of their priorities.

Sure, there are some animal advocates (including on this forum!) who care about animals being tortured more than even farmers care about their jobs. But they're the exception rather than the rule; I'd be very, very surprised if they are anywhere close to 20% of vegetarians.


Minor, but: searching on the EA Forum, your post and Quentin Pope's post are the only posts with the exact phrase "no evidence" (EDIT: in the title, which weakens my point significantly, but it still holds). The closest other match on the first page is "There is little (good) evidence that aid systematically harms political institutions," which to my eyes seems substantially more caveated.

Over on LessWrong, the phrase is more common, but the top hits are multiple posts that specifically argue against the phrase in the abstract. So overall I would not consider it an isolated demand for rigor if someone were to argue against the phrase "no evidence" on either forum.

The point is not that 1.5x is a large number for a single variable (it is); the point is that 2.7x is a ridiculous number.

2.7x is almost exactly the factor by which world GDP per capita has grown over the last 30 years. Obviously some individual countries (e.g. China) have had bigger increases in that window.


30 years isn't that long in the grand scheme of things; it's far shorter than most lifetimes.
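For a rough sense of scale, converting the two multipliers into annualized growth rates over a 30-year window gives:

$$1.5^{1/30} \approx 1.014 \;(\text{about } 1.4\%/\text{year}), \qquad 2.7^{1/30} \approx 1.034 \;(\text{about } 3.4\%/\text{year})$$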

(EDIT: nvm, this is false; the chart said "current dollars," which I thought meant inflation-adjusted, but it's actually not inflation-adjusted.)

[This comment is no longer endorsed by its author]

Makes sense! I agree that fast takeoff + short timelines makes my position outlined above much weaker. 

> e.g. decisions and character traits of the CEO of an AI lab will explain more of the variance in outcomes than decisions and character traits of the US President.

I want to flag that if an AI lab and the US gov't are equally responsible for something, then the comparison will still favor the AI lab CEO, as lab CEOs have much greater control of their company than the president has over the USG. 

I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.

Thanks! I don't have much expertise or deep analysis here, just sharing/presenting my own intuitions. I definitely think this is an important question that analysis may shed some light on. If somebody with relevant experience (e.g. DC insider knowledge or academic study of US political history) wants to cowork with me to analyze things more deeply, I'd be happy to collab.

I can try, though I haven't pinned down the core cruxes behind my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals, arguably the biggest deals around by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc.), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit on the sidelines and let businessmen and techies take actions to reshape the world.

I haven't thought very deeply about or analyzed exactly what states are likely to do (e.g. does it look more like much heavier regulation, international treaties with civil observers, or almost-unprecedented nationalization of AI as an industry). And note that my claims above are descriptive, not normative. It's far from clear that State actions are good by default.

Disagreements with my assumptions above can weaken some of this hypothesis:

  1. If AGI development is very decentralized, then it might be hard to control from the perspective of a state. Imagine trying to control the industrial revolution, or the internet. But even in the case of the internet, states can (and did) exert their influence substantially. And many of us think AGI is a much bigger deal than the internet.
  2. If takeoff speeds are fast, maybe labs can (intentionally or otherwise) "pull a fast one" on gov'ts. It's unclear to me if the current level of regulatory scrutiny is enough to notice anyway, but it's at least hypothetically possible that if the external story looks like GPT4 -> GPT4.5 -> GPT5 -> Lights Out, there isn't enough empirical evidence at the GPT5 stage for governments to truly notice and start to take drastic actions.
    1. But if takeoff speeds are gradual, this is harder for me to imagine.
    2. In my summary above, I said "from the perspective of the State," which I think is critical. I can imagine a scenario where takeoff speeds are gradual from the perspective of the lab(s)/people "in the know." E.g. secret models are deployed to make increasingly powerful other secret models. But due to secrecy/obscurity, the takeoff is sudden from the perspective of say the USG. 
      1. I don't have a good sense of how the information collection/aggregation/understanding flow works in gov'ts like the US, so I don't have a good sense of what information is necessary in practice for states to notice.
      2. Certainly if labs continue to be very public/flashy with their demonstrations, I find it likely that states would notice a slow takeoff and pay a lot of attention.
        1. It's also very possible that the level of spying/side-channel observation gov'ts have on the AI labs is already high enough in 2024 that public demonstrations are no longer that relevant.
  3. If timelines are very short, I can imagine states not doing that much, even with slowish takeoffs. E.g., if AGI starts visibly accelerating ~tomorrow and keeps going for the next few years (which I think is roughly the model of fast-timelines people like @kokotajlod), I can imagine states fumbling and trying to intervene but ultimately not doing much because everything moves so fast and is so crazy.
    1. It's much harder for me to imagine this happening with slow takeoffs + medium-long timelines.

 

I don't have a good sense of what it means for someone to agree with my 3 assumptions above but still think state interference is moderate to minimal. Some possibilities:

  1. Maybe you think states haven't intervened much with AI yet so they will continue to not do much?
    1. Answer: but the first derivative is clearly positive, and probably the second as well.
    2. Also, I think the main reason states haven't interfered that much is that AI didn't look like a big deal to external observers in, say, 2021.
      1. You have to remember that people outside of the AI scene (or outside of acutely interested circles like EA) weren't paying close attention to AI progress.
      2. This has already been changing over the last few years.
  2. Maybe you think AI risk stories are too hard to understand?
    1. Answer: I don't think at heart they're that hard. Here's my attempt to summarize 3 main AI risk mechanisms in simple terms:
      1. Misuse: American companies are making very powerful software that we don't really understand, and have terrible operational and information security. The software can be used to make viruses, hack into financial systems, or be mounted on drones to make killer robots. This software can be stolen by enemy countries and terrorists, and kill people.
      2. Accident: American companies are making very powerful software that we can't understand or control. They have terrible safeguards and a lax attitude towards engineering security. The software might end up destroying critical infrastructure or launching viruses, and kill people.
      3. Misalignment: American companies are making very powerful software that we can't understand or control. Such software is becoming increasingly intelligent. We don't understand it, and we don't have decent mechanisms for detecting lies and deception. Once such systems are substantially more intelligent than humans, without safeguards, robot armies and robot weapons scientists will likely turn against our military in a robot rebellion, and kill everybody.
    2. I basically expect most elected officials and military generals to understand these stories perfectly well. 
    3. In many ways the "killer robot" story is easier to understand than, say, climate change or epidemiology. I'd put it on par with nuclear weapons (which in the simplest form can be summarized as "really big bomb").
    4. They might not believe those stories, but between expert opinion, simpler stories/demonstrations and increasing capabilities from a slow takeoff, I very much expect the tide of political opinion to turn towards the truth.
    5. Also, every poll ever conducted on AI has shown a very strong tide of public sentiment against AI/AGI.
    6. Some other AI risk stories are harder to understand (e.g. structural risk stuff, human replacement), but they aren't necessary to motivate the case for drastic actions on AI (though understanding them clearly might be necessary for targeting the right actions).
  3. Maybe you think states will try to do things but fumble due to lack of state capacity?
    1. Answer: I basically don't think this is true. It's easy for me to imagine gov'ts being incompetent and taking drastic actions that are net negative, or huge actions of unclear sign. It's much harder for me to imagine their incompetence leading to not much happening.
  4. Maybe you think lobbying by labs etc. is sufficiently powerful to get states not to interfere, or to interfere only in minimal ways?
    1. Answer: I basically don't think lobbying is that powerful. The truth is just too strong.
    2. To the extent you believe this is a serious possibility (and it's bad), the obvious next step is noting that the future is not written in stone. If you think gov't interference is good, or alternatively, that regulatory capture by AGI interests is really bad, you should be willing to oppose regulatory capture to the best of your ability.
      1. Alternatively, to the extent you believe gov't interference is bad on the current margin, you can try to push for lower gov't interference on the margin. 

Interested in hearing alternative takes and perspectives and other proposed cruxes.

I want to separate out:

  1. Actions designed to make gov'ts "do something" vs
  2. Actions designed to make gov'ts do specific things.

My comment was just suggesting that (1) might be superfluous (under some set of assumptions), without having a position on (2). 

I broadly agree that making sure gov'ts do the right things is really important. If only I knew what they are! One reasonably safe (though far from definitely robustly safe) action is better education and clearer communications: 

> Conversely, we may be underestimating the value of clear conversations about AI that government actors or the general public can easily understand (since if they'll intervene anyway, we want the interventions to be good).

One perspective that I (and I think many other people in the AI Safety space) have is that AI Safety people's "main job," so to speak, is to safely hand off the reins to our value-aligned, weakly superintelligent AI successors.


This involves:

a) Making sure the transition itself goes smoothly, and

b) Making sure that the first few generations of our superhuman AI successors are value-aligned with goals that we broadly endorse.

This likely means that the details of the first superhuman AIs we make are critically important. We may not be able to, or need to, solve technical alignment or strategy in the general case. What matters most is that our first* successors are broadly aligned with our goals (as well as satisfying other desiderata).

At least for me, an implicit assumption of this model is that humans will have to hand off the reins anyway, whether by choice or by force. Without fast takeoffs, it's hard to imagine that the transition to vastly superhuman AI will primarily be brought about, or even overseen, by humans, as opposed to by nearly-vastly-superhuman AI successors.

Unless we don't build AGI, of course.

*In reality, this may take several generations. I imagine the first iterations of weakly superhuman AIs will make decisions alongside humans, and we may still wish to cling to relevance and maintain some level of human-in-the-loop oversight for a while longer, even after AIs are objectively smarter than us in every way.

My default story is one where government actors eventually take an increasing (likely dominant) role in the development of AGI. Some assumptions behind this default story:

1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI comes to account for percentage points of GDP.

2. Takeoff speeds (from the perspective of the State) are relatively slow.

3. Timelines are moderate to long (after 2030, say).

If what I say is broadly correct, I think this may have some underrated downstream implications. For example, we may currently be overestimating the role of the values or institutional processes of labs, or the value of getting gov'ts to intervene (since the default outcome is that they'd intervene anyway). Conversely, we may be underestimating the value of clear conversations about AI that government actors or the general public can easily understand (since if they'll intervene anyway, we want the interventions to be good). More speculatively, we may also be underestimating the value of making sure 2-3 are true (if you share my belief that gov't actors will broadly be more responsible than the existing corporate actors).

Happy to elaborate if this is interesting.
