
Erich_Grunewald 🔸

Researcher @ Institute for AI Policy and Strategy
2466 karma · Joined · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).

Comments (276)

I don't understand why so many are disagreeing with this quick take, and would be curious to know whether the disagreement is on normative or empirical grounds, and where exactly it lies. (I personally neither agree nor disagree, as I don't know enough about it.)

From some quick searching, Lessig's best defence against accusations that he tried to steal an election seems to be that he wanted to resolve a constitutional uncertainty. E.g.: "In a statement released after the opinion was announced, Lessig said that 'regardless of the outcome, it was critical to resolve this question before it created a constitutional crisis'. He continued: 'Obviously, we don’t believe the court has interpreted the Constitution correctly. But we are happy that we have achieved our primary objective -- this uncertainty has been removed. That is progress.'"

But it sure seems like the timing and nature of that effort (post-election, specifically targeting Trump electors) suggest some political motivation rather than purely constitutional concerns. As best I can tell, it's in the same general category of efforts as Giuliani's effort to overturn the 2020 election, though importantly different in that Giuliani (a) had the support and close collaboration of the incumbent, (b) seemed to actually commit crimes doing so, and (c) did not respect court decisions the way Lessig did.

That still does not seem like reinventing the wheel to me. My read of that post is that it's not saying "EAs should do these analyses that have already been done, from scratch" but something closer to "EAs should pay more attention to strategies from development economics and identify specific, cost-effective funding opportunities there". Unless you think development economics is solved, there is presumably still work to be done, e.g., to evaluate and compare different opportunities. For example, GiveWell definitely engages with experts in global health, but still also needs to rigorously evaluate and compare different interventions and programs.

And again, the article mentions development economics repeatedly and cites development economics texts -- why would someone mention a field, cite texts from a field, and then suggest reinventing it without giving any reason?

It would be helpful if you mentioned who the original inventor was.

I don't see how this is reinventing the wheel? The post makes many references to development economics (11 mentions, to be precise). It was not an instance of independently developing something that ended up being close to development economics.

I don't think you're wrong exactly, but AI takeover doesn't have to happen through a single violent event, or through a treacherous turn or whatever. All of your arguments also apply to the situation with H. sapiens and H. neanderthalensis, but those factors did not prevent the latter from going extinct largely due to the activities of the former:

  1. There was a cost to violence that humans did against neanderthals
  2. The cost of using violence was not obviously smaller than the benefits of using violence -- there was a strong motive for the neanderthals to fight back, and using violence risked escalation, whereas peaceful trade might have avoided those risks
  3. There was no one human that controlled everything; in fact, humans likely often fought against one another
  4. You allow for neanderthals to be less capable or coordinated than humans in this analogy, which they likely were in many ways

The fact that those considerations were not enough to prevent neanderthal extinction is one reason to think they are not enough to prevent AI takeover, although of course the analogy is not perfect or conclusive, and it's just one reason among several. A couple of relevant parallels include:

  • If alignment is very hard, that could mean AIs compete with us over resources that we need to survive or flourish (e.g., land, energy, other natural resources), similar to how humans competed over resources with neanderthals
  • The population of AIs may be far larger, and grow more rapidly, than the population of humans, similar to how human populations were likely larger and growing at a faster rate than those of neanderthals

I don't think people object to these topics being heated either. I think there are probably (at least) two things going on:

  1. There's some underlying thing causing some disagreements to be heated/emotional, and people want to avoid that underlying thing (that could be that it involves exclusionary beliefs, but it could also be that it is harmful in other ways)
  2. There's a reputational risk in being associated with controversial issues, and people want to distance themselves from those for that reason

Either way, I don't think the problem is centrally about exclusionary beliefs, and I also don't think it's centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.

Just noting, for anyone else reading the parent comment but not the screenshot, that the discussion in question was about Hacker News, not the EA Forum.

I was a bit confused by this comment. I thought "controversial" commonly meant something more than just "causing disagreement", and indeed that seems to be the case. Looking it up, the OED defines "controversial" as "giving rise or likely to give rise to controversy or public disagreement", and "controversy" as "prolonged public disagreement or heated discussion". That is, a belief being "controversial" implies not just that people disagree over it, but also that there's an element of heated, emotional conflict surrounding it.

So it seems to me like the problem might actually be controversial beliefs, and not exclusionary beliefs? For example, antinatalism, communism, anarcho-capitalism, vaccine skepticism, and flat earthism are all controversial, and could plausibly cause the sort of controversy being discussed here, while not being exclusionary per se. (There are perhaps also some exclusionary beliefs that are not that controversial and therefore accepted, e.g., some forms of credentialism, but I'm less sure about that.)

Of course I agree that there's no good reason to exclude topics/people just because there's disagreement around them -- I just don't think "controversial" is a good word to fence those off, since it has additional baggage. Maybe "contentious" or "tendentious" are better?

Perhaps Obamacare might be one example of this in America? I think Trump had a decent amount of rhetoric saying he would repeal it, then didn't do anything when he reached power.

My recollection was that Trump spent quite a lot of effort trying to repeal Obamacare, but in the end didn't get the votes he needed in the Senate. Still, I think your point that actual legislation often looks different from campaign promises is a good one.

Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:

  1. In humans, higher IQ means better performance across a variety of tasks. This is analogous to AI, where more compute/parameters/data, etc., means better performance across a variety of tasks.
  2. AI systems tend to share a common underlying architecture, just as humans share the same basic biology.
  3. For humans, when IQ increases, there are improvements across the board, but still specialization, meaning no single human (the one with the highest IQ) will be better than all other humans at all of those things.
  4. By analogy: For AIs, when they're scaled up, there are improvements across the board, but (likely) still specialization, meaning no single AI (the one with the most compute/parameters/data/etc.) will be better than all other AIs at all of those things.

Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.

If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but I think that seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.

If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.
