
Something I've been thinking about lately -- if anybody can point me to good existing discussions or books on the topic, it would be deeply appreciated.

My thoughts: We can break it down into two scenarios -- whether or not superpowers develop super-intelligent AI.

If superpowers like the U.S. don't develop super intelligent AI:

  • Main problems:
    • Ultranationalist states could secretly develop AGI and dominate the world. I'm unsure how feasible this is, however.
  • Things that seem like they should be problems, but may not be:
    • What if another superpower reaches AGI first? Most current superpowers don't seem imperialistic -- they don't intentionally try to sabotage one another. So even if only one superpower develops AGI, it would probably not use it for harm.

If superpowers do race towards AGI:

Pros:

  • We could theoretically figure out exactly what resources are needed to develop AGI, and build systems to keep those resources out of the hands of ultranationalist states and humanity-threatening groups, similar to nuclear non-proliferation treaties.

Cons:

  • The chance of bad actors seems much higher than with nuclear bombs. It's hard to restrict access to compute resources and model architectures (which could easily be leaked). While one could argue that the amount of compute needed for an AGI would be unattainable for an ordinary person, that amount may gradually decrease over time.
    • The only solutions I can think of are to either i) only allow approved scientists to use ML models, or ii) take away computers from the general populace (who knows, maybe a desktop computer will be enough to run AGI in the future). Neither seems ideal.

Would love to figure out with everybody else whether superpowers have a strong incentive to develop AGI.
