
This would especially include outreach to executives and employees at top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, and policymakers from major countries.
The arguments in favor are presented in this post.

1 Answer

The consensus on LessWrong, among those who aren't insisting alignment is hopeless, tends to be in the direction of:

  1. AGI that are sparse, meaning they lack the cognitive capacity to do anything but their current assignment. This can be enforced at runtime; it is allegedly what the mixture-of-experts (MoE) architecture used by GPT-4 does, where model capabilities not needed in the present context are left inactive.

  2. Short-duration sessions, where the AGI is only active until a time limit and loses all memory afterward.

  3. Limited-domain subtasks. Not "replicate this strawberry by any means necessary" but rather "design n new bioscience experiments possible with this robotics hardware and rank them by expected knowledge gain per cost", "given this experiment another AI designed, check it for errors and rule violations", "given this experiment and this robotic hardware, carry out the experiment", and so on. Note that the same AGI can potentially be asked, in different sessions, to do all of the subtasks, but it can't know who or what created the prior step, or whether it's actually in the real world unsupervised or in training. (A rough sketch of this structure follows the list.)
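
To make the division of labor concrete, here is a minimal Python sketch of an orchestrator enforcing all three restrictions. Everything in it (the ALLOWED_SUBTASKS whitelist, run_subtask, the model_call signature, the 300-second budget) is a hypothetical illustration, not any lab's actual API; a genuinely sparse model would additionally gate which experts activate per subtask, which is represented here only by a capability tag in the prompt.

```python
import time

# Hypothetical illustration of points 1-3 above; none of these names or
# numbers come from an actual system.

# Point 3: a whitelist of narrow subtask types. The orchestrator only ever
# issues one of these, never an open-ended goal.
ALLOWED_SUBTASKS = {
    "design_experiments",  # "design n experiments possible on this hardware"
    "check_experiment",    # "check this experiment for errors / rule violations"
    "run_experiment",      # "carry out this experiment on this hardware"
}

SESSION_TIME_LIMIT_S = 300  # Point 2: hard wall-clock budget per session (arbitrary)


def run_subtask(model_call, subtask: str, payload: str) -> str:
    """Run one isolated, stateless session for a single whitelisted subtask."""
    if subtask not in ALLOWED_SUBTASKS:
        raise ValueError(f"subtask {subtask!r} is outside the allowed domain")

    # Point 1 (stand-in): tag the request with the one capability it needs, so
    # a sparse/MoE-style model could activate only the relevant experts.
    prompt = f"[capability={subtask}]\n{payload}"

    start = time.monotonic()
    result = model_call(prompt)

    # Point 2: discard results from sessions that ran past their budget. (A real
    # deployment would pre-empt the call; this post-hoc check is illustrative.)
    # Nothing is stored here, so no memory persists across calls.
    if time.monotonic() - start > SESSION_TIME_LIMIT_S:
        raise TimeoutError("session exceeded its time budget; result discarded")
    return result


if __name__ == "__main__":
    # Toy stand-in for the model, just to show the calling pattern.
    def echo_model(prompt: str) -> str:
        return "(stub output for) " + prompt.splitlines()[0]

    print(run_subtask(echo_model, "check_experiment", "protocol text goes here"))
```

The point of the sketch is that each call is a fresh, narrow, time-boxed query, so there is no persistent agent accumulating goals or memories across tasks.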

With this structure, the AGI expresses no values humans haven't given it. You could order an AGI system to manage a vast network of already-constructed factory farms and optimize meat output, or you could order it to manage equipment that produces the protein from stem cells and to match every observable parameter your sensors can measure against samples taken from actual animals.

Allowing the AGI to act in accordance with its own values and do anything it wants, including self-modification to create child systems, is probably a lethally bad idea.
