This is a linkpost for https://mflb.com/ai_alignment_1/single_post_psr.html
A short overarching narrative of how humans in companies will, by default, bootstrap the existence of AGI that proceeds to lethally modify the environment beyond human control.
Excerpts compiled below:
- where considering the shaping
of proto-AGI/AI systems
in interaction with humans:
- as mostly about how 'human to human' interactions
are bootstrapping of 'human to AI' interactions,
and also about how narrow AI becomes general AI.
- where for example, consider an easy narrative
of how this might manifest in the overall trends
of how the present inexorably leads to the future.
- where considering the shaping
of now near-threshold-AGI systems
in interaction with world:
- as mostly about how prior 'human to AI' interactions
have effectively/functionally implemented a bootstrapping
of all kinds of possible 'AI to world' interactions,
and how these 'AI to world' interactions, in turn,
set the context and future for all manner
of 'AI to AI' interactions, etc.
- where there are different outcomes/effects:
- that there is an overall movement:
- a; towards the environmental conditions
needed for artificial machine:
- substrate continuance; and;
- continued increase
(of total volume of substrate); and;
- increase in the rate of increase
(of volume of substrate).
- b; away from the environmental conditions
needed for human living.
- as described in Three Worlds and No People as Pets.
- where considering the non-shaping
of now post-threshold-AGI
in interaction with itself:
- where as considered both internally and externally;
where given the complete failure of exogenous controls
(via market incentives; due to economic decoupling)
- that they (the AGI/APS) will have started,
and will increasingly (be able to, and will),
more and more shape the world environment
to suit their own needs/process.
- that humanity discovers, unfortunately, far too late,
that any type of attempted endogenous control
is also strictly, functionally, structurally,
completely impossible/intractable.
- as due to fundamental limits
of/in engineering control (note 4):
- cannot simulate.
- cannot detect.
- cannot correct.
- that any attempt to moderate or control AGI/APS
whether by internal or external techniques,
cannot not eventually fail.
- where once the AGI/APS systems exist;
that the tendency of people
to keep them operating
becomes overwhelming.
- where stating the overall outcome/conclusion:
- If 'AGI' comes to exist and continues to exist,
then there will be eventually
human-species-wide lethal changes
{to / in the} overall environment.
Acronyms:
- AI: Artificial Intelligence (i.e., Narrow AI).
- APS: Advanced, Planning, Strategically aware Systems.
- AGI: Artificial General Intelligence.
→ Read link to Forrest Landry's blog for more (though still just an overview).
Note: Text is laid out in his precise research note-taking format.