Framing AGI around continual learning changes how we approach safety, evaluation, and deployment: policymakers and firms must plan for a post-launch learning period and set expectations for behavior, risk, and oversight accordingly.
Ilya Sutskever argues that the label "AGI" arose mainly as a reaction to "narrow AI," not as a precise descriptor of an endpoint; pre-training then pushed models toward broadly useful capabilities and built momentum behind the AGI idea. He emphasizes that humans are not AGI either: we are not born broadly capable, but become so through continual learning and accumulated knowledge. By the same logic, any truly powerful AI will require ongoing learning and adaptation, and may resemble a highly capable but inexperienced learner at deployment. Achieving and defining a safe superintelligence therefore depends as much on its continual learning trajectory as on its initial capabilities.