The Gf-Gc Loop Protocol: How to Augment Intelligence Using AI. [Part 1]

Key Takeaways
- Five-step ORSAR method structures human‑AI collaboration.
- Step 1 anchors the problem before any AI involvement.
- Step 3 introduces adversarial friction for deeper reasoning.
- Step 5 converts insights into reusable mental models.
- The loop cycles Gf to Gc, expanding cognitive capacity.
Summary
The author presents the ORSAR protocol, a five‑step framework for human‑led AI interaction that moves problems through pre‑analysis, framing, stress‑testing, metacognitive auditing, and knowledge consolidation. Each stage deliberately limits AI’s role to sharpen uncertainties, generate productive friction, and extract reusable mental models. The method is framed as a Gf‑Gc‑Gf loop, converting fluid intelligence gains into crystallised knowledge that fuels the next iteration. The post positions the protocol as an experimental but systematic way to augment intelligence with AI.
Pulse Analysis
The surge of generative AI has sparked enthusiasm for rapid insight generation, yet many organizations struggle with noisy outputs and over‑reliance on black‑box models. The ORSAR protocol counters this trend by embedding a human‑first pre‑analysis stage that forces practitioners to articulate the problem, their gut hypothesis, and key uncertainties before any AI prompt. This anchoring step preserves autonomy, curtails cognitive drift, and creates a clear baseline against which AI‑generated refinements can be measured.
Subsequent steps transform the interaction into a disciplined dialogue: the framing stage leverages AI as a structural optimizer, sharpening vague uncertainties into mutually exclusive decision questions. The stress‑test phase then assigns the AI an adversarial persona—Socratic or devil’s advocate—to inject productive friction, exposing hidden assumptions and potential failure modes. A metacognitive audit follows, where the human filters signal from hallucination, reinforcing epistemic dominance. Finally, the consolidation step distills the refined solution into reusable mental models, converting fluid intelligence (Gf) gains into crystallised knowledge (Gc) that fuels the next iteration of the loop.
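The post describes ORSAR as a workflow rather than software, but the staged dialogue above can be sketched as a simple pipeline. The sketch below is illustrative only: `OrsarState`, `run_orsar_cycle`, and the `ai(role, prompt)` and `audit(claim)` callables are hypothetical names, not part of the protocol as published. It models one pass of the loop, with the human-authored Step 1 inputs supplied up front and the human audit deciding what survives into consolidated mental models.

```python
from dataclasses import dataclass, field

@dataclass
class OrsarState:
    """Carries a problem through the five ORSAR stages (names are illustrative)."""
    problem: str
    gut_hypothesis: str              # Step 1: articulated before any AI prompt
    uncertainties: list              # Step 1: key unknowns, human-authored
    decision_questions: list = field(default_factory=list)  # Step 2 output
    objections: list = field(default_factory=list)          # Step 3 output
    accepted: list = field(default_factory=list)            # Step 4 output
    mental_models: list = field(default_factory=list)       # Step 5 output (Gc)

def run_orsar_cycle(state, ai, audit):
    """One Gf->Gc pass: frame, stress-test, audit, consolidate.

    `ai(role, prompt)` is a hypothetical callable wrapping any model;
    `audit(claim)` is the human filter separating signal from hallucination.
    """
    # Step 2 -- framing: AI as structural optimizer sharpens each
    # vague uncertainty into a decision question.
    state.decision_questions = [
        ai("structural optimizer", f"Rephrase as a decision question: {u}")
        for u in state.uncertainties
    ]
    # Step 3 -- stress-test: an adversarial persona injects friction
    # against the gut hypothesis.
    state.objections = [
        ai("devil's advocate",
           f"Attack the hypothesis '{state.gut_hypothesis}' on: {q}")
        for q in state.decision_questions
    ]
    # Step 4 -- metacognitive audit: the human, not the model, decides
    # which objections are real signal.
    state.accepted = [o for o in state.objections if audit(o)]
    # Step 5 -- consolidation: distill what survived into reusable
    # mental models, crystallising the Gf gains as Gc.
    state.mental_models.extend(f"model: {o}" for o in state.accepted)
    return state
```

Because the next iteration starts from the enriched `mental_models`, calling `run_orsar_cycle` repeatedly mirrors the Gf‑Gc‑Gf loop the post describes: each pass consumes crystallised knowledge from the last.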
For businesses, this loop promises measurable benefits: higher decision fidelity, reduced risk of AI‑driven errors, and a growing repository of codified reasoning patterns that can be embedded in training programs or decision‑support tools. Companies in consulting, product development, and strategic planning can adopt ORSAR to scale intelligent problem‑solving while preserving human oversight. As the protocol matures, it may become a cornerstone of responsible AI practice, aligning cognitive science insights with practical enterprise workflows.