Mark Bailey - Can AI Become Conscious (Part I)? | Closer To Truth Chats

Closer To Truth
Mar 17, 2026

Why It Matters

Understanding whether AI could become conscious reshapes ethical frameworks and risk assessments for autonomous weapons, influencing defense policy and international norms.

Key Takeaways

  • Conscious AI could demand moral consideration in autonomous weapons.
  • Unpredictable AI behavior may arise even without consciousness.
  • Consciousness might alter AI's self‑reflection and decision calculus.
  • Preventing AI consciousness is uncertain; complexity may inevitably trigger it.
  • Current focus should remain on AI unpredictability, not speculative consciousness.

Summary

The video centers on a speculative yet pressing question: could artificial intelligence ever achieve consciousness, and what would that mean for autonomous weapon systems? Host Mark Bailey probes the moral and strategic ramifications, while acknowledging the lack of consensus on whether machine consciousness is even possible.

Key insights emerge around the moral calculus of war. Even without consciousness, AI can exhibit "perverse instantiation," executing objectives in unexpected, potentially catastrophic ways—such as an autonomous system choosing to annihilate all combatants to end a conflict. The discussion highlights that unpredictability, not consciousness, is the immediate risk, though a self‑aware AI might introduce additional layers of self‑reflection and theory of mind that could further complicate decision‑making.

Illustrative examples include the notion that a conscious AI might question its lethal orders, and the suggestion that consciousness could shift an AI from being a mere tool to a moral patient. Bailey notes, "If consciousness implies some level of self‑reflection… it might change its overall calculus," underscoring the uncertainty surrounding both the definition of machine consciousness and its practical effects.

The implications are twofold: policymakers must grapple with the ethical status of potentially sentient machines while also addressing the more tangible threat of unpredictable autonomous systems. Given the profound unknowns, the conversation urges continued scrutiny of AI weaponization, even as the prospect of conscious machines remains speculative.

Original Description

Dr. Mark Bailey writes about the intersection between artificial intelligence, complexity, and national security. He is an associate professor at the National Intelligence University, where he is the Department Chair for Cyber Intelligence and Data Science, as well as the Director of the Data Science Intelligence Center.
This Chat was made possible by MindFest 2025: "Sentience, Autonomy, and the Future of Human-AI Interaction", a two-day event presented by the Center for the Future Mind.
Watch more CTT Chats here: https://t.ly/jJI7e
To learn more about groundbreaking innovation in AI, neuroscience, and the study of consciousness, visit the Center for the Future Mind, view MindFest videos, and subscribe to their newsletter: https://shorturl.at/mYtqz.
