
China has institutionalized artificial intelligence within its military doctrine under the banner of “intelligentized warfare.” Recent PLA publications show AI embedded in decision‑support, situational awareness, and unmanned systems, while more speculative mind‑control concepts remain largely theoretical. Disinformation and bot campaigns targeting Taiwan have intensified, reflecting a practical focus on mass persuasion rather than decisive psychological collapse. This evolution marks a shift from abstract cognitive warfare toward concrete, AI‑enabled operational capabilities.
Since Koichiro Takagi’s 2022 analysis, China has moved AI from theory to the backbone of its military planning. The term “intelligentized warfare” now appears in PLA white papers and doctrine, signalling that algorithms are being embedded in command‑and‑control, sensor fusion, and unmanned platforms. Recent PLA Daily essays (2023‑2024) reveal a shift toward AI‑driven situational awareness and decision‑support tools rather than speculative mind‑control. This institutionalization means Chinese planners can process massive data streams faster, but the technology is still framed as an enabler of existing kinetic capabilities.
The United States, by contrast, treats cognitive operations as a distinct domain capable of directly influencing adversary decision‑making, and it is investing in deep‑fakes, targeted persuasion, and autonomous influence bots. Beijing’s current focus is on mass persuasion: amplifying disinformation, running bot networks, and eroding social cohesion, rather than attempting to collapse senior leadership’s resolve. While American doctrine envisions AI‑powered psychological shock, Chinese strategy prioritizes practical integration, improving battlefield awareness, streamlining logistics, and supporting conventional forces. This divergence reflects differing risk tolerances and institutional histories, with China preferring incremental gains over high‑stakes manipulation.
The practical outcome for Taiwan and regional actors is a more sophisticated information‑warfare environment. Taiwanese security agencies have documented a sharp rise in AI‑generated propaganda and coordinated bot activity aimed at sowing doubt ahead of any potential conflict. Policymakers must therefore strengthen digital resilience, invest in AI‑assisted fact‑checking, and develop counter‑AI capabilities that can match Beijing’s speed. As AI continues to mature, the line between persuasive messaging and outright cognitive coercion may blur, making early detection and attribution essential for preserving strategic stability in the Indo‑Pacific.