
Defense Pulse

AI Is the Future of Warfare and US Is in the Lead

Defense • AI

Asia Times – Defense • February 17, 2026

Companies Mentioned

NVIDIA (NVDA), Anthropic, Google (GOOG), Palantir (PLTR), Elbit Systems (ESLT), OpenAI, Meta (META), Alibaba Group (BABA), Perplexity, Midjourney, Gartner

Why It Matters

U.S. AI dominance could dictate future combat outcomes and set global standards for military technology, while adversaries’ attempts to exploit or replicate these systems raise acute security risks.

Key Takeaways

  • US has invested $108B in data centers; private AI funding reached a record $109B.
  • Pentagon allocates at least $2B yearly to AI weapons directly, plus tens of billions indirectly.
  • China and Russia are advancing in AI but still trail the US military edge.
  • AI has supported operations in the Gaza, Ukraine, Venezuela and Iran conflicts.
  • Adversaries exploit AI via model extraction, jailbreaks and propaganda attacks.

Pulse Analysis

The United States has turned AI into a strategic asset, funneling more than $108 billion into data‑center capacity and attracting a record $109 billion in private AI funding. This financial muscle fuels a thriving ecosystem of defense contractors—Palantir, Nvidia, Anthropic, and others—who embed large‑language models and computer‑vision tools into drones, target‑selection software, and battlefield analytics. By automating the "kill chain," AI shortens decision cycles, enhances situational awareness, and enables autonomous platforms to operate with minimal human oversight, fundamentally altering how wars are fought.

Across the globe, China and Russia are racing to catch up, but their progress is constrained by limited chip supplies, reliance on gray‑market hardware, and a lack of real‑world combat data. Russia’s home‑grown Svod platform, built on YOLO and adapted LLaMA models, illustrates a pragmatic approach that blends open‑source frameworks with Chinese‑origin Qwen components. China, while ahead in AI research, still lacks extensive battlefield experience, forcing it to model outcomes rather than learn from live engagements. These gaps mean the U.S. retains a decisive technological edge, though the proliferation of AI tools worldwide raises the specter of rapid diffusion and reverse‑engineering.

The strategic advantage comes with heightened vulnerability. Model‑extraction attacks, jailbreak prompts, and AI‑driven propaganda campaigns have already demonstrated how adversaries can compromise commercial AI systems to harvest algorithms or inject disinformation. As the Pentagon increasingly outsources AI capabilities to private vendors, the security of underlying models becomes a potential Achilles’ heel. Policymakers must therefore balance accelerated AI adoption with robust governance, supply‑chain security, and resilient architectures to safeguard national‑security interests in the emerging era of AI‑centric warfare.

Article

While many experts have focused on the significance of drones in the new reality of warfare, the AI revolution is a much bigger deal. AI now makes drones far more impactful, helps select and prioritize battle targets, designs tactical operations and assesses results in ways that go beyond iterative human evaluation.

While AI is changing the battlefield space and the US has massive advantages, there are also significant risks that AI systems could be compromised by US adversaries and perhaps even by “friends.”

Recently AI has played an important role in several conflicts: the Gaza war (Operation Gideon’s Chariots); US‑Israel operations against Iran (Operation Rising Lion); the capture of Nicolás Maduro and his wife in Venezuela (Operation Absolute Resolve); operations to locate and stop “rogue” oil tankers; and the Ukraine war, where AI is playing a major role.

If the US and Israel take action against Iran in the coming days, planning and operations would likely be supported by AI.

While the US and Israel are among the leaders in using AI to support military operations, other consequential players are emerging. China is well along in developing home‑grown AI engines; Russia is using AI for drones and tactical operations; and US allies, especially the UK, France and Germany, are implementing AI in intelligence and military development.

Much of the intelligence integration in the Ukraine war is supported by major Western organizations with deep AI capabilities. One of them, Palantir Technologies, has emerged as the premier US company in big data and analytics, teamed with Nvidia and Anthropic.

In Israel, there is a blend of military units, especially Unit 8200, and private‑sector players, including well‑established companies such as Elbit Systems and startups such as Skyforce for edge‑computing and AI‑driven battlefield autonomy, Robotican for autonomous robotics and drones and Radiant Research Labs for zero‑click intelligence‑gathering tools.

OpenAI (GPT), Google (Gemini), Anthropic (Claude), and Meta (LLaMA) now control 32% of the world's large language model (LLM) market. The Pentagon is now in a major dispute with Anthropic over its use of Claude in the capture of Maduro.

Anthropic has said it does not allow Claude to be used for military operations, despite securing a $200 million contract with the US Department of Defense (DoD) in July 2025 to develop AI for national security.

The number of AI engines, including specialized systems, is rapidly growing. There are roughly 65 to 70 major AI tools that have reached mass‑market status in the US, including Perplexity for search, Midjourney for art and Grok for social media.

As of this year, there are over 62,000 AI‑related startups globally, with the US holding the largest share, roughly 25,000–30,000. One recent tally suggests there are now over 90,000 AI companies worldwide, though many are "wrappers" that use the Big Three's engines to power their own specific services.

In the US, more than $108 billion has been invested so far in data centers, not including collateral investments in power generation and grid improvements. Private investment in US AI companies reached a record $109.1 billion, according to the 2025 AI Index Report from Stanford HAI.

Not counting the billions spent for new foundries and other high‑end chip‑related infrastructure via the CHIPS Act, the US government is investing billions every year in AI research and development and in specific applications.

Over time, these investments will reshape the federal government's workforce, possibly eliminating tens of thousands of jobs, and will also change military ranks and roles. The Pentagon is investing at least $2 billion a year directly and tens of billions indirectly through weapons procurement, creating a future AI juggernaut that few competitors can hope to match.

This is bad news for Russia in particular, as it lacks both the infrastructure and the investment to come close to matching US spending and capabilities.

Russia uses AI in drones such as the Geran‑2 and the Zala Lancet. These drones are powered by older Nvidia chipsets, which Russia acquires on the gray market. In early 2026, Russia also introduced the "Svod" AI system, a tactical situational‑awareness platform.

Svod aggregates data from satellites and drones into a single map for commanders and is designed to “model scenarios,” offering Russian officers pre‑calculated tactical options. It was developed collaboratively between Russia’s Ministry of Defense research institutes and civilian software engineers tasked with digitizing the Russian military’s “kill chain.”

The system solves Russia’s historical “command bottleneck” by aggregating data from satellites, drones, and reconnaissance reports into a single digital “operational picture.” It runs on the domestically developed Astra Linux operating system to ensure “technological sovereignty” and reduce dependence on Western software.

For its primary function—identifying targets in drone feeds and satellite imagery—Svod utilizes the YOLO (You Only Look Once) framework, the global standard for real‑time object detection. To process text‑based intelligence reports and “reconnaissance summaries,” developers have adapted models like Mistral (French) and LLaMA (Meta).
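To illustrate the kind of processing a YOLO‑style detector performs on drone feeds, here is a minimal sketch of its standard post‑processing step: scoring overlapping detections by intersection‑over‑union (IoU) and discarding duplicates with greedy non‑max suppression. The box coordinates and thresholds are illustrative examples, not values from Svod or any fielded system.

```python
# Sketch of IoU scoring and non-max suppression (NMS), the standard
# post-processing in YOLO-style real-time detectors.
# Each box is (x1, y1, x2, y2, confidence_score).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep the highest-scoring box; drop boxes that overlap a kept one."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

# Two overlapping detections of one target, plus a distinct second target:
dets = [(10, 10, 50, 50, 0.9), (12, 12, 48, 52, 0.7), (100, 100, 140, 140, 0.8)]
print(nms(dets))  # the overlapping 0.7 detection is suppressed
```

In a full detector, this step runs on every frame after the network proposes candidate boxes, which is what makes single-pass frameworks like YOLO fast enough for live video.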

These models are embedded in on‑premise, air‑gapped environments to ensure data doesn’t leak back to Western servers. There is increasing evidence that Russian developers are incorporating Qwen, developed by China’s Alibaba. Qwen’s architecture is particularly adept at the complex coding and logic tasks required for situational modeling.

On the front lines, Russian AI runs on “tactical tablets” and small, ruggedized computers. Due to sanctions, these often rely on smuggled or dual‑use Chinese chips rather than specialized Russian silicon.

Complex scenario modeling for predicting where a Ukrainian counter‑attack might occur is handled by regional command centers using server clusters that utilize diverted high‑end GPUs such as Nvidia H100s/A100s obtained through third‑party intermediaries.

Russia faces major hurdles in applying battlefield AI and net‑centric warfare, particularly for communications. Ukraine has a distinct advantage because it uses Elon Musk’s Starlink for the backbone of its communications, a system that is currently very difficult for Russia to jam or disrupt.

Russia’s earlier access to Starlink has been severed, leaving it with ad‑hoc communications that are far more vulnerable to disruption, far less reliable and lacking the bandwidth Starlink provides to Ukraine. While Russia is trying to figure out alternatives, it will be some time before a workaround is found, if ever.

Meanwhile, Russia is reportedly focusing on rougher, less elegant AI solutions, using external support—mainly Chinese—on projects. Over time, there is little hope the Russians can remain competitive unless they can work out a modus vivendi with the US, much as China appears to have done when it comes to trading off access to rare earths for Nvidia products.

China is far ahead of Russia in the AI space, although it is still playing catch‑up with the US. China’s wild card is its lack of modern battlefield experience. Thus, China will have to build its AI systems on estimates of battle effectiveness rather than real‑world data, unless, of course, Chinese intelligence penetrates Western systems.

Little is known about the security of AI machines. The Pentagon and US intelligence agencies rely on commercial AI products, especially for real‑time updates on threats and countermeasures. The recent Pentagon debacle on cloud computing, where it allowed Chinese engineers to provide “routine” service, suggests that, so far at least, the downside risks of relying on commercial systems are not part of Pentagon thinking.

Nor does the Defense Department have much free choice, as it does not own AI engines and outsources their use, either directly or through defense contractors. The security of AI could become the Achilles’ heel of US AI systems. Certainly, AI machines and communications will become a major target for America’s adversaries.

For example, in February 2026, Google’s Threat Intelligence Group reported a surge in “model extraction” attacks. Adversaries, notably from China, used automated scripts to send hundreds of thousands of prompts to Gemini to reverse‑engineer its internal logic and “steal” the proprietary reasoning capabilities for their own domestic models.
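One basic signal a provider could use to flag the kind of high‑volume automated querying described above is counting prompts per client inside a sliding time window. The sketch below is a hypothetical illustration of that idea; the thresholds, client IDs and in‑memory store are assumptions, not a description of Google's actual defenses.

```python
# Hedged sketch: sliding-window query-rate monitoring, one simple way
# to surface model-extraction-style scripted querying. All parameters
# here are illustrative assumptions.
from collections import deque

class QueryRateMonitor:
    def __init__(self, window_seconds=60, max_queries=100):
        self.window = window_seconds
        self.max_queries = max_queries
        self.history = {}  # client_id -> deque of prompt timestamps

    def record(self, client_id, timestamp):
        """Log one prompt; return True if the client exceeds the rate cap."""
        q = self.history.setdefault(client_id, deque())
        q.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_queries

monitor = QueryRateMonitor(window_seconds=60, max_queries=100)
# A scripted client issuing a prompt every 0.1 s trips the cap quickly:
flags = [monitor.record("scraper-1", i * 0.1) for i in range(200)]
print(flags.count(True))  # 100 of the 200 calls exceed the cap
```

Real defenses layer signals like this with prompt-content analysis and output watermarking, since a patient attacker can simply spread queries across many accounts and longer time spans.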

A 2025 Gartner study revealed that 32% of organizations reported their AI applications had been targeted via malicious prompts. State‑sponsored hackers use these "jailbreaks" to force AI agents to leak sensitive data or bypass safety filters to generate malicious code.

In late 2025, reports surfaced that a Moscow‑based network dubbed "Pravda" successfully "infected" several popular AI chatbots. By flooding the internet with specific narratives, they ensured that when users asked about certain geopolitical events, the AI would repeat Russian propaganda roughly 33% of the time.

Attacks are not limited to Russia and China. Iran and North Korea have joined the fray, and other “friends” may also seek commercial advantage by attacking and exploiting AI applications or simply by using them for their own military, economic and social operations.

Given AI’s national‑security significance, both for military use and economic security, much greater attention to AI system security is not only warranted but essential.

Stephen Bryen is a former US deputy undersecretary of defense and special correspondent at Asia Times. This article was first published on his newsletter Weapons and Strategy and is republished with permission.
