AI Deals and Investments
Anthropic Closes $30B Funding Round at $380B Valuation
Growth Stage

CNBC – Top News & Analysis • February 18, 2026

Participants

Anthropic (company)

Why It Matters

The outcome will shape how frontier AI is integrated into U.S. defense, influencing industry standards and vendor risk assessments.

Key Takeaways

  • Anthropic seeks ban on autonomous weapons usage
  • DoD wants unrestricted lawful use of Anthropic models
  • Contract under review could label Anthropic a supply‑chain risk
  • Anthropic valued at $380 billion after $30 billion raise
  • Only AI firm deployed on classified DoD networks

Pulse Analysis

The Pentagon’s rapid adoption of generative‑AI tools reflects a broader shift toward data‑driven decision‑making in modern warfare. By integrating large‑language models such as Anthropic’s Claude into classified environments, the Department of Defense hopes to accelerate intelligence analysis, logistics planning, and rapid prototyping of mission concepts. Anthropic’s unique position as the sole vendor with cleared access gives it a strategic foothold, but also places the company at the center of a policy debate over how far AI can be trusted in high‑stakes, national‑security contexts.

The current standoff stems from Anthropic’s insistence on ethical safeguards, specifically prohibitions on autonomous weaponization and mass surveillance, while the under secretary of war, Emil Michael, argues for “all lawful use cases” without carve‑outs. This clash highlights the tension between commercial AI firms seeking to protect brand reputation and the military’s demand for unrestricted operational flexibility. If negotiations fail, the DoD could designate Anthropic as a supply‑chain risk, forcing contractors to certify non‑use of its models—a move traditionally reserved for foreign adversaries that could cripple the startup’s growth trajectory.

Beyond the immediate contract, the dispute signals how AI governance will shape future defense procurement. Competitors such as OpenAI, Google and xAI have already accepted broader usage clauses, positioning themselves as low‑risk suppliers. Anthropic’s recent $30 billion funding round, which lifted its valuation to $380 billion, underscores investor confidence in its technology but also raises expectations for responsible deployment. As the DoD refines its AI policy, firms that can balance ethical constraints with operational utility are likely to secure long‑term partnerships, while those unable to compromise may face market marginalization.

Deal Summary

Anthropic has closed a $30 billion funding round, valuing the AI startup at $380 billion, more than double its valuation at its previous raise in September. The round, announced in early February 2026, comes as the company negotiates terms of use for its AI models with the U.S. Department of Defense. The capital will support further development of its Claude model family.

Article

Source: CNBC – Top News & Analysis

Anthropic is clashing with the Pentagon over AI use. Here's what each side wants

Published Wed, Feb 18 2026 1:01 PM EST

By Ashley Capoot

Key Points

  • Anthropic's relationship with the Department of Defense is “under review” as the two sides negotiate over how the company's AI models can be used.

  • The startup wants assurance that its models will not be used for autonomous weapons or mass surveillance, according to a report from Axios.

  • The DoD wants to use Anthropic's models “for all lawful use cases” without limitation, according to Emil Michael, the under secretary of war for research and engineering.


Anthropic is at odds with the Department of Defense over how its artificial‑intelligence models should be used, and its work with the agency is “under review,” a Pentagon spokesperson told CNBC.

The five‑year‑old startup was awarded a $200 million contract with the DoD last year. As of February, Anthropic is the only AI company that has deployed its models on the agency’s classified networks and provided customized models to national‑security customers.

Negotiations about “going forward” terms of use have hit a snag, Emil Michael, the under secretary of war for research and engineering, said at a defense summit in Florida on Tuesday.

Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios.

The DoD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation.

“If any one company doesn’t want to accommodate that, that’s a problem for us,” Michael said. “It could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it.”

It’s the latest wrinkle in Anthropic’s increasingly fraught relationship with the Trump administration, which has publicly criticized the company in recent months. David Sacks, the venture capitalist serving as the administration’s AI and crypto czar, has accused Anthropic of supporting “woke AI” because of its stance on regulation.

An Anthropic spokesperson said the company is having “productive conversations, in good faith” with the DoD about how to “get these complex issues right.”

“Anthropic is committed to using frontier AI in support of U.S. national security,” the spokesperson said.

The startup’s rivals—OpenAI, Google and xAI—were also granted contract awards of up to $200 million from the DoD last year. Those companies have agreed to let the DoD use their models for all lawful purposes within the military’s unclassified systems, and one company has agreed across “all systems,” according to a senior DoD official who asked not to be named because the negotiations are confidential.

If Anthropic ultimately does not agree with the DoD’s terms of use, the agency could label the company a “supply chain risk,” which would require its vendors and contractors to certify that they do not use Anthropic’s models, the official said. The designation is typically reserved for foreign adversaries, so it would be a complex blow to Anthropic.

The company was founded by a group of former OpenAI researchers and executives in 2021, and is best known for developing a family of AI models called Claude. Anthropic announced earlier this month that it closed a $30 billion funding round at a $380 billion valuation, more than double what it was worth as of its last raise in September.

