CLARA aims to deliver trustworthy AI systems essential for defense‑critical tasks, setting standards that could ripple into commercial high‑risk domains. Its focus on verifiability and open‑source tools accelerates broader adoption of reliable AI across sectors.
The launch of CLARA reflects a growing consensus that next‑generation artificial intelligence must be both powerful and provably safe. Traditional machine‑learning models excel at pattern recognition but often lack transparent reasoning, a gap that automated reasoning techniques can fill. By mandating hierarchical composition of ML and logical inference, DARPA is pushing researchers toward architectures where decisions can be traced, audited, and mathematically verified, addressing longstanding concerns about AI reliability in mission‑critical environments.
Structurally, CLARA is divided into two technical areas: TA1 focuses on pioneering theory, algorithms, and open‑source implementations for high‑assurance ML/AR composition, while TA2 consolidates these advances into a reusable software library. Funding is capped at $2 million per award, with a 15‑month Phase 1 to establish foundational methods and a nine‑month Phase 2 for integration and validation. The program’s open‑source mandate, favoring an Apache 2.0 license, ensures that breakthroughs can be rapidly adopted by industry and academia, fostering a collaborative ecosystem around trustworthy AI.
The implications extend well beyond defense. High‑assurance AI capable of verifiable reasoning is poised to transform sectors such as autonomous logistics, medical decision support, and complex planning where errors carry steep costs. By embedding explainability and polynomial‑time guarantees into AI pipelines, CLARA could set a new benchmark for responsible AI deployment, encouraging commercial firms to adopt similar standards and accelerating the maturation of trustworthy AI technologies across the economy.
By Colton Jones · Feb 11 2026

Key Points
DARPA issued a solicitation for the CLARA program to develop high‑assurance AI systems that integrate machine learning and automated reasoning.
The effort offers up to $2 million per award over 24 months and requires proposals by April 10 2026.
The Defense Advanced Research Projects Agency (DARPA), the Pentagon’s research and technology arm, issued a solicitation for its Compositional Learning‑And‑Reasoning for AI Complex Systems Engineering (CLARA) program, inviting proposals for high‑assurance artificial‑intelligence research.
The solicitation was released by DARPA’s Defense Sciences Office and seeks innovative basic or applied research concepts in “high‑assurance artificial intelligence systems”. Proposals are due by 4:00 p.m. Eastern Time on April 10 2026, with abstracts encouraged by March 2 2026.
According to the announcement, CLARA is “an exploratory, fundamental research program that aims to create high‑assurance, broadly applicable AI systems of systems”. The program will pursue a “scientific, theory‑driven architectural foundation for the hierarchical composition” of Machine Learning (ML) and Automated Reasoning (AR) subsystems.
DARPA states that assurance under CLARA “means verifiability with strong explainability to humans, based on automated logical proofs and hierarchical, vetted logic building blocks”. The agency anticipates performers will combine approaches such as higher‑order logic, probabilistic logic, and interoperable integration of AR and ML.
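The solicitation does not prescribe an implementation, but the core idea of composing an ML subsystem with an AR check can be sketched in a few lines. Everything below is illustrative and not drawn from CLARA itself: the stand‑in "classifier," the rule names, and the decision structure are assumptions chosen to show how a learned score might be gated by vetted logical rules that leave a human‑auditable trace.

```python
from dataclasses import dataclass

# Hypothetical ML subsystem: a stand-in for a learned model that
# returns a confidence score in [0, 1] for a sensor track.
def ml_confidence(track: dict) -> float:
    return 0.9 if track["signature"] == "known_hostile" else 0.2

# Hypothetical AR subsystem: a set of vetted logical rules, all of
# which must hold before a decision is approved.
RULES = [
    ("confidence_above_threshold", lambda t, c: c >= 0.8),
    ("not_in_no_strike_zone",      lambda t, c: not t["no_strike_zone"]),
]

@dataclass
class Decision:
    approved: bool
    trace: list  # (rule name, pass/fail) pairs, for auditability

def decide(track: dict) -> Decision:
    c = ml_confidence(track)
    trace = [(name, rule(track, c)) for name, rule in RULES]
    return Decision(approved=all(ok for _, ok in trace), trace=trace)
```

In this toy composition, a high‑confidence ML output is still vetoed when a logical rule fails, and the `trace` records exactly which rule blocked the decision, a crude analogue of the explainability CLARA asks for.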

The program will be structured in two technical areas.
Technical Area 1 (TA1) focuses on developing new high‑assurance ML/AR composition approaches, including theory, algorithms, and open‑source software implementations.
Technical Area 2 (TA2) will create a software composition library to integrate and support validated TA1 tools into a common framework.
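The solicitation gives no interface for the TA2 library, but one way a reusable composition framework might expose validated components can be sketched as follows. The class, method names, and registry mechanism here are hypothetical, intended only to show the idea of registering vetted building blocks and chaining them hierarchically.

```python
from typing import Any, Callable, Dict, List

# Hypothetical composition library: a registry of validated components
# (names and structure are illustrative, not from the solicitation).
class CompositionLibrary:
    def __init__(self) -> None:
        self._components: Dict[str, Callable[[Any], Any]] = {}

    def register(self, name: str, fn: Callable[[Any], Any]) -> None:
        # A real high-assurance library would demand validation
        # evidence (proofs, test artifacts), not just a callable.
        self._components[name] = fn

    def compose(self, names: List[str]) -> Callable[[Any], Any]:
        # Hierarchical composition modeled as simple function chaining.
        fns = [self._components[n] for n in names]
        def pipeline(x: Any) -> Any:
            for fn in fns:
                x = fn(x)
            return x
        return pipeline

lib = CompositionLibrary()
lib.register("normalize", lambda x: x / 10)
lib.register("threshold", lambda x: x >= 0.5)
detect = lib.compose(["normalize", "threshold"])
```

The design choice worth noting is that the composed `detect` pipeline is itself a component, so composites can be nested into larger systems of systems, the "hierarchical composition" the program describes.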
As detailed in the solicitation, awards will be made as Other Transactions for prototype projects, with a total award value for combined Phase 1 and Phase 2 efforts limited to $2,000,000. Phase 1 will run for 15 months and Phase 2 for 9 months, for a maximum 24‑month period of performance.
The program sets specific performance metrics, including “Verifiability without loss of performance,” composition of multiple kinds of AI, and “Computational Time complexity is Polynomial”. For TA1 performers, Phase 2 also introduces sample‑complexity requirements for adapting models to new tasks.

CLARA will include program‑wide activities such as workshops and hackathons. Hackathons will involve wide‑scope integration scenarios developed with an Independent Verification and Validation team, with examples including “a partial kill web integrating limited components of target recognition, tracking, weapons selection, triaging decision support”.
Software developed during the program is expected to be open‑sourced with a commercialization‑friendly license, preferably Apache 2.0. The agency’s goal is to execute awards within 120 calendar days of the posting date, with a target of June 9 2026 for award execution.
DARPA’s solicitation outlines potential application domains for CLARA technologies, including course‑of‑action planning, multi‑condition medical guidance, and supply‑chain and logistics problems.