
Updated: Where the House and Senate Are on Internal Use of AI
Key Takeaways
- House policy mandates approved tools and human oversight
- Senate adopts two‑tier framework, authorizes Copilot, Gemini, ChatGPT Enterprise
- Both chambers keep guidance internal, limiting staff awareness
- Policies emphasize risk, not legislative innovation
- Copilot’s political content filter hampers constituent communication
Summary
Congressional leadership has issued internal AI use policies for both chambers, but the guidance remains behind internal firewalls and is largely unknown to staff. The House adopted HITPOL 8 in September 2024, establishing five guardrails, a list of approved tools such as ChatGPT Pro and Microsoft Copilot, and a tiered use‑case model that restricts sensitive data and political content. The Senate released a two‑tier framework in October 2025, authorizing Copilot Chat, Gemini, and ChatGPT Enterprise for Tier 2 official data and establishing an AI Governance Board. Both policies focus on risk mitigation rather than leveraging AI for legislative innovation.
Pulse Analysis
The rapid diffusion of large language models since ChatGPT’s 2022 launch has prompted governments worldwide to codify AI use, and the United States Congress is no exception. In the House, the Chief Administrative Office’s HITPOL 8 policy establishes five core guardrails—human oversight, clear policies, testing, transparency, and education—while prescribing approved tools and a tiered approval process for internal versus public‑facing applications. This risk‑first approach, coupled with a mandatory disclosure regime, reflects the CAO’s jurisdiction over IT infrastructure but blurs the line between technical security requirements and legislative decision‑making, creating uncertainty for Members and their staff.
The Senate’s AI governance, formalized in the 2025 SAA‑CIO‑CYB‑040 directive, introduces a two‑tier risk model that distinguishes non‑official from official Senate data. An AI Governance Board, drawing on the CIO, legal, and acquisition offices, oversees tool vetting and has authorized Microsoft Copilot Chat, Google Gemini, and OpenAI’s ChatGPT Enterprise for Tier 2 use. While this structure aligns with NIST risk‑management standards, the absence of these tools from the Senate’s supported‑software list signals a cautious integration strategy that may delay broader operational benefits.
Both chambers’ policies prioritize security, privacy, and compliance, yet they lack a forward‑looking vision for how generative AI could enhance legislative drafting, constituent outreach, or policy analysis. The internal‑only publication of guidance, combined with low staff awareness, undermines consistent adoption and sets a conservative tone for other federal and state legislatures. Greater transparency, clearer delineation of authority, and an explicit innovation agenda could transform AI from a compliance checkbox into a strategic asset for Congress and the public sector at large.