Fewer than 1% of HHS's AI use cases are 'high impact.' Among federal agencies, that makes it an outlier.
HHS has classified fewer than 1% of its nearly 450 AI use cases as 'high impact,' the designation that triggers heightened risk management — a far lower rate than at other major agencies such as DHS (23%), DOJ (36%), and VA (59%). The gap raises concerns about potential misclassification and inconsistent AI governance across federal agencies, which could shape how contractors develop and implement AI solutions for HHS. It also means contractors may face different AI risk management requirements depending on the agency, affecting both compliance strategies and solution design.
Cabrillo Club
Editorial Team · February 23, 2026 · 5 min read

Action Kit: HHS AI Classification Discrepancy
Immediate Actions (This Week)
- [ ] Review all active HHS proposals and contracts for AI/ML components and document current risk classification assumptions used in your technical approach
- [ ] Audit your AI solution portfolio to identify which offerings could be affected by potential HHS reclassification from low-impact to high-impact status
- [ ] Flag HHS opportunities in your pipeline where AI risk management requirements may shift, particularly those in pre-award or early performance phases
- [ ] Assign a team member to monitor HHS AI governance announcements and OMB guidance updates for the next 30 days
- [ ] Document your current AI risk assessment methodology for HHS contracts to establish baseline compliance posture before potential policy changes
Short-Term Actions (30 Days)
- [ ] Develop dual-track AI compliance narratives for HHS proposals: one addressing current low-impact classification standards and one prepared for high-impact requirements (NIST AI RMF, enhanced testing, bias audits)
- [ ] Conduct a gap analysis comparing your AI governance documentation against the high-impact standards applied at DHS (23% of use cases), DOJ (36%), and VA (59%) to prepare for potential HHS alignment
- [ ] Update proposal boilerplate and past performance narratives to emphasize your firm's AI risk management capabilities, particularly for healthcare AI applications under HIPAA
- [ ] Engage with HHS program offices on active contracts to understand their internal AI classification rationale and anticipated policy direction
- [ ] Review contract vehicles (OASIS+, CIO-SP4, NITAAC CIO-CS) for AI-related task order language and risk management clauses that may become standard for HHS work
- [ ] Prepare technical white papers demonstrating your AI solutions can meet both low-impact and high-impact risk management requirements without significant re-architecture
Long-Term Actions (90+ Days)
- [ ] Build enhanced AI governance framework aligned with NIST AI RMF that can scale from low-impact to high-impact classifications, positioning your firm for multi-agency AI work
- [ ] Develop agency-specific AI compliance playbooks for HHS, DHS, DOJ, and VA that account for their divergent classification approaches and risk tolerance
- [ ] Invest in AI testing and validation capabilities (bias detection, explainability tools, performance monitoring) that satisfy high-impact requirements across all agencies
- [ ] Create reusable compliance artifacts (AI risk assessments, algorithmic impact statements, continuous monitoring plans) that can be rapidly customized for different agency standards
- [ ] Establish strategic partnerships with healthcare AI ethics experts and NIST AI RMF practitioners to strengthen your HHS positioning
- [ ] Monitor for HHS AI governance policy updates and prepare to submit public comments if HHS issues revised AI classification guidance or solicits industry feedback
- [ ] Track congressional oversight activity related to AI governance inconsistencies across agencies, as this may drive standardization requirements
Compliance Checklist
This event signals a likely expansion of your compliance surface. Prepare for these requirements if HHS aligns with other agencies' high-impact AI standards:
- [ ] OMB M-24-10 AI Governance Memo compliance: Documented AI use case inventory, impact assessments, and risk mitigation plans
- [ ] NIST AI Risk Management Framework (AI RMF): Comprehensive risk mapping, measurement, and management across AI lifecycle
- [ ] Enhanced algorithmic bias testing: Demographic parity analysis, disparate impact assessments, and fairness metrics for healthcare AI
- [ ] AI explainability and transparency: Model interpretability documentation, decision logic explanations, and human oversight protocols
- [ ] Continuous AI monitoring: Real-time performance tracking, drift detection, and incident response procedures
- [ ] HIPAA compliance for AI systems: Privacy impact assessments, de-identification protocols, and patient data protection for healthcare AI applications
- [ ] Section 508 accessibility: Ensure AI interfaces and outputs meet accessibility standards for users with disabilities
- [ ] FedRAMP (Federal Risk and Authorization Management Program) authorization: Cloud-based AI systems must maintain appropriate FedRAMP authorization levels (may increase from Low to Moderate/High)
- [ ] FISMA controls enhancement: Information security controls for AI systems may require elevation to match high-impact classification
- [ ] AI supply chain risk management: Vendor assessments, third-party AI component documentation, and software bill of materials (SBOM) for AI models
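To make the bias-testing item above concrete: one common starting point is comparing per-group selection rates, where a disparate impact ratio below 0.8 (the "four-fifths rule," a convention drawn from EEOC employment guidance, not an HHS or NIST mandate) is a typical flag threshold. The sketch below is a minimal illustration in plain Python, not a prescribed audit method:

```python
from collections import Counter

def selection_rates(decisions, groups):
    """Fraction of favorable (1) decisions per demographic group."""
    totals, positives = Counter(), Counter()
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest.
    Four-fifths rule of thumb: a ratio below 0.8 warrants closer review."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical example: group A approved 4 of 5, group B approved 2 of 5
decisions = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
# rates: A = 0.8, B = 0.4 -> disparate impact ratio = 0.5 (below 0.8 threshold)
```

A production bias audit would extend this with confidence intervals, intersectional group definitions, and outcome-conditioned metrics such as equalized odds, per whatever fairness criteria the agency specifies.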
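Likewise, for the continuous-monitoring item, drift between a model's baseline input distribution and live production data is often measured with the Population Stability Index (PSI). The sketch below is a dependency-free illustration under assumed conventions (10 equal-width bins; the common 0.1/0.25 rule-of-thumb thresholds, which are industry convention, not a regulatory requirement):

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between a baseline (e.g. training-time)
    sample and a live production sample. Rule-of-thumb reading:
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]  # equal-width bin edges

    def bin_fractions(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1  # index of the bin x falls into
        return [max(c / len(xs), 1e-6) for c in counts]  # small floor avoids log(0)

    base, lv = bin_fractions(baseline), bin_fractions(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(base, lv))

baseline = [i / 10 for i in range(100)]                 # scores spread over 0.0-9.9
assert psi(baseline, baseline) < 1e-9                   # identical data: no drift
assert psi(baseline, [x + 5 for x in baseline]) > 0.25  # shifted data: major drift
```

In a deployed system, the baseline histogram would be computed once at model release and compared against rolling windows of production inputs, with breaches feeding the incident-response procedures the checklist calls for.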
Resources
- OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Memorandum-on-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf)
- NIST AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework)
- HHS AI Strategy and Implementation Plan (https://www.hhs.gov/about/agencies/asa/ocio/ai/index.html)
- FedRAMP AI Guidance (https://www.fedramp.gov/)
- CMMC (Cybersecurity Maturity Model Certification) Compliance Guide (/insights/cmmc-compliance-guide) — Essential for contractors handling CUI (Controlled Unclassified Information) in AI systems
- Secure Operations Guide (/insights/secure-operations-guide) — Framework for maintaining compliance across evolving AI governance requirements
- CUI-Safe CRM Guide (/insights/cui-safe-crm-guide) — Protecting sensitive data in AI-enhanced business systems
How Cabrillo Club Automates This
Cabrillo Signals War Room has already detected this HHS AI classification discrepancy and delivered this briefing to your dashboard within minutes of the policy analysis publication. The War Room continuously monitors OMB guidance, agency AI inventories, congressional oversight reports, and federal AI governance developments across all major agencies. You don't need to manually track HHS policy updates or compare AI classification rates across DHS, DOJ, and VA — the system automatically identifies these cross-agency discrepancies and flags them as medium-severity policy changes affecting your target markets.
Cabrillo Signals Match Engine is automatically rescoring your HHS opportunity pipeline right now based on this event. Any opportunities tagged with AI/ML, Healthcare IT, or Data Analytics market segments under HHS are being re-evaluated for competitive positioning. If you've been pursuing HHS AI opportunities assuming low-impact risk management requirements, the Match Engine adjusts keyword relevance scores and compliance complexity ratings to reflect potential high-impact reclassification. Your bid/no-bid recommendations update in real time as the competitive landscape shifts.
Cabrillo Signals Intelligence Hub tracks the affected agencies (HHS, DHS, DOJ, VA), NAICS codes (541512, 541511, 541715), and contract vehicles (OASIS+, CIO-SP4, NITAAC CIO-CS) associated with this event. Configure a saved search for "HHS + AI/ML + high-impact OR risk management" to receive instant alerts when follow-on solicitations appear on SAM.gov (System for Award Management) that reflect updated AI governance requirements. The Intelligence Hub will notify you when HHS issues revised AI classification guidance or when task orders under your target vehicles include enhanced AI risk management language.
Proposal Studio (Proposal OS) helps you prepare dual-track compliance narratives for HHS AI proposals. The AI-powered compliance matrix generator automatically maps your technical approach against both current low-impact standards and potential high-impact requirements from the NIST AI RMF. Your win theme library can store agency-specific AI governance narratives, and the system generates first-draft technical approaches that emphasize your scalable AI risk management capabilities. When you're responding to an HHS AI opportunity, Proposal OS pulls relevant past performance examples and automatically adjusts risk mitigation language based on the latest policy intelligence from the War Room.
Proposal Studio Workflow Tracker triggers a compliance review workflow when you flag an HHS opportunity as AI-related. The 9-gate capture process automatically routes AI risk assessment tasks to your technical team, schedules legal review of NIST AI RMF compliance claims, and generates audit-ready documentation packages for your AI governance framework. If HHS issues updated AI classification guidance during your proposal development cycle, the Workflow Tracker alerts your capture manager and prompts a compliance re-review before submission.
Ready to automate your response to evolving AI governance requirements? Explore how Cabrillo Club's integrated platform keeps you ahead of policy changes and automatically adjusts your capture strategy when agencies shift their risk management standards. Learn more in our Secure Operations Guide (/insights/secure-operations-guide).
---
How ready are you for CMMC?
Take our free readiness assessment. 10 questions, instant results, no email required until you want your report.
Check Your CMMC Readiness
Cabrillo Club
Editorial Team
Cabrillo Club is a defense technology company building AI-powered tools for government contractors. Our editorial team combines deep expertise in CMMC compliance, federal acquisition, and secure AI infrastructure to produce actionable guidance for the defense industrial base.