Building an ‘AI-Ready’ Healthcare Enterprise Using NIST and ISO Frameworks



Marty Barrack, Chief Legal and Compliance Officer, XiFin, Inc.

Technology leaders across diagnostic organizations, radiology practices, pharmacies, and specialty pharmacies are hearing a consistent message from technology providers: “Adopt AI or fall behind.”

The problem is that “adopt AI” is not a strategy. It’s an activity, and too often, it turns into a series of pilots that never become fully integrated enterprise capabilities.

That failure is well-documented. According to a 2025 "State of AI in Business" report on enterprise AI implementation by MIT NANDA, there is a stark "GenAI Divide": broad experimentation and even deployment of general tools, but very limited workflow-integrated transformation and measurable P&L impact for most organizations across a broad range of industries. That report estimated that only 5% of custom enterprise AI tools reach production.

Healthcare organizations have even less margin for error because the AI systems they use generally involve regulated data, patient impact, and highly audited processes. So the goal isn't just to "use AI"; it's to become AI-ready: a governance posture that facilitates safe, compliant, repeatable, and cost-effective adoption.

What “AI-Ready” Means in Practice

In an AI-ready healthcare enterprise, governance is not a side project. It must be a core operating framework, including:

- Clear decision-making structures and accountability
- Ethical guidelines and review processes
- Secure data environments and strong identity controls
- Defined risk management and compliance practices
- Ongoing review of regulations, contracts, insurance, and assurance
- Scalable architecture and cost controls

This is the opposite of “innovation theater.” It is how you scale responsibly.

Start with the Regulatory Reality

For diagnostics, radiology, and pharmacy operations, the AI regulatory environment is layered:

- Federal law can include sector-specific requirements, agency oversight, and enforcement risk depending on use cases (e.g., clinical or medical device workflows).
- State law may introduce substantive limitations as well as requirements relating to disclosure, transparency, professional licensing, and liability considerations.
- International requirements can come into play through cross-border operations, vendors, subcontractors, or cloud processing infrastructure.

If you don’t map your regulatory obligations early, you will pay for it later—in remediation, contract changes, and delayed deployment, and potentially in agency enforcement and legal proceedings. 

Contract Obligations: The Hidden AI Control Plane

Many organizations focus on “AI laws,” but miss an additional important constraint: contracts.

Your obligations may be defined by:

- Payor and business partner contracts
- Vendor agreements
- Broad "applicable law" clauses that expand what you must operationalize

For RCM and finance leaders, this matters because AI risk is often shared, or shifted, through contract language, such as provisions regarding operational obligations, development, compliance, audit rights, documentation, intellectual property, indemnification, and allowable uses of data.

Your AI governance program must include procurement and contracting; otherwise, your technology posture won’t match your legal posture. 

Pick a Governance Framework Best Suited for Your Organization

Two widely recognized anchors help organize AI governance across technology, security, compliance, and operations:

NIST AI Risk Management Framework (AI RMF)

NIST describes the AI RMF as voluntary guidance to help organizations incorporate trustworthiness throughout AI design, development, evaluation, and use. It also defines “trustworthy AI” characteristics that translate well to healthcare operations: 

- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair with harmful bias managed
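For teams operationalizing these characteristics, a minimal sketch of a per-system review record can help ensure no characteristic is skipped. The list below reflects the NIST AI RMF characteristics above; the class and field names are illustrative assumptions, not part of NIST's framework.

```python
from dataclasses import dataclass, field

# The seven trustworthiness characteristics from the NIST AI RMF,
# as slugs for programmatic tracking (slug names are our own).
NIST_CHARACTERISTICS = [
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
]

@dataclass
class TrustworthinessReview:
    """Illustrative per-system review record (not a NIST artifact)."""
    system_name: str
    findings: dict = field(default_factory=dict)  # characteristic -> reviewer note

    def record(self, characteristic: str, note: str) -> None:
        """Log a finding; reject anything outside the NIST list."""
        if characteristic not in NIST_CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        self.findings[characteristic] = note

    def gaps(self) -> list:
        """Characteristics not yet reviewed for this system."""
        return [c for c in NIST_CHARACTERISTICS if c not in self.findings]
```

A review is complete only when `gaps()` returns an empty list, which makes the "all seven characteristics, every system" expectation auditable rather than aspirational.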

ISO/IEC 42001 (AI Management System) and 23894 (Guidance on AI Risk Management)

ISO/IEC 42001 provides requirements for establishing and continually improving an AI Management System (AIMS), a management-system approach to AI governance across an organization. 

ISO/IEC 23894 provides guidance on how organizations that develop, produce, deploy, or use products, systems, and services that use artificial intelligence (AI) can manage AI-specific risks. The guidance also aims to help organizations integrate risk management into their AI-related activities and functions and describes processes for effectively implementing AI risk management.

One potential approach: Consider using NIST AI RMF as your operating framework for risk and trustworthiness and treat ISO 42001 as a maturity and audit-readiness target. 

Catalog Current AI Use

Before you can govern AI, you need to know where it lives and what it’s doing.

A practical inventory should distinguish machine learning (ML), generative AI, and agentic AI, and capture the role AI plays in your organization. For example, are you using a particular AI application for decision-making, decision-assistance, or information support? 

For diagnostic providers, radiology practices, and pharmacy operations, “AI use” often spans:

- Clinical and operational documentation workflows
- Device/equipment AI features
- Revenue cycle management and billing operations
- Administrative business processes
- Technology and development workflows

Also critical: third-party tools that “include AI features,” as these also involve AI activities your organization may need to review, even if the AI is another company’s product or service.

And you must also acknowledge the shadow AI reality: employees frequently use consumer AI tools in daily work, and the adoption of these personal tools often happens faster than enterprise programs adopt approved solutions.
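The inventory described above can be sketched as a simple schema that captures AI type, the role the system plays, and whether it is embedded in a third-party product; the type names and the shadow-AI filter below are illustrative assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class AIType(Enum):
    ML = "machine_learning"
    GENERATIVE = "generative_ai"
    AGENTIC = "agentic_ai"

class AIRole(Enum):
    DECISION_MAKING = "decision_making"
    DECISION_ASSIST = "decision_assistance"
    INFO_SUPPORT = "information_support"

@dataclass
class InventoryEntry:
    """One cataloged AI use (field names are illustrative)."""
    name: str
    ai_type: AIType
    role: AIRole
    third_party: bool       # AI embedded in a vendor's product or service?
    approved: bool = False  # went through the enterprise approval process?

def shadow_ai(inventory: list) -> list:
    """Entries in actual use that were never formally approved."""
    return [e for e in inventory if not e.approved]
```

Running `shadow_ai()` over the inventory surfaces the consumer tools employees adopted ahead of the enterprise program, which is exactly the gap governance needs to close.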

Define Your AI Strategy 

An AI strategy is not a one-page vision. It’s a set of well-considered choices:

- Frameworks you will follow
- Regulatory and contractual requirements
- Industry standards and assurance expectations
- Stakeholder concerns
- Prioritized use cases and resource constraints, including cost

If you want scalable value, governance must be designed to support operational adoption—not just approve it.


About Marty Barrack

Marty Barrack serves as the CISO and Chief Legal and Compliance Officer for XiFin, Inc. XiFin is a leading provider of revenue cycle management software in a SaaS model for healthcare providers. Marty serves on ISACA's Emerging Trends Working Group and holds industry certifications including ISACA's CISM and CRISC, as well as J.D. and MBA degrees.
