The following is a guest article by Daniel Vreeman, DPT, Chief Standards Development Officer and Chief AI Officer at HL7 International
The pace of AI innovation is moving faster than the development of regulation, governance, infrastructure, and trust, particularly in healthcare. As health systems race to integrate generative AI and predictive models into clinical workflows, the foundational frameworks for data quality and safety are still catching up. This gap creates not only inconsistent adoption across organizations, but also increased risk of unintended harm to patients and clinicians alike.
Last year alone, we saw health systems experiment with everything from large language models in documentation to algorithm-driven triage in virtual care. But without shared expectations for how these tools are developed, validated, and monitored, we’re headed toward a fragmented future where trust in digital health systems erodes even as their reach expands.
Standards Are No Longer ‘Back Office,’ They’re Public Infrastructure
What healthcare needs now is the equivalent of a public utility model for digital infrastructure: not government-controlled, but collectively governed in the public interest. Standards are too often viewed as back-office tech, important but invisible. The truth is that they’re the backbone of trustworthy AI, underpinning explainability, safety, portability, and transparency.
Just as roads and clean water are foundational to physical health, interoperable data systems and shared rules for AI behavior are foundational to digital health. Without them, each model is its own black box, each integration is a custom build, and each approach to AI model monitoring is a silo. This slows progress, introduces risk, and makes it harder to scale tools equitably across health systems large and small.
What AI Can Learn From TEFCA
The Trusted Exchange Framework and Common Agreement (TEFCA) has already shown that it’s possible to align federal policy, technical standards, and private-sector participation toward a shared vision: making clinical data exchange work for everyone, not just those with the most resources. The TEFCA model isn’t perfect, but it offers key lessons for AI in healthcare.
To succeed, we’ll need open, consensus-based frameworks that enable transparency across the entire AI lifecycle: from training data to deployment to evaluation and monitoring. We need standards-based approaches for defining how data cohorts are selected, how algorithms are versioned and tracked, how model outputs are tagged to support clinical interpretation, and how we continuously monitor a model’s performance for drift, unintended consequences, and bias. These principles aren’t theoretical; they’re operational, and they belong inside the health IT systems and APIs powering healthcare today.
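To make that concrete, here is a minimal sketch of what tagging a model output with provenance metadata might look like in practice. The field names and structure are illustrative assumptions only, not an HL7 or FHIR specification; the point is that version, cohort, and timestamp travel with the prediction so it can be interpreted and audited downstream.

```python
# A minimal, hypothetical sketch of tagging a model output with provenance
# metadata. Field names and structure are illustrative assumptions, not an
# HL7 or FHIR specification.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class ModelProvenance:
    model_name: str          # which algorithm produced the output
    model_version: str       # exact version, so results can be traced and compared
    training_cohort_id: str  # identifier for how the training data cohort was selected
    generated_at: str        # when the prediction was made


def tag_prediction(score: float, provenance: ModelProvenance) -> dict:
    """Bundle a raw model score with the metadata a clinician or auditor
    would need to interpret and trace it."""
    return {
        "prediction": {"risk_score": score},
        "provenance": asdict(provenance),
    }


if __name__ == "__main__":
    prov = ModelProvenance(
        model_name="sepsis-risk",  # hypothetical model name
        model_version="2.3.1",
        training_cohort_id="cohort-2024-adult-inpatient",  # hypothetical cohort label
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(tag_prediction(0.82, prov), indent=2))
```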
Real Interoperability Requires More Than Connectivity
Interoperability for AI isn’t just about access; it’s about architecture. The data fueling the AI lifecycle must be consistent, comprehensible, and crafted for safe machine consumption.
Deploy a clinical prediction model across multiple institutions with slightly different data definitions, and you can expect the unexpected. Outputs skew. Unknown biases emerge. Performance becomes less predictable. Without consistent data representation, explainability collapses.
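A toy illustration of how quickly this breaks down: the same “glucose” field means very different things if one site records mg/dL and another mmol/L. The rule, threshold, and values below are made up for illustration only, not a real clinical model.

```python
# Toy example: a threshold rule that assumes mg/dL silently fails on
# mmol/L data when unit metadata is missing. Values are illustrative only.
def flag_hyperglycemia(glucose_mg_dl: float) -> bool:
    """Hypothetical rule: flag values above 180 mg/dL."""
    return glucose_mg_dl > 180.0


site_a_value = 190.0  # recorded in mg/dL: correctly flagged
site_b_value = 10.6   # the same patient state recorded in mmol/L (~190 mg/dL)

print(flag_hyperglycemia(site_a_value))  # True
print(flag_hyperglycemia(site_b_value))  # False: silently missed without unit metadata

# Converting explicitly (1 mmol/L of glucose is roughly 18 mg/dL) restores
# consistent behavior across both sites.
print(flag_hyperglycemia(site_b_value * 18.0))  # True
```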
That’s why technical standards for metadata, data provenance, and model context must be part of the conversation about our AI infrastructure, not treated as concerns only for model development or regulatory approval.
Equity Depends on Implementation, Not Just Innovation
One of the biggest risks in AI is the potential to widen existing disparities in healthcare. Large health systems and academic centers may have the expertise and resources to safely deploy and monitor AI, but rural hospitals and community clinics often don’t. Without a deliberate effort to make AI infrastructure accessible and implementable across settings, we risk repeating the same digital divide that accompanied EHR adoption.
Equitable AI isn’t just about the data going in. It’s about who gets to use these tools, under what conditions, and with what safeguards in place. Standards play a central role in that effort—leveling the playing field so that safety, transparency, and trust aren’t dependent on zip code or organizational budget.
A Call to Collective Action
The public and private sectors both have critical roles to play. Government can create incentives, establish guardrails, and align policies. Success, however, demands shared accountability: developers, providers, payers, and vendors building toward a common vision for sustainable, safe AI.
The good news is that many of the building blocks already exist. What’s needed now is the commitment to align them, implement them consistently, and scale them equitably. Trusted, neutral convenors can turn consensus into actionable, open technical standards; bridging the gap from high-level principles to interoperable tools makes AI in healthcare more trustworthy and reliable.
About Daniel Vreeman
Daniel Vreeman, DPT is the Chief Standards Development Officer and Chief AI Officer of HL7 International, the global authority on interoperability of health information technology.