The following is a guest article by Martin Lewit, SVP Growth & Corporate Development at Nisum
In 2025, 32% of medical group leaders say AI tools are their top tech priority, and that share will only grow. The question is no longer if healthcare will adopt AI, but whether it’s truly ready.
Other industries, retail in particular, have already shown how powerful AI can be when it personalizes experiences or simplifies operations. Yet that rapid progress came at a price, yielding cautionary lessons about opaque algorithms, trust gaps, and misaligned expectations.
Healthcare faces a higher bar. Every algorithmic decision affects clinical accuracy, regulatory compliance, and patient trust. Efficiency may be essential, but the margin for error is minuscule.
For AI to be successful and scalable in healthcare, hospitals must balance automation with human judgment when the stakes are life and death. Getting this right will determine whether AI is seen as a trusted ally or an unproven risk.
Earning Clinician Trust in AI
Nearly two-thirds of physicians surveyed use AI, yet only 35% reported that their enthusiasm for health AI exceeded their concerns. For many, trust and security remain the biggest sticking points.
Other industries have faced similar growing pains. A Harvard Business School analysis of several retail chains showed what happens when AI runs on shaky data. Managers had to manually correct 84% of AI-generated staff schedules, erasing any promised efficiency and eroding trust in the system. Healthcare can’t afford the same mistake. Leaders need visibility into how algorithms make decisions, confidence in the data they’re trained on, and regular reviews to prove those systems are delivering real results.
Leaders must establish cross-functional AI ethics boards that bring together clinicians, IT, and compliance. These boards create shared accountability across teams and help ensure compliance with laws such as GDPR and HIPAA. And no matter what tools are used, responsibility ultimately lies with the clinician, and clear accountability builds confidence with both staff and patients.
Explainability is another cornerstone of trust. Clinicians must understand how a model reaches a conclusion rather than accept recommendations blindly. Clear, audit-friendly models let clinicians trace how an output was produced. One mixed-methods systematic review found that visual tools, like heat maps and feature attribution, were frequently cited as important enablers of trust and acceptance.
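To make that concrete, here is a minimal sketch of one such attribution technique, permutation importance, applied to a hypothetical readmission-risk classifier. The feature names, model choice, and synthetic data are illustrative assumptions rather than any specific vendor's implementation, but the pattern, a reviewable table of which inputs actually drive predictions, is what audit-friendly explainability looks like in practice.

```python
# Minimal sketch: per-feature attribution for a hypothetical risk model.
# Feature names, model, and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "prior_admissions", "hba1c", "missed_appointments", "med_count"]

# Stand-in data; a real system would use audited, de-identified clinical records.
X, y = make_classification(n_samples=2000, n_features=len(feature_names), random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out score drops when each feature is shuffled.
# Reviewers can inspect this table to see which inputs drive the model's output.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>20s}: {score:.3f}")
```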
Trust grows through education, and training must make clear when to rely on AI and when not to. Involving nurses, physicians, medical assistants, and schedulers in the selection, design, and rollout process also helps adoption feel collaborative, not imposed from above.
How to Optimize Resources
Retail shows how AI can transform day-to-day operations. McKinsey reports that AI can cut inventory levels by 20-30% by improving demand forecasting and optimizing stock through machine-learning tools.
The same inventory logic that prevents retail stockouts can ensure medications are available when needed. Similarly, just as retailers align staffing with foot traffic, hospitals can align clinician schedules with patient flow, reducing bottlenecks and wait times.
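As a rough illustration of that inventory logic, the sketch below forecasts daily demand for a medication from recent dispensing history and sets a reorder point with a safety-stock buffer. The usage data, lead time, and service level are invented for the example; a real pharmacy system would draw on actual dispensing records and validated forecasting models.

```python
# Toy sketch of demand-driven reordering. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
daily_usage = rng.poisson(lam=40, size=90)   # stand-in for 90 days of dispensing history

lead_time_days = 5        # assumed supplier lead time
z_service = 1.65          # roughly a 95% service level

mean_demand = daily_usage.mean()
std_demand = daily_usage.std(ddof=1)

# Reorder when stock falls to expected demand over the lead time plus safety stock.
safety_stock = z_service * std_demand * np.sqrt(lead_time_days)
reorder_point = mean_demand * lead_time_days + safety_stock

print(f"forecast daily demand: {mean_demand:.1f} units")
print(f"reorder point: {reorder_point:.0f} units (includes {safety_stock:.0f} safety stock)")
```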
Automation also frees employees from repetitive tasks so they can focus on higher-value work like improving customer service. Healthcare can apply the same principle. In fact, 57% of physicians said reducing administrative burdens through automation was the biggest opportunity for AI.
AI platforms in retail can also anticipate customer behavior, and this is increasingly mirrored in healthcare. For example, at The Groves Medical Centre in England, an AI-enabled triage platform cut pre-bookable wait times by 73% and reduced peak-hour call volumes by nearly half, proof that predictive models can meaningfully improve access and alleviate staff strain.
But efficiency has its limits. In retail, over-optimization has led to empty shelves and frustrated customers. In healthcare, efficiency must never make the system brittle, jeopardizing patient safety.
Across industries, AI has never been about replacing people but about reallocating human effort to the moments that matter most. Healthcare leaders must adopt the same mindset, seeing AI as an efficiency engine, not a workforce replacement.
Meeting Patient Expectations
Consumers now expect the same speed and personalization from healthcare that they get from retail or banking. HealthEdge’s Consumer Study shows how far that shift has gone: 78% of respondents have used—or would use—their health plan’s mobile app, and more than half prefer to manage key interactions digitally. That appetite for convenience is forcing health organizations to rethink how care and communication happen.
In retail, customers want targeted offers and communication. In healthcare, AI automation can deliver the equivalent by sending appointment alerts, translating health information into a patient’s preferred language, and flagging when someone may be at risk of dropping off treatment or facing complications. The goal must be keeping patients engaged and connected to their care.
Researchers at Penn State developed a machine learning model that predicts no-shows and late cancellations with over 85% accuracy, helping clinics rebook or remind high-risk patients. Similarly, a national nonprofit health system worked with PwC to integrate conversational AI across more than 50 contact centers; call abandonment fell by 85%, and care teams gained hundreds of hours per month to focus on patient-centered tasks.
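For a sense of how such a no-show predictor might be wired up, the hedged sketch below trains a classifier on synthetic appointment features and ranks upcoming visits by risk so staff know whom to remind or rebook first. It is not the Penn State model referenced above; the features, data, and thresholds are invented for illustration.

```python
# Hedged sketch of no-show risk ranking on synthetic data (not the Penn State model).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
lead_days = rng.integers(0, 60, n)        # days between booking and appointment
prior_no_shows = rng.poisson(0.5, n)
age = rng.integers(18, 90, n)
reminder_sent = rng.integers(0, 2, n)

# Synthetic label: no-show probability rises with lead time and prior no-shows.
logit = -2.0 + 0.03 * lead_days + 0.8 * prior_no_shows - 0.6 * reminder_sent
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([lead_days, prior_no_shows, age, reminder_sent])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_te, risk):.2f}")

# Staff would follow up with the highest-risk appointments first.
top_risk_idx = np.argsort(risk)[::-1][:10]
print("highest-risk test-set rows:", top_risk_idx)
```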
Yet once AI is patient-facing, transparency and guardrails matter most. Patients expect personalized experiences, but they also deserve to feel in control. They must know when AI powers a recommendation, that their data is handled ethically, and that algorithms don’t play favorites. Personalization also has its limits; no one wants an app that feels like it’s reading their diary. Every platform should make it simple to opt out or switch back to human support when people prefer it. Retail has already shown what happens when personalization turns opaque: customers are left wondering why they were targeted, and data-misuse scandals erode trust.
AI’s promise in healthcare is about efficiency and trust. Other industries have already proved a simple truth: technology only works when people trust it. In healthcare, that means making AI transparent, accountable, and visibly supportive of human judgment. The goal isn’t to replace clinicians’ expertise but to extend it, using technology to make care more personal, reliable, and ultimately more human.
About Martin Lewit
Martin Lewit is the Head of Growth & Corporate Development at Nisum, a global consulting partner specializing in digital commerce and evolution. In this role, he leads the company’s growth efforts and expansion into new markets, forges strategic partnerships, and accelerates business development across the U.S. and Latin America. Recently, Martin was also named Head of Nisum Latin America.