When Good Technology Misses the Point: How Systems Thinking Unlocks AI’s Potential in Medical Device Innovation

Written by Annika Hey | Mar 24, 2026 12:00:01 PM

"Can we add AI to this?" is the new "Let’s build an app." Every era has its technology bandwagon, and every era leaves behind products that failed not because the technology was bad, but because the teams building them didn’t understand the systems they were entering.

AI carries the same magnetic pull for medical technology innovators today. The excitement is understandable. But when innovation begins with the format instead of the fit, even brilliant technologies fail. Not because they don’t work, but because they don’t work within the ecosystems they enter.

Consider a physician-entrepreneur who sought investor backing for a web portal designed to streamline hospital-to-care-facility transfers. The vision was substantive: a clinician-facing tool that would connect hospital records to care facility ratings and availability, giving the clinical team the information they needed to coordinate a patient’s transition in one place. Today, that coordination often falls to patients themselves, who are discharged with a printed list of facilities and left to research and contact them independently. The portal aimed to solve that. In isolation, the technology looked like it would perform beautifully. But the ecosystem mapping was incomplete.

The solution didn’t account for the burden it placed on clinicians. It asked physicians and nurses to add another login, another password, and another interface to an already fragmented digital landscape. In a healthcare environment already straining under cognitive overload and system switching, that friction was a fatal flaw. Responsibility for monitoring the portal was also unclear: who, in an already stretched clinical workflow, would own this new task?

The underlying problem remains unsolved. The original solution didn’t fail because it addressed the wrong problem. It failed because it underestimated the technical, workflow, and human complexities of the ecosystem it needed to operate within. A well-designed portal could still genuinely help. But only if it starts with the system, not the software.

Fortunately, prototyping revealed these issues before full development. The real cost was avoided. But the lesson is clear: understanding the system is not a luxury. It is the precondition for building anything that lasts.

What Research Reveals That Assumptions Miss

A recent Veranex engagement with a major global medical device manufacturer brings this into sharper relief. We conducted feature resonance testing for a next-generation interventional cardiology platform. The client had a range of AI-enabled concepts and needed to understand which ones clinicians across roles and regions would actually adopt.

Two features in particular produced an intriguing narrative. The first was an AI-driven guidance tool: it surfaced predicted treatment effects and offered system-generated recommendations before the clinician acted. The second was a confirmatory validation tool: it synthesized key indicators after treatment and presented a confidence score, supporting the clinician’s judgment without directing it. Both were technically sophisticated. Both addressed real clinical problems. Both drew on the same underlying data inputs and presented through similar interfaces.

In initial ranking exercises across 22 participants and six countries, they performed similarly. The guidance tool even scored slightly higher on desirability. But when we moved into deeper resonance testing, the two features produced completely different responses. The guidance tool collapsed in feasibility. Clinicians found it appealing in the abstract but consistently told us they could not imagine trusting or relying on it in a real procedure. The confirmatory tool landed with high adoption signals. Clinicians could picture it fitting seamlessly into their current workflows.

Feature desirability and feature adoption are not the same thing. What mattered wasn’t the underlying technology. It was the role AI was being asked to play in the clinical workflow.

The explanation came down to something deeper than feature preference: the system beliefs of the clinical environment itself.

Physicians in interventional cardiology have spent years developing skills to compensate for the limitations of their tools. The anatomy visualized by the technology is often distorted. Clinicians know this, and they’ve built expert workarounds into their practice. Ask them to trust AI functionality built on top of that same imperfect foundation, and the pushback is immediate and rational. If the tools already require constant human judgment to function well, why would they trust a system to do the critical thinking?

The confirmatory tool worked with that belief. It confirmed what clinicians already suspected and reduced manual burden without displacing their judgment. The guidance tool asked them to cede decision-making authority to a system they didn’t trust. The technical sophistication of both features was beside the point.

Understanding the System, Not Just the Widget

Systems thinking, as defined by Donella Meadows, is a way of understanding how the parts of a system interact to produce behavior. In medical device innovation, that behavior is adoption. The question it helps us answer isn’t just “Does this technology work?” It’s “Does this technology fit the system it’s entering?”

Stocks and Flows: What Actually Accumulates Over Time

Every system has stocks (things that build up or deplete over time) and flows (the forces that drive those changes). In a clinical environment, the primary stock is trust.

Trust builds slowly through consistent, low-burden, accurate experiences. It depletes quickly when a feature introduces uncertainty or asks clinicians to rely on inputs they don’t believe in. The flows that shape it include:

  • Features that reduce manual burden and present clear, confirmatory information build trust.

  • Features that produce ambiguous outputs, or depend on data clinicians already distrust, deplete it.

  • Infrastructure limitations like poor network reliability or inconsistent system performance also deplete trust, often invisibly.

Secondary stocks feed into trust as well: clinician confidence in data quality, perceived system reliability, and the depth of expertise clinicians have built through years of compensating for their tools’ limitations. Understanding what feeds and depletes these stocks is the first step to understanding whether an AI feature has a realistic path to adoption.
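The stock-and-flow framing above can be sketched as a toy model. Everything here is an illustrative assumption (the class name, the 0–1 scale, and the asymmetric weights), not a measurement; the only claim it encodes is the one in the text: trust accumulates slowly and depletes quickly.

```python
from dataclasses import dataclass


@dataclass
class TrustStock:
    """Toy stock-and-flow model of clinician trust on a normalized 0..1 scale."""
    level: float = 0.5

    def apply(self, inflow: float, outflow: float) -> float:
        # Asymmetric dynamics: depletion is weighted three times more heavily
        # than accumulation, reflecting that trust builds slowly and erodes fast.
        self.level += 0.1 * inflow - 0.3 * outflow
        self.level = max(0.0, min(1.0, self.level))  # clamp to the 0..1 range
        return self.level


trust = TrustStock()
# A clear, confirmatory, low-burden experience: small inflow, no outflow.
trust.apply(inflow=1.0, outflow=0.0)
# An ambiguous output built on distrusted data: a single strong outflow
# more than wipes out the prior gain.
trust.apply(inflow=0.0, outflow=1.0)
```

One good experience followed by one bad one leaves the stock below where it started, which is the practical reason confirmatory, low-burden features have a structural advantage.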

Feedback Loops: Why Two Similar Features Produce Opposite Outcomes

Stocks change through feedback loops. Our two AI features activated completely different ones.

The confirmatory validation tool forecast a reinforcing loop:

  • It aligned with how clinicians already evaluated their work.

  • It confirmed what they already suspected, using familiar indicators.

  • Trust increased. Perceived feasibility increased.

  • Willingness to adopt grew. Each cycle strengthened the next.

The guidance tool forecast a balancing loop:

  • Clinicians didn’t trust the underlying data enough to accept a system-generated recommendation.

  • They felt they’d need to manually verify any output.

  • That added oversight burden and increased perceived risk.

  • Trust decreased. The system resisted change.

This is why technically similar features can produce opposite adoption outcomes. The technology isn’t the deciding variable. The loop it activates is.
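The two loops can be made concrete with a minimal simulation. The update rules and constants below are illustrative assumptions chosen only to show the shapes the text describes: a reinforcing loop compounds (each cycle strengthens the next), while a balancing loop resists change (the system is pulled back toward its status quo).

```python
def simulate(step, trust=0.5, cycles=10):
    """Iterate a loop's update rule on a normalized trust level (0..1)."""
    for _ in range(cycles):
        trust = step(trust)
    return trust


def reinforcing_step(trust):
    # Confirmatory tool: each low-burden, confirmatory cycle compounds,
    # so the gain is proportional to the trust already accumulated.
    return min(1.0, trust * 1.05)


def balancing_step(trust, status_quo=0.2):
    # Guidance tool: distrusted recommendations force manual verification,
    # and the added burden pulls trust back toward a low status quo.
    return trust + 0.3 * (status_quo - trust)


confirmatory = simulate(reinforcing_step)  # climbs well above the 0.5 start
guidance = simulate(balancing_step)        # settles near the status quo
```

Starting from the same trust level, the reinforcing loop climbs while the balancing loop converges to roughly where the system already was: the same starting conditions, opposite trajectories, depending only on which loop the feature activates.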

Loops Run at Different Speeds

Not all feedback loops respond at the same pace, and that matters for product strategy. Some loops are fast: a feature that reduces burden in a single procedure can begin building trust immediately. Others are slow or stubborn:

  • Infrastructure loops (network constraints, hardware limitations, EMR integration) change on timescales of months to years.

  • Regulatory loops respond to legal and institutional timelines, not product roadmaps.

  • The institutional memory loop (accumulated skepticism from past AI features that promised much and underdelivered) can persist for decades. New features inherit the credibility debt of everything that came before them.

Knowing which loops your feature will activate, and how fast they move, tells you not just whether adoption is possible but how long the path is and where the resistance will come from.

Zones of Influence: Where You Actually Have Leverage

Once you understand the loops, the practical question is: where can you intervene? Three zones help sort this out:

  • What you control: the design itself. How interpretable is the feature? Does AI support or displace clinical judgment? How much cognitive burden does it add or remove?

  • What you can influence: everything around the feature. How it is introduced, how clinicians are trained, how expectations are set about what AI will and won’t do. This is where early trust is built or broken, and where product teams chronically underinvest.

  • What you must adapt to: regulation, infrastructure, reimbursement, and the system beliefs clinicians carry into every encounter with new technology. These do not bend to product roadmaps. You design within them.

The most common mistake is investing energy in that third zone, trying to change things that will not move, while neglecting the second, where your real leverage lives.

Research Before Resolution

Before chasing the next shiny technological object, invest in understanding the system. Observational research, resonance testing, and stakeholder mapping are not delays in the development process. They are accelerators that prevent costly pivots later, and they consistently surface connections that technology-focused approaches miss.

We’ve seen clinicians reject features their peers would label “advanced” because those features didn’t fit how they actually work. We’ve seen nurses choose heavier equipment because it was easier to use within their workflow. We’ve seen hospital infrastructure make theoretically superior technologies impractical. In every case, the insight that mattered wasn’t about the technology. It was about the system.

The question isn’t whether AI belongs in medical devices. It does, and the opportunity is real. The question is whether we are disciplined enough to understand the ecosystems AI must navigate before we ask it to perform within them.

The companies that succeed will be the ones that can answer not just “Can we build this?” but “Will clinicians trust it enough to change their workflow?”

Work With Us

Systems thinking is a practical strategic approach to MedTech innovation that can mean the difference between breakthrough success and expensive failure. At Veranex, we don’t just map stakeholders and analyze systems in isolation. As part of the industry’s first integrated innovation CRO, we connect research insights directly to design, regulatory, clinical, and commercialization expertise under one roof. Because understanding the system is powerful, but having a partner who can act on that understanding across every discipline? That’s the difference.

Start a conversation about your innovation opportunity or challenge.

Annika Hey is a Principal Design Researcher at Veranex, an integrated med tech product development consultancy. Her work focuses on translating complex system insights into actionable product strategy.