Why Most AI Vendors Selling to Medical Device Companies Have Never Been in an OR
A consultant from a top-three firm pitched me last month. He showed up with a 47-slide deck on "AI transformation roadmaps for medtech." By slide 12 I had a question.
When his team rolls out the surgeon engagement workflow he was proposing, who on their side has actually been in an OR? Not "consulted with surgeons" — been there. Suited up, badged in, watched a Kirschner wire bend and known why it bent. He paused. "We have healthcare specialists." Sure, I said, but who has covered a case? Who has watched a surgeon get frustrated mid-procedure and had to think on their feet?
Silence.
He recovered, gave me the standard pivot to "we partner with clinical advisors," and walked me through the next 35 slides. I watched politely. We weren't going to do business and we both knew it by slide 13.
That's the problem with AI for medical devices right now.
The buyers in this market — VPs of Sales, COOs, CEOs at $50-500M device companies — are being pitched by armies of consultants who have built generic playbooks for "healthcare" and are now repackaging them for medtech. They use the word "customer" instead of "surgeon." They propose lead-scoring without understanding case volume signals. They treat your reps like SDRs. And they will spend your money for 18 months, deliver a system that nobody in the field uses, and leave behind a Slack channel that goes quiet six weeks after go-live.
I'm not going to argue against AI in medical devices. The technology is real and the value is real. I'm going to argue against the delivery model most of the industry is being sold.
Why generic playbooks fail in device sales
Software sales playbooks evolved over 20 years of selling SaaS to enterprise IT buyers. Lead scoring, MQL to SQL conversion, sales engineering, pipeline reviews. None of these translate cleanly to medical device sales, and the people building AI tools designed for the SaaS world don't know what they don't know.
A few specific examples of what generic AI tools get wrong:
They model the rep as a hunter-gatherer who books meetings and closes deals. In medical device sales, the rep is part coach, part logistics coordinator, part clinical assistant. They're in ORs at 6 AM helping the scrub tech set up trays. They're answering surgeon texts at 9 PM about how a different cage size would have changed a case from earlier that day. The "sales activity" data that drives every SaaS AI tool — calls made, emails sent, meetings booked — is the smallest, least important slice of what your reps actually do. Building AI on top of that data captures roughly 10% of what makes a great device rep great.
They model the buyer as a procurement officer. In device sales, the actual decision is made by the surgeon, but only after the VAC committee approves, but only after the OR director gets comfortable, but only after the case-coverage rep has been in enough cases that the surgeon trusts them, but only if the contract terms work for the hospital system. A generic AI tool that scores leads by "purchase intent signals" cannot model this. It produces noise.
They treat surgeon adoption as a sales funnel when it's actually an education funnel. A SaaS prospect can buy after one demo. A surgeon adopting a new spine implant or a new fixation system needs cadaveric training, a preceptored case, support across their first five cases, and routine support after that. The funnel has six to eight stages, not three. AI tools that don't understand this will count "first case" as a closed-won deal, declare victory, and leave you with a surgeon who never goes routine because nobody followed up after case three.
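To make the difference concrete, here's a minimal sketch of what stage-aware adoption tracking could look like. Everything in it is hypothetical: the stage names, the field names, and the five-stage simplification (a real funnel, as noted above, has six to eight stages). The point is only structural: a surgeon's first case sits in the middle of the funnel, not at the end where a "closed-won" model would put it.

```python
from enum import IntEnum

class AdoptionStage(IntEnum):
    """Hypothetical adoption stages, simplified to five for illustration.
    A real device company's funnel would use its own definitions."""
    INTRODUCED = 1        # surgeon has met the rep, seen the system
    CADAVER_TRAINED = 2   # completed cadaveric training
    PRECEPTORED_CASE = 3  # first case, with a proctor present
    EARLY_CASES = 4       # supported cases 2 through ~5
    ROUTINE = 5           # independent, recurring case volume

def stage_for(surgeon: dict) -> AdoptionStage:
    """Classify a surgeon's funnel stage from simple training/case fields."""
    if surgeon.get("routine"):
        return AdoptionStage.ROUTINE
    cases = surgeon.get("supported_cases", 0)
    if cases > 1:
        return AdoptionStage.EARLY_CASES
    if cases == 1:
        return AdoptionStage.PRECEPTORED_CASE
    if surgeon.get("cadaver_trained"):
        return AdoptionStage.CADAVER_TRAINED
    return AdoptionStage.INTRODUCED

# A first case is stage 3 of 5 in this sketch, not a closed-won deal:
assert stage_for({"supported_cases": 1}) == AdoptionStage.PRECEPTORED_CASE
```

A generic tool that stops tracking after `PRECEPTORED_CASE` is exactly the failure mode described above: it declares victory mid-funnel and never schedules the follow-up that turns an early adopter into a routine one.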
This isn't theoretical. Every generic AI consultancy I've watched pitch into medtech has made one or more of these mistakes inside the first 30 minutes. The CEOs and VPs of Sales sitting on the other side of the table know something is wrong. They can't always articulate it. They just know the deck doesn't feel like their business.
The demo curse
There's a particular failure mode worth naming. Generic AI vendors build beautiful demos. The demos work flawlessly because the vendor's team built them, in their environment, on curated data, with the workflows they understand best. The CEO sees the demo and gets excited. The contract gets signed.
Then the deployment starts. The vendor's "implementation team" — which is usually not the same team that built the demo — shows up at your company. They request data. The data isn't clean. There's no field for "preceptored cases completed" anywhere in your CRM. The distributor data is in spreadsheets. The surgeon contact records are in three different systems. The vendor's team, who has never had to deal with this kind of data fragmentation in their demo environment, gets stuck. The project slows down. Six months in, you have an "MVP" that nobody uses because it doesn't reflect how your business actually runs.
The demo curse is real and it's expensive. The way to avoid it isn't to ask harder questions during the demo. It's to ask who is going to be in the room during the deployment, and to require that those people have actually worked in a medtech operating environment before.
The OR test
Here's the test I'd suggest for any AI vendor pitching your company. Ask them these questions before you sign anything. Watch how they answer:
Has anyone on your delivery team — not your advisory board, not your sales engineering team, but the people who will actually be in our Slack channel three months from now — ever covered an OR case?
If the answer is no, you're going to spend a lot of money explaining your business to people who should be teaching you things.
Can you describe, without prompting, what happens at a VAC committee meeting and how the device rep's involvement changes the outcome?
If they can't, they don't understand who actually buys your product.
If we send you a sample dataset of our surgeon adoption funnel, can your team identify which stage of the funnel each surgeon is in within a week — using your tools, no extra training?
If they need to come back and ask for definitions of "preceptored" or "routine adopter," they don't know your industry.
What happens at month four when our distributors push back because they don't want the AI tool reading their account data?
If the answer is "we'll figure it out then," they've never deployed in a hybrid commercial model.
Name three surgeons you've spoken to in the last 90 days about how they actually use device reps in their cases.
This is the killer. The answer should be specific, with names and details. If it's vague, walk away.
What good looks like
The vendors and partners who deliver in this industry have a few traits in common. Their leadership has spent time in clinical environments. Their delivery team includes people who carried bags before they switched to consulting. They don't show up with a 47-slide deck — they show up with questions about your specific commercial model and a willingness to admit when they don't understand something. They write down what your reps tell them. They sit in on your QBR before they propose anything. They've worked with surgeons not as research subjects but as colleagues.
This isn't a high bar. It's just the bar that gets ignored because so many companies are racing into "AI for healthcare" without understanding that medtech is its own world. The right partner doesn't need to be a former Globus rep specifically. They need to be someone who can talk about Globus, Stryker, Smith+Nephew, ATEC, NuVasive, Orthofix, J&J MedTech, and Medtronic Spine in the same room as your team and not need a glossary.
You can absolutely deploy AI in your medical device company in 2026. You should. The CEOs who win the next decade will be the ones who use AI to compress sales ramp times, accelerate surgeon adoption, drive regulatory velocity, and operate quality systems with less friction. All of that is real, available, and within reach.
But you have to pick the right people to deploy it. The AI isn't the hard part. The AI is commoditizing fast. The hard part is the operational knowledge to know which workflows in your specific business will move the needle — and which will fail no matter how good the underlying model is.
The reframe
The AI consulting market today looks exactly like the management consulting market did in the early 2000s. A handful of firms with credible operator backgrounds doing genuinely good work, surrounded by a vastly larger group of firms repackaging generic playbooks at scale. The buyers who got value were the ones who refused to let the famous logo win the deal — and instead asked the harder question: who specifically is going to be in my company every week, and do they understand what I do?
The same applies now. AI isn't your problem. Picking the right people to deploy it is.
If you can't find an AI partner whose team has been in an OR, build the partnership internally instead. Hire one good engineer who is willing to spend their first 90 days riding along with reps. Have your VP of Sales sit with them every Friday for an hour. That single arrangement will outperform a six-figure engagement with a brand-name firm 80% of the time.
The OR test isn't about credentials. It's about whether the people deploying technology into your business actually understand the business they're deploying into. For mid-market medical device companies in 2026, that distinction is the difference between AI that ships and AI that wastes a year.
Find the medtech workflows AI should rebuild first
Rozeta runs a 4-week operational audit across your commercial, regulatory, quality, and field workflows — then ships production AI systems into the ones where coordination overhead, submission cycles, and manual handoffs are actually slowing the business down.
