Since leaning deeply into AI from a thesis perspective circa 2022, what has stood out to me is how fast the models are improving, and how consistently the same institutional forces shape what actually reaches patients. We spend a lot of time talking about performance, bias, validation, and regulation in health, and all of that matters. But when systems stall in the real world, it is sometimes because no one can agree who is responsible when something goes wrong, and no long-standing professional body has yet said, “this is now acceptable medical practice.”

That tension became very real to me in 2023 when I had the opportunity to speak during a session with Johnson & Johnson at SXSW, focused specifically on AI responsibility and clinical risk. I framed my talking points around what actually happens when automated systems intersect with professional standards, because it was already clear to me that those, rather than model capability, were the real choke points. We even touched on how risk models intersect with ethnicity and population health, which deserves its own deeper discussion. What stuck with me from that session was how quickly the conversation moved away from pure technology and pivoted to institutional accountability.

Fast forward to this week, as I was moving around San Francisco during JPM 2026, the tone felt decidedly different from prior years. Conversations I was part of were about how AI autonomy might actually play out and whether it can be tolerated, insured, reimbursed, and defended in front of boards and regulators. In parallel, headlines around policy and product started to move in ways that would have seemed premature not long ago.

For background, last week Utah announced a pilot that allows AI to autonomously renew a set of prescription medications without physician sign-off, under defined scope and escalation rules. The move drew a sharp response from the American Medical Association’s CEO, John Whyte, who warned on LinkedIn that removing physicians from medication decisions risks patient safety and undermines clinical accountability.

On the surface this might read like another skirmish in the “AI versus doctors” debate, but when you dig deeper there is more to unpack. What is really being tested is whether software can be recognized as an accountable participant in care delivery, at a time when much of the capital in healthcare AI is still flowing into pure clinical decision-support tooling and other software plays that are starting to feel like a race to the bottom, or a blowing up of cap tables.

The company behind that Utah pilot, Doctronic, led by founders Matt Pavelle and Dr. Adam Oskowitz, deserves credit for being willing to operate where incentives, liability, and clinical norms collide. Many teams build systems that recommend, flag, or summarize because those products are easier to sell and far safer commercially in today’s early-stage environment. In my view, Doctronic is doing something more complex by taking responsibility for a bounded clinical act and agreeing to operate under regulatory scrutiny. That is not a trivial step, because in medicine, execution is where accountability really lives.

At the same time, the AMA’s public concern should be taken seriously, but it is also worth noting that the organization’s formal policy already acknowledges that autonomous AI will exist and that when it does, liability should fall on the entity best positioned to manage system risk. In its own words, developers of autonomous AI systems with clinical applications are expected to accept liability for failures and maintain appropriate medical liability insurance. That framing is not anti-tech. It is clearly pro-accountability. It suggests that the real debate is whether current systems meet the threshold of responsibility that professional standards demand. And to John’s credit, a hot take at least forces the industry to engage.

This distinction matters because healthcare actually changes when medical executive committees, credentialing bodies, compliance officers, and malpractice carriers are satisfied that a new practice fits within an acceptable risk envelope. I have seen promising AI deployments stall because “risk committees” could not agree on how liability would be allocated if something went wrong. In those cases, no amount of performance data is enough to overcome the absence of professional precedent. This is why state-level regulatory sandboxes may end up being more consequential than many people realize. Utah, Texas, and California are each experimenting, in different ways, with frameworks that allow new digital and AI-enabled care models to operate under defined conditions, reporting obligations, and oversight. How these standards and protocols will converge nationally is another topic.

The bull case is that these programs create real world evidence and operational precedent that professional boards and insurers can evaluate and eventually get behind. It is also worth remembering that federal agencies regulate products, but states regulate the practice of medicine. Prescribing authority, scope of practice, and licensure are state matters, which makes states the natural laboratories for testing.

At the same time, the reimbursement infrastructure is adapting in ways that quietly legitimize autonomy. CMS already reimburses for certain autonomous AI services, such as diabetic retinal screening (though an ophthalmologist at UCSF told me those devices are rarely used and often sit untouched), setting a precedent that machines can perform billable medical acts when safety and outcomes are demonstrated.

This economic framing also helps explain why many enterprise AI efforts struggle to scale. If a system is sold as infrastructure, it competes for capital budgets and is evaluated as a cost center. If it is sold as a clinical service, it must meet higher standards of validation and liability, but it also gains access to reimbursement pathways that make sustained deployment possible. The reimbursement framework literature makes this explicit, arguing that autonomous AI requires financial incentives that reflect software costs and ongoing monitoring, validation, and risk management. Without that alignment, even highly effective automation remains trapped in pilot purgatory.

All of this is unfolding against very real capacity constraints. The Association of American Medical Colleges continues to project substantial physician shortages over the next decade, particularly in primary care. Prescription management and chronic disease follow-up represent enormous volumes of work that are often protocol driven but still consume clinician time.

Meanwhile, medication non-adherence remains one of the most persistent drivers of preventable hospitalizations and mortality, with estimates linking it to tens of thousands of avoidable deaths annually in the U.S. In that context, autonomous renewal and monitoring systems move beyond convenience and start to address structural mismatches between demand and supply in care delivery, which as early-stage investors we see as a meaningful signal.

What feels most different now, and what JPM 2026 crystallized for me, is that malpractice coverage is increasingly part of the conversation. If carriers are willing to price and insure algorithmic clinical acts, it signals that risk can be modeled and mitigated in ways acceptable to the system. If they are not, adoption will remain constrained regardless of performance metrics. This is why companies that are willing to operate in a liability-bearing posture are potentially setting up a path to define the next phase of AI in healthcare, even if their early use cases seem narrow today.

For founders, this implies that when building for healthcare autonomy, systems must be developed with audit trails, escalation logic, and clinical governance at the core. For fellow investors, it means diligence has to extend beyond technical moats and include credible connections to professional boards, payors, and insurers. And for clinicians and professional societies, it means engaging now in defining what responsible autonomy looks like, rather than reacting later to models that have already proven operationally viable outside traditional institutions.
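To make the builder advice concrete, here is a toy sketch in Python of what “bounded scope, escalation rules, and an audit trail” might look like in code. Everything here is hypothetical: the drug list, the twelve-month visit rule, and the renewal cap are illustrative assumptions, not Doctronic’s actual logic or Utah’s actual pilot rules.

```python
# Hypothetical sketch of a bounded autonomous renewal flow:
# explicit scope checks, escalation rules, and an append-only audit trail.
# All names and thresholds are illustrative, not from any real system.
from dataclasses import dataclass, field
from datetime import date

# Illustrative in-scope formulary: a short list of maintenance medications.
IN_SCOPE_DRUGS = {"lisinopril", "metformin", "levothyroxine"}

@dataclass
class RenewalRequest:
    patient_id: str
    drug: str
    last_visit: date
    refills_without_review: int

@dataclass
class Decision:
    action: str                      # "renew" or "escalate"
    reason: str
    audit_trail: list = field(default_factory=list)

def decide(req: RenewalRequest, today: date) -> Decision:
    # Every step is logged so the decision can be reconstructed later.
    trail = [f"received request for {req.drug} (patient {req.patient_id})"]

    # Scope check: only the defined medication list qualifies for autonomy.
    if req.drug not in IN_SCOPE_DRUGS:
        trail.append("drug out of scope -> escalate to clinician")
        return Decision("escalate", "out_of_scope", trail)

    # Escalation rule: require a clinician visit within the last 12 months.
    if (today - req.last_visit).days > 365:
        trail.append("no visit in 12 months -> escalate to clinician")
        return Decision("escalate", "stale_visit", trail)

    # Escalation rule: cap consecutive renewals without human review.
    if req.refills_without_review >= 3:
        trail.append("renewal cap reached -> escalate to clinician")
        return Decision("escalate", "review_cap", trail)

    trail.append("all checks passed -> renew")
    return Decision("renew", "within_scope", trail)
```

The design point is that the system never has an open-ended “no” path: anything outside its defined envelope routes to a clinician, and the trail records why, which is exactly the kind of artifact credentialing bodies and malpractice carriers can evaluate.

```python
req = RenewalRequest("p1", "metformin", date(2025, 9, 1), refills_without_review=1)
print(decide(req, date(2026, 1, 15)).action)  # renew: in scope, recent visit, under cap
```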

Healthcare is experimenting with authority and accountability. And that, more than any benchmark score or demo, is what will determine how quickly and how deeply AI reshapes care delivery.

Follow the regulators and boards!

Julian Eison
Founding Managing Partner

Keep Reading