
The future-proof approach to AI for health systems | Health IT

October 6, 2025

Hospitals are grappling with how to integrate artificial intelligence in a way that drives value without overwhelming clinicians or creating new risks. At Stanford Medicine, CIO Dr. Michael Pfeffer has taken a deliberate approach to democratizing AI and transforming the workforce.

“We don’t have a chief AI officer. We have a team of data scientists because data science is a discipline, but we don’t have a team that does only AI,” said Dr. Pfeffer during a keynote panel at the Becker’s Health IT + Digital Health + Revenue Cycle Conference in early October. “We said AI is a tool that everybody needs to learn, not only within the IT department where we’ve built internal training to get everyone up to speed on what the capabilities are, but also the rest of the organization.”

The health system developed a course called AI Foundations – AI 101 – to teach non-IT experts about AI in a sandbox environment and generate ideas to automate at scale. One of the organization’s recent developments, ChatEHR, came from the course.

ChatEHR aims to embed large language model capabilities directly into the EHR to talk to the chart in real time. Select physicians within the organization are testing ChatEHR, which is designed to respond to questions within about three minutes.

“It’s been amazing,” said Dr. Pfeffer. “We’ve got over 1,000 users on it now, so it’s at scale, but in order to know how to use it, you actually have to learn how to talk to models. Everybody goes through training on how to use the tool and that has been really great because now you’ve got this army of clinicians that are really understanding how to use these capabilities and what they’re doing. It’s really about upskilling everybody and understanding that we need to think about problems we want to solve.”

But AI can’t solve every problem. Leaders need the discipline to know when AI can be beneficial and when a simpler solution is appropriate.

“You don’t want to go down the rabbit hole of trying to solve everything with AI if you don’t need to,” said Dr. Pfeffer. “That level of education and understanding is really critical moving forward. Everybody’s really excited and we’re all under a lot of pressure to get AI right.”

Patient data protection is also top of mind. CIOs are balancing the need to innovate quickly and creatively with HIPAA compliance. Stanford has developed a governance process to build a responsible AI life cycle so projects can be evaluated quickly with the right guardrails.

The system's chief data scientist developed a framework for fair, useful, reliable models with a built-in bias and ethics assessment. The assessment was originally built around traditional AI, including predictive modeling, and has since expanded to cover generative AI so the team can evaluate how well large language models perform in medicine.

“Our chief data scientist lab came up with something called Med Home, a kind of repository with many medical questions and answers that you can run on many different large language models to see how they perform,” said Dr. Pfeffer. “We’re going to need to get better and better at understanding the generative models and how they perform in medical tasks because they’re not great now. They’ll get better, but that’s part of the governance cycle and giving feedback as you deploy things.”
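At its core, a repository like the one Dr. Pfeffer describes is a bank of medical question-answer pairs scored against each candidate model. The sketch below illustrates that idea in Python; the question bank, the stand-in "models," and the exact-match scoring rule are all illustrative placeholders, not Stanford's actual tool or methodology.

```python
# Illustrative sketch of benchmarking several models against a shared
# bank of medical Q&A pairs. The questions, answers, and "models" here
# are placeholders, not real clinical content or real APIs.

def exact_match(predicted: str, expected: str) -> bool:
    """Normalize and compare a model's answer to the reference answer."""
    return predicted.strip().lower() == expected.strip().lower()

def score_model(ask, qa_pairs):
    """Run one model (a callable: question -> answer) over the bank."""
    correct = sum(exact_match(ask(q), a) for q, a in qa_pairs)
    return correct / len(qa_pairs)

# Toy question bank and two stand-in "models".
qa_bank = [
    ("What electrolyte is most depleted by furosemide?", "potassium"),
    ("What is the first-line treatment for anaphylaxis?", "epinephrine"),
]

model_a = lambda q: "potassium" if "furosemide" in q else "epinephrine"
model_b = lambda q: "sodium"

scores = {name: score_model(fn, qa_bank)
          for name, fn in [("model_a", model_a), ("model_b", model_b)]}
print(scores)  # model_a scores 1.0, model_b scores 0.0
```

Real benchmarks use far richer scoring than exact match, but the shape is the same: a fixed question bank, many models, comparable scores.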

Monitoring models after deployment is also essential to keep the models on track.

“As we put more and more models into production, you can imagine the kind of teams you’re going to need to monitor them,” said Dr. Pfeffer. “Understanding how to leverage technology to actually help with the monitoring is really key. But it’s a young space, and as the models continue to advance, we need to get better and better at understanding whether they’re actually performing the way we want.”
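One simple form that monitoring can take is comparing a deployed model's live alert rate against the baseline observed during validation, and flagging drift for human review. The sketch below assumes invented numbers and an invented tolerance threshold; it is not Stanford's monitoring stack.

```python
# Hypothetical sketch: flag a deployed model whose alert rate drifts
# from its validation baseline. The baseline, tolerance, and prediction
# data are illustrative assumptions.

def alert_rate(predictions):
    """Fraction of cases the model flagged positive (1 = flagged)."""
    return sum(predictions) / len(predictions)

def drift_flag(baseline_rate, recent_predictions, tolerance=0.10):
    """True if the recent alert rate strays beyond the tolerance band."""
    return abs(alert_rate(recent_predictions) - baseline_rate) > tolerance

# Baseline: the model flagged 5% of validation cases.
baseline = 0.05

# Recent production window: 4 of 20 cases flagged (20%) -- review it.
recent = [1, 0, 0, 0, 0] * 4

print(drift_flag(baseline, recent))  # True: rate jumped from 5% to 20%
```

Production systems track many more signals (calibration, input distributions, outcome feedback), but a rate check like this is often the first alarm wired up.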

Stanford is also testing AI agents in the clinical decision-making process. Dr. Pfeffer noted areas like sepsis detection could benefit from agents in the future.

“The output of every sepsis model is a recommendation to somebody, somewhere, and the workflow downstream from that output is really what drives the success or failure of the model,” he said. “But fast forward to the future: the model should not only predict who’s likely to have sepsis, but order the antibiotics. That’s the sepsis antibiotic agent, so you’re not depending on someone taking a recommendation and doing something with it. That doesn’t exist today, but we really have to start thinking about what happens when we’re at the level where agents can actually place an order in the system. That is truly transformative, and it’s going to take a lot of work to make sure they function really well. How are we going to make sure our governance and monitoring capabilities are tested on what we have today, so when we get to that future state, we’re ready?”
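The shift Dr. Pfeffer describes, from surfacing a recommendation to placing an order, can be pictured as a confidence-gated decision step. The sketch below is purely illustrative: the thresholds, action names, and order contents are invented, and, as the article notes, no such agent exists today.

```python
# Purely hypothetical sketch of a future sepsis agent's decision step.
# Thresholds, action names, and the order itself are invented examples.

def sepsis_agent(risk_score, order_threshold=0.9, notify_threshold=0.5):
    """Map a model's sepsis risk score to a next action."""
    if risk_score >= order_threshold:
        # Future state: the agent drafts the order directly,
        # subject to whatever governance gates sit around it.
        return {"action": "draft_order", "order": "broad-spectrum antibiotics"}
    if risk_score >= notify_threshold:
        # Today's state: surface a recommendation to a clinician.
        return {"action": "notify_clinician"}
    return {"action": "continue_monitoring"}

print(sepsis_agent(0.95)["action"])  # draft_order
print(sepsis_agent(0.60)["action"])  # notify_clinician
print(sepsis_agent(0.10)["action"])  # continue_monitoring
```

The governance question in the quote maps directly onto where those thresholds sit and who is allowed to act at each tier.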

Health systems are certainly preparing for a more technology-driven future, but with the rapid pace of AI development it’s hard to look too far down the road.

“If you have an AI strategic plan that’s five years out, you might as well throw it out,” said Dr. Pfeffer. “You have to be agile about what’s coming, be able to focus on the North Star for your health system, and aspire to provide excellent patient care, research and education. As long as we’re focused there, the rest of it will fall into place. When we start deviating from that is when we run into problems.”

The post The future-proof approach to AI for health systems appeared first on Becker’s Hospital Review | Healthcare News & Analysis.

