Embracing the Digital Revolution in Osteopathic Medicine
DHI-AOIA - Video
Video Transcription
Hello, everybody. I'm Dr. David O. Shumway, DO, and this is the recorded version of Embracing the Digital Revolution in Osteopathic Medicine: Leveraging Technology to Enhance Patient Care and Practice Efficiency. My co-presenter, Dr. Samir Sood, is here as well, and you'll hear from him in just a second. Let's get started. This is a slide to remind you that this presentation is CME-accredited, and the accreditation details are there on the screen. As I said, I'm Dr. David Shumway. I recently graduated from my internal medicine residency at Keesler Air Force Base, and I've since PCSed to Royal Air Force Lakenheath, where I'm stationed working in flight medicine. I'm very excited about this topic because we use artificial intelligence and large language models in clinic every day. I used it pretty much all throughout residency for as long as it was available, and we're working on using it in operational medicine as well. It's an awesome technology that makes everything way easier. We're very excited to share that and other technologies with you today. Like I said, Samir is here as well, and he's going to talk about wearable technologies and innovations in medical device wearables. Here's our disclosures slide. Samir has no disclosures. I have no relevant financial conflicts of interest to disclose because I work for the government. I do have to say the views expressed in this material are my own and do not reflect the official policy or position of the U.S. government, the Department of Defense, or the Department of the Air Force. Large language model AI was also used in the production of this presentation. However, all generated text was verified and edited by myself prior to publication. Here are the learning objectives for this session. The first one is to describe the newly launched AOIA Digital Health Innovation Community of Practice and how collective action solves pain points and promotes innovation for the broader osteopathic profession.
Secondly, we want to explain key digital health technologies and use cases most relevant to the full scope of osteopathic medicine. Finally, we want to discuss potential implementation challenges and opportunities in adopting these health technologies within an osteopathic practice. So here's an outline of our session today. First off, we're going to start with an overview of the AOIA Digital Health Innovation Community of Practice, of which this webinar is just a small part. Then we'll talk about opportunities and challenges in digital health technology in the osteopathic world today. Then we're going to go over the first use case, generative large language model AI in clinic, which I'm going to be talking about, followed by the second use case, wearable devices and remote monitoring, which Samir is going to be talking about, and finally, facilitating a discussion and key takeaways. The discussion occurred in the live session of this recording, but we'll talk about some of the main points that came up. We're very excited because the purpose of this webinar is not to talk about abstract digital technologies that you can't really get your hands on or that might be coming in the future. It's really to talk about two entry-level use cases that you can put to use today in the clinic and get your feet wet. Just to set the stage for our discussion, in my opinion, the most exciting time to practice osteopathic medicine is right now, and that's because new and emerging technological advances in medicine promise to revolutionize patient care, improving outcomes while making osteopathic practice more efficient and sustainable. So what exactly does that mean to you? A promise to change the way we take care of patients forever, ensuring better treatment outcomes, efficiency, and work-life balance for osteopathic physicians. And if you want to know how to get started, well, guess what? You've come to the right place, and we're happy to teach you.
So what is that place exactly, and what are we doing here? The American Osteopathic Information Association has launched a Digital Health Innovation, or DHI, Community of Practice, or COP, altogether the DHI COP, to integrate osteopathic principles into digital health advancement. The community aims to facilitate collaboration, drive innovation, and provide networking opportunities for stakeholders across the osteopathic continuum. Participation in the community offers educational opportunities, professional development, and a platform to inform digital health policy and advocacy efforts while promoting the use of technology in alignment with osteopathic principles. Through the DHI COP, stakeholders can advance osteopathic distinction in digital health initiatives, enhance knowledge sharing and collaboration, and drive engagement and innovation by participating in the community. Individuals can contribute to the shaping of digital health technologies, engage with thought leaders, and support the application of clinical informatics as a subspecialty within the osteopathic profession. Ultimately, and because I know that was a lot of buzzwords, the DHI COP serves as a vital platform for individuals, which is you, to connect, learn, and advocate for the integration of osteopathic principles in the future of medicine. We want to be the central hub for all things digital health and technology in the osteopathic world, and we hope you'll join up and be a part of it. To facilitate that effort, we have some other webinars planned for the remainder of the fiscal year, all of which you can register for, and we highly encourage you to because there are some great voices here.
The next one coming up is Modernizing Osteopathic Medicine: Advancing the Next Frontier of DOs, followed by Implementation of Digital Health Technologies: Ensuring Privacy, Security, Safety, and Usability, which offers some caveats to help you more effectively use the things that we're going to talk about here today and tackles some of those bigger issues. And then finally, the one I'm most interested in, a deeper dive into artificial intelligence in healthcare and its impact in osteopathic medicine. It's always good to talk about this in a deeper way and foster more interesting communication. But yeah, some real experts here, fellow members on the committee, and great, great physicians, great teachers. So we really hope you'll register for these and get involved because there's a lot to learn here. So let's talk about the reason that we're here. We want to know what the DHI COP can offer you. Most people, when they think about digital health technologies, are confused. They might be excited. They might have a little bit of trepidation. They might not know how to get started. We would like to offer you the networking, collaboration space, and source of education and CME to help you figure out how to get started in a good and effective way, rather than spinning your wheels and dragging your heels and all of that. We want to know what questions you have about digital health technology that we can answer for you and suggestions for future educational opportunities and live webinars, topics we might not have covered. And of course, you can reach us at our contact information, which we'll have in this presentation. And we definitely want to hear from you because this is how we develop the content that's going to help the osteopathic community, and hopefully you as well. With that, let's go into our first use case, large language models, or LLMs, which are a form of AI, and how you use them in clinic. Now, AI means artificial intelligence. And if that's all you know on this subject, don't worry.
I've got you. We're going to talk about it a lot. Artificial intelligence is the simulation of human intelligence in machines. It allows them to perform tasks that typically require human-like cognitive function. AI can learn, reason, problem-solve, perceive, understand, and produce language. The term generative AI, which you may have heard being thrown about, specifically refers to this last function. AI, or artificial intelligence, is highly integrated into modern life. AI takes the form of voice assistants like Siri, facial recognition, predictive text, driverless cars, automated telereceptionists, and even anthropomorphic paperclips like Microsoft's famous precursor to ChatGPT and Copilot, the Microsoft Word assistant Clippy, the little animated paperclip who used to come up when you were trying to use Word. We remember him fondly. There are a few terms we have to define that you might hear people use when they're talking about AI, so that you're not completely lost in the weeds. The first distinction, narrow versus general AI, divides the capabilities of different AI programs. Narrow AI is what we're used to. It's the current or historical version of AI, an AI that's designed to perform a specific task, something that's programmed to be really, really good at one thing, like a chess-playing program. This is contrasted with general AI, which is an emerging future technology and kind of what we're talking about today, which is more of a human-like intelligence that can learn, interpret, and apply knowledge to any domain. Large language models like ChatGPT, which you may have heard a lot about, are much closer to general AI than narrow AI. Machine learning is a term which describes how AI is trained and developed.
One form of machine learning is deep learning, which is machine learning that utilizes a neural network to analyze and learn from large amounts of data, often without explicit human understanding. This is called the black box, because it means that we don't always know what AI does, why it does it, or how it learns what it does. It just sort of happens. So the black box means you can't really peer into that process, and that is a phenomenon that sometimes occurs with AI. Another form of machine learning is reinforcement learning, which utilizes human influence to guide AI development by using rewards or penalties based on its performance. Some other very important terms to talk about. Natural language processing, or NLP, is the process of a computer reading, understanding, and interpreting human language. This involves translation and production of language as well. As with the term generative AI that we mentioned earlier, this is a function of natural language processing. Optical character recognition, or OCR, involves the detection and recognition of printed or handwritten characters from a scan. This converts photos, scanned images, and uneditable formats like handwriting into editable and searchable text that can be accessed by a word processor, and this uses the generative capabilities of AI as well in order to analyze and produce this text. Ambient dictation or transcription is an emerging application of large language model natural language processing in healthcare and legal settings. This takes the form of an AI scribe. It pairs voice recognition with natural language processing and generative AI functionality in order to listen to spoken word, record it, write it down, and then do something with the transcript. We'll talk about that a little bit on the next couple of slides. Prompt is the last term here, and a prompt is the way a user inputs queries and communicates with a large language model AI. So I've said large language model a lot. What are those?
Large language models are artificial intelligence algorithms, based on neural networks, designed to understand, generate, and manipulate human-like language. They're trained on vast amounts of data. Imagine every book ever written by humanity; these models have read it all, and that teaches them how to produce language. That's the size of data that we're talking about. The complexity and capabilities of a large language model are directly related to its size, and the size is based on the number of weights or training parameters and the amount of training data. Weights are integrally connected with nodes, which are basically the functional units of a neural network, much like neurons in a brain. Now, large language models are pretty exciting because they've been found to be indistinguishable from humans in many studies, and arguably are the first AI which can be said to have passed the legendary Turing test, which was long a benchmark for truly intelligent, human-level AI. So this is the world that we're living in now. How exactly does a large language model produce language? When you ask ChatGPT a question, how does it answer? Well, this is very, very complicated. In fact, there are entire books written about it. But a summation of how a large language model produces language is basically to think of the large language model producing each sentence that it writes a word at a time, and it produces each word based on a list of probabilities that that word might be used in sequence, compared to all the other times it's ever seen these words used before in the millions and millions of pages of human literature that it's read during the training process. Now, what's cool about large language models, and ChatGPT and those like it specifically, is that someone figured out along the way something about using the word that has the highest level of probability (see that list there: the highest would be "learn" at 4.5%, versus "predict" at 3.5%, "make" at 3.2%, and so on).
If you use the highest ranked word, often you get this very machine-like, unnatural, non-human sounding sentence. So the models have learned how to sort of vary the randomness with which they actually pick the word in order to more closely simulate what we as humans do when we're word finding. This is kind of a cool idea because I've listened to my four-year-old start the same sentence four or five times over trying to find the exact right word, and this is basically what ChatGPT is doing. Speaking of ChatGPT, there is a map of it over on the right if you wondered what it looked like. The image on the right is a heat map which shows the entire neural network, all the hundreds of billions of nodes that make up ChatGPT in one image. The T in ChatGPT stands for transformer, which is a piece of architecture that's used on several large language models and really makes them start to approach the general side of AI as opposed to the narrow side of AI. What the transformer does is it allows the large language model to really take all of the things it knows and only focus on the stuff that's relevant to answer your question, which is largely based on the role that you need it to perform. So if you needed ChatGPT, for instance, to act as a doctor, it would forget all the things it knows about race car driving and sailing and gardening and cooking and all of that in order to focus more closely on the medical textbooks and anatomy and everything that it knows about medicine. Much like when we're in the exam room, we have this entire makeup of all the knowledge we've accumulated across our lives, but we hone in on the stuff that's really relevant. So that's the T, which, like I said, stands for transformer. And the part of the heat map that's all lit up shows that transformer in action working on specific parts of the neural network to fit the role that the user requires. So that's kind of a cool overview of how large language models and ChatGPT actually work. 
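The next-word selection process described above can be sketched in a few lines of code. This is a toy illustration, not how any real model is implemented: the word list and probabilities are the made-up example from the slide, and the "temperature" knob stands in for the randomness-varying behavior the talk describes.

```python
import math
import random

# Toy next-word probabilities, echoing the slide's example
# ("learn" 4.5%, "predict" 3.5%, "make" 3.2%, ...). Illustrative only.
next_word_probs = {"learn": 0.045, "predict": 0.035, "make": 0.032, "do": 0.028}

def sample_next_word(probs, temperature=1.0):
    """Pick the next word. temperature=0 always takes the top-ranked word
    (the 'machine-like' behavior); higher temperature adds the human-like
    randomness described in the talk."""
    if temperature == 0:
        return max(probs, key=probs.get)
    # Rescale log-probabilities by temperature, then sample proportionally
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # fallback for floating-point edge cases

print(sample_next_word(next_word_probs, temperature=0))  # always "learn"
print(sample_next_word(next_word_probs, temperature=1.0))  # varies run to run
```

With temperature at zero you get the deterministic, "unnatural" output the speaker describes; turning it up reproduces the word-finding variability he compares to his four-year-old.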
But how are you actually going to use large language models in clinic? When we did the live webinar, many people wondered this exact thing. Fortunately, even though large language model AI is very complicated and has a lot of applications, most likely the way that you are going to encounter a large language model in clinic is in one of these two uses: as a scribe or documentation tool, or as a clinical decision support search-type function. Some of these you'll seek out on your own because there are many products offering these functions. Some of them will just get handed to you. They might be embedded in your EMR, or one day you might log on to UpToDate and find out it has a large language model search tool now, that kind of thing. At some point, whether you like it or not, you're probably going to be exposed to one of these things, and it's probably going to be in one of these two uses. This is a good thing, because both of these applications are things in medicine that tend to take up a lot of cognitive power and time, and our lives would be a lot better if we could automate more of them to focus on other things that are important to us and our patients. For me, my biggest headache in medicine has always been documentation, and everyone I've ever talked to in medicine has said that if they could cut down the amount of time they spend charting, their lives would be much better. Those of us who are fortunate enough to have access to human scribes generally report better quality of life, better quality of practice, understandably better note completion statistics, and less pajama time, or time off of work spent charting. Generative AI can act as a medical scribe because it has the ability to both produce and analyze text through natural language processing.
The ability to listen to spoken text and convert it to written text is the function we were talking about earlier: ambient transcription or ambient dictation. What's cool about large language models is that they are actually trainable to your note-writing style. With the large language models that offer file upload, you can actually upload prior medical notes and writing samples as a reference to help the AI learn exactly what your note-writing style is, and how to best imitate you when it's acting as your scribe. Also, large language models that offer memory can be backwards trained, which means that if you get a response you don't like, you can tell the LLM what it did wrong, and it will try to fix it. These medical scribes are capable of scaling based on the role setting and user requirements. You can have it just be a simple transcriptionist, you can have it act more like a medical student or a resident, with a note that you co-sign, or you could even tell it it's a fully board-certified provider, and it's going to come up with a treatment plan and all of that as well, and you're just going to verify it did the right thing. It's really whatever you need it to do. There's a bit of a range here. Like we were saying, when paired with an AI transcription tool, or the capability to recognize verbal language, the LLM can become a fully autonomous medical scribe. There's an example of a real-time large language model writing a progress note in a SOAP note format, based on a verbal transcription that it heard, there on the side. But of course, this can be hard to visualize. Every time I talk to physician groups about this kind of thing, the number one feedback I get is: we want to see this in action. If you've never used one of these before, just seeing it work the first time can be honestly pretty life-changing. I made this video here to demonstrate how this whole process works, and we're going to watch it real quick.
So this is one of our co-residents, Dr. Beckler, who's going to be our standardized patient for this demo. Dr. Beckler is going to pretend to be a patient. We're going to do a short interview and a live screen cap of the ambient dictation technology working. I felt like, to really complete this presentation, you deserved a live demo of this, because this is one of the things that garners the most interest. So I'm going to call you Steve, Dr. Beckler, which is not his name, but he's going to be Steve for the purposes of this demo. So Steve, what brings you into clinic today? I've just been having this off and on chest pain that's been bothering me lately. When did this start, Steve? It's been going on for two, three weeks. Okay. Describe the pain for me. You can see it's capturing what Steve is saying in real time there. Sometimes it's kind of burning, sometimes it feels kind of like pressure. It does a pretty good job. Okay. Where is it located exactly? Kind of in the upper part of my chest. All right. Sort of in the middle or is it off to the sides at all? More in the middle. Is it more towards the upper part of your chest, which you said, or is it more towards the lower part of your chest? Or is it right in the middle? Middle, sometimes in the upper part of my chest. Okay. Have you done anything to make this pain go away when it does show up? Have you taken any medicines? No, I haven't taken anything for it. Have you noticed anything makes it worse? Sometimes like running after eating a meal, it gets worse. Sometimes like if I lie down after eating, it gets worse. Sometimes it just kind of comes on. This is supposed to be a pretty typical clinic encounter, so hopefully you can see just how well this works. Mostly just after eating. Okay. Do you ever get short of breath with this as well? No. No, not really.
Can you go up a flight of stairs without getting short of breath? Yeah. Very good. All right. When this pain comes on, how long does it last? I would say anywhere between five minutes to half an hour. All right. Do you have any family history of heart disease, heart attacks, anything like that? Just a grandfather who has a little bit of heart disease. I think he got a stent or something. I don't really remember. Let me ask you about your personal life. Do you use any tobacco or alcohol? No tobacco, and then just alcohol socially. Okay. Any other drugs or illicit drugs? Mm-mm. Great. How about medications? What medications are you on? So I just take some allergy meds as needed, and then ibuprofen for an occasional headache. Okay. So nothing chronic? I'm basically just working through a blank template that we have for our clinic notes, like verbatim, in order to make this, so that it has all the components it needs for the note. Okay. But nothing consistent that you take? Okay. What about your medical history? What other conditions have you been diagnosed with in the past? I had my wisdom teeth taken out when I was a kid. Otherwise, no other surgeries. Nothing really medically. Just allergies kind of here and there, seasonally. Okay. And no surgeries, like you said. All right. Well, thank you, Steve. In this demo, I would do a physical exam of Steve and also talk about what Steve's vitals are. So his vitals are pretty normal. He's 120 over 80. His heart rate is 76. His temperature is normal. It's very important to do this, because at this point, the scribe can't actually see you doing the physical exam, so you do have to verbalize this out loud, much as if you were dictating a note. It can be a little bit of a learning curve for people. And the rest of his exam is relatively benign. Steve has some labs as well. He has a CBC, which is relatively normal, and a CMP, which is also relatively normal.
Steve had a chest X-ray when he went to the ER for this chest pain recently, and that was also without any acute cardiopulmonary findings. So now that Steve is in my clinic, I'm going to recommend that, given he's a very young person with no previous cardiac testing, he has the option for reassurance or possibly getting a treadmill stress test. So, Steve, what do you think about getting a treadmill stress test? I'd be willing to give it a shot. You're relatively low-risk from your history, but you do have some concerning symptoms. So, at this point, I'm kind of dictating the assessment and plan as well, and you can create this the way you normally do or, depending on your large language model role, you can actually have it also produce one and verify it's what you want to do. Steve's worried about this, and it's been happening for a little while. It's probably not a bad idea just to get an EKG and a treadmill stress test. I should also add that Steve's baseline EKG has no EKG changes which would preclude him from doing a Duke treadmill stress test. Stellar clinical performance. Thank you, Steve. Do you have any questions on any of that? Nope, look forward to running on your treadmill. Perfect. So, once the transcript is completed by Nabla Copilot, it will generate a note in several different types of templates. Nabla is a very good scribe. Unlike ChatGPT, which uses the Whisper engine and processes transcripts in one entire block for the entire transcript, Nabla will process transcripts one line at a time, moving on down. Effectively, what this means is that, in my experience, I have never lost a recording due to data overload with Nabla, whereas sometimes with ChatGPT, if your interview goes over 15 minutes, the transcript can get unstable and there is a potential to lose everything. But where Nabla falls short is it really doesn't match up to the clinical reasoning and clinical decision support abilities of other LLMs, like ChatGPT, for instance.
Additionally, it's not formatted as a chatbot, so it's very difficult to directly converse with Nabla Copilot, as opposed to ChatGPT, where you can talk to the model, make changes in real time, give the model feedback, and use prompt engineering to improve the model and get exactly the output that you want. ChatGPT is also much better at writing in a narrative voice and at intelligently reprocessing the note to make specific changes, like billing codes, et cetera. In short, when you use a chatbot app like ChatGPT, the experience you have is really more like talking with a partner or colleague. This is why, for reliability's sake, I will usually pair a virtual scribe app like Nabla Copilot with a generative general AI app like ChatGPT in order to actually craft my note, sort of going from the typewriter to the human scribe behind the typewriter in terms of its intelligence level. Now, a key caveat is that ChatGPT and the general LLM AIs, like Claude, et cetera, are generally not HIPAA-compliant, so if you do use them, you have to be aware of that and make sure you're not putting any PII in them, which can be a huge limitation. No matter which one of these you use, unless you own your own practice and really know what you're doing, I highly, highly recommend that you run this through your IT department or your administration, because you could potentially get yourself in some trouble otherwise. And here are some examples of products that are offering these types of technologies. Some of these are advertised as being HIPAA-compliant, some aren't, so you just need to make sure you know what you're getting yourself into. And generally, like I said, unless you own your own practice, you're probably going to be going through some sort of admin or IT department, which you really want to make sure you do, to ensure that you don't get yourself in trouble. That's our disclaimer. Now I'm just going to talk about prompts real briefly.
Prompts are basically the way you communicate with a large language model. Prompt engineering is an art that is rapidly becoming a science. A lot of people get frustrated trying to use large language models initially, because they ask them to do too much in the wrong way and they get a bad result. This is just like if you Googled something that was confusing or too vague, you would get a bad Google search. It's always better when you ask something that's more particular and concise. We talked about the transformer earlier and its role in role setting. A prompt framework is often put forward in a role-task format, sort of a paradigm, where you say: large language model, I need you to act as a medical scribe and transcribe this conversation in a SOAP note format. There are some other prompt frameworks on the screen as well, but most of them are going to be derivatives of that simple formula. Tone modifiers are an important way to role set. You can basically tell the LLM how you want it to sound when it writes. We talked a little bit about memory. Many LLMs are capable of backwards training, where if they give you a response, you can actually prompt them again to say that wasn't a good response and here's why, and they'll learn and often not repeat that mistake in the future. And over time, they will become better suited to exactly the role you need them to play. Sometimes this memory is explicit, like in ChatGPT, for instance, where you have to actually turn it on, and which also features the ability to give global instructions to better train it, or it can be implicit and just sort of happen without you actually knowing that's what it's doing. Moving on to the next use case: clinical decision support. One of the best clinical decision support large language models out there right now is called OpenEvidence. It was developed by Harvard researchers and backed by the Mayo Clinic and Elsevier.
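The role-task prompt pattern described above is simple enough to sketch in code. This is a hypothetical helper, not any product's API: the function name, parameters, and wording are all illustrative, and in practice you would paste the assembled string into whatever chat interface you use.

```python
def build_prompt(role, task, tone=None, output_format=None):
    """Compose a prompt following the role-task paradigm from the talk.
    All field names here are illustrative, not a formal standard."""
    parts = [f"Act as {role}.", f"Your task: {task}"]
    if tone:
        # Tone modifier: tells the LLM how it should sound when it writes
        parts.append(f"Write in a {tone} tone.")
    if output_format:
        parts.append(f"Format the output as {output_format}.")
    return " ".join(parts)

prompt = build_prompt(
    role="a medical scribe",
    task="transcribe this conversation into a progress note.",
    tone="concise, professional",
    output_format="a SOAP note",
)
print(prompt)
```

The point is less the code than the habit: state the role first, then one particular, concise task, then any tone or format modifiers, rather than one long vague request.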
The best thing about this is it's nonprofit and free for use by physicians for now; you just need an NPI to register. The thing I really like about OpenEvidence is that whatever response it gives you is based only on peer-reviewed papers. They use a proprietary algorithm to incorporate newly published, peer-reviewed, open-access research into their data bank, and they grade those sources on how reliable they are as sort of a backwards process, allowing it to stay up to date on the latest publications at a breakneck pace. And the really, really good thing that this does is that when it makes a response, it will source the peer-reviewed papers that inform that response in a bibliographic format with links, making it very easy for you to actually self-verify what it put out. It's very highly reliable, with very few hallucinations, the phenomenon where an LLM makes up a source or information to satisfy your query. It's all based on real peer-reviewed papers, a great, great product. And like we said, free. So I encourage you to try this one out, barring any limitations at your institution, of course. There are other clinical decision support products that are available. I highly expect we'll see one on UpToDate soon. I'm pretty surprised they don't have one yet, but if you think about an LLM being like an advanced search function, this is something we're pretty much going to see everywhere that we use a reference for our clinical practice. So, practical AI precautions: as much as you might want to jump into using these things right away, and we encourage you in the right circumstances, unless you own your own practice and really know what you're doing, make sure you go through your administration or your legal office or your IT department to ensure you can do what you want to do with these things, because there are a lot of rules about them and it's important to have top cover. So like I said, that was our disclaimer.
Now, when you are using them, always trust but verify. Hallucinations, which we just talked about, are way less common than when this technology first came out, but they still exist. Make sure you edit and cosign every generated document the same way that you're supposed to do when you're dictating with Dragon. It looks silly if there are mistakes in there that you didn't catch and then you put your name on it. If you're using AI, make sure you communicate with patients and that they're on board. Try to seek consent whenever possible. And finally, if you have the ability to incorporate or keep records of the original transcripts, that can be helpful in the future if you need to refer back to them. Now we'll jump into the use case of wearable devices, which have been around since about 2010, when they first came out during the quantified self movement, and we'll discuss how these are now getting integrated heavily into healthcare. There were 440 million health-focused wearable devices shipped in 2024. This is heavily being led by consumers, with a market size of 82 billion dollars. I think that's a really important takeaway, noting that a lot of these devices will be brought in to us by our patients, not necessarily prescribed by us to them, even though we should actually consider that. They have been projected to reduce healthcare costs by up to 16% by 2027, specifically from a prevention and monitoring standpoint. Those savings lead to over 200 billion dollars in cost savings by 2037, with these benefits of remote patient monitoring. Remote patient monitoring, you can think about it this way: it leads to increased access, increased compliance, and potentially increased med adherence by really understanding and visually seeing the results of doing things like exercising or taking medications.
There's a variety of applications: typical things like chronic disease management, chronic hypertension and diabetes, managed at home from a remote standpoint, especially in rural areas where a lot of osteopathic physicians practice. There's remote monitoring in general, real-time monitoring of SpO2 with a pulse ox for things like hospital-at-home recovery from pneumonia and COPD treatments. My favorite area, and the one where I practice this myself, is personalized preventive healthcare, specifically the longevity and healthspan movement we're seeing a lot in healthcare these days. Patients who are pre-diabetic, or who don't have diabetes at all, can wear something like a continuous glucose monitor to understand what foods spike their glucose and how stress might affect their sugar overnight, or monitor their blood pressure and see what it does on a day when they're stressed or dehydrated. These are all things consumers and our patients can begin to understand about themselves well before disease ever comes into the picture. Advancements in AI, like Dr. Shumway just talked about, help process all the data that comes from these devices and surface insights. Bluetooth and smartphone 5G network technology give us access to these technologies in more rural areas, with connectivity from the devices to our phones and to an internet connection so data can be uploaded to us as physicians, or into our EHRs through integrations. Of course, if we're taking insurance, we have to look at the indications for some of these devices, which are limited if you're looking for insurance coverage; however, for things like chronic disease management, hospital at home, and RPM, remote patient monitoring, there are a number of CPT codes available.
Largely, there's an initiation, education, and onboarding portion of the coding and billing, followed by the monitoring and intervention portion. With CPT 99454, for example, 16 days of data are collected and analyzed; that's the asynchronous analysis portion. There's also a synchronous portion, where you actually connect via telemedicine with your patient to talk about what intervention or dose adjustment you're making to their blood pressure medications based on, say, the last 10 values collected over the last 10 days from an ambulatory blood pressure cuff. There are other disease-specific codes as well across blood pressure and blood glucose for certain short periods of time, but the big takeaway is that there's an educational component, an asynchronous data analysis component, and then an actual synchronous intervention component working with the patient. So, jumping into some of these wearable devices: we have the continuous glucose monitors, with a bunch of different brands at this point; the Apple Watch and other heart rate monitoring devices, with Kardia another brand there that has a lot of data behind it; ambulatory blood pressure cuffs from Omron and iHealth, which I believe have been around for well over a decade at this point (I remember getting my first iHealth blood pressure cuff in probably 2015 or 2016); and portable SpO2 pulse ox monitors as well. Starting with the continuous glucose monitor, or CGM: these are growing in popularity, especially for those with type 2 diabetes, but even more so further upstream with those who have pre-diabetes and, my preference, those who just want to learn more about how their blood sugar responds to certain environmental and diet factors.
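To make the asynchronous billing piece described above concrete, here is a minimal sketch of checking the CPT 99454 data threshold, at least 16 distinct days of device-transmitted data within a 30-day period. The function name and data shapes are hypothetical, for illustration only; consult current CPT guidance before building anything real.

```python
from datetime import date, timedelta

def qualifies_for_99454(transmission_dates, period_start, period_days=30):
    """Return True if at least 16 distinct days within the 30-day
    period have device-transmitted data (the CPT 99454 threshold)."""
    period_end = period_start + timedelta(days=period_days)
    days_with_data = {d for d in transmission_dates
                      if period_start <= d < period_end}
    return len(days_with_data) >= 16

# Example: a cuff that transmitted on 18 distinct days in June
readings = [date(2024, 6, day) for day in range(1, 19)]
print(qualifies_for_99454(readings, date(2024, 6, 1)))  # True
```

The set comprehension deduplicates multiple readings on the same day, since the threshold counts days with data, not individual transmissions.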
For CGMs in patients with type 2 diabetes, multiple RCTs have shown improved glucose control compared with patients doing self-monitoring alone, the finger pricks a couple of times a day with dose adjustment off of that. Some of these CGMs auto-upload to phones; others you have to scan with the back of your phone. And one of the biggest reasons insurance covers these for people with type 2 diabetes on insulin is that studies have shown a reduction in hypoglycemia, glucose less than 70, for these patients. A lot of this can be prescribed by a physician and covered by insurance, likely with a prior authorization, unfortunately, or sold direct to consumer online through brands like Stelo by Dexcom; Tastermonial is another site I've used in the past. Prices range roughly $100 to $200 per month. The sensors last about 14 days, worn on the back of one of your arms, sampling interstitial fluid rather than blood, so there's some discrepancy between interstitial glucose and actual blood glucose. Some of these do require calibration, but I think about it this way: it's not necessarily the absolute value that matters, especially for non-diabetics; it's the change in glucose over time, assessing the spikes and how fast they come down, and then correlating and recording events such as "I was really stressed last night," or "I only slept three hours," or "I drank a little bit," which helps you, looking back later, understand why your sugar might have spiked and based on what factors. For portable cardiac monitoring, there are a ton of consumer devices out there, things like the Fitbit and the Garmin watch, but only a few are starting to have clinical evidence behind them.
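The "change in glucose over time, not the absolute value" idea above can be sketched in a few lines. This is an illustrative toy, not a clinical algorithm: the threshold, window, and function name are assumptions chosen for the example.

```python
def flag_spikes(readings, rise_threshold=30, window=3):
    """Flag points where glucose rose more than `rise_threshold` mg/dL
    over the previous `window` samples -- looking at the rise over
    time rather than the absolute value. Returns (index, rise) pairs."""
    spikes = []
    for i in range(window, len(readings)):
        rise = readings[i] - readings[i - window]
        if rise > rise_threshold:
            spikes.append((i, rise))
    return spikes

# Periodic samples around a meal: a rise from the mid-90s to 160 mg/dL
trace = [95, 97, 96, 110, 135, 160, 150, 130]
print(flag_spikes(trace))  # [(4, 38), (5, 64), (6, 40)]
```

A patient journaling "stressed" or "slept three hours" against flagged indices is exactly the kind of event correlation described above.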
One of those is the Apple Watch; another is the Kardia. The Apple Watch provides a single-lead rhythm strip ECG, and the Kardia, I believe, offers a six-lead strip. The Apple Watch has been shown to be fairly effective at detecting arrhythmia, specifically atrial fibrillation: the Apple Heart Study showed that among those who received a notification, the positive predictive value for a clinically confirmed afib diagnosis was 84%. The REACT-AF trial is ongoing right now, a really cool trial showing the power of wearable devices in general: real-time monitoring of heart rate and detection of an arrhythmia leading to real-time intervention from a medication and anticoagulation standpoint. So we're really trying to see whether we can do real-time treatments based on real-time continuous data. There are implications for other disease states as well; however, these devices have low sensitivity and specificity in general and are not really useful for detecting other adverse cardiac events, unfortunately, so we're sticking with arrhythmia and atrial fibrillation for now, though I expect these to get a lot more sophisticated as time goes on. Ambulatory blood pressure monitoring has been around for a while, but again, with the rise of consumer-grade devices, patients are buying them themselves; they're much easier to get these days and very user friendly. Studies are showing that among people with blood pressure cuffs at home and live titration based on real-time data, 25% have had better control at 6 and 12 months. Post-stroke, there's been improved blood pressure control as well with real-time monitoring. It's less expensive and possibly more accessible than coming in for a two-week blood pressure check in the office, with less white coat syndrome when checking your blood pressure at home.
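To make the 84% positive predictive value figure from the Apple Heart Study concrete, PPV is simply the fraction of positive alerts that turn out to be true positives. The counts below are illustrative round numbers, not the study's raw data.

```python
def ppv(true_positives, false_positives):
    """Positive predictive value: of everyone flagged positive,
    the fraction who truly have the condition."""
    return true_positives / (true_positives + false_positives)

# Illustrative counts: 84 confirmed afib cases out of every 100
# irregular-rhythm notifications corresponds to a PPV of 0.84
print(ppv(84, 16))  # 0.84
```

Note that PPV says nothing about missed cases; that is why the low sensitivity mentioned above still matters even when the PPV looks strong.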
There is some cost associated with getting a device to everyone who needs one: there's the cuff, there's the app, and they usually have to have a smartphone. There's also the aspect of compliance: you're not seeing them in the office and making sure they're sitting quietly with both feet on the ground before checking their blood pressure, so it's important that the education on how to actually check your blood pressure at home is properly delivered. Lastly, I want to talk about a different type of wearable that's growing ever more popular with patients. Here are two examples, with no affiliation to either brand: the Oura Ring and the Whoop wristband. These track lifestyle metrics. We don't really have clinical correlations with them just yet, or disease states, CPT codes, or diagnoses to make from them, but they measure things like sleep quality and heart rate variability and, from those, extrapolate stress, strain, and recovery. Especially for athletes training hard six days a week, say for a marathon, this really helps them understand: based on your sleep quality and your heart rate variability, or lack thereof, your body is in a high level of stress and low recovery today, so maybe train a little less. I find these fascinating, and I think the clinical correlations will grow as more people wear these devices and we get more data correlated to outcomes. But as of now, our patients are wearing them whether we like it or not, and the question I have for us as an osteopathic community is: do we want to own and support this really upstream, preventative, personalized, holistic type of care? It's not comfortable to us, not native to us; we didn't learn these things in medical school, per se. But should we start teaching it?
Should we start learning and trying these devices ourselves and, in fact, recommending them to our patients, knowing that they give us a better snapshot of who our patients are and how they sleep, which we know affects everything downstream from a chronic disease standpoint? I personally love them, and I encourage my patients to bring in as much data as possible and learn about themselves as much as possible. It only enriches the conversation and ultimately gets to the point of why we practice medicine in the first place, which is to actually resolve or cure a lot of these chronic diseases, not just maintain them. So, key takeaways regarding AI tools: it's happening, it's part of our present and our future, and we can look at it as a threat, or we can see that it enables us to have more hands-on time with our patients and more face-to-face interactions while ambient dictation AI listens in the background and writes our SOAP notes for us. It really allows for more personalized care in clinic, and I think it's almost bringing the humanity back into medicine despite being highly technological. Obviously, we can't fully rely on AI, whether it's clinical decision support or even SOAP note transcription. Right now we, as physicians and providers, still need to be the, quote, human in the loop, the trusted, board-certified source. I don't know when, if ever, AI will be able to make fully autonomous decisions without our oversight, but that's nowhere in the foreseeable future. Remote monitoring offers a chance to get really personalized, really preventative, and upstream, providing access to those who cannot come to the clinic every day or as often as you need to see them.
It allows them to stay in the comfort of their own home, and it may increase compliance, engagement, education, and insight into one's own health. So as we gain these increased touch points, new sources of data, and near-continuous monitoring, what are we as osteopathic physicians open to? How do we want to promote health and holistic prevention and work with our patients so that they can, in a lot of ways, treat themselves through lifestyle behaviors they can control? To me, that's what osteopathic medicine is about, and I look forward to working with our field, and challenging our field, to step up here and support that future. As we wrap up this discussion, I want you to think about a few things: what you saw today that you found interesting or might consider using in your practice; what other devices you have questions about that we might be able to answer but have not covered during this webinar; and any suggestions you have for future educational opportunities and live webinars. And, of course, we wanted to conclude this webinar by saying a few things about reliability in digital health products, which we've been talking about, and the DiMe Seal, which we have not mentioned yet. The DiMe Seal is a symbol of quality and trust awarded to digital health software products that demonstrate performance against a comprehensive framework of standards and best practices covering evidence, privacy, and security, with equity woven throughout. The AOIA is a key partner in producing the DiMe Seal, and it's a call to action for the entire digital health community. By working together, we can raise the bar for quality and build a future where digital health tools empower much better outcomes for everyone. This is just starting to roll out this year, and like I said, the AOIA is a key partner in it.
So, consider searching for the DiMe Seal when evaluating a potential digital health product, like a large language model or clinical decision support product; encourage the products you are currently using to apply for certification, and encourage your institutions to do so as well. Information about it is at dimesociety.org. Thanks a lot for watching this webinar today. If you want to get in touch with us, our contact info is there; we're always happy to talk with you. And for our wonderful AOIA digital health staff who run this whole show, you can get in touch with them at their contact info there as well. Great to talk with you all today. Hope to see you next time.
Video Summary
Dr. David O. Shumway, alongside Dr. Samir Sood, presents a webinar on the integration of digital technology in osteopathic medicine to enhance patient care and practice efficiency. The session introduces the American Osteopathic Information Association's Digital Health Innovation Community of Practice (DHI COP), emphasizing its role in fostering innovation and collaboration among osteopathic professionals. Participants learn about two primary digital health technologies: large language model (LLM) AI and wearable devices. LLMs, such as ChatGPT, are described as useful for medical documentation and clinical decision support, with the potential to streamline clinical tasks and improve practice productivity. Dr. Sood discusses the increasing role and benefits of wearable devices in healthcare, including continuous glucose monitors, heart rate monitors, and blood pressure cuffs, which aid in chronic disease management and proactive health monitoring. The presentation highlights the importance of adopting digital health tools while ensuring compliance with privacy standards. Participants are encouraged to engage with the DHI COP for further education and collaboration opportunities, and to consider the DiMe Seal as a benchmark for evaluating the quality and reliability of digital health products.
Keywords
digital technology
osteopathic medicine
patient care
practice efficiency
digital health innovation
large language model
wearable devices
chronic disease management
privacy standards