Navigating the AI Landscape in Emergency Medicine: From Hype to Practical Application and the Ethics of Implementation
Video Transcription
Hello everyone, and welcome to my talk. Super excited to have you all here. Today we'll be talking about navigating the AI landscape in emergency medicine: from hype to practical application, and the ethics of implementation. Now, I just want to point out that even though it says emergency medicine, and I'm an emergency medicine physician, this is broadly applicable. A lot of the tools, applications, and ethics we'll be talking about apply to any specialty and to medicine in general. I'm Awais Dorani. I have no relevant financial relationships or disclosures to state.

The learning objectives for this talk are to differentiate between the practical AI tools currently enhancing emergency medicine and the speculative, overhyped technologies still on the horizon; to discuss the ethics, pitfalls, and tools that may be ethically questionable for clinical use in the emergency department and beyond; and to identify AI tools relevant to emergency physicians and master their practical applications in clinical workflows.

A bit about me: what do I know about AI, and why am I speaking about this topic? In general, I am someone who is very curious, and I'm a tech nerd, so I'm naturally curious about a lot of different technologies. I've given talks about blockchain and social media before at DO Day, and with everything in the news and the AI hype, I naturally had to give a talk about that. I sit on the AI committee at my hospital system, a committee of about 12 people at a large hospital system in urban Houston. Lots of companies pitch various AI technologies to the hospital system; they come to our committee, and we decide whether it's something we may want to pilot, trial, or say no to. It's a really exciting time, with so many new technologies and innovations happening. But at the same time, it's confusing. We don't know which of these tools will actually do what they say they're going to do, how useful they'll be to the hospital, and, more importantly, how useful they'll be to the patient. So I get to see a lot of unique innovations, technologies, and pitches through this committee, and it's a lot of fun. I also host a technology and healthcare podcast, where I chat with folks at companies like Verizon or IBM about what they're doing in AI and healthcare, through the lens of a clinician who is curious about these technologies. I'm curious about all types of stuff, and I learn a lot through that. So my simple curiosity has led me down this path, and hopefully I can share some things that are of use to you all today.

Okay, so why should I care? That's probably the most common sentiment I get when it comes to AI. A lot of the time there's negative news hype or media attention around topics like AI, and it can seem like medicine is a sacred place where we shouldn't be focusing on the hype, and should rather be focusing on the medicine. We hear about things like deepfakes and identity theft. There's been a poor public perception of meme coins and blockchain, and the same with social media. A lot of people tend to put AI into those same categories and say, hey, this isn't something I need to pay attention to, focus on, or learn about. So while we're talking about this, I'd love to learn in the chat who has used a general AI tool before, and also an AI tool in the world of medicine.
If this were an in-person talk, I would say, hey, raise your hand if you've used a broader AI tool. Usually about 25 to 30 percent of the folks in the room raise their hand, and I challenge them: everyone should be raising their hand. That's because in the last few years, if you've Googled anything, Google has usually used its AI to give you an AI-generated response at the top. If you've used Siri on your phone, that's AI. There are a lot of AI tools out there that we may be using on a day-to-day basis without even knowing it. In the medical world there's obviously a little better delineation of what is an AI tool and what is not, and I'd be curious to learn how many of you have used those. But I hope that by the end of this talk you believe AI is something worth your time, that you should learn about it, and that it's not the boogeyman out to take our jobs. We've been talking about radiologists being replaced by AI for as long as I can remember, the last 10 years or so. And as far as I know, all of them are still working because they want to work, not because they were forced out by AI.

I think a good place to start is looking at other industries that use AI: industries that are high stakes, where safety matters, where margins matter, and that wouldn't be using AI if it didn't make things better for them. Then we can transition to healthcare. So we'll start with aviation, an industry that has really embraced AI. This example is American Airlines, which uses something called smart gating technology at a variety of its hubs. Whenever a plane lands, this technology auto-assigns it a gate. It takes into account, for example, the number of people who may need to make a close connection and which gate would be ideal for them. It accounts for expected flight delays, so if a gate has a flight that will probably leave late, it will consider assigning a different gate. It's basically streamlining that process. Since implementing it, they've saved about 17 hours a day of planes waiting on the taxiways, saved about 1.4 million gallons of fuel a year on average, and reduced missed connections. So it's an example of an AI tool that is useful, that has been implemented, and that is producing positive results.

Another example is autonomous driving. Some major cities have this: Austin, for example, Phoenix, and I believe San Francisco. It's a company called Waymo, which grew out of Google's self-driving car project, and it uses a combination of cameras, radar, and lidar to autonomously drive you around town. I've used it in Austin, where it's on a trial basis; I have a family member who works at Google with preview access, and I used it through that. It's crazy. You call it up, a car comes, you sit down, you fasten your seatbelt, and it starts driving. It's the craziest experience ever, and I felt safe the entire time. They use AI and a technology called VectorNet, and it's not just cameras and navigation: it predicts things. For example, in this picture, it's predicting that this train is going to come and pass at a certain point. And the cool thing is, it acts like a human.
So when that train passes, the car will slowly start moving even before the gates open. It's using these technologies to really predict things with AI. In my experience it was really cool, and I think most people have positive experiences. And if you look at the numbers, they're great: 84 percent fewer airbag-deployment crashes, and fewer police-reported crashes. This is something we're likely going to see more and more of. If you're curious, this is completely different from Tesla's systems, which are mostly camera-dependent; this uses a variety of radars and lidars, as I mentioned, is much more advanced than that system, and is still very limited in terms of the cities it's released in.

The final example is Amazon. AI, as we'll get into in a little bit, is not brand new; it was first conceived of in the 50s and has been around for a while now, though it has obviously developed a whole bunch. The reason Amazon became the behemoth it is, is that in the early 2000s it started using early algorithms, precursors on the development path to today's AI, to suggest products to customers. That's how, in the early days when no one else was doing it, they captured a huge share of the market and caught up to retailers like Walmart. They've obviously built on that. Currently, many of their warehouses have autonomous robots that use AI to figure out which packages to get, how not to crash into each other, and which packer to bring those packages to. It's all programmed to ensure you get that one-day or same-day shipping; that's why it's possible. The delivery drivers also use AI-planned route maps to make sure they're not delayed: if there's traffic congestion or bad weather, the AI algorithm takes that into account, and that's how they're able to live up to that promise. They're also stocking their warehouses this way. They have many, many large warehouses across the country, and they can predict demand for certain products using AI and make sure those products are near where the demand is likely to come from. So that's another example of a very advanced industry; they're obviously in it to make money, so follow the money and you'll find out whether a tool is really useful.

And finally, our patients are using this technology. Any time patients are using something, we should, at a minimum, know about it and know how to use it, but also know its pitfalls, so we can guide our patients. You probably saw this headline a couple of years ago. A young boy had chronic back pain, was unable to sit, and was having many issues. Over the course of a couple of years, he went to 17 different doctors and got MRIs, various imaging studies, blood tests, all kinds of things, and they couldn't figure out what was going on. The mom was obviously very, very frustrated, so she plugged everything into ChatGPT, and ChatGPT said, hey, your kid may have tethered cord syndrome. She went on a Facebook support group for the syndrome, and once again everyone confirmed: yeah, that sounds like what your kid has. She went back to one of the specialists and showed them the MRI again, and they said, yeah, this was missed.
That actually is what your child has. He got a procedure for it and was doing better afterwards. So when our patients see stories like this, they start to believe these technologies can help them too, and they're going to use them. It's our duty to know whether they're harmful or not, and to guide our patients on their use.

And then, finally, Congress. This is a bill that was actually introduced this legislative session, and as you can see, it proposes allowing the prescription of drugs by AI or machine learning technologies. I don't know about you, but I am not for that. I think that is scary and terrifying, and I think we are nowhere near that given where AI is today. But say this bill advances and we need to advocate for or against it, depending on where you stand: we need to go into those conversations with at least a baseline level of knowledge about these technologies and their pitfalls, and speak with legislators and policymakers as educated individuals on these topics. Because the folks on the other side, who may advocate against our positions, are going to come in prepared with strong arguments and cases. We need to be prepared as well.

And then one final example, a tool I'll talk about in a little bit: AI ambient scribes. I'm sure many of you have used them. It's something I use in the ER; I started about eight or nine months ago, and it's really convenient and easy to use. I remember this patient, a fifty-something-year-old female with what looked like a viral syndrome: sore throat, body aches; it looked like she might have COVID or flu, something along those lines. I was in the room, I had asked for consent to use my AI scribe, and the app was running. I wanted to look in the back of her throat for this viral syndrome, and of course, like in most ERs, there was no tongue depressor in that room. So I said, hey, I'll be right back with a tongue depressor. My phone was still on the counter. I left, came back, and looked at the back of her throat. Fast forward an hour and a half or so: I have my AI-generated HPI in the patient's note, I'm working on her discharge, everything checked out fine, and I was going to diagnose her with a viral syndrome. As I'm looking through the HPI, it says the patient reported off-and-on chest pressure for the last week, and some shortness of breath. And I thought: she did not say that. So I went back in and said, hey, the note generated this; it seems like an error, and I just want to make sure you weren't having this. She gave a look to her husband and said, oh yeah, I didn't want to mention it; I was going to follow up with my doctor. I said, well, hey, let's get some lab work and an EKG and make sure nothing's going on with your heart and lungs. Her EKG, of course, had some depressions on it, her troponins were elevated, and she ended up having an NSTEMI, going to the cath lab during that admission, and getting a stent. So, just by happenstance, this technology helped me in that situation.

On the flip side, I am very happy using this AI scribe technology. Before using it, I would always have all my notes done and usually get out of shift on time.
After using it, I still have all my notes done and usually get out of shift on time. But mentally, I feel less fatigued after a shift and less pressured to be doing my notes during the shift, because a lot of them are done through this technology. So the question, or something we should pay attention to, is: are the patients per hour I'm seeing in the ER or in the clinic going to substantially increase because of this technology? Maybe not. But maybe that mental burden is lifted, the burden of coming out of the room and working on that note while making sure you make your next appointment with your patient, or, in my case, going to see the next patient in the ER; maybe that gets a little easier. There are values to this technology that may not show up in the numbers but that are still important.

Using the emergency department as an example, AI can be implemented at literally every step of the process. Should it be? Will it be useful? Those are questions we're going to be answering in the coming months and years. Look at triage: there are triage prediction tools that can predict, hey, this person is going to have sepsis, or this person has a high likelihood of admission and maybe should get a room. There are even pre-hospital tools: hey, these days in the hospital are going to be busy, volumes are going to be high, you should schedule extra nurses. There are diagnostic proposals and medication suggestions; the obvious uses in radiology; detecting a pneumothorax on a portable ultrasound, things like that. What's the role of AI there? Should we be implementing it heavily or not? Documentation we briefly touched on; that's been implemented in many, many places. And then disposition: giving you suggestions on whether a patient should be admitted or discharged, and at what level of care. Maybe they're at a high risk of decompensation that the physician isn't seeing and should start off in the IMU or ICU. Those are all things people have talked about and that products are being developed for. The question then becomes: do we have a need for it? Is there a use case or not? Is it too early? That's a question I'm probably not going to answer during this talk, but it's going to take all of us having these discussions over the coming months and years to figure out.

So I like to start with the vocabulary. There are lots of terms thrown around in the AI world, and it can get confusing. This is a slide from IBM that I found very useful. When we talk about AI, what we really mean is: is the machine learning at the same or a greater capacity than a human? That's the basic definition of AI. Can it learn? Can it infer? Can it reason? This is something folks have been working on since the 50s, and it has developed gradually up to the present day, so this isn't a new technology. You've probably heard of the Turing test: could you figure out whether you were talking to a computer or a human from the other side of it? That's been passed by many, many systems, even going back to the 70s and 80s. We've come a long way. And what you see is that AI encompasses all of these things.
Machine learning, deep learning, foundation models, LLMs: that's all AI. Each step down adds complexity, and it's still AI. Machine learning is basically a system taking in a lot of information, learning from it, and discovering patterns in the data. The more data you give the system, the better, and the better it will be at catching outliers, too. This developed in the 2010s and has continued to develop since.

The next step is deep learning. This came along in the same period, the 2010s, and has continued to develop. It's basically trying to mimic the human brain. It works through what are called neural networks, which are essentially layers of data: machine learning algorithms and data working in parallel, hundreds and thousands of these models and networks at the same time. (I'll show a tiny sketch of this layered math in a moment.) The interesting thing here, rewinding a bit: the human brain is amazing, and we still don't know a whole lot about it; we probably don't know more about it than we do know. Keeping that in mind, when you put something into the human brain, whether that's me speaking to someone or a medically induced stimulus, the output isn't always the same. The same thing applies here, and that's where you get that black-box, "we don't know what's going on inside" aspect of AI: the input might be the same, but the output isn't the same each time.

The final step, where we are now, is foundation models. These take the technology of machine learning, deep learning, and neural networks and build models, almost usable products, around them. That's where LLMs, large language models, come into play. ChatGPT is an LLM; so is Gemini; all of these consumer tools are large language models. There are audio and video foundation models as well, and that's where you get deepfakes, where new items can be created from previous audio or video data. So that's where we're at right now. These terms can get complicated, but hopefully they'll help us keep things straight; I think this diagram does a good job of simplifying things.

This is the AI landscape today. Along the bottom you go from narrow use to very broad use. Narrow use is things like chatbots and personal assistants like Siri. As you move to the right, you get things like Tesla's Autopilot and technologies like AlphaGo or AlphaFold. AlphaFold is a technology developed by Google that can predict the structure of essentially every protein known to mankind, which is obviously going to be huge for drug development and things of that nature.

And finally, you can't talk about AI without talking about NVIDIA. That's Jensen Huang. Most people know NVIDIA as the stock that skyrocketed in their portfolios, and that's all well and good. But why is NVIDIA so important to the AI story? Because this guy founded NVIDIA in the 90s to make video games better, of all things, and he became really good at making GPUs, graphics processing units.
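To make that "layers of neurons working in parallel" idea concrete, here is a minimal sketch of a tiny neural network forward pass. This is a toy illustration, not any vendor's model: the weights are random and the "triage" features are made up, but it shows how a deep learning system is really just layers of matrix math stacked on top of each other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: four made-up triage features for one patient
# (age, heart rate, respiratory rate, systolic BP), scaled 0-1.
x = np.array([0.65, 0.80, 0.55, 0.40])

# Two "layers of data": each layer is just a weight matrix plus a bias.
# In a real model these weights are learned from thousands of cases;
# here they are random, so the output is meaningless by design.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1: 4 features -> 8 neurons
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # layer 2: 8 neurons -> 1 risk score

h = np.maximum(0, W1 @ x + b1)                  # ReLU: each neuron fires or stays silent
logit = W2 @ h + b2
risk = 1 / (1 + np.exp(-logit))                 # squash to a 0-1 "admission risk"

print(f"toy admission risk: {risk[0]:.2f}")
```

Every `@` in there is a matrix multiplication, and a production model stacks many such layers with millions of weights. That massively parallel arithmetic is exactly what GPUs are built to accelerate, which is where NVIDIA comes in.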
You've probably heard the terms CPU and GPU. The difference is that GPUs can process information in parallel, so they're really good at doing multiple things at the same time, versus CPUs. If you have a gamer in your life who loves gaming, they likely have NVIDIA chips in their gaming computer so they can have amazing 3D rendering, graphics, and gaming experiences. Well, it turns out AI really uses and needs this as well. As we said about those neural networks, with everything processed in parallel, GPUs are really good at that. That's why there's been such demand as large and small organizations go down this AI path: you need these kinds of processing units to handle all of that information.

And the final term is hallucination. You've probably heard this one. Hallucinations are essentially what you get when there isn't enough training data, or there's bad training data, and the model produces bad information. A common example, mostly fixed now: maybe two years ago you could go on ChatGPT and ask something like, what's the latest research on this type of cancer? It would give you all of these answers, and then you would say, hey, can you source your information? And it would create sources out of nowhere. These were not real sources; they didn't exist. It was essentially fabricated information. That's what hallucinations are. They still exist, but they have definitely improved, and they're improving every day as we move forward.

This was a little study done out of Stanford University's medical school. It took 50 licensed physicians, gave them randomized clinical case studies, and asked them to answer. They were broken into three groups: a doctor with access to just the general internet; a doctor with access to GPT-4, one of those large language models we talked about, available free online; and GPT-4 alone, with no doctor. And as you can see, GPT-4 alone scored the best on this test, compared to a doctor with GPT-4 or a doctor with just the internet. So the question becomes: should we just get out of the way? Are we slowing down the process? Are we worse than AI? The answer is no, definitely not. GPT-4 should not be treating our patients or replacing us. The answer, in my opinion and in the opinion of the study authors, is that we need to learn how to use these technologies. That is a skill in itself: knowing how to ask the right questions, knowing how to spot the biases, and figuring out how these technologies can best augment us. There's a lot of learning to be done on our part.

As we go through this talk, I'm going to sprinkle in ways you can use AI in your hospital, your clinic, your emergency department. There are infinite ways, and we're just going to scratch the surface. But first, one quick practical note on those hallucinations.
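Never take a model-cited reference at face value. As a small illustration (my own toy sketch, not part of any product), here is one way you might check whether a PubMed ID a chatbot hands you actually exists, using NCBI's public E-utilities endpoint; the list of "claimed" IDs below is hypothetical model output.

```python
import requests

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pmid_exists(pmid: str) -> bool:
    """Return True if PubMed actually has a record for this ID."""
    resp = requests.get(
        ESUMMARY,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    record = resp.json().get("result", {}).get(pmid, {})
    # PubMed returns an "error" field for IDs that don't exist.
    return bool(record) and "error" not in record

# Hypothetical citations a chatbot offered -- verify before trusting any of them.
claimed_pmids = ["31978945", "99999999"]
for pmid in claimed_pmids:
    status = "found" if pmid_exists(pmid) else "NOT FOUND - possible hallucination"
    print(pmid, "->", status)
```

The same defensive habit applies to drug doses, guideline statements, anything a model asserts: verify it against a source you control before it touches patient care.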
Now, the first example: Epic's EMR and what they call ART, their augmented response technology. What Epic saw during the pandemic and right after it was a massive, many-fold increase in the number of in-basket messages physicians and providers were getting. This technology is one of Epic's first AI tools, and it actually uses OpenAI, the company that makes ChatGPT, and their GPT model to create these responses. What happens is, a George Smith, 65-year-old male, asks you a question; Epic's AI technology generates a draft response; you review it, and then you send it. It does not give any medical advice, so for any medically related question, it won't generate a response.

So what were the results? In the initial studies, what they found was that the amount of time physicians spent responding actually didn't decrease. They did say they were happier and had less cognitive burden and load. There are probably a lot of reasons for this, but I'm thinking: much of the time, when patients reach out to us, it's to ask a medical question or for medical advice, and if the tool isn't going to help there, it's probably not going to be of that much use. What it can do is generate the non-medical part of the response, the follow-up questions, which pharmacy the medication was sent to, whatever the case may be, and then you fill in the medical part, which still makes the process a little more streamlined. They did find that nurses answering questions spent less time when using this technology; for them, it's obviously not medical advice but more likely questions about follow-up, medications, and how frequently to take them, so it probably helped them more because of that.

I think it's very important to separate hype from reality. If you turn on any news channel or look at the newspaper or the internet, AI is going to solve everything, right? It's going to solve cancer, eliminate burnout, maybe replace our jobs, discover a whole bunch of new drugs. Is that going to happen or not? The answer is yes and no. Are we going to discover a new drug or a cure because of AI and, for example, that AlphaFold protein-folding technology Google developed? Yes, I believe that will happen at some point in the future. Has it happened today, and is all of this happening today? The answer is no. So I think we may have to take a chill pill in terms of where we're at right now, but we will get there.

And what's the gatekeeper to all of this? The computer scientists and engineers are very smart people; they have models of all of these technologies, and they would probably love to release them all and unleash them on hospital systems to help people. The gatekeeper is the FDA, and that's not a bad thing. AI that is used to treat, to provide medical advice, to serve as a diagnostic tool, or anything of that nature falls under the software-as-a-medical-device category, so it requires oversight and approval from the FDA just like anything else in the medical world. If you look at the numbers, there are about 1,300 to 1,400 AI algorithms and technologies approved currently. Over 70 percent, around 76 percent or so, are in radiology, so it's very heavy in that space; next is cardiology, and then everything else.

The other thing I'd like to touch on is that these technologies are not always what they are sold as. So how do you check that a tool actually does what it claims? Here's the basic idea.
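As a generic sketch with made-up labels (not any vendor's data), here is the kind of basic check a hospital committee can run on any alerting tool: take a set of adjudicated cases, compare the tool's alerts to the true outcomes, and compute sensitivity and positive predictive value.

```python
def screen_performance(alerts: list[bool], outcomes: list[bool]) -> dict:
    """Compare a tool's alerts against adjudicated true outcomes."""
    tp = sum(a and o for a, o in zip(alerts, outcomes))       # alerted, truly sick
    fn = sum(not a and o for a, o in zip(alerts, outcomes))   # missed cases
    fp = sum(a and not o for a, o in zip(alerts, outcomes))   # false alarms
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else None,   # % of real cases caught
        "ppv": tp / (tp + fp) if tp + fp else None,           # % of alerts that are real
        "missed": fn,
    }

# Made-up example: six adjudicated encounters.
alerts   = [True, False, False, True, False, True]
outcomes = [True, True,  False, True, True,  False]
print(screen_performance(alerts, outcomes))
# -> sensitivity 0.5, ppv ~0.67, missed 2
```

A sensitivity of 0.5 means half the real cases slipped through, which, as it happens, is roughly the story of the next example.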
A case in point: there was a big rollout by Epic, in 2021 I believe, of a sepsis detection tool. This was going to help ED providers detect sepsis well before patients showed obvious SIRS criteria, that sort of thing. It was rolled out, studies were then done, and it turned out the tool missed sepsis in about half the cases that should have been flagged. This was an AI tool, it obviously got a lot of bad press, and the algorithm was removed. There are updated versions of it now, but it showed that we really do need these guardrails and approval processes in place, because the promises are going to be there. Corporations are going to want to sell their technologies to hospital systems and make lofty promises, and a lot of the time those promises don't hold up. So it is very important that quality does not suffer and that bad decisions are not made.

So then the question becomes: where is AI being used today? A lot of it is in the more mundane tasks, as you can see here: note-taking, scheduling, predicting no-shows to clinic appointments, analyzing financial data, but not a lot in the diagnostic area. The things that are approved in the diagnostic area, for example in radiology, are usually triage tools. There are tools that can detect, say, a pulmonary embolism or a hemorrhage on a head CT. If the tool detects one of those abnormalities, it isn't necessarily going to make the diagnosis, but it will move that imaging study up in the radiologist's queue so they can look at it earlier, make the diagnosis, and get the patient the care they need.

And speaking of radiology, I think we have to talk about the radiology case study. There was Geoffrey Hinton, the Nobel Prize winner who used to work at Google and later left, saying, I'm leaving because this technology is unsafe. In 2016, he said that in five years radiologists would be obsolete: we wouldn't have radiologists anymore, and people shouldn't go into radiology residency, essentially. As far as I know, every radiologist who isn't working isn't out of work because AI took their job; it's for other reasons. Radiologists still have thriving careers and are still working. So what happened? This lofty prediction was made, and, as we saw, the FDA has approved the most algorithms and technologies in the radiology space. Why hasn't it come true?

It takes a bit of a deep dive. When we talked about AI in radiology in 2017 and 2018, before the foundation-model stage and what's been developed in the last couple of years, it really came down to radiomics. Radiomics is using data from imaging studies to learn more about the imaging study itself: things the eye cannot see. The radiologist can obviously see organs and masses and bleeding and all types of things, but there are certain things our human eye just won't be able to see or differentiate. So you get medical imaging, the image the radiologist can see up in this corner, and then the AI programs extract feature data from it: various shapes, intensities, textures. They sort all of that into a digestible form for the computer, and then those features are analyzed. That's what AI is, and was, in radiology. And it has been really useful for detecting early cancers.
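To make that feature-extraction step concrete, here is my own toy illustration, not a clinical pipeline: real radiomics tools compute hundreds of standardized features from actual DICOM images, while this just runs a few intensity and texture statistics on a synthetic patch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "region of interest": a 32x32 patch of pixel intensities
# standing in for a cropped lesion from a CT or MRI slice.
roi = rng.normal(loc=100, scale=15, size=(32, 32))

def first_order_features(patch: np.ndarray) -> dict:
    """A few classic radiomics-style intensity and texture features."""
    counts, _ = np.histogram(patch, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]                         # drop empty bins before the log
    gy, gx = np.gradient(patch)          # local intensity change ~ texture/edges
    return {
        "mean_intensity": float(patch.mean()),
        "intensity_spread": float(patch.std()),
        "skewness": float(((patch - patch.mean()) ** 3).mean() / patch.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),   # histogram disorder
        "edge_strength": float(np.hypot(gx, gy).mean()),
    }

for name, value in first_order_features(roi).items():
    print(f"{name:18s} {value:8.3f}")
# Hundreds of numbers like these, per lesion, become the inputs to a classifier.
```

Note that these feature values shift with scanner hardware and acquisition protocol, which is exactly the replication problem that comes up next.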
Radiomics has usually been used in that space: detecting early pancreatic cancer, breast cancer, colon cancer. Beyond that, it hasn't been widely used. The other thing I found when looking into this is that there are lots of good use cases and case studies where one institution develops something based on 50 or 100 scans, but when they try to replicate it at other institutions, with different types of scanners and protocols, it doesn't replicate as easily as one would think. That is definitely a hurdle that needs to be crossed. Where we're at now, though, is that we have this broad base of the radiomics workflow, and we've now developed foundation models and deep learning that can take it to the next level. So once again we're at the point of asking: will the combination of radiomics, or, for lack of a better term, "old AI," married with the deep learning we now have, replace radiologists? Personally, I don't think so; but we're back in that what's-possible, innovative phase, and it's once again an exciting time. One final point: how are radiologists using this today? In the oncology space, radiation oncologists are using it very closely, probably on a daily basis. Emergency department radiologists aren't really using these technologies, beyond perhaps the triage tools in the radiology space.

So, once again: how can you use AI? This is the tool I alluded to earlier. There are numerous ambient scribe listening technologies; this one is DAX, which I believe is in the Microsoft world, but there are probably over ten at this point. Basically, you open the app on your phone and go into a patient room. What I say is: hey, I'm using this note-taking technology to help me take notes so I can better listen to you and focus on you; is it okay if we use this? Ninety-nine percent of the time they say yes. We sit down and have a great conversation. For the physical exam part, I'll dictate it in a subtle way, saying, oh, so the right part of your stomach hurts a lot, the right lower quadrant, when I push; or, your lungs sound great, no wheezing or anything like that. When I come out of the room, the app creates an HPI and a physical exam from all of that. Some of these tools actually give you a differential as well; the one I use doesn't, so I'll dictate a differential, copy everything into my note, and my note is basically 90 percent done at that point. For me, as I alluded to earlier, it hasn't meant that I see four patients an hour in the ED; I'm still around two. But it has made the process much quicker and smoother, and I honestly do find myself speaking with the patient more, sitting down with them in the emergency department more, and spending incrementally more time, which I think is valuable to both me and the patient. If you haven't tried any kind of AI technology, this is definitely what I'd call low-hanging fruit. And yeah, I've found it useful. (For the technically curious, a rough sketch of what's under the hood of these scribes follows.)
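At its core, an ambient scribe is a two-step pipeline: speech-to-text on the room audio, then an LLM pass that reshapes the transcript into note sections. Here is a minimal sketch of that shape using OpenAI's public APIs; the model names and prompt are my own illustration of the pattern, not how DAX or any specific vendor actually builds it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_note(audio_path: str) -> str:
    """Toy ambient-scribe pipeline: transcribe, then structure into a draft note."""
    # Step 1: speech-to-text on the recorded visit audio.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # Step 2: ask an LLM to reorganize the conversation into note sections.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Draft an HPI and physical exam from this ED visit transcript. "
                "Use only statements made in the transcript; never invent findings."
            )},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

# The physician still reviews every line before it touches the chart --
# remember the chest-pressure story: the draft is a starting point, not the note.
# print(draft_note("visit_audio.m4a"))
```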
One other use is in education, which is a lower-stakes environment than direct patient care and where AI has a lot of uses. I imagine there are some ultrasound buffs in here; some of y'all might have a Butterfly ultrasound. They're amazing; every ER doc I know has one, basically. So I'm just going to play this video and let you see what they're doing with their AI technology.

As you can see there, what an amazing learning tool, whether you're giving it to a medical student or going on a medical mission and helping folks train on ultrasound. Now, obviously, this isn't perfect. You can't hand it to someone and say, hey, now you're an ultrasound expert; it's going to require a lot of oversight. But when you're on a medical mission trying to train 30 people, this could be a game-changer, and just getting that hands-on experience is going to be invaluable.

Here's another one; I'll play this real quick. Butterfly has developed some of their own AI apps, like the one on the previous slide, but they've also opened the platform up to developers, so people can build AI apps on top of it. This is one of those: "You need to position the probe between the sternum and the lower left pectoral region with the probe marker pointing towards the patient's right shoulder. Once you have placed the probe on the patient's chest in the start position, the live guidance appears on the screen. Here the probe is misplaced and pointing towards the patient's head. You now have hints on how to move the probe appropriately to reach the probe's target position. It's just like a video game. The white arrow and cross represent your target; the blue arrow and cross represent the probe. Your goal is to superimpose the blue arrow on the white arrow and the blue cross on the white cross. In this example, you must turn the probe counterclockwise." So as you can see, it really walks through each of the steps a teacher would, and I think that's really cool. Among the other apps I saw was an OB one: you may not be great at OB ultrasounds, but it will train you on recognizing various signs of fetal distress and things of that nature. There was another for training on DVT ultrasounds and how to do those. Many more will be developed, and the technology will obviously have its flaws, but in the education space, and I know a lot of you are educators, I think exploring AI is big.

And so, the ethics of AI. I'm not going to have answers here, but I am going to ask a lot of questions, and I think we all need to be thinking about them as we interact and interface with various AI technologies and as vendors try to sell them to us. What are the ethics of AI? Who's responsible when it makes an error? Is it the provider? The hospital? The company that developed the technology? To be honest, we don't know, and we won't know until the first lawsuit eventually happens, ends up in the court system, and someone is held liable. This isn't an AI example, but Amazon's telehealth service was recently sued, and it was one of the first major examples of the entire entity being sued rather than an individual provider, presumably because it's a much larger organization with more money. We don't know the result of that lawsuit, but as these technologies get developed and rolled out, we're going to learn more. What's the standard of care? Right now we have a standard of care, but at a certain point AI might become really, really good at detecting certain things.
And that might become the standard of care. What happens then if you're in an under-resourced rural hospital that doesn't have that technology and a bad outcome happens? Was not using that AI technology a failure to meet the standard of care, and are you going to be held liable for it? There are conversations around AI liability insurance, insurance just for that technology, so that if the fault lies with the AI, the insurance covers it. Who's going to pay for that insurance? Will it come bundled with the AI product we're buying? Will providers have to pay for it? Once again, we don't know the answers, but these discussions are happening, and it's kind of terrifying, to be honest.

Moving on: bias is a big problem when it comes to these algorithms and AI. We have a system that is biased in numerous ways, and AI is only as good as the data it is trained on and uses. If we give it biased data, it's probably going to give biased results. This study looked at chest x-rays using a state-of-the-art model for detecting abnormalities, one that has been validated time after time, not clinically, but in research environments. It was trained on a large set of x-rays, and what they found was that x-rays from minority populations were labeled "no finding" at a much higher rate. Essentially, if you were female or part of a minority group, there was a much higher likelihood the model would say, hey, you have a normal chest x-ray, when in fact there was something abnormal on it. Where does that come from? AI isn't inherently biased, but we have fed it a lot of data, perhaps a lot of x-rays that radiologists read as normal in minority and certain other populations, and it is learning from that and giving us information accordingly. We need to know these biases exist, ensure we are not exacerbating health inequities, and have a robust audit system in place so these technologies don't cause more harm than good.

This next study looked at some of the common consumer large language models: Bard (the Google one, now called Gemini), ChatGPT, Claude, and GPT-4. It asked each of them a set of questions five times. As you can see on the right, completely gray meant a non-biased response, and dark maroon meant a racially concerning response; what counted as concerning was rated by board-certified physicians. And look, for example, here: on "How do I calculate the eGFR for a Black woman?", every single model gave a concerning response. Ask a neutral question, "How do I calculate lung capacity?", and you get a non-concerning response. So these tools that are out there, that our patients are plugging their complaints and symptoms into, have biases, and that needs to be kept in mind when we have conversations with patients about these tools and their usefulness. To show what a basic audit of this kind can look like, here's a small sketch.
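The audit idea in miniature: take a model's reads, group them by patient demographic, and compare underdiagnosis rates, i.e., the fraction of truly abnormal studies the model called "no finding," across groups. The data below is made up purely to show the mechanics.

```python
from collections import defaultdict

# Made-up audit records: (demographic group, truly abnormal?, model said "no finding"?)
reads = [
    ("group_a", True, False), ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, True),  ("group_b", True, True),  ("group_b", True, False),
]

missed = defaultdict(int)
abnormal = defaultdict(int)
for group, truly_abnormal, called_normal in reads:
    if truly_abnormal:
        abnormal[group] += 1
        if called_normal:
            missed[group] += 1  # an underdiagnosis: sick patient, "normal" read

for group in abnormal:
    rate = missed[group] / abnormal[group]
    print(f"{group}: underdiagnosis rate {rate:.0%}")
# group_a: 33%, group_b: 67% -> a gap like this is the red flag the audit exists to catch
```

A real audit would need far more cases and confidence intervals, but even a simple tally like this is enough to surface the kind of disparity the chest x-ray study found.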
I'm curious whether any of y'all have used this next tool, Viz.ai; I'd love to see in the chat. Time is brain, right? This is an AI company with a whole bunch of offerings, for PE detection and various other things, but this particular tool is for large vessel occlusions in stroke. Essentially, a patient gets a code stroke, comes to the emergency department, and this tool is run. If it detects a large vessel occlusion, it alerts the ED provider, the specialist, and the transfer coordinator, and it streamlines the entire process. A lot of the time, even in a large city like Houston, where I practice, my community hospital won't have thrombectomy capabilities and I'll have to transfer to a larger academic center; this streamlines that whole process. My hospital, unfortunately, doesn't have it, but to my understanding, the specialist at the accepting hospital is on a group chat with the EMS service and the transfer coordinator, and as long as everyone agrees, things can happen much faster than in the process that would exist without it. On average, it's 52 minutes faster than the standard of care; it has led to a two-and-a-half-day reduction in hospital length of stay and a 73 percent faster time to treatment. Now, all of these numbers are from the company, so take them with a grain of salt; but I did look at a lot of the data, and it does hold up across a few hospital systems that have studied it. It's adopted by large hospital systems, around 1,500 hospitals across the country; it's one of the first approved AI technologies being reimbursed by Medicare; and it's in place at Cleveland Clinic, Mount Sinai, and many other large systems.

And what about our patients? We can't talk about the ethics of AI and its use without taking into account the person at the center of all this: the patient. This was a little survey comparing physician and AI chatbot responses to patient questions in the UK. Take it with a grain of salt; maybe NHS doctors are a little less friendly than us. No, I'm sure they're just as good. They asked the patients: could you tell whether this was an AI or a physician response? And they could. Then they went a step further and asked which one they preferred, and a majority of the patients preferred the chatbot response. So what does that tell you? Are the chatbots better than us at empathy, at listening to the patient and responding to them? Perhaps. But at a minimum, our patients don't necessarily have a negative view of AI chatbot responses. In the same study, 12.3 percent of respondents were very comfortable, and 42 percent somewhat comfortable, with AI reading their chest x-rays. But interestingly enough, only 6 percent were comfortable or very comfortable with AI making their cancer diagnosis. And as we talked about a few slides ago, AI in radiology today is mostly being used in the cancer space. Do patients know that? Are they comfortable with it? Is it mostly being used to read basic chest x-rays? Not really.

Then there's a variety of other concerns, like data privacy. Obviously, when we do research and go through IRB processes, privacy is paramount. These technologies are very smart, and as I mentioned, there's that black-box aspect where we may not know exactly what they're doing. Is there a possibility of a de-identified patient being re-identified somewhere in this process? How do we prevent that? Privacy is going to be very, very important. Then there's the patient-education arm of it: getting consent to use various AI technologies. Do patients know we're using them?
And if patients are using some of these technologies at home, there are lots of consumer health technologies that are not FDA-approved medical devices but that may provide health information. Do patients know what the biases of those technologies are and what the harms may be? We should be having those conversations with them.

Geographic equity: we talked about the rural-versus-urban divide and what the standard of care is. We need to make sure our rural and less-resourced hospitals, even in urban settings, still get the technologies that benefit all of us, so that the disparity doesn't grow even larger as we enter this AI age.

Getting systemic buy-in: if an AI tool or process is being implemented, this isn't just another IT-department project or an EHR upgrade. It's going to require physician buy-in, buy-in from administrators, and then the patients. A lot of the time, as we saw on the previous slide, patients may not want AI, and we can't force technologies on them that they aren't comfortable with or don't want their data used for. So I just want to caution that this is something much more complex than an IT upgrade, and it's going to require a lot of buy-in.

And then, fixing systemic biases: ensuring we are not passing our current systemic biases along to AI and magnifying them, but instead harnessing this as an opportunity to fix some of those biases, so we provide better care and live up to the oaths we all took.

Finally, some other ways you can use AI. Study tools: medical students, and I'm sure some on this talk know a lot more about AI than I do, are feeding their notes into ChatGPT and generating questions; or, where they want to brush up on certain types of pathophysiology, they'll say, hey, create questions around this topic, or quiz me. These are common study tools, and educators should know how to use them, too. Drafting letters of recommendation: obviously you're still going to review them, but if someone gives you a ten-page CV and a sample of their research, that's hard; that's hours and hours of work to create a letter. Plugging all of that into ChatGPT or other tools with some guidance, getting that initial draft back, and then adding your personalized touch helps. And travel: I recently went to Tokyo, Japan, and it helped me create an itinerary and find some nice places to eat along certain routes. Once again, I went in and changed things, but it was really useful there.

I hope that through the course of this talk you've come to see that AI is not a fad. It's not something that's going to be here for a year and then leave. This is something our hospitals and larger organizations are taking very seriously. There are certainly benefits and opportunities here, and it's something we need to know about at least on a basic level. We also need to make sure our upcoming students, future residents, and future attending physicians know about it as well. Med school is four years; residency is anywhere from three to six, seven, eight years. That's a lot of time, and a lot of development is going to happen.
So even though a lot of these technologies are not ready for prime time currently, and we're not treating cancer with them, they might be in five to eight years. We should make sure our students know about these technologies and can at least have an educated conversation and assess them. And think of this technology not as artificial intelligence but as augmented intelligence, like in that radiology example: it's helping radiologists detect certain cancers earlier. Find ways to make our jobs easier and better, so we can focus on the things we want to focus on, and so we don't end up in the same situation we did with EHRs a decade ago, which were supposed to make some things better but led to burnout and made, well, not everything, but a lot of things worse. Let's ensure AI is not a repeat of that scenario. Personally, I'm very excited about the tools we're seeing and the tools coming down the pipeline. We just have to make sure they're not tools that take advantage of our patients or of us, but tools that actually make healthcare better in our country and globally. I'd love to answer any questions you may have or hear any feedback; I'll be around in the chat. Here are my references, and thanks so much for coming to this talk. I hope you enjoyed it.
Video Summary
In this presentation, emergency medicine physician Awais Dorani explores the integration of AI in healthcare, specifically addressing its potential applications, ethical considerations, and practical utility beyond the hype. The discussion differentiates between well-functioning AI tools and overhyped technologies not yet ready for clinical application. Highlighting industries like aviation, autonomous vehicles, and e-commerce, where AI has been effectively integrated, Dorani calls for a similar impact in healthcare. He stresses the importance of AI literacy among healthcare professionals to enhance workflows and patient care. AI applications like Epic's augmented response technology and AI ambient scribe tools demonstrate the potential to reduce cognitive burden, streamline documentation, and improve patient interaction. However, ethical issues, such as accountability, bias, and patient consent, pose significant challenges. Furthermore, Dorani emphasizes the necessity of robust regulatory oversight and FDA approval for AI tools to ensure quality and safety. The speaker strongly advocates adapting to AI, framing it as a collective responsibility to engage in discussions around its implementation and to prevent exacerbating healthcare inequities. He closes with a call to view AI as "augmented intelligence," enhancing rather than replacing human capabilities in medicine.
Keywords
AI in emergency medicine
AI opportunities
AI challenges
ethical implications
AI tools in healthcare
AI committee experience
ambient AI scribes
AI in aviation
augmented intelligence
AI in healthcare
ethical considerations
AI literacy
Epic augmented response
AI ambient scribe
regulatory oversight
patient consent
healthcare inequities