AOA-OMED Research Posters 2024
OMED24-POSTERS - Video 11
Video Transcription
Hello, my name is Patrick Martin. And my name is Neil Patel. Today we are presenting our research on assessing the potential role of artificial intelligence in osteopathic medical students' teaching and learning. Essentially, we are asking: what role can ChatGPT play in enhancing osteopathic medical students' teaching and learning? The main objectives of this study were to determine the accuracy of ChatGPT in providing correct responses to the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 1, and to assess the performance of ChatGPT on medical physiology practice questions available on AccessMedicine.

Artificial intelligence, or AI, is revolutionizing medical education by providing innovative tools to support student learning. One notable example is ChatGPT, a large language model that has been increasingly integrated into medical school curricula. ChatGPT has demonstrated impressive capabilities, achieving passing performances on USMLE Step 1, Step 2 CK, and Step 3, as well as on medical physiology exams. These results suggest that AI can effectively assist students in mastering complex medical content. Interestingly, a recent study revealed that around 20% of medical students have already used AI tools like ChatGPT, with many intending to continue using them during residency. This trend highlights the growing reliance on AI as a valuable resource throughout medical training.

Despite its strengths, ChatGPT does have limitations, particularly with tasks involving sketches and images, where it cannot process visual content. However, it is important to note that ChatGPT has outperformed first- and second-year medical students on clinical reasoning questions, showcasing its potential for enhancing students' critical thinking skills. One area that remains unexplored is ChatGPT's accuracy on COMLEX Level 1 and AccessMedicine practice questions. The COMLEX Level 1 exam is critical for osteopathic medical students, as it tests a broad range of knowledge, including specialized areas like osteopathic manipulative medicine. While ChatGPT's performance on other exams has been promising, further research is needed to determine how well it can support students preparing for these specific assessments.

This study assessed the accuracy of ChatGPT 3.5 in answering multiple-choice questions from COMLEX Level 1 and medical physiology practice exams. Questions with visual content were excluded due to the model's inability to process images.

Starting with the physiology textbook methods and results: two physiology textbooks were pulled from AccessMedicine, Ganong's Review of Medical Physiology, 26th edition, and Case Files: Physiology, 2nd edition. From Ganong's Review of Medical Physiology, ten random questions from each of the seven sections within the textbook were put through ChatGPT and categorized based on whether its answers were correct or incorrect. Similarly, from Case Files: Physiology, three random questions from each of six sections were put through ChatGPT and categorized based on whether its answer was correct or incorrect. Of the 88 total questions, only 85 were included in the final results, as those with visual content were excluded due to ChatGPT's inability to process images. ChatGPT correctly answered 56 of 68 (82.35%) of the included questions from Ganong's Review of Medical Physiology and 10 of 17 (58.82%) of the included questions from Case Files: Physiology.
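The presentation does not specify how questions were entered into ChatGPT; a minimal sketch of how this grading protocol could be reproduced programmatically, assuming the OpenAI Python SDK with gpt-3.5-turbo as a stand-in for ChatGPT 3.5 (the grade_question helper and the question/answer-key layout are illustrative, not the authors' actual setup):

```python
# Sketch of the correct/incorrect grading protocol described above.
# Assumptions: gpt-3.5-turbo stands in for ChatGPT 3.5; grade_question
# and the choices/answer_key structures are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_question(stem: str, choices: dict[str, str], answer_key: str) -> bool:
    """Submit one multiple-choice question and check the model's letter answer."""
    options = "\n".join(f"{letter}. {text}" for letter, text in choices.items())
    prompt = (
        f"{stem}\n{options}\n\n"
        "Pick the correct answer and give a short explanation of why you "
        "chose it. Begin your reply with the letter of your choice."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    first_letter = reply.choices[0].message.content.strip()[0].upper()
    return first_letter == answer_key  # categorized correct vs. incorrect
```

Tallying the boolean results over each question set then yields per-textbook accuracy figures like those reported here.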
Combined, ChatGPT correctly answered 77.65% of the included physiology textbook practice questions.

COMLEX Level 1 practice questions were pulled from the NBOME COMLEX Level 1 practice test, which can be found on the NBOME website. The practice test contained a total of 25 questions, all of which were included. ChatGPT was asked to pick the correct answer and to provide a short explanation of why it chose that answer. ChatGPT answered 14 of 25 (56%) of the practice questions correctly. Within the Osteopathic Principles and Practice, or OPP, discipline of the COMLEX Level 1 practice questions, ChatGPT correctly answered 1 of 3 questions (33%).

Our study focused on assessing ChatGPT's ability to accurately answer multiple-choice questions from the COMLEX Level 1 exam, with a particular emphasis on the Osteopathic Principles and Practice discipline. The results revealed that ChatGPT struggled significantly in this area, suggesting that it may not be an effective study aid for students preparing for COMLEX Level 1. Although the specific scoring guidelines for COMLEX Level 1 are not publicly available, the performance data indicate that ChatGPT did not achieve a passing score on these questions.

Despite these challenges, ChatGPT did demonstrate promise in other areas. For instance, it achieved over 70% accuracy on physiology questions. This level of performance suggests that ChatGPT could be useful in providing correct answers and detailed explanations drawn from textbooks for certain subjects within medical education. Its ability to assist with physiology-related topics may offer value to students seeking to reinforce their understanding of complex concepts in this area.

Looking ahead, our future work will delve deeper into ChatGPT's potential role in COMLEX Level 1 board preparation. We plan to explore strategies to enhance its accuracy on practice questions, particularly in specialized areas like OPP. By refining its capabilities, we hope to determine whether ChatGPT can become a more reliable resource for medical students as they prepare for this critical exam.

In conclusion, while ChatGPT shows some potential as a study aid, particularly in physiology, its current limitations in the context of COMLEX Level 1 highlight the need for further development. Our ongoing research aims to address these gaps, with the goal of improving AI's effectiveness as a comprehensive study tool for medical education.

We would like to thank Dr. Iman Ben-Merzouga for her contributions and the Dr. Kiran C. Patel College of Osteopathic Medicine for allowing us this opportunity. Thank you again for listening, and we hope you have a great time at OMED.
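As a quick arithmetic check, every percentage reported in the presentation follows directly from the raw counts given above; a short Python recomputation:

```python
# Recomputing the reported accuracy figures from the raw counts.
ganong = 56 / 68                    # 82.35% -- Ganong's Review of Medical Physiology
case_files = 10 / 17                # 58.82% -- Case Files: Physiology
combined = (56 + 10) / (68 + 17)    # 77.65% -- all 85 included physiology questions
comlex = 14 / 25                    # 56%    -- COMLEX Level 1 practice test
opp = 1 / 3                         # 33%    -- OPP subset of the COMLEX questions

for name, value in [("Ganong", ganong), ("Case Files", case_files),
                    ("Combined physiology", combined),
                    ("COMLEX Level 1", comlex), ("OPP", opp)]:
    print(f"{name}: {value:.2%}")
```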
Video Summary
Patrick Martin and Neil Patel examined AI's role in osteopathic education, focusing on ChatGPT's accuracy on the COMLEX Level 1 exam and medical physiology practice questions. ChatGPT achieved 77.65% accuracy on physiology questions but only 56% on COMLEX Level 1 questions, struggling particularly with Osteopathic Principles and Practice (33% accuracy). While ChatGPT showed potential in physiology, its effectiveness for COMLEX Level 1 needs improvement. Future research will explore strategies to enhance its accuracy in specific areas like OPP, aiming to make it a more reliable study tool for medical students.
Keywords
AI in osteopathic education
ChatGPT accuracy
COMLEX Level 1 exam
medical physiology
Osteopathic Principles and Practice