| dc.contributor.author | Ekici, Omer | |
| dc.date.accessioned | 2025-12-28T16:50:22Z | |
| dc.date.available | 2025-12-28T16:50:22Z | |
| dc.date.issued | 2025 | |
| dc.identifier.uri | https://doi.org/10.15311/selcukdentj.1674113 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.12933/2978 | |
| dc.description.abstract | Background: The aim of this study is to evaluate the performance of four leading Large Language Models (LLMs) on the 2021 Dentistry Specialization Exam (DSE). Methods: A total of 112 questions from the 2021 DSE were used (39 in basic sciences and 73 in clinical sciences), excluding questions containing figures or graphs. The study evaluated the performance of four LLMs: Claude-3.5 Haiku, GPT-3.5, Co-pilot, and Gemini-1.5. Results: In basic sciences, Claude-3.5 Haiku and GPT-3.5 answered all questions correctly (100%), while Gemini-1.5 answered 94.9% and Co-pilot 92.3% correctly. In clinical sciences, Claude-3.5 Haiku achieved a correct answer rate of 89%, Co-pilot 80.9%, GPT-3.5 79.7%, and Gemini-1.5 65.7%. Across all questions, Claude-3.5 Haiku achieved a correct answer rate of 92.85%, GPT-3.5 86.6%, Co-pilot 84.8%, and Gemini-1.5 75.9%. While the performance of the LLMs in basic sciences was similar (p=0.134), there were statistically significant differences between the LLMs' performances in clinical sciences and across all questions (p=0.007 and p=0.005, respectively). Conclusion: In clinical sciences and across all questions, Claude-3.5 Haiku performed best, Gemini-1.5 performed worst, and GPT-3.5 and Co-pilot performed similarly. All four LLMs showed a higher success rate in basic sciences than in clinical sciences. The results indicate that AI-based LLMs can perform well on knowledge-based questions, such as those in basic sciences, but perform poorly on questions that require clinical reasoning, discussion, and interpretation in addition to knowledge, such as those in clinical sciences. © 2025, Selcuk University. All rights reserved. | |
| dc.language.iso | en | |
| dc.publisher | Selcuk University | |
| dc.relation.ispartof | Selcuk Dental Journal | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.subject | Artificial intelligence | |
| dc.subject | Dentistry | |
| dc.subject | Dentistry specialization training | |
| dc.subject | Large language model | |
| dc.title | Comparative Evaluation of Four Large Language Models in Turkish Dentistry Specialization Exam | |
| dc.title.alternative | Türk Diş Hekimliği Uzmanlık Sınavında Dört Büyük Dil Modelinin Karşılaştırmalı Değerlendirilmesi | |
| dc.type | Article | |
| dc.department | Afyonkarahisar Sağlık Bilimleri Üniversitesi | |
| dc.identifier.doi | 10.15311/selcukdentj.1674113 | |
| dc.identifier.volume | 12 | |
| dc.identifier.issue | 4 | |
| dc.identifier.startpage | 6 | |
| dc.identifier.endpage | 10 | |
| dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı | |
| dc.department-temp | Ekici, Omer, Department of Oral and Maxillofacial Surgery, Afyonkarahisar Health Sciences University, Afyonkarahisar, Turkey | |
| dc.identifier.scopus | 2-s2.0-105020858078 | |
| dc.identifier.scopusquality | N/A | |
| dc.indekslendigikaynak | Scopus | |
| dc.snmz | KA_Scopus_20251227 | |