dc.contributor.author: Salmanpour, Farhad
dc.contributor.author: Camci, Hasan
dc.contributor.author: Genis, Omer
dc.date.accessioned: 2025-12-28T16:40:23Z
dc.date.available: 2025-12-28T16:40:23Z
dc.date.issued: 2025
dc.identifier.issn: 1472-6831
dc.identifier.uri: https://doi.org/10.1186/s12903-025-06194-w
dc.identifier.uri: https://hdl.handle.net/20.500.12933/2549
dc.description.abstract: Objective: The aim of this study was to evaluate the adequacy of responses provided by experts and artificial intelligence-based chatbots (ChatGPT-4.0 and Microsoft Copilot) to frequently asked orthodontic questions, using scores assigned by patients and orthodontists. Methods: Fifteen questions were randomly selected from the FAQ section of the American Association of Orthodontists (AAO) website, addressing common concerns related to orthodontic treatments, patient care, and post-treatment guidelines. Expert responses, along with those from ChatGPT-4.0 and Microsoft Copilot, were presented in a survey format via Google Forms. Fifty-two orthodontists and 102 patients rated the three responses to each question on a scale from 1 (least adequate) to 10 (most adequate). The findings were analyzed comparatively within and between groups. Results: Expert responses consistently received the highest scores from both patients and orthodontists, particularly for critical items such as Questions 1, 2, 4, 9, and 11, where they significantly outperformed the chatbots (P < 0.05). Patients generally rated expert responses higher than those of the chatbots, underscoring the reliability of clinical expertise. However, ChatGPT-4.0 showed competitive performance on some questions, achieving its highest score on Question 14 (8.16 ± 1.24), but scored significantly lower than the experts in several key areas (P < 0.05). Microsoft Copilot generally received the lowest scores, although its performance was statistically comparable to the other groups on certain questions, such as Questions 3 and 12 (P > 0.05). Conclusions: Overall, the scores for ChatGPT-4.0 and Microsoft Copilot were deemed acceptable (6.0 and above). However, both patients and orthodontists generally rated the expert responses as more adequate, suggesting that current chatbots do not yet match the adequacy of expert opinions.
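The abstract reports within- and between-group comparisons of 1-10 adequacy ratings but does not name the statistical test used. As a hedged illustration only, the sketch below assumes a common nonparametric approach for ordinal rating data (a Kruskal-Wallis test across the three responders, followed by pairwise Mann-Whitney U comparisons) and runs on randomly generated placeholder scores, not the study's actual data.

    # Minimal sketch, assuming a nonparametric between-group comparison;
    # the ratings below are placeholders, NOT the study's real data.
    import numpy as np
    from scipy.stats import kruskal, mannwhitneyu

    rng = np.random.default_rng(0)
    # Hypothetical 1-10 adequacy ratings for one question from 52 raters.
    expert  = rng.integers(7, 11, size=52)
    chatgpt = rng.integers(5, 10, size=52)
    copilot = rng.integers(4, 9, size=52)

    # Omnibus test across the three response sources.
    h, p = kruskal(expert, chatgpt, copilot)
    print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

    # If the omnibus test is significant, compare experts to each chatbot.
    if p < 0.05:
        for name, group in [("ChatGPT-4.0", chatgpt), ("Microsoft Copilot", copilot)]:
            u, p_pair = mannwhitneyu(expert, group)
            print(f"expert vs {name}: mean {group.mean():.2f} ± {group.std(ddof=1):.2f}, "
                  f"U={u:.1f}, p={p_pair:.4f}")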
dc.language.iso: en
dc.publisher: BMC
dc.relation.ispartof: BMC Oral Health
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Artificial intelligence
dc.subject: Chatbot
dc.subject: ChatGPT
dc.subject: Microsoft Copilot
dc.subject: Orthodontics
dc.title: Comparative analysis of AI chatbot (ChatGPT-4.0 and Microsoft Copilot) and expert responses to common orthodontic questions: patient and orthodontist evaluations
dc.type: Article
dc.department: Afyonkarahisar Sağlık Bilimleri Üniversitesi
dc.identifier.doi: 10.1186/s12903-025-06194-w
dc.identifier.volume: 25
dc.identifier.issue: 1
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.department-temp: [Salmanpour, Farhad; Camci, Hasan; Genis, Omer] Afyonkarahisar Hlth Sci Univ, Dept Orthodont, Ismet Inonu Cd 4, TR-03030 Afyonkarahisar, Merkez, Turkiye
dc.identifier.pmid: 40462054
dc.identifier.scopus: 2-s2.0-105007245741
dc.identifier.scopusquality: Q2
dc.identifier.wos: WOS:001507484700014
dc.identifier.wosquality: N/A
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.indekslendigikaynak: PubMed
dc.snmz: KA_WoS_20251227


Files in this item:

There are no files associated with this item.

This item appears in the following collection(s).