
dc.contributor.author: Salmanpour, Farhad
dc.contributor.author: Camci, Hasan
dc.date.accessioned: 2025-12-28T16:40:39Z
dc.date.available: 2025-12-28T16:40:39Z
dc.date.issued: 2024
dc.identifier.issn: 2321-4600
dc.identifier.issn: 2321-1407
dc.identifier.uri: https://doi.org/10.25259/APOS_221_2023
dc.identifier.uri: https://hdl.handle.net/20.500.12933/2663
dc.description.abstract: Objectives: The purpose of this study was to compare the predictive ability of different convolutional neural network (CNN) models and machine learning algorithms trained with frontal photographs and voice recordings. Material and Methods: Two hundred and thirty-seven orthodontic patients (147 women, 90 men, mean age 14.94 ± 2.4 years) were included in the study. According to the orthodontic patient cooperation scale, patients were classified into two groups at the 12th month of treatment: Cooperative and non-cooperative. Afterward, frontal photographs and text-to-speech voice records of the participants were collected. CNN models and machine learning algorithms were employed to categorize the data into cooperative and non-cooperative groups. Nine different CNN models were employed to analyze images, while one CNN model and 13 machine learning models were utilized to analyze audio data. The accuracy, precision, recall, and F1-score values of these models were assessed. Results: Xception (66%) and DenseNet121 (66%) were the two most effective CNN models in evaluating photographs. The model with the lowest success rate was ResNet101V2 (48.0%). The success rates of the other five models were similar. In the assessment of audio data, the most successful models were YAMNet, linear discriminant analysis, K-nearest neighbors, support vector machine, extra tree classifier, and stacking classifier (58.7%). The algorithm with the lowest success rate was the decision tree classifier (41.3%). Conclusion: Some of the CNN models trained with photographs were successful in predicting cooperation, but voice data were not as useful as photographs in predicting cooperation.
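The abstract evaluates every model by accuracy, precision, recall, and F1-score on a binary cooperative/non-cooperative classification. A minimal sketch of how those four metrics are computed from a confusion matrix is shown below; the labels and predictions are illustrative placeholders, not the study's data, and the function name `binary_metrics` is an assumption for this example.

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1-score for a binary
    classifier (1 = cooperative, 0 = non-cooperative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative labels only (not the study's 237-patient dataset)
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
print(binary_metrics(y_true, y_pred))  # all four metrics equal 0.75 here
```

In practice a library such as scikit-learn would compute these directly, but the formulas above make explicit what the reported percentages (e.g. Xception's 66% accuracy) measure.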
dc.language.iso: en
dc.publisher: Scientific Scholar Llc
dc.relation.ispartof: Apos Trends in Orthodontics
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Artificial intelligence
dc.subject: Cooperation
dc.subject: Orthodontics
dc.title: Artificial intelligence for predicting orthodontic patient cooperation: Voice records versus frontal photographs
dc.type: Article
dc.identifier.orcid: 0000-0003-0824-4192
dc.department: Afyonkarahisar Sağlık Bilimleri Üniversitesi
dc.identifier.doi: 10.25259/APOS_221_2023
dc.identifier.volume: 14
dc.identifier.issue: 4
dc.identifier.startpage: 255
dc.identifier.endpage: 263
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.department-temp: [Salmanpour, Farhad; Camci, Hasan] Afyonkarahisar Hlth Sci Univ, Dept Orthodont, Afyonkarahisar, Turkiye
dc.identifier.scopus: 2-s2.0-85211386223
dc.identifier.scopusquality: Q2
dc.identifier.wos: WOS:001368388400007
dc.identifier.wosquality: Q4
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.snmz: KA_WoS_20251227

