Please use this identifier to cite or link to this item: http://cmuir.cmu.ac.th/jspui/handle/6653943832/63897
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Warangkhana Keerativibool (en_US)
dc.contributor.author: Pachitjanut Siripanich (en_US)
dc.date.accessioned: 2019-05-07T09:59:37Z
dc.date.available: 2019-05-07T09:59:37Z
dc.date.issued: 2017 (en_US)
dc.identifier.issn: 0125-2526 (en_US)
dc.identifier.uri: http://it.science.cmu.ac.th/ejournal/dl.php?journal_id=8044 (en_US)
dc.identifier.uri: http://cmuir.cmu.ac.th/jspui/handle/6653943832/63897
dc.description.abstract: This paper presents derivations that unify the justifications of the model selection criteria based on Kullback's divergence: AIC, AICc, KIC, KICcC, KICcSB, and KICcHM. The results show that KICcC has the strongest penalty function under certain conditions, followed by KICcSB, KICcHM, KIC, and AIC, in that order. KIC also exceeds AICc under certain conditions, whereas AICc always exceeds AIC. The performances of all model selection criteria are examined in an extensive simulation study. We conclude that a larger penalty term may lead to underfitting and slow convergence, while a smaller penalty term may lead to overfitting and inconsistency. When the sample size is small to moderate and the true model is difficult to identify, AIC and AICc perform better than the others, although they still identify the true model with low accuracy. When the sample size is large, the differences among all criteria are insignificant, but every criterion still identifies the true model with low accuracy. We therefore used the observed efficiency to assess the performance of the criteria. On average, this measure suggests that for a weakly identifiable true model, whether the sample size is small or large, KICcC is the best criterion. For small sample sizes where the true model is easy to specify and the error variance is small, every criterion retains the ability to select the correct model; if the error variance increases, all criteria perform poorly. For moderate to large sample sizes, KICc performs best, identifying the true model frequently when the error variance is small; but if the error variance increases and the sample size is not large enough, all criteria identify the true model only rarely. (en_US)
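The criteria compared in the abstract all trade off goodness of fit against a penalty on model size. As a minimal sketch (not taken from the paper), the snippet below uses common textbook forms of AIC, AICc, and KIC for a Gaussian linear regression with n observations, k estimated parameters, and residual sum of squares RSS; the exact penalty forms, and the KICc variants (KICcC, KICcSB, KICcHM), are defined in the paper itself and are not reproduced here.

```python
import math

# Common textbook forms (assumed, not quoted from the paper):
#   AIC  = n * ln(RSS / n) + 2k
#   AICc = AIC + 2k(k + 1) / (n - k - 1)   (small-sample correction, always positive)
#   KIC  = n * ln(RSS / n) + 3k            (one common form of the Kullback symmetric criterion)

def aic(rss: float, n: int, k: int) -> float:
    """Akaike information criterion (up to an additive constant)."""
    return n * math.log(rss / n) + 2 * k

def aicc(rss: float, n: int, k: int) -> float:
    """Corrected AIC; requires n > k + 1."""
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

def kic(rss: float, n: int, k: int) -> float:
    """Kullback information criterion with the heavier 3k penalty."""
    return n * math.log(rss / n) + 3 * k

# The penalty ordering the abstract describes is visible directly:
# for any valid rss, n, k, AICc > AIC (positive correction) and
# KIC - AIC = k, so KIC penalizes model size more heavily than AIC.
```

In model selection, each candidate model's criterion value is computed and the model with the smallest value is chosen; the abstract's point is that heavier penalties (as in the KICc family) guard against overfitting at the risk of underfitting.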
dc.language: Eng (en_US)
dc.publisher: Science Faculty of Chiang Mai University (en_US)
dc.title: Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler's Information (en_US)
dc.type: Journal article (en_US)
article.title.sourcetitle: Chiang Mai Journal of Science (en_US)
article.volume: 44 (en_US)
article.stream.affiliations: Department of Mathematics and Statistics, Faculty of Science, Thaksin University, Phatthalung, Thailand. (en_US)
article.stream.affiliations: Faculty of Business Administration, Dhurakij Pundit University, Bangkok, Thailand. (en_US)
Appears in Collections:CMUL: Journal Articles

Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.