Classification is one of the most important techniques in data analysis, and the decision tree is among the most commonly used classification techniques. Training data sets are not error free: values of numerical attributes are inherently subject to measurement errors introduced during data collection. Such errors can be handled by assuming an appropriate error model, such as a Gaussian error distribution, and correcting the data by fitting that model to the training set. Existing decision tree construction algorithms, however, do not account for these errors during classifier construction. As a result, their classification results are often less accurate, or even inaccurate, when the training data contain errors.
We propose to construct a new, more effective decision tree classifier by applying an existing decision tree construction algorithm to error-corrected numerical attributes of the training data, where the errors are corrected using a truncated Gaussian distribution. The resulting method is called the error-corrected decision tree classifier construction algorithm. It proves more effective in classification accuracy than the existing decision tree construction algorithm: its computational complexity is approximately the same, but its classification accuracy is substantially higher.
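One way to read the correction step is that each observed value of a numerical attribute is replaced by the mean of a Gaussian error model truncated to the attribute's valid range; a standard decision tree is then trained on the corrected values. The sketch below illustrates this interpretation only; the per-value correction and the error standard deviation `sigma` are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch (not the paper's exact algorithm): correct a noisy
# numerical observation by replacing it with the mean of a Gaussian
# error model truncated to the attribute's valid range [lo, hi]. Any
# standard decision tree algorithm is then run on the corrected data.
import math

def phi(z):
    """Standard normal probability density function."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncated_mean(v, sigma, lo, hi):
    """Mean of N(v, sigma^2) truncated to [lo, hi]: the corrected value
    for an observation v under a truncated-Gaussian error model."""
    a, b = (lo - v) / sigma, (hi - v) / sigma
    mass = Phi(b) - Phi(a)
    return v + sigma * (phi(a) - phi(b)) / mass

# An observation near the lower bound is pulled inward, because the
# error model cannot place probability below the valid range.
print(round(truncated_mean(0.2, 1.0, 0.0, 10.0), 3))  # → 0.875
```

Note that an observation far from both bounds (e.g. `v = 5.0` in a `[0, 10]` range) is left essentially unchanged, since the truncation is symmetric around it; the correction matters most near the boundaries of the attribute's valid range.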
Keywords: decision tree, error-corrected values of numerical attributes, training data sets containing numerical attributes, measurement errors in training data sets, types of errors in training data sets, classification, data mining