
Learning and evaluating classifiers under sample selection bias


Association for Computing Machinery — Jul 4, 2004



Datasource: Association for Computing Machinery
Copyright: © 2004 ACM Inc.
ISBN: 1-58113-838-5
DOI: 10.1145/1015330.1015425

Abstract

Learning and Evaluating Classi ers under Sample Selection Bias Bianca Zadrozny IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 zadrozny@us.ibm.com Abstract Classi er learning methods commonly assume that the training data consist of randomly drawn examples from the same distribution as the test examples about which the learned model is expected to make predictions. In many practical situations, however, this assumption is violated, in a problem known in econometrics as sample selection bias. In this paper, we formalize the sample selection bias problem in machine learning terms and study analytically and experimentally how a number of well-known classi er learning methods are a €ected by it. We also present a bias correction method that is particularly useful for classi er evaluation under sample selection bias. In both cases, even though the available examples are not a random sample from the true underlying distribution of examples, we would like to learn a predictor from the examples that is as accurate as possible for this distribution. Furthermore, we would like to be able to estimate its accuracy for the whole population using the available data. This problem has received a great deal of attention in econometrics, where it is
