Controlled Permutations for Testing Adaptive Classifiers

If you have a question about this talk, please contact Leandro Minku.

The talk will address the evaluation of online classifiers that are designed to adapt to changes in the data distribution over time (concept drift). The standard procedure for evaluating such classifiers is test-then-train, which iteratively uses each incoming instance first for testing and then for updating the classifier. Such evaluation risks overfitting, since the dataset is processed only once, in a fixed sequential order, while every output of the classifier depends on the instances seen so far. The problem is particularly serious when several classifiers are compared: the same test set arranged in a different order may indicate a different winner. To reduce this risk we propose running multiple tests on permuted copies of the data. The proposed procedure allows us to assess the robustness of classifiers when changes happen unexpectedly.

This talk is part of the Artificial Intelligence and Natural Computation seminars series.


Talks@bham, University of Birmingham.