University of Birmingham > Talks@bham > Artificial Intelligence and Natural Computation seminars > Controlled Permutations for Testing Adaptive Classifiers
Controlled Permutations for Testing Adaptive Classifiers
If you have a question about this talk, please contact Leandro Minku.

The talk will address the evaluation of online classifiers that are designed to adapt to changes in the data distribution over time (concept drift). A standard procedure for evaluating such classifiers is test-then-train, which iteratively uses each incoming instance first for testing and then for updating the classifier. Such evaluation risks overfitting, since the dataset is processed only once, in a fixed sequential order, while every output of the classifier depends on the instances seen so far. The problem is particularly serious when several classifiers are compared, since the same test set arranged in a different order may indicate a different winner. To reduce this risk, we propose running multiple tests on permuted data. The proposed procedure allows us to assess the robustness of classifiers when changes happen unexpectedly.

This talk is part of the Artificial Intelligence and Natural Computation seminars series.