Some people aren't worth listening to: periodically retraining classifiers with feedback from a team of end users

Joshua Lockhart, Samuel Assefa, Tucker Balch, Manuela Veloso

Document classification is ubiquitous in a business setting, but often the end users of a classifier are engaged in an ongoing feedback-retrain loop with the team that maintains it. We consider this feedback-retrain loop from a multi-agent point of view, treating the end users as autonomous agents that provide feedback on the labelled data produced by the classifier. This allows us to examine the effect on the classifier's performance of unreliable end users who provide incorrect feedback. We demonstrate a classifier that can learn which users tend to be unreliable, filtering their feedback out of the loop and thus improving performance in subsequent iterations.
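The core idea of the abstract can be illustrated with a minimal simulation sketch. This is not the paper's actual algorithm: the user names, reliabilities, the agreement-with-majority reliability estimate, and the 0.7 filtering threshold are all illustrative assumptions. Each simulated user returns correct feedback with some probability; users whose feedback rarely agrees with the majority are filtered out before relabelling.

```python
import random

random.seed(0)

# Hypothetical setup: binary labels and three simulated end users, each with
# an (unknown to the system) probability of giving correct feedback.
NUM_ITEMS = 200
TRUE_LABELS = [random.randint(0, 1) for _ in range(NUM_ITEMS)]
USER_RELIABILITY = {"alice": 0.95, "bob": 0.9, "mallory": 0.3}

def user_feedback(user, label):
    """Return the user's feedback: correct with probability = reliability."""
    if random.random() < USER_RELIABILITY[user]:
        return label
    return 1 - label

# Collect feedback from every user on every item.
feedback = {u: [user_feedback(u, y) for y in TRUE_LABELS]
            for u in USER_RELIABILITY}

# Estimate each user's trustworthiness as their agreement rate with the
# majority vote over all users' feedback.
majority = [round(sum(feedback[u][i] for u in feedback) / len(feedback))
            for i in range(NUM_ITEMS)]
agreement = {u: sum(f == m for f, m in zip(feedback[u], majority)) / NUM_ITEMS
             for u in feedback}

# Filter out users below an assumed agreement threshold of 0.7, then
# relabel items by majority vote over the remaining reliable users.
reliable = [u for u, a in agreement.items() if a >= 0.7]
relabelled = [round(sum(feedback[u][i] for u in reliable) / len(reliable))
              for i in range(NUM_ITEMS)]

accuracy = sum(y == r for y, r in zip(TRUE_LABELS, relabelled)) / NUM_ITEMS
print(sorted(reliable), round(accuracy, 2))
```

In this toy setting the low-reliability user is dropped and the relabelled data tracks the true labels closely; in a real deployment the relabelled data would feed the next retraining iteration of the classifier.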
