Strategyproof Classification with Shared Inputs   
Abstract: Strategyproof classification deals with a setting where a decision-maker must classify a set of input points with binary labels, while minimizing the expected error. The labels of the input points are reported by self-interested agents, who might lie in order to obtain a classifier that more closely matches their own labels, thus creating a bias in the data; this motivates the design of truthful mechanisms that discourage false reports. Previous work (Meir et al., 2008) investigated both decision-theoretic and learning-theoretic variations of the setting, but only considered classifiers that belong to a degenerate class. In this paper we assume that the agents are interested in a shared set of input points. We show that this plausible assumption leads to powerful results. In particular, we demonstrate that variations of a truthful random dictator mechanism can guarantee approximately optimal outcomes with respect to any class of classifiers.
Published: International Joint Conference on Artificial Intelligence (IJCAI), 2009
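The abstract refers to a truthful random dictator mechanism for classification with shared inputs. As a minimal illustrative sketch (not the paper's actual construction), assuming the standard setting of Meir et al. (2008): each agent reports a binary label for every point in a shared input set, the mechanism picks one agent uniformly at random, and it returns the classifier from the given class that best fits that agent's reported labels. The concept class below (1-D thresholds) and all function names are illustrative choices, not from the paper.

```python
import random
from typing import Callable, List, Sequence

# Sketch of a random dictator mechanism for strategyproof classification
# with shared inputs. The toy concept class (threshold classifiers on the
# line) is for illustration only; the paper's guarantees concern arbitrary
# classifier classes.

Classifier = Callable[[float], int]


def threshold_class(points: Sequence[float]) -> List[Classifier]:
    """Enumerate a finite family of 1-D threshold classifiers (toy class)."""
    cuts = sorted(set(points))
    family: List[Classifier] = []
    for t in [min(cuts) - 1.0] + cuts:
        family.append(lambda x, t=t: 1 if x > t else 0)
        family.append(lambda x, t=t: 0 if x > t else 1)
    return family


def empirical_error(clf: Classifier,
                    points: Sequence[float],
                    labels: Sequence[int]) -> int:
    """Number of shared input points the classifier mislabels w.r.t. one agent."""
    return sum(clf(x) != y for x, y in zip(points, labels))


def random_dictator(points: Sequence[float],
                    reported_labels: List[Sequence[int]],
                    rng: random.Random) -> Classifier:
    """Pick one agent uniformly at random and return the classifier in the
    class minimizing that agent's empirical error on the shared points.
    Only the dictator's report affects the outcome, and the dictator receives
    their own optimum, so no agent can gain by misreporting."""
    dictator = rng.randrange(len(reported_labels))
    candidates = threshold_class(points)
    return min(candidates,
               key=lambda c: empirical_error(c, points, reported_labels[dictator]))


if __name__ == "__main__":
    shared_points = [0.1, 0.4, 0.6, 0.9]
    reports = [
        [0, 0, 1, 1],   # agent 0
        [0, 1, 1, 1],   # agent 1
        [0, 0, 0, 1],   # agent 2
    ]
    clf = random_dictator(shared_points, reports, random.Random(0))
    print([clf(x) for x in shared_points])
```

Because the points are shared across agents, a randomly chosen agent's optimal classifier is, in expectation, not too far from the classifier minimizing total error over all agents; this is the intuition behind the approximation guarantees mentioned in the abstract.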
