Tight bounds for strategyproof classification   
Abstract: Strategyproof (SP) classification considers situations in which a decision-maker must classify a set of input points with binary labels, minimizing expected error. Labels of input points are reported by self-interested agents, who may lie so as to obtain a classifier more closely matching their own labels. These lies would create a bias in the data, and thus motivate the design of truthful mechanisms that discourage false reporting. We here answer questions left open by previous research on strategyproof classification [12, 13, 14], in particular regarding the best approximation ratio (in terms of social welfare) that an SP mechanism can guarantee for n agents. Our primary result is a lower bound of 3 − 2/n on the approximation ratio of SP mechanisms under the shared-inputs assumption; this shows that the previously known upper bound (for uniform weights) is tight. The proof relies on a result from Social Choice theory, showing that any SP mechanism must select a dictator at random, according to some fixed distribution. We then show how different randomizations can improve the best known mechanism when agents are weighted, matching the lower bound with a tight upper bound. These results contribute both to a better understanding of the limits of SP classification and to the development of similar tools in other, related domains such as SP facility location.
Published in: Autonomous Agents and Multiagent Systems / Agent Theories, Architectures, and Languages, 2011
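The abstract's proof argument rests on the observation that any SP mechanism in this setting must select a dictator at random according to some fixed distribution. The snippet below is a minimal sketch of what such a random-dictator mechanism looks like in the shared-inputs setting, where all agents label the same points; the function name, signature, and the weighted-selection parameter are illustrative assumptions, not the paper's exact mechanism.

```python
import random


def random_dictator_classifier(reported_labels, weights=None, rng=None):
    """Illustrative random-dictator mechanism for SP classification.

    reported_labels: n label vectors (one per agent), each a list of +1/-1
    labels over the same m shared input points.
    weights: optional agent weights; the dictator is drawn with probability
    proportional to weight (uniform if omitted).

    The selection distribution is fixed in advance and independent of the
    reports, and the output depends only on the selected agent's report, so
    no agent can gain by misreporting: this is the intuition behind
    strategyproofness of random dictatorship.
    """
    rng = rng or random.Random()
    n = len(reported_labels)
    weights = weights or [1.0] * n
    # Draw the dictator from the fixed distribution over agents.
    dictator = rng.choices(range(n), weights=weights, k=1)[0]
    # Return a classifier that reproduces the dictator's labels
    # on the shared input points.
    chosen = list(reported_labels[dictator])
    return lambda point_index: chosen[point_index]


if __name__ == "__main__":
    # Three agents labeling the same four shared points.
    reports = [[+1, +1, -1, -1],
               [+1, -1, -1, -1],
               [-1, +1, +1, -1]]
    clf = random_dictator_classifier(reports)
    print([clf(i) for i in range(4)])
```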
