Publication | Closed Access
Robbing the bandit
Citations: 62
References: 0
Year: 2006
Topics: Mathematical Programming · Machine Learning · Game Theory · Computational Complexity · Iterated Decision Making · Geometric Optimization · Online Problem · Combinatorial Optimization · Decision Theory · Online Algorithm · Computer Science · Exploration vs. Exploitation · Stochastic Optimization · Additive Regret
We consider "online bandit geometric optimization," a problem of iterated decision making in a largely unknown and constantly changing environment. The goal is to minimize "regret," defined as the difference between the actual loss of an online decision-making procedure and that of the best single decision in hindsight. "Geometric optimization" refers to a generalization of the well-known multi-armed bandit problem, in which the decision space is some bounded subset of R^d, the adversary is restricted to linear loss functions, and regret bounds should depend on the dimensionality d rather than on the total number of possible decisions. "Bandit" refers to the setting in which the algorithm is only told its loss on each round, rather than the entire loss function.

McMahan and Blum [10] presented the best known algorithm in this setting, and proved that its expected additive regret is O(poly(d) T^{3/4}). We simplify and improve their analysis of this algorithm to obtain regret O(poly(d) T^{2/3}).

We also prove that, for a large class of full-information online optimization problems, the optimal regret against an adaptive adversary is the same as against a non-adaptive adversary.
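The notion of additive regret in the abstract can be made concrete with a small simulation. The sketch below is not the McMahan–Blum algorithm or the paper's improved analysis; it only illustrates the definitions: a decision set in R^d, an adversary choosing linear loss functions, bandit feedback (the learner sees only its own loss each round, here via a naive uniform-random strategy), and regret measured against the best single decision in hindsight. All names and the toy decision set are illustrative assumptions.

```python
import random

random.seed(0)

# Toy instance of online linear optimization over a finite decision set in R^d.
# This illustrates only the *definition* of additive regret from the abstract,
# not the algorithm analyzed in the paper.
d = 3
decisions = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # a bounded subset of R^d
T = 1000

def linear_loss(x, c):
    """Adversary's loss on one round: the inner product <c, x>."""
    return sum(ci * xi for ci, xi in zip(c, x))

algo_loss = 0.0
cum_loss = [0.0] * len(decisions)  # hindsight cumulative loss of each fixed decision

for t in range(T):
    c = [random.random() for _ in range(d)]  # adversary picks a linear loss vector
    # Bandit feedback: the learner observes only linear_loss(decisions[i], c),
    # never the full vector c. Here the learner plays uniformly at random,
    # a deliberately naive baseline.
    i = random.randrange(len(decisions))
    algo_loss += linear_loss(decisions[i], c)
    for j, x in enumerate(decisions):
        cum_loss[j] += linear_loss(x, c)

# Additive regret: the algorithm's total loss minus that of the best
# single decision in hindsight.
regret = algo_loss - min(cum_loss)
print(f"regret of uniform play after {T} rounds: {regret:.1f}")
```

A low-regret algorithm drives this difference down to o(T); the paper's contribution is tightening the known bound for the bandit geometric setting from O(poly(d) T^{3/4}) to O(poly(d) T^{2/3}).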