Weighted bandits or: How bandits learn distorted values that are not expected. (arXiv:1611.10283v2 [cs.LG] UPDATED)

Motivated by models of human decision making proposed to explain commonly
observed deviations from conventional expected value preferences, we formulate
two stochastic multi-armed bandit problems with distorted probabilities on the
reward distributions: the classic $K$-armed bandit and the linearly
parameterized bandit settings. We study both problems in the regret
minimization as well as the best arm identification frameworks for
multi-armed bandits. For the regret minimization setting in $K$-armed as well
as linear bandit problems, we propose algorithms that are inspired by Upper
Confidence Bound (UCB) algorithms, incorporate reward distortions, and exhibit
sublinear regret. For the $K$-armed bandit setting, we derive an upper bound on
the expected regret for our proposed algorithm, and then we prove a matching
lower bound to establish the order-optimality of our algorithm. For the
linearly parameterized setting, our algorithm achieves a regret upper bound
of the same order as that of the Optimism in the Face of Uncertainty Linear
bandit (OFUL) algorithm for regular linear bandits; unlike OFUL, it handles
distortions and an arm-dependent noise model. For
the best arm identification problem in the $K$-armed bandit setting, we propose
algorithms, derive performance guarantees, and show that these algorithms are
order-optimal by proving matching fundamental limits. For best arm
identification in linear bandits, we propose an
algorithm and establish sample complexity guarantees. Finally, we present
simulation experiments that demonstrate the advantages of using
distortion-aware learning algorithms in a vehicular traffic routing
application.
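
To make the distorted-value idea concrete, here is a minimal sketch (not the authors' implementation) of a distortion-aware UCB rule for the $K$-armed setting. It assumes rewards in [0, 1], a probability weighting function w applied to the empirical tail distribution of each arm's observed rewards, and an exploration bonus scaled by a constant L standing in for the regularity of w. The names `distorted_value`, `tk_weight`, `w_ucb`, and `pull_arm`, as well as the exact bonus term, are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def distorted_value(samples, w):
    """Estimate the distorted value of an arm from reward samples in [0, 1]:
    integrate w applied to the empirical complementary CDF, i.e.
    sum_i (x_(i) - x_(i-1)) * w((n - i + 1) / n) with x_(0) = 0."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    gaps = np.diff(np.concatenate(([0.0], x)))     # x_(i) - x_(i-1)
    tail = (n - np.arange(1, n + 1) + 1) / n       # empirical tail probability
    return float(np.sum(gaps * w(tail)))

def tk_weight(p, eta=0.61):
    """Tversky-Kahneman-style probability weighting function (illustrative)."""
    return p**eta / (p**eta + (1.0 - p)**eta)**(1.0 / eta)

def w_ucb(pull_arm, K, horizon, w=tk_weight, L=2.0):
    """Distortion-aware UCB sketch: pull each arm once, then repeatedly play
    the arm maximizing (distorted empirical value) + L * sqrt(2 log t / n_k)."""
    rewards = [[pull_arm(k)] for k in range(K)]
    for t in range(K + 1, horizon + 1):
        scores = [distorted_value(r, w) + L * np.sqrt(2.0 * np.log(t) / len(r))
                  for r in rewards]
        k = int(np.argmax(scores))
        rewards[k].append(pull_arm(k))
    return rewards

# Usage example: three Bernoulli arms with means 0.2, 0.5, 0.8.
means = [0.2, 0.5, 0.8]
rng = np.random.default_rng(0)
history = w_ucb(lambda k: float(rng.random() < means[k]), K=3, horizon=2000)
print([len(h) for h in history])   # pull counts per arm
```

Re-sorting each arm's samples every round keeps the sketch short; an incremental implementation would maintain sorted reward lists per arm.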

Source: https://arxiv.org/abs/1611.10283
