Robust Multi-objective Learning with Mentor Feedback

27th Conference on Learning Theory (COLT)

We study decision making when each action is described by a set of objectives, all of which are
to be maximized. During the training phase, we have access to the actions of an outside agent
(“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved)
actions across all objectives. We present an algorithm whose regret, measured against the
optimal possible improvement, vanishes, and we show that this regret bound is the best possible.
The bound is independent of the number of actions and scales only logarithmically in the number of objectives.
Keywords: multi-objective learning, apprenticeship learning, random matrix games.
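As a rough illustration (not the paper's own notation), one plausible formalization of the stated goal is the following: assume a finite action set $\mathcal{A}$, $d$ objectives with values $u_i(a)$ for action $a$, learner actions $a_1,\dots,a_T$, and (unobserved) mentor actions $a^m_1,\dots,a^m_T$; improvement over the mentor is measured in the worst objective, and regret is the gap to the best fixed action:

\[
  \mathrm{Regret}_T \;=\; \max_{a^\star \in \mathcal{A}} \, \min_{1 \le i \le d} \sum_{t=1}^{T} \bigl( u_i(a^\star) - u_i(a^m_t) \bigr)
  \;-\; \min_{1 \le i \le d} \sum_{t=1}^{T} \bigl( u_i(a_t) - u_i(a^m_t) \bigr).
\]

Under this reading, the stated result would mean $\mathrm{Regret}_T / T \to 0$, with a bound that does not depend on $|\mathcal{A}|$ and grows only as $\log d$.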