Inspired by the cache replacement problem, we propose and solve a new variant
of the well-known multi-armed bandit (MAB), thus providing a solution for
improving existing state-of-the-art cache management methods. Each arm (or
expert) represents a distinct cache replacement policy, which advises on the
page to evict from the cache when needed. Feedback on the eviction comes in the
form of a "miss", but at an indeterminate time after the action is taken, and
the cost of the eviction is set to be inversely proportional to the response
time. The feedback is ignored if it comes after a threshold value for the
delay, which we set to be equal to the size of the page eviction history. Thus,
for delays beyond the threshold, the cost is assumed to be zero. Consequently,
we call this the MAB problem with delayed feedback and decaying costs.
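To make the cost model concrete, the following minimal sketch (in Python; the
function name decayed_cost and the exact reciprocal scaling are illustrative
assumptions, not notation taken from the paper) assigns an eviction a cost
that is inversely proportional to the feedback delay and zero once the delay
exceeds the history-size threshold.

    def decayed_cost(delay, history_size):
        """Illustrative decaying cost of an eviction whose miss feedback
        arrives `delay` time steps after the eviction.

        Assumption: the cost is inversely proportional to the delay and
        drops to zero once the delay exceeds the threshold, taken here to
        be the size of the page eviction history.
        """
        if delay < 1 or delay > history_size:
            return 0.0
        return 1.0 / delay

    # Example: with a history of 4 evicted pages, feedback after 1, 3, and
    # 6 steps yields costs 1.0, 0.33..., and 0.0 respectively.
    print([round(decayed_cost(d, history_size=4), 2) for d in (1, 3, 6)])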
We introduce an adaptive reinforcement learning algorithm, EXP4-DFDC, that
provides a solution to this problem. We derive an optimal learning rate for
EXP4-DFDC that defines the balance between exploration and exploitation, and
we prove theoretically that the expected regret of our algorithm is a
vanishing quantity as a function of time.
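Since the pseudocode of EXP4-DFDC is not reproduced in this abstract, the
sketch below only illustrates the general shape of an EXP4/Hedge-style
exponential-weights update over eviction policies under delayed, decaying
miss feedback; the class and parameter names (PolicyWeights, learning_rate,
history_size) are assumptions for illustration, and the actual algorithm and
its optimal learning rate may differ.

    import math
    import random

    class PolicyWeights:
        """Sketch of an exponential-weights selector over cache replacement
        policies (experts) with delayed, decaying miss feedback.

        This is not the paper's EXP4-DFDC; it only illustrates the style of
        update such an algorithm performs.
        """

        def __init__(self, n_experts, history_size, learning_rate):
            self.weights = [1.0] * n_experts
            self.history_size = history_size
            self.learning_rate = learning_rate
            self.clock = 0
            # evicted page -> (eviction time, expert whose advice was used)
            self.pending = {}

        def choose_expert(self):
            """Sample a policy with probability proportional to its weight."""
            total = sum(self.weights)
            r, acc = random.random() * total, 0.0
            for i, w in enumerate(self.weights):
                acc += w
                if r <= acc:
                    return i
            return len(self.weights) - 1

        def record_eviction(self, page, expert):
            self.clock += 1
            self.pending[page] = (self.clock, expert)

        def on_miss(self, page):
            """Delayed feedback: a miss on a page evicted earlier."""
            if page not in self.pending:
                return
            evict_time, expert = self.pending.pop(page)
            delay = self.clock - evict_time
            if delay > self.history_size:
                return                          # beyond the threshold, cost decays to zero
            cost = 1.0 / max(delay, 1)          # cost inversely proportional to the delay
            self.weights[expert] *= math.exp(-self.learning_rate * cost)

In standard exponential-weights analyses the learning rate typically decays
on the order of sqrt(ln N / T) for N experts over a horizon T; the precise
optimal rate derived in the paper for this delayed, decaying-cost setting is
not reproduced here.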
As an application, we show that LeCaR, a recent top-performing machine learning
algorithm for cache replacement, can be enhanced with adaptive learning using
our formulations. We present OLeCaR, an improved adaptive version of LeCaR,
whose learning rate is set according to the theoretical derivation presented
here so as to minimize regret for EXP4-DFDC. It then follows that LeCaR and
OLeCaR are theoretically guaranteed to have vanishing regret over time.
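As a rough illustration of how such an adaptive rate could plug into a
LeCaR-style scheme (LeCaR maintains weights over two eviction policies,
typically LRU and LFU, and penalizes the policy whose past eviction led to a
miss), the sketch below uses a generic time-decaying rate of the form
sqrt(ln N / t); this shape and the function names are assumptions made for
illustration, not the formula derived in the paper.

    import math

    def adaptive_learning_rate(t, n_experts=2):
        """Illustrative time-decaying learning rate of the generic shape
        sqrt(ln N / t); the paper derives its own optimal rate for
        EXP4-DFDC, which is not reproduced here."""
        return math.sqrt(math.log(n_experts) / max(t, 1))

    def penalize_responsible_policy(weights, expert, delay, history_size, t):
        """LeCaR-style multiplicative penalty for the policy whose earlier
        eviction caused the current miss, using a decaying cost and the
        adaptive rate above. Returns renormalized weights."""
        if delay > history_size:
            return list(weights)                # feedback past the threshold: zero cost
        cost = 1.0 / max(delay, 1)              # decaying cost
        eta = adaptive_learning_rate(t, len(weights))
        new_weights = list(weights)
        new_weights[expert] *= math.exp(-eta * cost)
        total = sum(new_weights)
        return [w / total for w in new_weights]

    # Example: two experts (e.g., LRU and LFU); expert 0's eviction caused a
    # miss after a delay of 2, with a history of 100 pages, at time step 500.
    print(penalize_responsible_policy([0.5, 0.5], expert=0, delay=2,
                                      history_size=100, t=500))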