Non Stationary Bandits


Post by jiyer »

I reviewed the material on Non-Stationary Bandits, and I understand that the formula below is a running calculation of an exponentially weighted average that favors recent data:

new_mean = (1 - alpha) * old_mean + alpha * x
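In code, one step of this update is trivial (a minimal sketch on my part, assuming Bernoulli 0/1 rewards and an arbitrary alpha):

```python
def ewma_update(old_mean, x, alpha=0.1):
    """One step of the exponentially weighted running mean.
    A reward observed k steps ago ends up with weight alpha * (1 - alpha)**k,
    so recent data dominates."""
    return (1 - alpha) * old_mean + alpha * x

# Example: feed in a short stream of Bernoulli rewards.
mean = 0.0
for x in [1, 1, 0, 1]:
    mean = ewma_update(mean, x, alpha=0.5)
# With alpha=0.5 and this stream, mean ends up at 0.6875.
```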

I was trying to understand how I could adapt this formula to perform a running calculation of the parameters "a" and "b" of the beta distribution.
I haven't been able to figure this out yet. The closest I could get was to maintain a buffer containing the last N rewards, and then use that buffer to estimate "a" and "b".
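For concreteness, my buffer approach looks roughly like this (the window size N and the uniform Beta(1, 1) prior are arbitrary choices on my part):

```python
from collections import deque

N = 100                    # window size - arbitrary choice
buffer = deque(maxlen=N)   # holds only the last N Bernoulli rewards

def window_beta_params(buffer, prior_a=1, prior_b=1):
    """Estimate Beta(a, b) from the windowed rewards only.
    Note: a + b can never exceed N + prior_a + prior_b, which is
    why the posterior width ends up roughly constant."""
    successes = sum(buffer)
    failures = len(buffer) - successes
    return prior_a + successes, prior_b + failures

buffer.extend([1, 0, 1, 1])
a, b = window_beta_params(buffer)  # a=4, b=2
```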

The problem with this approach is that my posterior is always roughly the same width - it doesn't get "skinnier" or "fatter", since the window caps the total evidence at N observations.

Ideally, if the rewards of my bandit are changing over time, I was hoping the posteriors would shift from "skinny" (old stable click-thru rate) to "fat" (period of uncertainty) and then finally back to "skinny" (new stable click-thru rate).
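My best guess so far at applying the same exponential-discounting idea directly to the counts - decay "a" and "b" toward the prior each step, then add the new observation - but I'm not at all sure this is the intended technique:

```python
def discounted_beta_update(a, b, x, gamma=0.99, prior_a=1.0, prior_b=1.0):
    """My guess at an exponentially discounted Beta posterior update.
    Each step, shrink both counts toward the prior by a factor gamma,
    then add the new Bernoulli observation x. The effective sample size
    is bounded near 1 / (1 - gamma), so old evidence fades out.
    (Hypothetical sketch - not from the course material.)"""
    a = gamma * a + (1 - gamma) * prior_a + x
    b = gamma * b + (1 - gamma) * prior_b + (1 - x)
    return a, b

# Example: a short run of mostly-1 rewards with fast forgetting.
a, b = 1.0, 1.0
for x in [1, 1, 1, 0, 1]:
    a, b = discounted_beta_update(a, b, x, gamma=0.9)
```

Does something like this give the skinny-to-fat-to-skinny behavior I described, or is there a standard way to do it?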

It would be great if the training material could touch on this subject.
BTW - I just hope I didn't miss it if you already covered this in the training.

Return to “Bayesian Machine Learning in Python: A/B Testing”