Hi LP,
In 4. Naive Bayes Handwritten and 5. Naive Bayes in Code with MNIST, I understand the concept of add-one smoothing, p(x|c) = (count(x, c) + 1) / (count(c) + V), for a discrete dataset.
I also know we can use a continuous Gaussian density to approximate p(x|c). The part that confused me is how to carry the smoothing technique over from discrete probability to continuous probability. I saw that in lecture 5 you added smoothing to the variance of each random variable. Why is it added there?
Thank you for your time!
smoothing in variance

Re: smoothing in variance
Thanks for your inquiry.
As you recall, MNIST is an image dataset, and some pixels (e.g. near the borders) have a constant value of 0.
For those pixels the sample variance is 0. With zero variance the Gaussian PDF blows up to infinity at the mean, which cannot be used for computation. Adding a small constant to each variance keeps it strictly positive, playing the same role for the continuous model that add-one smoothing plays for the discrete counts.
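To make this concrete, here is a minimal sketch of Gaussian Naive Bayes with variance smoothing. It uses a small synthetic dataset standing in for MNIST (the constant-zero column plays the role of a border pixel), and the smoothing constant `eps` is an assumed value, not the one from the lecture:

```python
import numpy as np

# Hypothetical stand-in for MNIST: 100 samples, 4 "pixels".
# Column 0 is constantly 0, so its per-class variance is exactly 0.
rng = np.random.default_rng(0)
X = rng.random((100, 4))
X[:, 0] = 0.0
y = rng.integers(0, 2, size=100)  # two classes

eps = 1e-2  # variance smoothing constant (assumed value)

# Per-class Gaussian parameters; without + eps, vars_ would contain a 0.
classes = np.unique(y)
means = {c: X[y == c].mean(axis=0) for c in classes}
vars_ = {c: X[y == c].var(axis=0) + eps for c in classes}

def log_gaussian(x, mu, var):
    # log N(x; mu, var), elementwise; finite because var >= eps > 0
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Score a sample: sum of per-pixel log densities (naive independence).
# With var == 0 on pixel 0 this would divide by zero.
x = X[0]
scores = {c: log_gaussian(x, means[c], vars_[c]).sum() for c in classes}
pred = max(scores, key=scores.get)
```

The same idea appears in common libraries, e.g. scikit-learn's `GaussianNB` exposes a `var_smoothing` parameter that adds a small fraction of the largest feature variance to all variances for exactly this numerical-stability reason.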