Reinforcement Learning/Statistical estimators: Bias and Variance

Statistical estimator

In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data.

Suppose we have a statistical model, parameterized by a real number θ, giving rise to a probability distribution for observed data, $P(x \mid \theta)$.

Assume the statistic $\hat{\theta} = \hat{\theta}(x)$ serves as an estimator of θ based on any observed data $x$. That is, we assume that our data follow some distribution $P(x \mid \theta)$ with an unknown value of θ (in other words, θ is a fixed constant that is part of this distribution, but is unknown). We construct an estimator $\hat{\theta}(x)$ that maps observed data to values that we hope are close to θ.
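For instance, the sample mean is an estimator of the mean θ of a normal distribution. A minimal sketch in Python (the distribution, the true value of θ, the sample size, and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.5                                      # true parameter, unknown in practice
x = rng.normal(loc=theta, scale=1.0, size=100)   # observed data x ~ N(theta, 1)

theta_hat = x.mean()   # the estimator: a rule mapping observed data to an estimate
print(theta_hat)       # close to 2.5, but varies from sample to sample
```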

The bias of an estimator $\hat{\theta}$ relative to θ is defined as

$$\operatorname{Bias}(\hat{\theta}) = \mathbb{E}_{x \mid \theta}\big[\hat{\theta}(x)\big] - \theta,$$

where $\mathbb{E}_{x \mid \theta}$ denotes the expected value over the distribution $P(x \mid \theta)$, i.e. averaging over all possible observations $x$.

A biased estimator exhibits a systematic difference between the estimated parameter ($\hat{\theta}$) and the real value of the parameter (θ). However, this difference usually becomes smaller as the amount of input data grows, so even a biased estimator can eventually become useful.

An estimator is said to be unbiased if its bias is equal to zero for all values of the parameter θ.
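A classic biased estimator is the sample variance computed with a 1/n factor: its expectation is $\frac{n-1}{n}\sigma^2$ rather than $\sigma^2$. The Monte Carlo sketch below (true variance, sample size, and replication count are assumptions) approximates the expectation in the bias definition by averaging each estimator over many simulated datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0   # the parameter being estimated (true variance)
n = 10         # small samples make the bias clearly visible

# Draw many independent datasets and apply both estimators to each one.
datasets = rng.normal(scale=np.sqrt(sigma2), size=(100_000, n))
biased = datasets.var(axis=1, ddof=0)     # divides by n
unbiased = datasets.var(axis=1, ddof=1)   # divides by n - 1

print(biased.mean() - sigma2)    # approx -sigma2 / n = -0.4 (systematic bias)
print(unbiased.mean() - sigma2)  # approx 0
```

Note that the bias of the 1/n estimator shrinks as n grows, matching the remark above that more data makes a biased estimator increasingly useful.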

Variance

An estimator with high variance is one whose estimate ($\hat{\theta}(x)$) is very sensitive to the input (the observed data, $x$).

The variance of an estimator is

$$\operatorname{Var}(\hat{\theta}) = \mathbb{E}_{x \mid \theta}\Big[\big(\hat{\theta}(x) - \mathbb{E}_{x \mid \theta}[\hat{\theta}(x)]\big)^{2}\Big].$$
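This expectation can be approximated by the spread of estimates across many simulated datasets. A sketch under the same assumed normal model as above, showing that the variance of the sample mean shrinks like 1/n as the amount of data grows:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.5

for n in (10, 100, 1000):
    # One sample-mean estimate per simulated dataset of size n.
    estimates = rng.normal(loc=theta, scale=1.0, size=(10_000, n)).mean(axis=1)
    print(n, estimates.var())   # approx 1 / n: the estimates concentrate around theta
```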

Mean squared error

The mean squared error (MSE) of an estimator is

$$\operatorname{MSE}(\hat{\theta}) = \mathbb{E}_{x \mid \theta}\Big[\big(\hat{\theta}(x) - \theta\big)^{2}\Big] = \operatorname{Bias}(\hat{\theta})^{2} + \operatorname{Var}(\hat{\theta}),$$

so the expected squared deviation from the true parameter decomposes into the squared bias plus the variance of the estimator.
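This decomposition can be checked numerically; the sketch below reuses the biased 1/n variance estimator and the assumed parameters from the bias example:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n = 4.0, 10

datasets = rng.normal(scale=np.sqrt(sigma2), size=(100_000, n))
est = datasets.var(axis=1, ddof=0)    # the biased 1/n variance estimator

mse = ((est - sigma2) ** 2).mean()    # direct Monte Carlo estimate of the MSE
bias = est.mean() - sigma2
var = est.var()
print(mse, bias**2 + var)             # the two quantities agree
```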