In this post, I showed you that if you keep moving your mean in the right direction in response to samples, you will eventually reach the true ("expected") mean.
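Here's a minimal sketch of that idea. The function name and the synthetic Gaussian samples are my own choices for illustration; the key line is the update that nudges the running guess toward each new sample, with a step size of 1/n (which makes the guess exactly the sample mean):

```python
import random

def running_mean(samples):
    """Move a guess toward each new sample with a shrinking step size.

    With a step of 1/n, this incremental update is exactly the sample mean.
    """
    mean = 0.0
    for n, x in enumerate(samples, start=1):
        mean += (x - mean) / n  # nudge the guess a little toward sample x
    return mean

random.seed(0)
samples = [random.gauss(5.0, 2.0) for _ in range(100_000)]
print(running_mean(samples))  # close to the true mean of 5.0
```

Each update only needs the current guess and the latest sample, so you never have to store the whole history.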
This is more general than you think.
Consider the example of finding a weight vector that fits some training examples. The delta rule says that, after each training example, you move your weight vector in the direction that brings its prediction a little closer to the example you just saw.

In this case, your "mean" is a vector: the weight vector. To head toward the "true" weight vector (the one that predicts best), in response to each training sample you move your weight vector so that the resulting prediction is closer to that sample's actual value.
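A sketch of that update for a linear model, in the least-mean-squares style. The learning rate, epoch count, and the synthetic target y = 2*x0 - 3*x1 are all hypothetical choices of mine, not anything from the post:

```python
import random

def delta_rule(examples, dim, lr=0.01, epochs=50):
    """After each example (x, y), nudge the weights so the prediction
    w . x moves a little closer to the target y."""
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            error = y - pred
            # move each weight in the direction that shrinks the error
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical training data drawn from y = 2*x0 - 3*x1
random.seed(1)
inputs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
examples = [((x0, x1), 2 * x0 - 3 * x1) for x0, x1 in inputs]

w = delta_rule(examples, dim=2)
print(w)  # approaches [2.0, -3.0]
```

Note the shape of the update: new guess = old guess + step * error. It's the same form as the running-mean update, just with the "error" measured through a prediction.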
Do you notice a pattern? In both the "mean" case and the "weights" case, you move your guess in the direction the current sample points. Moving a mean closer to a sample is very straightforward; moving a weight vector in the correct direction has an additional layer of indirection (through the prediction), but conceptually it's the same!
Some problems are perfect for this: they are structured so that, as long as you move a little closer to each sample you see, your guess will eventually reach ("converge to") the true value as the number of samples grows.
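One caveat worth knowing: *how far* you step matters. Classically, convergence needs the step sizes to shrink over time (but not too fast). Here's a small comparison I put together; the step schedules and the Gaussian samples are my own illustrative choices:

```python
import random

def track(samples, step):
    """Move a guess toward each sample; step(n) controls how far."""
    guess = 0.0
    for n, x in enumerate(samples, start=1):
        guess += step(n) * (x - guess)
    return guess

random.seed(2)
samples = [random.gauss(3.0, 1.0) for _ in range(50_000)]

print(track(samples, lambda n: 1 / n))  # shrinking steps: settles near 3.0
print(track(samples, lambda n: 0.5))    # big constant step: keeps jittering around 3.0
```

With a constant step, the guess never settles; it keeps chasing the noise in each new sample. With shrinking steps, later samples perturb the guess less and less, so it can converge.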
Even for problems that don't fit this naturally, you can often reshape them so they do. This is an extremely powerful strategy, so remember:
For a lot of problems, as you get new samples, as long as you move your "guess" so that it is "closer" (by some definition) to each sample, your guess will eventually approach the true, expected value.