[Machine Learning]: Computing Parameters Analytically
[Write in Front]:
We already know the Gradient Descent algorithm. Today we are going to talk about another way to minimize the cost function, called the Normal Equation!
[Normal Equation]:
The Normal Equation formula is given below, where $X$ is the design matrix (one training example per row, plus a leading column of ones for the intercept term) and $y$ is the vector of target values:

$$\theta = (X^T X)^{-1} X^T y$$
Here is a minimal sketch of how to use it in Python with NumPy (the feature values and prices below are made up purely for illustration):
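```python
# A minimal sketch: solving for theta with the normal equation using NumPy,
# on a small made-up dataset (values are invented for illustration).
import numpy as np

# Made-up training data: 4 examples, 2 features (e.g. house size, bedrooms)
X_raw = np.array([[2104.0, 3.0],
                  [1416.0, 2.0],
                  [1534.0, 3.0],
                  [ 852.0, 2.0]])
y = np.array([460.0, 232.0, 315.0, 178.0])

# Add the column of ones for the intercept term theta_0
m = X_raw.shape[0]
X = np.hstack([np.ones((m, 1)), X_raw])

# Normal equation: theta = (X^T X)^{-1} X^T y
# pinv is used instead of inv so the code still works when X^T X is
# non-invertible (e.g. redundant features or too few examples).
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)   # learned parameters [theta_0, theta_1, theta_2]
```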
Note that if we use this method, there is no need to do feature scaling.
[Comparison]:
| Gradient Descent | Normal Equation |
| --- | --- |
| Need to choose α | No need to choose α |
| Needs many iterations | No need to iterate |
| $O(kn^2)$ | $O(n^3)$ |
| Works well when n is large | Slow if n is very large |

Here $n$ is the number of features and $k$ is the number of gradient descent iterations; the $O(n^3)$ cost of the Normal Equation comes from computing $(X^T X)^{-1}$, which is why it gets slow when $n$ is very large.
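As a rough illustration of this trade-off (this sketch is not from the original post; the data and the learning rate below are made up), both methods fit the same tiny dataset and should arrive at roughly the same parameters, but gradient descent needs a learning rate α and many iterations while the Normal Equation is a single linear-algebra solve:

```python
# A rough illustration: both methods fit the same tiny made-up dataset
# and arrive at (approximately) the same parameters.
import numpy as np

# One feature, 5 made-up examples (roughly y = 2x)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 6.0, 8.1, 10.0])
m = x.shape[0]
X = np.column_stack([np.ones(m), x])   # add intercept column

# Normal equation: one solve, no alpha, no iterations
theta_ne = np.linalg.pinv(X.T @ X) @ X.T @ y

# Batch gradient descent: needs a learning rate alpha and many iterations
theta_gd = np.zeros(2)
alpha = 0.05
for _ in range(5000):
    grad = (X.T @ (X @ theta_gd - y)) / m   # gradient of the squared-error cost
    theta_gd -= alpha * grad

print(theta_ne)   # roughly [0.04, 2.0]
print(theta_gd)   # should be close to theta_ne
```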