[Machine Learning]: Computing Parameters Analytically
[Foreword]:
We already know the Gradient Descent algorithm. Today we are going to talk about another way to minimize the cost function, called the Normal Equation!
[Normal Equation]:
The Normal Equation formula is given below, where \(X\) is the \(m \times (n+1)\) design matrix (each row is one training example, with a leading 1 for the intercept term) and \(Y\) is the \(m\)-vector of targets: $$\theta = (X^T X)^{-1} X^T Y$$
Here is an example of how to use it:
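Below is a minimal NumPy sketch. The data is made up purely for illustration (a few house sizes and prices); the pseudo-inverse `np.linalg.pinv` is used instead of a plain inverse so the computation still works if \(X^T X\) happens to be singular.

```python
import numpy as np

# Toy data: m = 4 training examples, one feature (house size -> price).
# These numbers are made up for illustration only.
X = np.array([[1.0, 2104.0],
              [1.0, 1416.0],
              [1.0, 1534.0],
              [1.0,  852.0]])          # design matrix with a leading column of ones
Y = np.array([460.0, 232.0, 315.0, 178.0])

# Normal equation: theta = (X^T X)^{-1} X^T Y.
# pinv (pseudo-inverse) keeps this well-defined even if X^T X is singular.
theta = np.linalg.pinv(X.T @ X) @ X.T @ Y

print(theta)  # fitted parameters [theta_0, theta_1]
```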
If we use this method, there is no need to do Feature Scaling.
[Comparison]:
| Gradient Descent | Normal Equation |
|---|---|
| Need to choose \(\alpha\) | No need to choose \(\alpha\) |
| Needs many iterations | No need to iterate |
| \(O(kn^2)\) for \(k\) iterations | \(O(n^3)\), since it must compute \((X^TX)^{-1}\) |
| Works well when n is large | Slow if n is very large |
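To make the comparison concrete, here is a small sketch that fits the same linear regression with both methods on synthetic data (the true parameters, learning rate \(\alpha\), and iteration count below are arbitrary choices for illustration); both should recover roughly the same \(\theta\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 4 + 3x plus a little noise (made-up problem).
m = 100
x = rng.uniform(0, 2, m)
X = np.column_stack([np.ones(m), x])      # add the column of ones
Y = 4 + 3 * x + rng.normal(0, 0.1, m)

# Normal equation: one shot, no alpha, no iterations.
theta_ne = np.linalg.pinv(X.T @ X) @ X.T @ Y

# Batch gradient descent: needs a learning rate alpha and many iterations.
alpha, iters = 0.1, 2000
theta_gd = np.zeros(2)
for _ in range(iters):
    grad = (X.T @ (X @ theta_gd - Y)) / m  # gradient of the squared-error cost
    theta_gd -= alpha * grad

print(theta_ne)  # approximately [4.0, 3.0]
print(theta_gd)  # should closely agree with the normal equation result
```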