REVISED: Sunday, March 3, 2013
You will learn Linear Regression with one variable.
I. LINEAR REGRESSION WITH ONE VARIABLE
Linear Regression is Supervised Learning because you are given the "right answer" for each example in the data.
A Regression Problem predicts real-valued output.
A Classification Problem predicts discrete-valued output.
A. How To Choose θi's
Choose θ0, θ1 so that hθ(x) is close to y for the training examples (x, y).
Minimize J(θ0, θ1) = (1/(2m)) ∑ (i = 1 to m) ( hθ(x(i)) − y(i) )²
For fixed θ1, hθ(x) is a function of x.
J(θ1) is a function of the parameter θ1.
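To make this distinction concrete, here is a minimal sketch that treats the cost as a function of the single parameter θ1 (with θ0 fixed at 0). The dataset below is hypothetical, chosen so the training examples lie exactly on the line y = x:

```python
# J(theta1): the squared-error cost as a function of the single
# parameter theta1, with theta0 fixed at 0.
def J(theta1, xs, ys):
    m = len(xs)
    return sum((theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

# Hypothetical training set (m = 3) lying on the line y = x.
xs = [1.0, 2.0, 3.0]
ys = [1.0, 2.0, 3.0]

# Evaluating J at several theta1 values shows a minimum at theta1 = 1,
# the slope of the line the data lies on.
for theta1 in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(theta1, J(theta1, xs, ys))
```

Because the data lies exactly on y = x, J(1) = 0 and the cost grows as θ1 moves away from 1 in either direction.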
B. Examples
m = 3 (number of training examples)
Hypothesis: hθ(x) = θ0 + θ1x
Parameters: θ0, θ1
The cost function is
J(θ0, θ1) = (1/(2m)) ∑ (i = 1 to m) ( hθ(x(i)) − y(i) )²
Goal is to minimize J (θ0, θ1)
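As a sketch of that goal, the snippet below minimizes J(θ0, θ1) by brute-force grid search over a hypothetical m = 3 training set. (Gradient descent is the usual method; a grid search just makes the "minimize J" goal concrete on a tiny example.)

```python
# Squared-error cost for the hypothesis h(x) = theta0 + theta1 * x.
def J(theta0, theta1, xs, ys):
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2
               for x, y in zip(xs, ys)) / (2 * m)

xs = [1.0, 2.0, 3.0]   # hypothetical m = 3 training set
ys = [3.0, 5.0, 7.0]   # lies exactly on the line y = 1 + 2x

# Search a coarse grid of (theta0, theta1) pairs for the lowest cost.
grid = [i / 10 for i in range(-30, 31)]   # values from -3.0 to 3.0
best = min((J(t0, t1, xs, ys), t0, t1) for t0 in grid for t1 in grid)
print(best)  # cost 0.0 at theta0 = 1.0, theta1 = 2.0
```

Because the data lies exactly on y = 1 + 2x and the grid contains those parameter values, the search recovers θ0 = 1, θ1 = 2 with zero cost.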
You have learned Linear Regression with one variable.
Elcric Otto Circle