Thursday, October 20, 2011

MATRIX MULTIPLICATION






REVISED: Sunday, March 3, 2013





You will learn Matrix Multiplication.


I.  MATRIX VECTOR MULTIPLICATION

A.  Generic Example

As shown below the multiplication of a 3x2 Matrix A, by a 2x1 Vector b, results in a 3x1 Vector c.

     A       *       b       =       c

   1   2                            15
   1   2             3              15
   4   5             6              42

A is an m x n matrix (m rows, n columns).
b is an n x 1 matrix (an n-dimensional vector).
c is an m-dimensional vector.

To get ci, multiply the elements of A's ith row by the corresponding elements of vector b, and add the products.

The multiplication steps to create vector c are shown below:
(1*3) + (2*6) =   3 + 12 = 15
(1*3) + (2*6) =   3 + 12 = 15
(4*3) + (5*6) = 12 + 30 = 42

For this to work, the number of columns in matrix A has to match the number of rows in vector b.
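The row-by-row computation above can be sketched in plain Python (a minimal sketch; the function name `mat_vec` is illustrative, not from any library):

```python
# Multiply a matrix A (given as a list of rows) by a vector b.
# Each entry of the result is the dot product of one row of A with b.
def mat_vec(A, b):
    return [sum(a * x for a, x in zip(row, b)) for row in A]

A = [[1, 2],
     [1, 2],
     [4, 5]]   # 3x2 matrix
b = [3, 6]     # 2-dimensional vector

c = mat_vec(A, b)
print(c)  # [15, 15, 42]
```

Note that `zip(row, b)` pairs each row element with the matching element of b, which is exactly the "multiply and add up" step described above.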

B.  Prediction Example

Our hypothesis is that we can predict the sales price of houses based on their square footage as follows:

hθ(x) = -20 + 0.35x

House sizes:
5678
9123
4567

S is our 3x2 house square footage matrix.
h is our 2x1 hypothesis matrix.
p is our 3x1 matrix product, the predicted sales prices of the houses, resulting from the matrix multiplication S*h.

     S                h               p

   1   5678                        1967.30
   1   9123          -20           3173.05
   1   4567           0.35         1578.45

(-20 * 1) + (0.35 * 5678) = -20 + 1987.30 = 1967.30
(-20 * 1) + (0.35 * 9123) = -20 + 3193.05 = 3173.05
(-20 * 1) + (0.35 * 4567) = -20 + 1598.45 = 1578.45

As shown below, we get the same answers by evaluating the hypothesis directly.

hθ(5678) = -20 + (0.35 * 5678) = -20 + 1987.30 = 1967.30
hθ(9123) = -20 + (0.35 * 9123) = -20 + 3193.05 = 3173.05
hθ(4567) = -20 + (0.35 * 4567) = -20 + 1598.45 = 1578.45
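The prediction step S*h can be sketched the same way (a sketch in plain Python; `mat_vec` is an illustrative helper, not a library function):

```python
# Predict house prices: each row of S is [1, square_footage],
# and h holds the hypothesis parameters [theta0, theta1].
def mat_vec(A, b):
    return [sum(a * x for a, x in zip(row, b)) for row in A]

S = [[1, 5678],
     [1, 9123],
     [1, 4567]]
h = [-20, 0.35]

p = mat_vec(S, h)
print([round(x, 2) for x in p])  # [1967.3, 3173.05, 1578.45]
```

The leading 1 in each row of S multiplies the intercept -20, which is why S has two columns even though there is only one feature.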

C.  Types

1.  Supervised Learning

Supervised Learning is when you are given the right answer for each example in the data set.

2.  Classification

Classification is when you have discrete-valued output.

3.  Regression Problem

A regression problem is when you predict real-valued output.

D.  Notation

m = Number of training examples, rows.

(x,y) = One training example, one row.

(x(i), y(i)) = ith training example.  The superscript refers to the index of the training set, the ith row.

x's = "input" variable/features.

y's = "output" variable / "target" variable.

h = Hypothesis;  h is a function that maps from x's to y's.

The hypothesis is used to make predictions.

hθ(x) = θ0 + θ1x

shorthand: h(x)

θ0 and θ1 are parameters.

h predicts that y is a linear function of x.

Linear regression with one variable, x, is called univariate linear regression.
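The univariate hypothesis hθ(x) = θ0 + θ1x can be written directly as a small function (an illustrative sketch, using the θ0 = -20 and θ1 = 0.35 values from the prediction example above):

```python
# Linear hypothesis: predicts y as a linear function of x.
def h(x, theta0=-20, theta1=0.35):
    return theta0 + theta1 * x

print(round(h(5678), 2))  # 1967.3
```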

You have learned Matrix Multiplication.

Elcric Otto Circle

















