For more information, please see the full course syllabus of Linear Algebra

## Table of Contents

### Matrix of a Linear Map

- Intro 0:00
- Matrix of a Linear Map 0:11
- Theorem 1
- Procedure for Computing the Matrix: Step 1
- Procedure for Computing the Matrix: Step 2
- Procedure for Computing the Matrix: Step 3
- Matrix of a Linear Map: Property
- Example 1
- Example 2
- Example 3

### Transcription: Matrix of a Linear Map

*Welcome back to Educator.com, welcome back to linear algebra.*0000

*Today we are going to talk about the matrix of a linear map.*0004

*Okay. Let us just jump right in. We have already seen that when you have a linear map from Rn to Rm, let us say from R3 to R5... that that linear map is always representable by some matrix... a 5 by 3 matrix in this case. Always.*0009

*So, today, we want to generalize that result and deal with linear maps in general, not necessarily from one Euclidean space to another Euclidean space, but any vector space at all.*0030

*So, let us go ahead and start with a theorem. Some people might call this... most of you are familiar, of course, with the fundamental theorem of calculus. There's also something called the fundamental theorem of algebra that concerns the roots of a polynomial equation.*0042

*In some sense, if you want, this theorem that I am about to write... you can consider it the fundamental theorem of linear algebra.*0058

*It sort of ties everything together, if you want to call it that. Some people do, some people don't; it certainly is not historically referred to that way the way the others are... but this sort of brings the entire course together... everything we have done has sort of come to this one point.*0065

*Let us go ahead and write it down very carefully, and talk about it, do some examples.*0085

*Here - okay - the statement of this theorem is a bit long, but there is nothing strange about it. Let L from v to w be a linear map from an n-dimensional vector space into an m-dimensional vector space.*0094

*Again, we are talking about finite dimensional vector spaces, always. We are not talking about infinite dimensional vector spaces.*0132

*There is a branch of mathematics that does deal with that called functional analysis, but we are concerned with finite.*0137

*An n-dimensional vector space, sorry about that, okay... we will let s, which equals the set v1, v2, and so on to vn, be a basis for v.*0152

*And t, which equals w1, w2, and so on and so forth, on to wm... be a basis for w, the arrival space.*0177

*I really love referring to them as departure and arrival space. It is a lot more clear that way.*0195

*Then, the m by n matrix a, whose j^{th} column is [L(v_{j})]_{t}, the coordinate vector of L(v_{j}) with respect to t... and I will explain what all of this means in just a minute, do not worry about the notation... is the matrix associated with the linear map, and it has the following property.*0202

*[L(x)]_{t} is equal to a × [x]_{s}. Okay. So, let me read through this and talk a little bit about what it means.*0274

*So, L is a linear map from a finite dimensional vector space, excuse me, to another finite dimensional vector space. The dimensions do not necessarily have to be the same, it is a linear map.*0293

*Okay. S is a basis for v, the departure space. T is a basis for the arrival space. Okay. Then, there is a matrix associated with this linear map of 2 arbitrary vector spaces.*0302

*There is a matrix associated with this, and the columns of the matrix happen to be, so for example if I want the first column of this particular matrix, I actually perform L the linear map, I perform it on the basis vectors for the departure space.*0319

*Then once I find those values, I find their coordinate vectors with respect to the basis t.*0338

*You remember coordinates... and once I put them in the columns, that is the matrix that is associated with this linear map. The same way that this matrix was associated with the linear map from one Euclidean space to another, R4 to R7.*0345

*You know, giving you a 7 by 4 matrix. Well, remember we talked about coordinates. You know a vector can be represented by a linear combination of the elements of the basis.*0361

*So, if we have a basis for the elements of a particular space, all we have to do is... the coordinates are just the particular constants that make up that linear combination.*0372

*In some sense, we are sort of associating a 5-dimensional random vector space with R5. We are giving it numbers, that is... we are labeling it. That is what we are doing, and it has an interesting property.*0380

*That if I take some random vector in the departure space, and I perform some operation on it, and then I find its coordinate vector with respect to the basis t, it is the same as if I take that x before I do anything to it. Find its coordinate vector with respect to s in the departure space, and then multiply it by this particular matrix. I get the same answer.*0394

*So, let us just do some examples and I think it will make a lot more sense. So, let us see, but before I do... let me write out a procedure explicitly for how to compute the matrix of the linear map.*0420

*Okay. So, let us do this in red. Oops - there it is. Wow, that was interesting. That is a strange line. Alright.*0440

*So, procedure for computing the matrix of L from v to w, the matrix of a linear map with s and t as respective bases.*0458

*So s is a basis for v, t is a basis for w, and it is the same as the theorem up here. S we will represent as v1, v2, all the way to vN. W we will... t we will represent with w1, w2, all the way... all the way through there.*0495

*Okay. So, the first thing you do is compute L(v_{j}); in other words, take all of the vectors in the basis for the departure space, and perform the particular linear operation on them. Just perform the function and see what you get.*0514

*Step 2, now, once you have those, you want to find the coordinate vector with respect to t, what that means is if you remember right, and if you do not you can review the previous lesson where we talked about coordinate vectors.*0539

*We did a fair number of examples if I remember right - express the L(v_{j}) that you got from the first step as a linear combination of the vectors w1, w2, to wm. The vectors in the t basis... w1, w2... wm, and we will be doing this in a minute, so do not worry about the procedure if you do not remember it.*0558

*Three, we take the thing we got from step 2, and we set this as the j^{th} column of the matrix.*0592

*So, we do it for each basis vector of the departure space, and we... so if we have n vectors, we will have n columns.*0608

*That will be our matrix, and we are done. Okay.*0617
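The three-step procedure above can be sketched in code. This is an editor's sketch, not part of the lecture; it assumes NumPy, represents the linear map as a Python function, and the bases as lists of vectors (the function name is my own).

```python
import numpy as np

def matrix_of_linear_map(L, S, T):
    """Compute the matrix of a linear map L with respect to
    bases S (departure space) and T (arrival space)."""
    # Arrange the arrival-space basis vectors as the columns of a matrix
    T_mat = np.column_stack([np.asarray(w, dtype=float) for w in T])
    columns = []
    for v in S:
        # Step 1: apply L to each basis vector of the departure space
        Lv = np.asarray(L(np.asarray(v, dtype=float)), dtype=float)
        # Step 2: find the coordinate vector of L(v_j) with respect to T,
        # i.e. solve T_mat @ c = L(v_j) for the coefficients c
        c = np.linalg.solve(T_mat, Lv)
        # Step 3: that coordinate vector becomes the j-th column
        columns.append(c)
    return np.column_stack(columns)
```

For instance, feeding in the map and natural bases from Example 1 below reproduces the matrix computed in the lecture.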

*Let us just... before I do that, I actually want to give you a little pictorial representation of what it is that is actually going on here. So, if I take x... let me show you what it is that I am talking about. Let me go back to blue.*0622

*What that property means. What it really means... [L(x)]_{t} = a × [x]_{s}, this was the last thing that we wrote in the theorem, and we said that the coordinate vector, under the transformation, of some random vector in the departure space is equal to the matrix that we end up computing times the coordinate vector with respect to the basis s of the departure space. Here is what this means.*0649

*It means if I take some random x in the departure space, I can perform L on it and I get L(x), of course. Well, and then of course from there I can go ahead and find the coordinate vector, which is the L(x) with respect to some basis t.*0680

*So, in other words, I go from my departure space to my arrival space and then I actually convert that to some coordinate, because I need to deal with some numbers.*0707

*Well, as it turns out, instead what I can do is I can go ahead and just take x in my departure space, find its coordinate vector with respect to the basis of the departure space, and then I can just multiply by a, the matrix a that I compute.*0714

*They end up actually being the same thing. I can either go directly, or I can go through the matrix. That is what... you will see these often in algebraic courses in mathematics... is you will often see different paths to a particular place that you want to get to.*0732

*You can either do it directly from L, or you can do it through the matrix. And, it is nice to have these options, because sometimes this option might not be available, sometimes this might be the only one available. At least you have a path to get there.*0749

*a × [x]_{s}... those of you who have studied multivariable calculus, or are doing so now, you are going to be discussing something called Green's Theorem and Stokes' Theorem, and possibly the generalized version of that.*0762

*I don't know, depending on the school that you are attending. But, essentially what those theorems do is they allow you to express an integral as a different kind of integral.*0780

*Instead of solving a line integral or a surface integral, you end up solving an area integral or a volume integral which you know how to do already from your basic calculus. It allows you a different path to the same place. That is what is going on.*0792

*That is essentially what the fundamental theorem of calculus is: an alternate path to the same place. This is just an algebraic version of it... that is what we want: paths.*0804

*We want different paths just to get some place. Because often one path is better than the other. Easier than the other. So again, it means if I want to take a vector, I can find its coordinate in the arrival space by just doing it directly.*0813

*But if that path is not available and I have the matrix, I can just take its coordinate vector in the departure space, multiply it by the matrix, and I end up with the same answer. That is kind of extraordinary.*0829

*Again, it is all a property of this linear map, and the maintenance of the structure of one space to another. Okay, let us just jump in to the examples because I think that is going to make the most sense.*0842

*Example... so, our linear map is going to be from R3 to R2. In this case we are using Euclidean spaces, from 3 dimensions to 2 dimensions, and it is defined by L(x,y,z) = (x + y, y - z). We take a 3-vector, we map it to a 2-vector. This one has three entries, that one has two entries. Okay.*0856

*Now, we have our two bases that we are given. So, the basis s is... we have (1,0,0), (0,1,0), (0,0,1). The natural basis for R3.*0885

*And t is going to be (1,0) and (0,1)... the natural basis for R2. Okay.*0909

*So, the first thing we are going to do is we are going to calculate L(v1), which is L(1,0,0), which equals 1 + 0, 0 - 0, which equals (1,0). Boom. That is that one.*0930

*We do the same thing for L of... well, let me... I am just going to go straight into it. Let us do L(v2), which is L(0,1,0). That is going to equal (1,1), and if I do L(0,0,1), I end up with (0,-1). Okay.*0953

*So, I found L of the basis vectors of the departure space. Now, I want to express these... these numbers with respect to the basis for the arrival space, namely with respect to t.*0981

*Well, since t is the natural basis, I do not have to do anything here, and again, when we are dealing with a natural basis, namely (1,0,0), (0,1,0), (0,0,1)... we do not have to change anything.*0998

*So, as it turns out, L(v1) with respect to the basis t, this thing with respect to t happens to equal (1,0), and the same for the others.*1016

*L(v2), with respect to the basis t because it is the natural basis is equal to (1,1), and then L(v3) with respect to the natural basis for t is equal to (0,-1).*1039

*Now, I take these 3 and I arrange them in a column, therefore my matrix a is... I arrange them as columns... (1,0), (1,1), (0,-1), that is my matrix for my transformation.*1055

*My linear transformation is associated with this matrix. Very nice.*1071
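As a quick sanity check (an editor's addition, not from the lecture; it assumes NumPy): with natural bases on both sides, the coordinate vectors are just the entries themselves, so multiplying by the matrix should reproduce the map directly.

```python
import numpy as np

# The map from Example 1: L(x, y, z) = (x + y, y - z)
L = lambda v: np.array([v[0] + v[1], v[1] - v[2]])

# The matrix found above: columns are L(1,0,0), L(0,1,0), L(0,0,1)
a = np.array([[1.0, 1.0,  0.0],
              [0.0, 1.0, -1.0]])

# With natural bases, a @ x should equal L(x) for any x
x = np.array([2.0, 3.0, 4.0])
assert np.allclose(a @ x, L(x))   # both give (5, -1)
```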

*Okay. Now, let us do the second example, which is going to... we are going to change the basis. We are not going to use the natural basis, we are going to change it, and we are going to see how the matrix changes. Alright.*1080

*I am going to do this one in red... Okay. So, everything is the same as far as the linear map is concerned. The only difference now is... well, let me write it out again... so L is a mapping from R3 to R2, defined by L(x,y,z) = x + y and y - z.*1093

*Now, our basis s = (1,0,1), (0,1,1) and (1,1,1)... and our basis t is now going to equal (1,2) and (-1,1), so we have changed the bases.*1129

*Well, we will see what happens. So, let us calculate L(v1), okay, that is going to equal (1,-1), I will let you verify this.*0165

*L(v2) = (1,0) and L(v3) = (2,0). Okay.*1194

*To find [L(v_{j})]_{t}, the coordinate vector with respect to the basis t, here is what we have to do. We need to express each of these as a linear combination of the basis for the arrival space.*1210

*In other words, we need to express L(v1), which is equal to (1,-1) as a1 × (1,2) + a2... 2 constants × (-1,1).*1233

*(1,2) and (-1,1), that is the basis for our arrival space. So, we need to find constants a1 and a2 such that a linear combination of them, this linear combination of them equals this vector. That is the whole idea behind the coordinate vectors.*1254

*Okay, and I am going to write out all of this explicitly so we see what it is that we are looking at... L(v2), which is equal to (1,0), I want that to equal b1 × (1,2) + b2 × (-1,1)... and I want L(v3) which equals (2,0), I want that to equal c1 × (1,2) + c2 × (-1,1).*1271

*Okay. Well this is just a... we are going to solve an augmented system, except now we are looking for 3 solutions, just 1, 2, 3, so we are going to augment with 3 new columns.*1302

*So, here is what we form. We form (1,2), this is our matrix, (-1,1), and then we augment it with these 3 (1,-1), (1,0), (2,0).*1313

*I convert to reduced row echelon form, and I get (1, 0 | 0, 1/3, 2/3) for the first row, and (0, 1 | -1, -2/3, -4/3) for the second. There we go.*1329

*Now I have expressed the L, I found the L, I converted each of these L into the coordinate vector... here, here, here, that is what these are. These are the coordinates of these 3 things that I found up here, with respect to the basis of the arrival space.*1350

*Now I just take these, and that is my matrix... a has columns (0,-1), (1/3,-2/3), (2/3,-4/3). This is our matrix, and notice this is not the same matrix that we had before.*1369

*Row by row, this is (0, 1/3, 2/3) and (-1, -2/3, -4/3). The previous a that we got with respect to the natural basis, if you remember... let me do this in black actually... in the first example, we ended up with columns (1,0), (1,1), (0,-1).*1402

*This is the matrix, the same linear map... this is one matrix with respect to one basis, the same linear map. This matrix is different because we changed the basis. This is very, very important, and I will discuss this towards the end of the lesson.*1447
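The three coordinate solves above can be done in one shot, since all three right-hand sides share the same coefficient matrix. An editor's sketch (assuming NumPy), mirroring the augmented-matrix computation from the lecture:

```python
import numpy as np

# Basis t for the arrival space, placed as columns of a matrix
t = np.array([[1.0, -1.0],
              [2.0,  1.0]])

# L(v1), L(v2), L(v3) from above, also placed as columns
images = np.array([[ 1.0, 1.0, 2.0],
                   [-1.0, 0.0, 0.0]])

# Solving t @ a = images yields all three coordinate columns at once
a = np.linalg.solve(t, images)
# Columns come out as (0, -1), (1/3, -2/3), (2/3, -4/3)
```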

*I will actually digress into a bit of a philosophical discussion for what it is that is going on here.*1465

*Okay. So, we have this. So, we found our matrix with respect to that, now, let us confirm that property that we have.*1472

*So, we said that there is this property... let me go back to blue... L of some random vector in the departure space, the coordinate vector with respect to the basis t, is equal to this a × the x with respect to the basis s of the departure space.*1485

*Okay. So, a is the matrix that we just found, the one with columns (0,-1), (1/3,-2/3), (2/3,-4/3). Okay.*1514

*Let us just let x, pick a random... let us let it equal (3,7,5), random vector.*1527

*Okay. Well, L(x), or L(3,7,5) = 3 + 7, 7 - 5. It is equal to (10,2). Okay.*1538

*Let us circle that in red. Let us just set that aside. That is our transformation, (10,2). Okay.*1554

*Now, x... (3,7,5), with respect to s, the basis of the departure space, equals the following... we are looking for... so we want to express this with respect to the basis s, which was (1,0,1), (0,1,1), (1,1,1), if you want to flip back and take a look at that basis.*1560

*We are looking for a1, a2, a3, such that a1 × (1,0,1) + a2 × (0,1,1) + a3 × (1,1,1) = this (3,7,5). Okay.*1595

*As it turns out, this a1, a2, a3... when I actually perform this ax = b, set this up, augmented matrix, I solve it... I end up with the following... (-2,2,5).*1620

*This equals [x]_{s}, my (3,7,5) with respect to the basis s. Okay. So, we will set that off for a second. Now, let me take this [x]_{s}, which is (-2,2,5), and let me multiply it by my... so let me basically perform the right side here... let me multiply it by the matrix that I just got.*1641

*When I do that, I get the matrix with columns (0,-1), (1/3,-2/3), (2/3,-4/3) × what I just got, which is (-2,2,5).*1672

*When I perform that, I end up with (4,-6)... put a blue circle around that.*1692

*Okay. That means... so, this is the coordinate vector of the thing that I got directly, the (10,2). By solving the transformation directly, this is its coordinate vector with respect to the basis t of the arrival space.*1707

*It is telling me that that thing is equal to a × [x]_{s}... well, I found [x]_{s}, I multiplied it by the matrix, and this is what I got.*1739

*Now... that means that this thing times the basis t should give me (10,2), right?*1748

*So, if I take 4 times the first basis vector for t, which is (1,2), minus 6 × the other basis vector (-1,1), and I actually perform this, I end up with (10,2).*1757

*This is the same as that. That confirms this property. That is what is going on here. Really, what is ultimately important here is the ability, is the first part of this example, the first two examples, the ability to compute the transformation matrix.*1775

*It allows me instead of doing the transformation directly, which may or may not be difficult, to go through and just do a matrix multiplication problem, which is usually very, very easy.*1792
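The two paths in this verification can be checked numerically end to end. Again an editor's sketch (assuming NumPy), not part of the lecture:

```python
import numpy as np

# Bases s (departure) and t (arrival), basis vectors as columns
s = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]]).T
t = np.array([[1.0, -1.0],
              [2.0,  1.0]])

# The matrix of the map with respect to s and t, from the example
a = np.array([[ 0.0,  1/3,  2/3],
              [-1.0, -2/3, -4/3]])

L = lambda v: np.array([v[0] + v[1], v[1] - v[2]])

x = np.array([3.0, 7.0, 5.0])

# Direct path: apply L, then take coordinates with respect to t
direct = np.linalg.solve(t, L(x))
# Matrix path: take coordinates with respect to s, then multiply by a
via_matrix = a @ np.linalg.solve(s, x)
assert np.allclose(direct, via_matrix)   # both paths give (4, -6)
```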

*Okay. So, now, to our philosophical discussion. You notice that the same linear transformation gave rise to two different matrices.*1803

*Well, that is because we used two different bases. So, what you are seeing here is an example of something very, very, very profound. Not just mathematically, but very profound physically. Very profound with respect to the nature of reality.*1819

*Something exists. A linear map in this case. Something independent exists and it is independent of our representation.*1836

*In other words, in order for us to handle it, notice this linear map... we have to deal with coordinates. We have to choose a basis.*1846

*In one example, we chose the natural basis. In the second part of the example, we chose a different basis. Both of them are perfectly good bases, and this matrix that actually represents the map changes... the linear map does not change.*1853

*So, as it turns out, the linear map is that thing underneath which does exist, but in order to handle that linear map, we need to give it labels. We need to give it a frame of reference. We need to be able to "measure" it.*1867

*That is what science is all about... we are taking things that exist and we are assigning labels to them.*1879

*Well, let me take this a little bit deeper... all of your lives you have been told... so if you have some coordinate system... the standard Cartesian coordinate system, and if I tell you from this (0,0) point, if I move 3 spaces to the right, and then if I move 7 spaces up, that there is this point (3,7).*1885

*Well, (3,7) is just the label that we have attached to it. That point in that 2-dimensional vector space, Euclidean vector space, exists whether I call it (3,7) or, if I change bases, it might be an entirely... it might be (4,-19), depending on the basis that I choose.*1905

*Again, these things exist independent of the labels that we assign to them. The representations that we use to handle them. We need to... we need to be able to represent them somehow so that we can handle them, but the representation is not the thing in itself.*1924

*It is very, very important to be able to distinguish between those two. Okay.*1939

*Is it reducing it to something that it is not? No. It is just a label, but it is important for us to actually recognize that, so when we speak about the point (3,7), we need to understand that we have given that point a representation so that we can handle it.*1948

*We can manipulate it mathematically so that we can assign it some sort of value in the real world.*1963

*But its existence is not contingent on that. It exists whether we handle it or not, that is what is amazing. So what is amazing about abstract mathematics is we can make statements about the existence of something without having to handle it.*1969

*It is the engineers and physicists that actually label them, give them frames of references, so that they can actually manipulate them. That is all that is going on here.*1982

*Okay, with that, I am going to close it out.*1993

*Thank you for joining us at Educator.com, and thank you for joining us for linear algebra. Take good care, bye-bye.*1997
