For more information, please see full course syllabus of Linear Algebra

### Kernel and Range of a Linear Map, Part II

### Transcription: Kernel and Range of a Linear Map, Part II

*Hello and welcome back to Educator.com and welcome back to linear algebra.*0000

*Today we are going to continue our discussion of the kernel and range of a linear map of a linear transformation.*0004

*From the previous lesson, we left it off defining what the range of a linear map is.*0011

*Real quickly though, let me go back and discuss what the kernel of a linear map is.*0017

*Basically, the kernel of a linear map, from a vector space v to a vector space w is all those vectors in v that map to the 0 vector. That is it.*0022

*So, if I have one vector that goes to 0, that is the kernel. If I have 5 vectors that map to 0, those 5 vectors, they form the kernel. If I have an infinite number of vectors that form... that all map to the same thing, the 0 vector in w, that is what the kernel is.*0031

*Recall that the kernel is not only a subset of the vector space v, but it is also a subspace, so it is a very special kind of thing.*0049

*As a subspace, you can find a basis for it. Okay. Now, we defined the range also. So, the range is all those vectors in w, the arrival space that are images of some vector in v.*0056

*So, if there is something in v that maps to w, all of those w that are represented... that is the range.*0071

*That does not mean it is all of w... it can be all of w, which we actually give a special name and we will talk about that in a second, but it is just those vectors in w that are mapped from vectors in v, under the linear transformation L.*0080

*Okay. Now, let us go ahead and get started with our first theorem concerning the range. *0096

*Well, just like the kernel is a subspace of the departure space, the range happens to be a subspace of the arrival space.*0102

*So, our first theorem says... the range of L is a subspace of w, for L being a linear mapping from v to w.*0111

*So, again, kernel is a subspace in v, range is a subspace of w.*0146

*Okay. Let us do an example here, concerning ranges and kernels and things like that. Ranges actually.*0152

*So, we will say that L is a mapping from R3 to R3 itself, which, again, when it is a mapping from a space to itself, we call a linear operator, but it is still just a linear map - let it be defined by L of some vector x is equal to a matrix product... (1,0,1), (1,1,2), (2,1,3) × x, which happens to be x1, x2, x3 in component form.*0161

*So if I have a vector in x, the transformation, the linear transformation is multiplication by a matrix on the left.*0196
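The action of this map is easy to sketch in code. Here is a minimal plain-Python version (no libraries; the function name `L` and the rows-of-tuples representation are just illustrative choices):

```python
# The example operator L(x) = A·x, with A stored as a list of rows.
A = [(1, 0, 1), (1, 1, 2), (2, 1, 3)]

def L(x):
    """Apply the linear map: each output entry is a row of A dotted with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

print(L([1, 0, 0]))   # picks out the first column of A: [1, 1, 2]
```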

*Okay. Our question is... is L onto? Okay, so, this onto thing... remember we said that the range of a linear map is those vectors in the arrival space, w, that are the image of some vector v from the departure space.*0204

*Well, if every vector in w is the image of some vector in v, that means if every single vector in w is represented, that is what we mean it is onto.*0224

*That means the linear map literally maps onto the entire space w, as opposed to the range which is just a subspace of it, a part of it. That is all onto means, all of w is represented.*0235

*Okay. So, well, let us take a random vector in w, in this case the arrival space, R3... and we will just call it w, with components a, b, and c.*0248

*It is just some random vector in the arrival space. Okay.*0275

*Now, the question is, can we find some vector in the departure space that is the pre-image of this w in the arrival space? That is the whole idea. So, we speak about the image, we speak about the pre-image.*0281

*So, I am starting from the perspective of w, some random vector in w... can I find... if I take every vector w... can I find something in v that actually maps to that w. That is what we want to know. Is every vector in w represented? Okay.*0299

*So, the question we want to answer is: can we find (x,y,z), also in R3 because R3 is the departure space, such that (1,0,1), (1,1,2), (2,1,3) × (x,y,z) equals our (a,b,c), which is our random vector in w. That is what we want to find.*0314

*We want to find x, y, and z such that this is satisfied. What values of a, b, c will make this possible? Well, we go ahead and we form the augmented system, (1,0,1), (1,1,2), (2,1,3), and we augment it with a.*0349

*We augment it with a... b... c... okay, that is our augment, and then we subject it to Gauss-Jordan elimination to take it to reduced row echelon form, and when we do that we end up with the following: (1,0,1), (0,1,1), (0,0,0).*0372

*Over here we end up with a... b - a... c - a - b, so let us talk about what this means. Notice this last row here is all 0's, and this is c - a - b over here.*0398

*The only way that this is a consistent system, the only way that this has a solution, is if c - a - b = 0.*0411

*So, the only way that some random vector, when I take a random vector in w, and I subject it to the conditions of this linear map, x, y, and z, the relationship between a, b, and c, has to be that c - a - b = 0.*0425

*What this means is that this is very specific. I cannot just take any random numbers. I cannot just take (5,7,18).*0440

*The relationship among these 3 numbers for the vector in w... a, b, c, has to be such that c - a - b = 0, which means that not every vector in w is represented. So, this is not onto.*0450

*Okay. I hope that makes sense. Again, I took a random vector, and I need to be able to solve this system for every single vector, but this system tells me that it has a solution only if c - a - b = 0.*0462

*Those are very specific numbers that actually do that. Yes, there may be an infinite number of them, but they do not represent all of the vectors in w, therefore this linear map is not onto.*0480
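A quick way to see the same conclusion without redoing the elimination: every output of the map is a linear combination of the matrix's columns, and each column happens to satisfy c - a - b = 0, so everything in the range does too. A small plain-Python sketch (the helper name `in_range` is just an illustrative choice):

```python
# Columns of A = [[1,0,1],[1,1,2],[2,1,3]]; each satisfies c - a - b = 0,
# so every linear combination of them (everything in the range) does too.
cols = [(1, 1, 2), (0, 1, 1), (1, 2, 3)]
assert all(c - a - b == 0 for (a, b, c) in cols)

def in_range(w):
    """Test the consistency condition c - a - b = 0 for w = (a, b, c)."""
    a, b, c = w
    return c - a - b == 0

print(in_range((5, 7, 18)))   # False: the transcript's example vector is not in the range
print(in_range((2, 3, 5)))    # True: c = a + b holds
```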

*Okay. Now, let us do something else. Continue the example... part b.*0490

*Now, the question is, find a basis for the range of L. Find a basis for the range of L.*0501

*So, we know it is a subspace, so we know it has a basis. So, let us go ahead and see what this range is.*0513

*In other words, let us take L of some random x,y,z, which is in v, the departure space... well, L is of course, that matrix, (1,0,1), (1,1,2), (2,1,3), × x,y,z, and when I do this matrix multiplication, I end up with the following.*0520

*The vector that I get is x + z, x + y + 2z, and the third entry is going to be 2x + y + 3z, and I just got that from basic matrix multiplication... this times that + this times that + this times that, and then go to the second row... this times that, this times that...*0546

*Remember? Matrix multiplication. We did it really, really, really early on. Okay.*0570

*So, this thing, I can actually pull out... it becomes the following. It is x × (1,1,2), I just take the coefficients of the x's, +y × (0,1,1)... + z × (1,2,3).*0575

*Therefore, that vector, that vector, and that vector... let me actually write them as a set, but we have not found the basis yet. This just gives me the span. Okay.*0603

*So, the vector is (1,1,2), (0,1,1), and (1,2,3), they span the range of L.*0616
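This pull-apart step is just the statement that A x equals x·(column 1) + y·(column 2) + z·(column 3), which is why the columns span the range. A quick numerical spot check (function names are illustrative):

```python
# Verify A·v == x*col1 + y*col2 + z*col3 for a sample input.
A = [(1, 0, 1), (1, 1, 2), (2, 1, 3)]
cols = [(1, 1, 2), (0, 1, 1), (1, 2, 3)]   # the columns of A

def apply_A(v):
    """Matrix-vector product, row by row."""
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def column_combo(x, y, z):
    """The same product written as a combination of the columns."""
    return [x * c1 + y * c2 + z * c3 for c1, c2, c3 in zip(*cols)]

print(apply_A([2, 3, 4]) == column_combo(2, 3, 4))   # True
```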

*So remember, a series of vectors, a set of vectors that spans a subspace or a space, in order for it to be a basis, it has to be a linearly independent set.*0632

*So, once we have our span, we need to check these three vectors to make sure that they are linearly independent, and we do that by taking this matrix... augmenting it with a 0 vector, and then solving... turning it into reduced row echelon form... the vectors corresponding to the columns with the leading entries actually form a basis for the space.*0644

*So, let us take this, the matrix again, so we do (1,1,2), (0,1,1), (1,2,3), we augment with (0,0,0), we turn it into reduced row echelon form, and we end up with the following.*0668

*We end up with (1,0,0), (0,1,0), both of those columns have leading entries, we end up with (1,1,0), no leading entry there, and of course (0,0,0).*0684

*So, our first column and our second column have leading entries which means the vectors corresponding to the first and second column, namely that and that, they form a basis for this space.*0693

*That means that this was actually a linear combination of these two. This set is not linearly independent, but these two alone are linearly independent. That is what this process did.*0707

*So, now, I am going to take the first two vectors, (1,1,2), the first two columns... and (0,1,1), this is a basis for the range. Notice, the basis for the range has 2 vectors in it.*0717

*A basis, the number of vectors in the basis is the dimension of that subspace. So, the range has a dimension of 2.*0744

*However, our w, our arrival space, was R3. It has dimension of 3. Since this dimension is 2, it is not the entire space. So, this confirms what we found in part a.*0751

*It confirms the fact that this linear map is not onto, and in fact this procedure is probably the best thing to do... find the spanning set for the range, and then reduce it to a basis, and then just count the number of vectors in the basis.*0765
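The whole reduce-the-span-to-a-basis procedure can be sketched in plain Python using exact `Fraction` arithmetic; the indices of the pivot columns tell you which of the original spanning vectors to keep. This is a minimal illustrative implementation, not an optimized one:

```python
from fractions import Fraction

def rref_pivot_cols(rows):
    """Row-reduce a matrix (given as a list of rows) and return the pivot column indices."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # scale pivot row to leading 1
        for i in range(len(m)):
            if i != r and m[i][c] != 0:              # clear the rest of the column
                m[i] = [x - m[i][c] * y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# The spanning vectors (1,1,2), (0,1,1), (1,2,3) as the columns of this matrix:
A = [[1, 0, 1], [1, 1, 2], [2, 1, 3]]
print(rref_pivot_cols(A))   # [0, 1] -> keep columns 1 and 2: basis {(1,1,2), (0,1,1)}
```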

*If the dimension of the range is less than the dimension of the arrival space, well, it is not onto. If it is, it is onto. *0779

*Okay. Let us continue with this and do a little bit more, extract a little bit more information here. Let us find the kernel of this linear map, find the kernel of L. Okay.*0790

*So, now, -- let me erase this -- now what we want, we want to take... well, the kernel is again, all the... we want to find all of the vectors that map to 0 in the arrival space, which means we want to solve the homogeneous system.*0803

*We want to find all the vectors x that map to 0 in w. Well, that is just... take the matrix, let me do it as rows: (1,0,1), (1,1,2), (2,1,3) × (x,y,z) is equal to (0,0,0). Okay.*0820

*Then, what we end up doing is... well, when we form the augmented matrix of this matrix with the zero vector, which is just the reduction we already did, we end up with the following... (x,y,z), the vectors in v that satisfy this, take the form (-r,-r,r), which is the same as r × (-1,-1,1).*0848

*This is one vector, therefore, the vector (-1,-1,1), this vector is a basis for the kernel. The kernel is a subspace of the departure space. This is a basis for the kernel. It is one dimensional.*0871
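A quick sanity check that (-1, -1, 1) really lies in the kernel: multiplying the matrix by it should give the zero vector.

```python
# Multiply A by the claimed kernel basis vector; the result should be zero.
A = [(1, 0, 1), (1, 1, 2), (2, 1, 3)]
v = (-1, -1, 1)
image = [sum(a * x for a, x in zip(row, v)) for row in A]
print(image)   # [0, 0, 0]
```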

*Okay. Part d: is L 1 to 1? Do you remember what 1 to 1 means? It means any two different vectors in the departure space map to two different vectors in the arrival space... 1 to 1.*0896

*Okay. Well, let us see. The dimension of the kernel of L = 1, which is what we just got up here. That implies that it is not 1 to 1.*0918

*The reason is... in order to have 1 to 1, the dimension of the kernel needs to be 0. It means this map, the only thing that should be in the kernel is the 0 vector in the departure space. That means 0 maps only to 0.*0932

*That is the only thing that maps to 0. Everything else maps to something else. If that is the case, when only the 0 is in the kernel, 0 in the departure space, we can say -- that is one of the theorems we had in the last lesson -- we can say that this linear map is 1 to 1.*0949

*We know that it is not onto. Now we also know that it is not 1 to 1. Okay.*0965

*So, this is not 1 to 1. Now, I would like you to notice something. Let me put this in blue.*0972

*We had our departure space v as R3, 3-dimensional. The dimension of the range of the linear map is equal to 2. The dimension of the kernel of this linear map was equal to 1. 2 + 1 = 3. This is always true. This is not a coincidence.*0983

*So, let us express this as a theorem. Profound, profound theorem.*1011

*Okay. Let L be a mapping from v to w, let it be a linear mapping -- because we are talking about linear maps after all -- let it be a linear map of an n-dimensional vector space v into -- notice we did not say onto -- an m-dimensional vector space w.*1025

*Then, the dimension of the kernel of L + the dimension of the range of L is equal to the dimension of v. The departure space. Let us stop and think about that for a second.*1066

*If we have a linear map, and let us say my departure space is 5-dimensional, well, I know that the relationship that exists between the kernel of that linear map and the range of that linear map is that their sum of the dimensions is always going to equal the dimension of the departure space.*1086

*So, if I have a 5-dimensional departure space, let us say R5 -- excuse me -- and I happen to know that my kernel has a dimension 2, I know that my range has a dimension 3. If the arrival space has dimension greater than 3, say R5 as well, I know that I am already dealing with a linear map that is neither 1 to 1 nor onto.*1109

*This is kind of extraordinary. Now, recall from a previous discussion when we were talking about matrices, and how a matrix has those fundamental spaces.*1125

*It has the column space, it has the row space, it has the null space, which is the -- you know -- the space of all of the vectors that map to 0, which is exactly what the kernel is.*1136

*So, we can also express this in the analogous form, you have already seen this theorem before... the nullity of a matrix plus the rank of the matrix, which is the dimension of the row space, or column space, is equal to n for an m x n matrix.*1149

*Of course, we have already said that when we are dealing with the Euclidean spaces RN and RM, every linear map is representable by a matrix. So, we already did the matrix version of this.*1177

*Now we are doing the general linear mapping. We do not have to necessarily be talking about Euclidean space, R2 to R3; it can be any n-dimensional space. For example, the space of polynomials of degree < or = 5. Boom. There you have your particular, you know, finite-dimensional vector space.*1191

*Again, given a linear map, any linear map, the dimension of the kernel of that linear map, plus the dimension of the range of that linear map, is going to equal the dimension of the departure space. Deep theorem, profound theorem.*1211
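The rank-nullity relationship can be checked concretely on the worked example: the rank of the matrix (the dimension of the range) plus the kernel dimension found in part c should equal 3. A minimal sketch using exact `Fraction` arithmetic (helper names are illustrative):

```python
from fractions import Fraction

def rank(rows):
    """Rank = number of pivots found during Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            factor = m[i][c] / m[r][c]
            m[i] = [x - factor * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

A = [(1, 0, 1), (1, 1, 2), (2, 1, 3)]
dim_range = rank(A)            # 2, matching the basis found in part b
dim_kernel = 1                 # from part c: kernel basis {(-1, -1, 1)}
print(dim_range + dim_kernel)  # 3 = dimension of the departure space R3
```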

*Okay. Now, let me see here, I wanted to talk a little bit about the nature of 1 to 1 and onto, just to give you a pictorial representation of what it is that really means, and then state a theorem for linear operators, when the arrival space and the departure space happen to be of the same dimensions.*1224

*So, let us draw some pictures and as you will see I am actually going to draw them... the sizes that I draw them are going to be significant.*1250

*If I have v, and if I have w, so this is v, this is w... departure space, arrival space... if I have a 1 to 1 map, a 1 to 1 map means that everything over here... everything in v maps to a different element in w.*1259

*It does not map to everything in w, but it maps to something different in w. However, everything in v is represented, and it goes to something over here.*1287

*I drew it like this to let you know that there is this size issue. In some sense, w is bigger than v, and I use the term bigger in quotes.*1296

*Now, let us do an onto map. So, this is 1 to 1, but it is not onto. Let me actually write that... "not onto".*1305

*Now we will do the other version. We will do something that is onto, but not 1 to 1.*1318

*So, let us make this v, and let us make this w. So now, an onto map means everything, every single vector in w comes from some vector in v.*1323

*But that does not mean that every vector in v maps to something in w. Every single one in here, so now, everything in w is represented, so this is onto, not 1 to 1.*1336

*Okay. It could also be that -- you know -- two different vectors here actually map to the same vector here.*1356

*The point is that every single vector in w is the image, under the linear map, of something from v.*1364

*But, it does not mean that it is everything from v... something from v.*1372

*Okay. Now, 1 to 1 and onto, now let me state my theorem and we will draw our picture.*1378

*Theorem. Let L be mapping from v to w... be a linear map and let the dimension of v equal the dimension of w.*1386

*So, it is a mapping -- not necessarily the same space, but they have the same dimension -- so, the most general idea, not just R3 to R3, or R4 to R4, but some finite-dimensional vector space that has dimension 5, some set of objects, maps to a different set of objects that happen to have the same dimension. They do not have to be the same objects.*1409

*But, the dimensions of the two spaces are the same... then we can conclude the following -- let me write it as an if, then statement -- if L is 1 to 1, then L is onto, and vice versa... if L is onto, then L is 1 to 1.*1432

*So, if I am dealing with a map from one vector space to another, where the dimensions of the two spaces, the departure and the arrival are the same ... if I know that it is 1 to 1, I know that it is onto... if I know that it is onto, I know that it is 1 to 1.*1463

*This makes sense intuitively if you think about the fact that we want every vector in v to map to every vector in w. We call this a 1 to 1 correspondence between the two sets.*1476

*The fact that they are the same dimension, it intuitively makes sense. In some sense, those spaces are the same size, if you will.*1489

*Again, we use the term size in quotes, so it looks something like this... everything in v maps to something in w, every element of v is represented, and all of w is represented.*1496

*In some sense, I am saying that they are equal, that they are the same size.*1519

*Again, we use those terms sort of loosely, but it sort of gives you a way of thinking about what it means... a smaller space going into a bigger space for 1 to 1, a bigger space going onto a smaller space for onto.*1525

*This whole idea applies if they happen to be of the same dimension, the same size so to speak.*1537

*Then a 1 to 1 map implies that it is onto, and onto implies that it is 1 to 1.*1542
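For a linear operator on R3 given by a matrix, both properties come down to invertibility of the matrix: a nonzero determinant means the map is 1 to 1, hence onto, hence both; a zero determinant means it is neither. A quick illustrative check (`det3` is a hand-rolled 3x3 determinant):

```python
def det3(m):
    """Determinant of a 3x3 matrix (cofactor expansion along the first row)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [(1, 0, 1), (1, 1, 2), (2, 1, 3)]   # the example operator: not 1 to 1, not onto
I3 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # the identity: both 1 to 1 and onto

print(det3(A))    # 0  -> neither 1 to 1 nor onto
print(det3(I3))   # 1  -> both 1 to 1 and onto
```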

*Okay. Thank you for joining us at Educator.com, we will see you next time.*1549
