WEBVTT mathematics/linear-algebra/hovasapian
00:00:00.000 --> 00:00:04.000
Welcome back to Educator.com, and welcome back to linear algebra.
00:00:04.000 --> 00:00:10.000
This lesson, we are going to continue the discussion of row rank and column rank that we started in the last lesson.
00:00:10.000 --> 00:00:15.000
So, this is going to be the rank of a matrix, part 2.
00:00:15.000 --> 00:00:22.000
Let us just go ahead and jump right in -- go ahead and switch over to a blue ink here.
00:00:22.000 --> 00:00:32.000
Recall from a previous lesson the following matrix.
00:00:32.000 --> 00:00:58.000
So, we have a matrix A... its columns are (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3).
00:00:58.000 --> 00:01:04.000
Okay. So, this is a 4 by 5.
00:01:04.000 --> 00:01:10.000
A 4 by 5 matrix. Okay. Now, let us consider just the columns of this matrix.
00:01:10.000 --> 00:01:39.000
I will call it the set C. So, we have the (1,3,2,-1), we have (-2,2,3,2), we have (0,8,7,0), we have (3,1,2,4), (-4,4,3,-3).
00:01:39.000 --> 00:01:52.000
What we have is 1, 2, 3, 4, 5 vectors in R4... 5 vectors in R4. Okay.
00:01:52.000 --> 00:02:04.000
Now, we said that the rows of a matrix, when we treat them as individual vectors, span a space called the row space.
00:02:04.000 --> 00:02:17.000
Well, similarly, the columns of the matrix span a space, as we defined in the previous lesson... a subspace called the column space.
00:02:17.000 --> 00:02:36.000
Now, what we want to do is find a basis for the column space.
00:02:36.000 --> 00:02:42.000
Consisting of arbitrary vectors... they do not necessarily have to be from this set.
00:02:42.000 --> 00:02:48.000
We want to find a basis for the span of this set, but I do not necessarily want them to be from this set.
00:02:48.000 --> 00:03:02.000
So, find a basis for the column space consisting of arbitrary vectors.
00:03:02.000 --> 00:03:22.000
Now, if you remember from our last lesson, when we have a set of vectors, and we want to find a basis for the span of that set of vectors, but we do not care if the vectors in that basis come from the original set... we set up those vectors as rows.
00:03:22.000 --> 00:03:26.000
Then, we convert to reduced row echelon form, and the non-zero rows form a basis.
00:03:26.000 --> 00:03:34.000
So, let us do that. Here, the column... the columns are this way.
00:03:34.000 --> 00:03:45.000
We want to find a basis for the column space consisting of arbitrary vectors, so I am going to write the columns as rows, because that is the procedure.
00:03:45.000 --> 00:04:08.000
So, I am going to write (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3).
00:04:08.000 --> 00:04:37.000
I am going to convert that to reduced row echelon form, and I end up with (1,0,0,11/24), (0,1,0,-49/24), (0,0,1,7/3), and 0's everywhere else.
00:04:37.000 --> 00:04:52.000
My 3 non-zero rows are these. They form a basis for the span of the columns of the original matrix.
00:04:52.000 --> 00:05:10.000
So, I can choose the set (1,0,0,11/24)... that is one vector.
00:05:10.000 --> 00:05:20.000
Notice, in the matrix I had written them as rows, but now I am just writing them as columns because I just tend to prefer writing them this way.
00:05:20.000 --> 00:05:32.000
(0,1,0,-49/24), and (0,0,1,7/3), if I am not mistaken, that is correct.
00:05:32.000 --> 00:05:47.000
Yes. This set forms a basis for the column space.
00:05:47.000 --> 00:05:56.000
Column rank, three. There are three vectors in there.
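For viewers following along at a keyboard, this first technique can be checked with a short Python sketch. The matrix A and the fractions match the lesson; the `rref` helper is my own minimal implementation with exact rationals, not code from the course.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form, computed over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue  # no pivot available in this column
        m[pivot], m[hit] = m[hit], m[pivot]               # swap the pivot row up
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]  # scale the pivot to 1
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:             # clear the rest of the column
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

# the 4 by 5 matrix A from the lesson, entered row by row
A = [[ 1, -2, 0, 3, -4],
     [ 3,  2, 8, 1,  4],
     [ 2,  3, 7, 2,  3],
     [-1,  2, 0, 4, -3]]

# procedure: write the columns of A as rows, reduce, keep the non-zero rows
columns_as_rows = [list(c) for c in zip(*A)]
basis = [row for row in rref(columns_as_rows) if any(row)]
for v in basis:
    print(v)  # three rows: (1,0,0,11/24), (0,1,0,-49/24), (0,0,1,7/3)
```

The three non-zero rows reproduce the basis vectors in the lesson, so the column rank comes out to 3.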
00:05:56.000 --> 00:06:11.000
Okay. Now, we want to find a basis for the original set of vectors consisting of vectors from that actual set... either all of them, or a few of them.
00:06:11.000 --> 00:06:17.000
So, when we do that, we set them up as columns, and we solve the associated homogeneous system.
00:06:17.000 --> 00:06:46.000
So, here is what we are going to set up... (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3), and of course the associated homogeneous system goes that way.
00:06:46.000 --> 00:06:52.000
We convert to reduced row echelon form, we end up with the following.
00:06:52.000 --> 00:07:11.000
We end up with (1,0,0,0), (0,1,0,0), (2,1,0,0), (0,0,1,0), (1,1,-1,0), and 0's in the final column.
00:07:11.000 --> 00:07:20.000
Let us go to blue. Leading entry, leading entry, leading entry. In other words, the first, the second, and the fourth column.
00:07:20.000 --> 00:07:40.000
Therefore, the first, second and fourth column form a basis. Therefore, I can take the vectors (1,3,2,-1), here.
00:07:40.000 --> 00:07:47.000
I can take the vector (-2,2,3,2).
00:07:47.000 --> 00:07:55.000
The fourth column, (3,1,2,4)... this set forms a basis.
00:07:55.000 --> 00:08:05.000
A good basis for the column space.
00:08:05.000 --> 00:08:21.000
Column rank equals 3, because there are 3 vectors that go into the basis. Again, the rank is the dimension of that space.
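This second, leading-entry technique can be sketched the same way: reduce the matrix, read off which columns hold a leading 1, and take those columns of the original matrix. The `rref` helper below is my own minimal implementation, not course code.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form, computed over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

A = [[ 1, -2, 0, 3, -4],
     [ 3,  2, 8, 1,  4],
     [ 2,  3, 7, 2,  3],
     [-1,  2, 0, 4, -3]]

R = rref(A)
# each non-zero row of the RREF starts with a leading 1; its column index
# names an original column that belongs in the basis
pivot_cols = [next(j for j, x in enumerate(row) if x != 0)
              for row in R if any(row)]
basis = [[A[i][j] for i in range(len(A))] for j in pivot_cols]
print(pivot_cols)  # [0, 1, 3] ... the first, second, and fourth columns
print(basis)       # [[1, 3, 2, -1], [-2, 2, 3, 2], [3, 1, 2, 4]]
```

The pivot columns land on the first, second, and fourth columns, exactly the basis picked out above.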
00:08:21.000 --> 00:08:36.000
Okay. Now, let us recap what we did. Just now, and from the previous lesson. Here is what we did.
00:08:36.000 --> 00:09:02.000
We had A, okay? I will write it one more time. I know it is getting a little tedious, but I suppose it is always good to see it... the columns are (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3).
00:09:02.000 --> 00:09:09.000
We had this original matrix A. Okay, it is a 4 by 5 matrix.
00:09:09.000 --> 00:09:15.000
The column... the rows, rather: the row space is spanned by 4 vectors in R5.
00:09:15.000 --> 00:09:26.000
The column space is spanned by 5 vectors in R4. Okay.
00:09:26.000 --> 00:10:00.000
Using two different techniques, we found a basis for the row space, alright? that was in the previous lesson.
00:10:00.000 --> 00:10:11.000
For the row space, we dealt with the rows using two different techniques. One: we set them up as rows, reduced, and got a basis consisting of arbitrary vectors.
00:10:11.000 --> 00:10:19.000
Then, we set up those rows as columns, we solved the associated homogeneous system, and we got a basis consisting of vectors from the original set.
00:10:19.000 --> 00:10:28.000
So, 2 different techniques, we ended up with a row rank equal to 3.
00:10:28.000 --> 00:10:40.000
Okay. Now the columns, like we said, the columns form a set of 1,2,3,4,5 vectors in R4.
00:10:40.000 --> 00:10:50.000
Well, again, this was for rows, now for columns.
00:10:50.000 --> 00:11:19.000
The problem we just did: using 2 different techniques, we found a basis for the column space.
00:11:19.000 --> 00:11:26.000
Column rank is equal to 3. Let me stop this for a second.
00:11:26.000 --> 00:11:47.000
Random matrix... random matrix A... the rows consist of 4 vectors in R5. Using the two techniques, we found a basis, and each basis consists of 3 vectors apiece. Row rank was 3.
00:11:47.000 --> 00:11:55.000
The columns, the columns are 5 vectors in R4. R5 and R4 have nothing to do with each other; they are completely different spaces.
00:11:55.000 --> 00:12:03.000
I mean, their underlying structure might be the same, but they are completely different spaces. The vectors in one space have 4 entries; the vectors in the other have 5 entries in them.
00:12:03.000 --> 00:12:10.000
Using two different techniques, we found a basis for the column space. Column rank ends up being 3. Okay.
00:12:10.000 --> 00:12:22.000
This 3, this is not a coincidence... not a coincidence.
00:12:22.000 --> 00:12:40.000
As it turns out, for any random matrix, m by n, row rank equals the column rank.
00:12:40.000 --> 00:12:46.000
So, let me put this in perspective for you. I took a rectangular array, random, in this case 4 by 5.
00:12:46.000 --> 00:12:57.000
It could be 7 by 8... it could be 4 by 13... if I treat the rows as vectors, and if I treat the columns as vectors, and if I calculate...
00:12:57.000 --> 00:13:13.000
If I find a basis for the span of the collection of vectors that make up the rows, and a basis for the span of the collection of vectors that make up the columns... those two collections have nothing to do with each other.
00:13:13.000 --> 00:13:16.000
Yet, they end up with the same number of vectors.
00:13:16.000 --> 00:13:20.000
Well, the column rank equals the row rank, so now we just call it the rank.
00:13:20.000 --> 00:13:33.000
So, because it is the case, because the column space and the row space end up having the same number of vectors in their bases, we just call it the rank.
00:13:33.000 --> 00:13:40.000
So, we no longer refer to it as the row rank of a matrix, or the column rank of a matrix, we call it the rank of a matrix.
00:13:40.000 --> 00:13:43.000
Now, I want you to stop and think about how extraordinary this is.
00:13:43.000 --> 00:13:50.000
A collection, a rectangular array of numbers... let us say 3 by 17.
00:13:50.000 --> 00:14:06.000
You have some vectors in R3, and you have vectors in R17. They have absolutely nothing to do with each other, and yet the bases for the spaces that these two sets of vectors span end up with the same number of vectors.
00:14:06.000 --> 00:14:12.000
There is no reason in the world for believing that that should be the case, and yet there it is.
00:14:12.000 --> 00:14:21.000
Simply by virtue of a rectangular arrangement of numbers. That is extraordinary beyond belief, and we have not even gotten to the best part yet.
00:14:21.000 --> 00:14:30.000
Now, we are just going to call it the rank from now on. So, I do not necessarily have to find the row rank and the column rank of a matrix, I can just take my pick.
00:14:30.000 --> 00:14:34.000
So, let us just stick with rows. I go with rows. You are welcome to go with columns if you want.
00:14:34.000 --> 00:15:00.000
Okay. So, as a recap... our procedure for computing the rank of a matrix a.
00:15:00.000 --> 00:15:11.000
Okay. 1. Transform the matrix A to a matrix B in reduced row echelon form.
00:15:11.000 --> 00:15:28.000
2. The number of non-zero rows of B is the rank. That is it. Nice and easy.
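That two-step procedure translates directly into code. Here is a minimal sketch; the `rref` and `rank` helpers are my own implementations using exact fractions, not anything from the course.

```python
from fractions import Fraction

def rref(rows):
    """Step 1: transform to reduced row echelon form (exact rationals)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

def rank(rows):
    """Step 2: the rank is the number of non-zero rows of the RREF."""
    return sum(1 for row in rref(rows) if any(row))

# the 4 by 5 matrix from this lesson
A = [[ 1, -2, 0, 3, -4],
     [ 3,  2, 8, 1,  4],
     [ 2,  3, 7, 2,  3],
     [-1,  2, 0, 4, -3]]
print(rank(A))  # 3
```

Running it on the lesson's 4 by 5 matrix gives rank 3, matching both hand computations.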
00:15:28.000 --> 00:15:45.000
Now, recall from a previous lesson. We defined something called the nullity, defined the nullity.
00:15:45.000 --> 00:16:15.000
That was the dimension of the null space. In other words, it is the dimension of the solution space of the homogeneous system Ax = 0.
00:16:15.000 --> 00:16:29.000
Okay? Theorem. Profoundly important result, insanely beautiful result. We have to know this.
00:16:29.000 --> 00:16:40.000
If you do not walk away with anything else from linear algebra, know this theorem, because I promise you, if you can drop this theorem in one of your classes in graduate school, you will make one hell of an impression on your professors.
00:16:40.000 --> 00:16:47.000
They probably do not even know this themselves, some of them... but beautiful, beautiful theorem.
00:16:47.000 --> 00:16:58.000
The rank of a matrix A plus the nullity of A is equal to n, the number of columns.
00:16:58.000 --> 00:17:08.000
So, think about what this means. If I have an m by n matrix, a 5 by 6 matrix... 5 by 6... 5 rows, 6 columns.
00:17:08.000 --> 00:17:18.000
n is 6. The rank of that matrix plus the nullity of that matrix equals 6.
00:17:18.000 --> 00:17:26.000
If I know that I have a matrix with n = 6, and I find the nullity, I know what the rank is automatically, by virtue of this equation.
00:17:26.000 --> 00:17:33.000
If I know what the rank is, I know what the nullity is. If I know what the rank and nullity is, I know what space I am dealing with.
00:17:33.000 --> 00:17:44.000
If I have a rank of 5, and if I have a nullity of 3, then I know that n = 8... the matrix has 8 columns, so the homogeneous system lives in an 8-dimensional space. Amazing, amazing, amazing theorem. Comes up in a lot of places.
00:17:44.000 --> 00:17:49.000
Okay. Let us do some examples here.
00:17:49.000 --> 00:18:03.000
Let us go... okay... here is a random arrangement of numbers in a rectangular array that we call a matrix.
00:18:03.000 --> 00:18:26.000
(1,1,4,1,2), (0,1,2,1,1), (0,0,0,1,2), (1,-1,0,0,2), (2,1,6,0,1)... okay.
00:18:26.000 --> 00:18:38.000
Reduced row echelon. We have this random matrix, it is 1, 2, 3, 4, 5... 1, 2, 3, 4, 5... this is a 5 by 5 matrix, so here n = 5.
00:18:38.000 --> 00:19:00.000
Okay. Reduced row echelon form, you get (1,0,2,0,1), (0,1,2,0,-1), (0,0,0,1,2), and we get (0,0,0,0,0), (0,0,0,0,0)...
00:19:00.000 --> 00:19:09.000
We have 1, 2, 3 non-zero rows. Rank = 3.
00:19:09.000 --> 00:19:24.000
Well, if rank = 3 and n = 5, I know that my solution space to the associated homogeneous system that goes with this matrix... I know that it has to have a dimension 2, because rank + nullity = n.
00:19:24.000 --> 00:19:33.000
3 + 2 = 5. That is extraordinary. In fact, this is from a previous example.
00:19:33.000 --> 00:19:41.000
If you go back to a previous lesson where we actually calculated the solution space, you will find that there were 2 vectors.
00:19:41.000 --> 00:19:49.000
So, 2 vectors means the solution space has dimension 2; the rank here is 3, and that confirms the fact that 3 + 2 = 5.
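The rank-plus-nullity count for this 5 by 5 example can also be checked in a few lines. As before, the `rref` and `rank` helpers are my own minimal implementations, not course code.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form, computed over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

def rank(rows):
    return sum(1 for row in rref(rows) if any(row))

# the 5 by 5 matrix from this example
M = [[1,  1, 4, 1, 2],
     [0,  1, 2, 1, 1],
     [0,  0, 0, 1, 2],
     [1, -1, 0, 0, 2],
     [2,  1, 6, 0, 1]]

n = len(M[0])        # number of columns: 5
r = rank(M)          # 3
nullity = n - r      # 2, by the rank + nullity theorem
print(r, nullity, r + nullity)  # 3 2 5
```

The rank comes out to 3, so the nullity must be 2, and 3 + 2 = 5 as the theorem promises.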
00:19:49.000 --> 00:20:00.000
Okay. Now, let us throw out a theorem here, that has to do with rank and singularity.
00:20:00.000 --> 00:20:20.000
Actually, you know what, let us define it here... let me go to blue... rank and singularity. If you remember, singularity has to do with determinants.
00:20:20.000 --> 00:20:35.000
So, a non-singular matrix is one whose determinant... well, a non-singular matrix is one that actually has an inverse; that is the actual definition of non-singularity, and having an inverse corresponds to the determinant not being equal to 0.
00:20:35.000 --> 00:20:41.000
And... you remember that list of non-singular equivalences? We are actually going to recap it at the end of this lesson and add a few more things to it.
00:20:41.000 --> 00:21:00.000
So, rank and singularity... an n by n matrix A is non-singular... meaning it has an inverse... if and only if rank = n.
00:21:00.000 --> 00:21:09.000
So, if I calculate the rank and the rank equals n, that means it is not singular. That means it has an inverse. That means its determinant is non-zero.
00:21:09.000 --> 00:21:24.000
Okay. Let us do some quick examples of this one. We will let A equal the matrix with rows (1,2,0), (0,1,3), (2,1,3).
00:21:24.000 --> 00:21:33.000
We convert to reduced row echelon. We get (1,0,0), (0,1,0), (0,0,1).
00:21:33.000 --> 00:21:45.000
Okay. There is 1, there is 2, there is 3 non-zero rows in that reduced row echelon form. Rank = 3.
00:21:45.000 --> 00:22:10.000
Well, we have a 3 by 3, the rank = 3, therefore that implies that this is non-singular, and it implies that the homogeneous system... okay... has only the trivial solution.
00:22:10.000 --> 00:22:18.000
Again, this goes back to that list of equivalences. One thing implies a whole bunch of other things.
00:22:18.000 --> 00:22:32.000
Okay. Another example. Let us let matrix B equal (1,2,0), (1,1,-3), (1,3,3).
00:22:32.000 --> 00:22:44.000
Let us convert to reduced row echelon form. We end up with (1,0,-6), (0,1,3), we get (0,0,0).
00:22:44.000 --> 00:22:50.000
We have that, we have that... we have 2. So, rank is equal to 2.
00:22:50.000 --> 00:23:06.000
Well, n is 3 and the rank is 2. It is less than 3, not equal to 3... so rank 2 less than 3 implies that B is singular.
00:23:06.000 --> 00:23:22.000
It does not have an inverse. It implies that there does exist a non-trivial solution for the homogeneous system.
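Both examples can be verified with a quick rank computation. This sketch uses my own minimal `rref` and `rank` helpers with exact fractions, not any course code.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form, computed over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

def rank(rows):
    return sum(1 for row in rref(rows) if any(row))

A = [[1, 2, 0], [0, 1, 3], [2, 1, 3]]   # first example
B = [[1, 2, 0], [1, 1, -3], [1, 3, 3]]  # second example

print(rank(A) == len(A))  # True  ... rank n, so A is non-singular
print(rank(B) == len(B))  # False ... rank 2 < 3, so B is singular
```

A test like `rank(M) == len(M)` is exactly the theorem: an n by n matrix is non-singular if and only if its rank is n.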
00:23:22.000 --> 00:23:33.000
Okay. One more theorem here, that is very, very nice.
00:23:33.000 --> 00:23:39.000
We will not necessarily do an example of this, but it is good to know.
00:23:39.000 --> 00:24:01.000
The non-homogeneous system, ax = b, has a solution if and only if the rank of the matrix a is equal to the rank of the matrix a augmented by b.
00:24:01.000 --> 00:24:14.000
So, if I take A and compute its rank, and then form the augmented matrix and compute its rank, if those are equal... then I know that the system actually has a solution.
00:24:14.000 --> 00:24:24.000
Now, of course we have techniques for finding this solution, you know, and that is important, but sometimes it is nice just to know that it does have a solution.
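This consistency test is easy to sketch as well. The `rref` and `rank` helpers are my own minimal implementations; note too that the two right-hand sides b below are my own illustrative choices, not from the lesson.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form, computed over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        m[pivot] = [x / m[pivot][col] for x in m[pivot]]
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

def rank(rows):
    return sum(1 for row in rref(rows) if any(row))

def has_solution(A, b):
    """Ax = b is solvable iff rank(A) equals rank of [A | b]."""
    augmented = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(augmented)

# a singular 3 by 3 matrix, so solvability depends on b
B = [[1, 2, 0], [1, 1, -3], [1, 3, 3]]

print(has_solution(B, [3, 2, 4]))  # True: this b lies in the column space
print(has_solution(B, [1, 0, 0]))  # False: augmenting raises the rank, no solution
```

When b lies in the column space, augmenting cannot raise the rank; when it does not, the rank jumps and the system is inconsistent.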
00:24:24.000 --> 00:24:31.000
Okay. Now, let us talk about our list of non-singular equivalences, and let us add to that list.
00:24:31.000 --> 00:24:49.000
So, list of non-singular equivalences. This is for an n by n matrix... you remember, because an n by n matrix is the only one for which a determinant is actually defined. Okay.
00:24:49.000 --> 00:24:56.000
All of the following are equivalent. In other words, one is the same as the other.
00:24:56.000 --> 00:25:02.000
Each one implies each and every other one.
00:25:02.000 --> 00:25:11.000
One, well, A is non-singular.
00:25:11.000 --> 00:25:32.000
Two, Ax = 0, the homogeneous system, has only the trivial solution... and you remember the trivial solution is just 0, all 0's.
00:25:32.000 --> 00:25:52.000
Three, A is row-equivalent to In, the n by n identity matrix. That is the one with all 1's on the main diagonal. Everything else is 0.
00:25:52.000 --> 00:26:09.000
Four, Ax = b, the associated non-homogeneous system, has a unique solution... one and only one.
00:26:09.000 --> 00:26:18.000
Five, the determinant of A is non-zero.
00:26:18.000 --> 00:26:25.000
Six, A has rank n.
00:26:25.000 --> 00:26:33.000
Seven, A has nullity 0.
00:26:33.000 --> 00:27:05.000
Eight, the rows of A form a linearly independent set of vectors in Rn.
00:27:05.000 --> 00:27:26.000
Nine, the columns do the same thing. The columns of A form a linearly independent -- I will just abbreviate it as LI -- set of vectors in Rn.
00:27:26.000 --> 00:27:36.000
So, I can make all of these statements if I have some random matrix A, which is n by n, let us say 5 by 5.
00:27:36.000 --> 00:27:45.000
If I know that it is... let us say I know... I calculate its rank and its rank ends up being n. I know that all of these other things are true.
00:27:45.000 --> 00:27:51.000
A is non-singular, that means that it has an inverse. I know that the associated homogeneous system has the trivial solution only.
00:27:51.000 --> 00:28:03.000
I know that I can convert A to the identity matrix, in this case I5... I know that the associated non-homogeneous system for any particular b has one and only one solution.
00:28:03.000 --> 00:28:06.000
I know that the determinant is not 0.
00:28:06.000 --> 00:28:14.000
I know that the nullity is 0... the solution space contains only the zero vector, which is the same as saying the homogeneous system has only the trivial solution.
00:28:14.000 --> 00:28:19.000
I know that the rows of A form a linearly independent set of vectors in Rn.
00:28:19.000 --> 00:28:28.000
I know that the columns of A form a linearly independent set of vectors in Rn.
00:28:28.000 --> 00:28:40.000
So, again, we have a matrix... the rows are a set of vectors, and they behave a certain way, they span a space. The dimension of that space is the row rank.
00:28:40.000 --> 00:28:51.000
The columns of that matrix span a space. The dimension of that subspace is called the column rank.
00:28:51.000 --> 00:29:00.000
The row rank and the column rank end up being the same, no matter what rectangular array we have. We call that the rank.
00:29:00.000 --> 00:29:14.000
The rank + the nullity, which is the dimension of the solution space of the associated homogeneous system, is always equal to n.
00:29:14.000 --> 00:29:20.000
That is amazing. That is beautiful, and it is going to have even further consequences as we see in our subsequent lessons.
00:29:20.000 --> 00:29:26.000
Thank you for joining us here at Educator.com, thank you for joining us for linear algebra, we will see you next time, bye-bye.