WEBVTT mathematics/linear-algebra/hovasapian
00:00:00.000 --> 00:00:04.000
Welcome back to Educator.com and welcome back to linear algebra.
00:00:04.000 --> 00:00:13.000
Last couple of lessons, we talked about linear independence, and we talked about the span.
00:00:13.000 --> 00:00:21.000
Today we are going to talk about something called basis and dimension, and we are going to use linear independence and span to define those things.
00:00:21.000 --> 00:00:31.000
So, let us get started. Okay. Let us start with a definition here.
00:00:31.000 --> 00:00:37.000
Again, math usually starts with some definition. Okay.
00:00:37.000 --> 00:01:05.000
Vectors v1, v2, and so on... all the way to vk are said to form a basis for vector space v.
00:01:05.000 --> 00:01:26.000
If 1... v1, v2, all the way to vk span V.
00:01:26.000 --> 00:01:43.000
And 2... v1, v2... vk are independent, linearly independent, but I will just write independent.
00:01:43.000 --> 00:01:59.000
So, again, in the case of a set of vectors that is both independent and happens to span a vector space, or some subspace that we happen to be interested in dealing with...
00:01:59.000 --> 00:02:10.000
We actually give it a special name. It is called a basis. Now, you can have vectors that are independent, but do not necessarily span a space.
00:02:10.000 --> 00:02:15.000
So, for example, if I had 3-space, I could take the i vector and the j vector.
00:02:15.000 --> 00:02:25.000
Well, they certainly are independent, they are orthogonal, they have nothing to do with each other... and yet they do not span the entire space. They only span a part of the space... in other words, the xy plane.
00:02:25.000 --> 00:02:32.000
Or, you can have vectors that span the entire space, but are not necessarily independent.
00:02:32.000 --> 00:02:45.000
So, again, let us take 3-space. I can have i,j,k, and let us say I decided to take also the vector 5k, another vector in the direction of k, but 5 times its length.
00:02:45.000 --> 00:02:59.000
Well, it is a different vector. So, there are four vectors... and they span the space, you know, every single vector can be written as a linear combination of i, j, k, and 5k, but they are not linearly independent.
00:02:59.000 --> 00:03:10.000
They are dependent, because 5k can be written as a constant × k. They are parallel; they point in the same direction.
00:03:10.000 --> 00:03:19.000
So, again, you can have something that spans a space but is not independent, and you can have vectors that are independent but do not necessarily span the space.
00:03:19.000 --> 00:03:27.000
What we want is something that does both. When it does both, it is called a basis for that space... profoundly important.
00:03:27.000 --> 00:03:38.000
Okay. Let us see what we can do. Let us just throw out some basic examples.
00:03:38.000 --> 00:03:55.000
Okay. So, the one we just discussed, e1, e2, and e3, they form a basis for R3.
00:03:55.000 --> 00:04:03.000
Just like e1, e2, e3, e4, e5 would form a basis for R5.
00:04:03.000 --> 00:04:09.000
Let us do a computational example here.
00:04:09.000 --> 00:04:14.000
We are going to take a list of four vectors.
00:04:14.000 --> 00:04:33.000
v1 = (1,0,1,0), v2 = (0,1,-1,2), these are vectors by the way, I better notate them as such.
00:04:33.000 --> 00:04:50.000
v3 = (0,2,2,1) and v4 = (1,0,0,1), again I just wrote them in a list, you can write them as vectors, anything you want.
00:04:50.000 --> 00:05:09.000
Let me see. We want to show that these four vectors are... so, show that these form a basis for R4.
00:05:09.000 --> 00:05:21.000
Well, what do we need to show that they form a basis? Two things: we need to show that the vectors span R4, in this case, and we need to show that they are linearly independent.
00:05:21.000 --> 00:05:26.000
So, let us get started, and see which one we want to do first.
00:05:26.000 --> 00:05:36.000
Let us go ahead and do independence first. So, again, we form the following.
00:05:36.000 --> 00:05:54.000
So, equation 1. Remember, c1v1 + c2v2, I am not going to write out everything, but it is good to write out the equation which is the definition of dependence and independence... c3v3 + c4v4 = the 0 vector.
00:05:54.000 --> 00:06:08.000
When I put the vectors v1, v2, v3, v4 in here, multiply by the c's, get a linear system, and convert that to an augmented matrix, I get the following rows: (1,0,0,1,0).
00:06:08.000 --> 00:06:32.000
Again, that final 0 is the augmented column... (0,1,2,0,0), (1,-1,2,0,0), (0,2,1,1,0)... again, the columns of the coefficient matrix are just the vectors v1 through v4. Okay.
00:06:32.000 --> 00:06:43.000
When I subject this to reduced row echelon, I get the following. c1 = 0, c2 = 0, c3 = 0, c4 = 0.
00:06:43.000 --> 00:06:55.000
Again, you can confirm this with your mathematical software. This is the trivial solution, and since it is the only solution, it implies independence. Good.
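The lecturer says to confirm this with mathematical software; as a minimal sketch, here is one way to do that check in plain Python. The `rref` helper is hypothetical, not from the lesson; it row-reduces the coefficient matrix whose columns are v1 through v4 and reports the pivot columns. A pivot in every column means the homogeneous system has only the trivial solution.

```python
from fractions import Fraction

def rref(rows):
    """Bring a matrix (list of rows) to reduced row echelon form.

    Returns the reduced rows and the list of pivot-column indices."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        p = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # scale the pivot to 1
        for i in range(len(m)):                      # clear the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

# Rows of the coefficient matrix; its columns are v1, v2, v3, v4.
A = [[1, 0, 0, 1],
     [0, 1, 2, 0],
     [1, -1, 2, 0],
     [0, 2, 1, 1]]
_, pivots = rref(A)
# A pivot in every column forces c1 = c2 = c3 = c4 = 0.
print(len(pivots) == 4)  # True -> the vectors are linearly independent
```

Using exact `Fraction` arithmetic avoids any floating-point doubt about whether an entry is "really" zero.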
00:06:55.000 --> 00:07:05.000
So, part of it is set, now let us see about the span. Well, for the span, we need to pick an arbitrary vector in R4, since we are dealing with R4.
00:07:05.000 --> 00:07:12.000
We can just call it -- I do not know -- (a,b,c,d), and we need to set up the following equation.
00:07:12.000 --> 00:07:32.000
I will not use c because we used those before; I will use k... k1v1 + k2v2 + k3v3 + k4v4... symbolism in mathematics just gets crazy. Very tedious sometimes.
00:07:32.000 --> 00:07:38.000
And... I will just call it v arbitrary... just some vector v.
00:07:38.000 --> 00:07:50.000
So, again, we can set up the system: the columns are (1,0,1,0), (0,1,-1,2), (0,2,2,1), (1,0,0,1), and the right-hand side is (a,b,c,d).
00:07:50.000 --> 00:07:56.000
You know what, let me go ahead and just write it out, so you see it.
00:07:56.000 --> 00:08:17.000
We have rows (1,0,0,1), (0,1,2,0), (1,-1,2,0), (0,2,1,1), and of course our augmented vector, this time, is not (0,0,0,0); it is going to be (a,b,c,d).
00:08:17.000 --> 00:08:23.000
Again, the nice thing about mathematical software is that it actually solves this symbolically. Not necessarily numerically.
00:08:23.000 --> 00:08:33.000
So, it will give you a solution for k1, k2, k3, k4 in terms of a, b, c, and d. Well, there does exist a solution.
00:08:33.000 --> 00:08:47.000
Okay. There does exist a solution. That means that any arbitrary vector can be represented by these 4 vectors.
00:08:47.000 --> 00:09:03.000
So, let us see... v1, v2, v3, and v4 span R4.
00:09:03.000 --> 00:09:20.000
We found something that spans R4, and we also found that they are linearly independent, so yes, these vectors are a good basis.
00:09:20.000 --> 00:09:35.000
Are they the best basis? Maybe, maybe not, it depends on the problem at hand... but they are a basis and it is a good basis for R4.
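The span argument above can also be checked concretely. This is a sketch, not the lecturer's symbolic-software computation: the hypothetical `solve` helper runs Gauss-Jordan elimination on the same coefficient matrix, and because that matrix has a pivot in every row, the system k1v1 + k2v2 + k3v3 + k4v4 = (a,b,c,d) is solvable for any right-hand side. Here we solve it for one stand-in vector and verify the linear combination reproduces it.

```python
from fractions import Fraction

def solve(rows, b):
    """Solve the square system rows . k = b by Gauss-Jordan elimination.

    Hypothetical helper for this sketch; assumes a unique solution exists."""
    n = len(rows)
    m = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(rows, b)]
    for c in range(n):
        p = next(i for i in range(c, n) if m[i][c] != 0)  # find a pivot row
        m[c], m[p] = m[p], m[c]
        m[c] = [x / m[c][c] for x in m[c]]                # scale pivot to 1
        for i in range(n):                                 # clear the column
            if i != c:
                m[i] = [a - m[i][c] * q for a, q in zip(m[i], m[c])]
    return [row[-1] for row in m]

# Coefficient matrix whose columns are v1..v4, as in the lesson:
A = [[1, 0, 0, 1],
     [0, 1, 2, 0],
     [1, -1, 2, 0],
     [0, 2, 1, 1]]
v = [[1, 0, 1, 0], [0, 1, -1, 2], [0, 2, 2, 1], [1, 0, 0, 1]]  # v1..v4

b = [3, -1, 4, 2]        # stand-in for an arbitrary (a, b, c, d)
k = solve(A, b)
# Check: k1*v1 + k2*v2 + k3*v3 + k4*v4 reproduces b, component by component.
recon = [sum(k[j] * v[j][i] for j in range(4)) for i in range(4)]
print(recon == b)  # True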
00:09:35.000 --> 00:10:03.000
Okay. Let us list a theorem here... theorem... if s, the set of vectors, v1 so on and so forth onto vk, is a basis for v.
00:10:03.000 --> 00:10:55.000
So, if the set is a basis for v, then every vector in v can be written in 1 and only 1 way, as a linear combination of the vectors in s.
00:10:55.000 --> 00:11:05.000
That is not s; we should say the vectors in s.
00:11:05.000 --> 00:11:15.000
So, in other words, if I know that s is a basis for the vector space, any vector in that vector space can only be represented 1 way.
00:11:15.000 --> 00:11:24.000
That means the particular representation, the constants that are chosen, is unique. Not multiple ways; it is unique.
00:11:24.000 --> 00:11:42.000
Another theorem... actually, let me write this one in blue because we are possibly going to do something with this one.
00:11:42.000 --> 00:12:16.000
Let s be v1... vk be a set of non-zero vectors in v, and we will let w equal the span of s.
00:12:16.000 --> 00:12:32.000
So, we have this set s, and there is a span of it, which we will call w, because it may not span the entire vector space; that is why we are giving it a different name. But it is in v, so it is going to be some subspace of v.
00:12:32.000 --> 00:12:51.000
Then, some subset of s is a basis for w. Okay, let us stop and think about what this means.
00:12:51.000 --> 00:12:59.000
I have a vector space v, I have some arbitrary collection of vectors that I have taken from v and I call that set s, just a list of vectors.
00:12:59.000 --> 00:13:05.000
I know that these vectors span some part of v.
00:13:05.000 --> 00:13:11.000
I call that w, if I need to give it a name, or I can just refer to it as the span of s.
00:13:11.000 --> 00:13:25.000
Well, if I take some subset of this, maybe all of it, so either k vectors or fewer than k vectors, that subset actually forms a basis for the span.
00:13:25.000 --> 00:13:40.000
That makes sense. Again, you have some set of vectors that spans an entire space, well, either all of the vectors together are independent, in which case that is your basis.
00:13:40.000 --> 00:13:45.000
Or, they might be dependent, which means that you should be able to throw out a couple of them and reduce the number.
00:13:45.000 --> 00:13:54.000
Either way, you can get some collection of vectors from here that actually forms a basis for the span.
00:13:54.000 --> 00:14:01.000
Let us see how this works in terms of... a real-life example. Okay.
00:14:01.000 --> 00:14:14.000
We are going to list a procedure for finding the subset of s, of any s that is a basis for the span of s.
00:14:14.000 --> 00:14:26.000
Let me actually move forward. Let me write down what this is.
00:14:26.000 --> 00:14:50.000
Procedure for finding a subset of s, that is a basis for this span of s.
00:14:50.000 --> 00:15:07.000
Okay. First thing we are going to do. We want to form c1v1 + c2v2 + so on and so forth... ckvk = 0.
00:15:07.000 --> 00:15:11.000
We want to set up the homogeneous system, okay?
00:15:11.000 --> 00:15:30.000
Now, we want to solve the system by taking it to reduced row echelon form.
00:15:30.000 --> 00:16:03.000
Now, here is the best part. The vectors corresponding to the leading entries form a basis for span s.
00:16:03.000 --> 00:16:09.000
This is actually kind of extraordinary. I love this, and I do not know why, but it is amazing.
00:16:09.000 --> 00:16:14.000
I have this collection of vectors that spans a particular space.
00:16:14.000 --> 00:16:26.000
I set up the homogeneous system and I subject it to Gauss Jordan elimination, bring it down to reduced row echelon form, and as you know, not every column needs to have a leading entry.
00:16:26.000 --> 00:16:32.000
Well, the columns that do have a leading entry, that means I throw out all of the others.
00:16:32.000 --> 00:16:38.000
The original vectors that correspond to those columns that have leading entries, they actually form a basis for my span of s.
00:16:38.000 --> 00:16:47.000
So, let us just do an example and see what happens. Let us take the following vectors.
00:16:47.000 --> 00:17:09.000
Let me do this in red, actually... so v1 = (1,2,-2,1).
00:17:09.000 --> 00:17:34.000
v2 = (-3,0,-4,3). And of course these are vectors, so let me notate them as such... v3 = (2,1,1,-1).
00:17:34.000 --> 00:17:45.000
v4 = (-3,3,-9,6).
00:17:45.000 --> 00:17:53.000
v5 = (9,3,7,-6).
00:17:53.000 --> 00:18:11.000
So, we have 5 vectors... we want to find a subset of these vectors, it might be all 5, it might be 2, it might be 3, it might be 4... that form a basis for the span of s.
00:18:11.000 --> 00:18:25.000
Okay? Okay. So, for step 1, we form this equation right here.
00:18:25.000 --> 00:18:35.000
So, we set up this equation and we put these vectors in for this equation, and we end up with the following system.
00:18:35.000 --> 00:18:46.000
Columns... these vectors just going down... (1,2,-2,1)... or you can do them across... either way.
00:18:46.000 --> 00:19:00.000
(1,-3,2,-3,9)... 1, 2, 3, 4, 5, because we have 5 vectors... 5 columns, and of course the augmented column is going to be 0.
00:19:00.000 --> 00:19:21.000
(2,0,1,3,3,0), (-2,-4,1,-9,7,0), (1,3,-1,6,-6,0)... good.
00:19:21.000 --> 00:19:25.000
We are going to subject that to reduced row echelon form.
00:19:25.000 --> 00:19:32.000
When we do that, let me just put that there and write that there. Let me see...
00:19:32.000 --> 00:19:36.000
Let me move on to the next page, that is not a problem.
00:19:36.000 --> 00:20:06.000
So, we have subjected that matrix to reduced row echelon form and we end up with the following rows... (1, 0, 1/2, 3/2, 3/2, 0), (0, 1, -1/2, 3/2, -5/2, 0), and 0's everywhere else.
00:20:06.000 --> 00:20:10.000
So our reduced row echelon looks like this.
00:20:10.000 --> 00:20:17.000
Well, leading entry, leading entry, no leading entries anywhere else.
00:20:17.000 --> 00:20:31.000
So, vector number 1, vector number 2, v1 and v2 form a basis.
00:20:31.000 --> 00:20:36.000
So, it is not these, it is not (1,0,0,0), (0,1,0,0).
00:20:36.000 --> 00:20:42.000
This is the reduced row echelon from the matrix, the columns of which are the vectors that we are talking about.
00:20:42.000 --> 00:20:54.000
So, those vectors, the actual columns from the original matrix, those 2 vectors... we started off with 5 vectors, and we found two of them that actually span the same space.
00:20:54.000 --> 00:21:00.000
We threw out the other three, they are not that important. We can describe the entire span with just these 2 vectors.
00:21:00.000 --> 00:21:06.000
Form a basis for the span of s.
00:21:06.000 --> 00:21:10.000
Again, this is really, really extraordinary.
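The whole procedure can be sketched in a few lines of Python. This is not from the lesson; the hypothetical `pivot_columns` helper runs Gauss-Jordan elimination on the matrix whose columns are v1 through v5 and returns the indices of the columns that end up with leading entries. Those indices tell you which of the original vectors to keep.

```python
from fractions import Fraction

def pivot_columns(rows):
    """Return the pivot-column indices found by Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        p = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if p is None:
            continue                                  # no leading entry here
        m[r], m[p] = m[p], m[r]
        m[r] = [x / m[r][c] for x in m[r]]            # scale pivot to 1
        for i in range(len(m)):                        # clear the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * q for a, q in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# Columns are v1..v5 from the example:
M = [[1, -3, 2, -3, 9],
     [2, 0, 1, 3, 3],
     [-2, -4, 1, -9, 7],
     [1, 3, -1, 6, -6]]
print(pivot_columns(M))  # [0, 1] -> v1 and v2 form a basis for span(s)
```

Note that the basis vectors are the original columns v1 and v2, not the reduced columns, exactly as the lesson emphasizes.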
00:21:10.000 --> 00:21:48.000
Okay. Let us... another theorem... if s, which is v1 and so on and so forth all the way to vk, and t, which is, let us say, w1 all the way to wn... okay?
00:21:48.000 --> 00:22:00.000
So, if s is the set of vectors v1 to vk, k could be 13, so we might have 13 vectors in this one... and t is equal to w1 all the way to wn.
00:22:00.000 --> 00:22:08.000
So, k and n do not have to necessarily be the same, but here is what the theorem says.
00:22:08.000 --> 00:22:20.000
If these 2 sets are bases for v, then k = n.
00:22:20.000 --> 00:22:29.000
In other words, if I have a given vector space, and I have two bases for it, the bases have the same number of vectors.
00:22:29.000 --> 00:22:38.000
So, the basis set has the same number of vectors. In other words, I cannot have a vector space that has one basis that is 3 vectors and another that is 5 vectors.
00:22:38.000 --> 00:22:44.000
That is not what a basis is. A basis spans the space, and it is linearly independent.
00:22:44.000 --> 00:22:48.000
Therefore, if I have 2 bases, they have to have the same number of elements in them.
00:22:48.000 --> 00:23:02.000
It makes sense. Okay. Now, because of this, once again, every basis of a given vector space has the same number of vectors in it.
00:23:02.000 --> 00:23:13.000
There are an infinite number of bases for a vector space... but of that infinite number, they all have the same number of vectors in them.
00:23:13.000 --> 00:24:00.000
Therefore, we define... again, very, very important definition... the dimension of a non-zero vector space dimension -- fancy word -- is the number of vectors in a basis for the vector space.
00:24:00.000 --> 00:24:12.000
So, read this again, the dimension of a non-zero vector, of a non-zero vector space is the number of vectors in the basis for that space.
00:24:12.000 --> 00:24:18.000
So, dimension is kind of a fancy word that a lot of people throw around.
00:24:18.000 --> 00:24:27.000
So, we talk about 3-dimensional space, the space that we live in. Well, 3-dimensional space, there are a couple of ways to think about it.
00:24:27.000 --> 00:24:35.000
Yes, it means 3-dimensional space because it will require 3 numbers to actually describe a point, (x,y,z)... three coordinates.
00:24:35.000 --> 00:24:49.000
However, the actual mathematical definition is this: 3-space is 3-dimensional because any basis for 3-space has to be made up of 3 vectors. Take 5-dimensional space.
00:24:49.000 --> 00:25:01.000
Any basis for 5-dimensional space has to be made up of 5 vectors. I cannot have 4 vectors describing it for 5 dimensional space. It is not going to happen.
00:25:01.000 --> 00:25:13.000
Can I have 6 vectors that actually describe it? Yes, I can have 6 vectors that span the 5-dimensional space, but that set is not linearly independent.
00:25:13.000 --> 00:25:26.000
So... and that is the whole idea. The dimension of a space is the number of vectors that form a basis, and a basis spans the space and is linearly independent.
00:25:26.000 --> 00:25:34.000
Okay. Let us see here. p2, which we have used a lot.
00:25:34.000 --> 00:25:48.000
That is the vector space of all polynomials of degree < or = 2.
00:25:48.000 --> 00:25:55.000
The dimension is 3, and here is why. The basis, we will list a basis, and that should tell you.
00:25:55.000 --> 00:26:02.000
This is the best part, if you just want to list a basis you can just count the number of vectors... that is how many dimensions that space is.
00:26:02.000 --> 00:26:20.000
t², t, and 1. Any linear combination of t², t, and 1 will give you every single possible polynomial of degree < or = 2.
00:26:20.000 --> 00:26:32.000
For example, 3t² + 6t - 10. Well, it is 3 × t², 6 × t, -10 × 1.
00:26:32.000 --> 00:26:42.000
3t + 2... that is 3 × t + 2 × 1... it is not of degree 2, it is of degree 1, but the space is degree less than or equal to 2.
00:26:42.000 --> 00:26:55.000
So, this one has to be in there. So, p2 has a dimension 3.
00:26:55.000 --> 00:27:01.000
pn has dimension n + 1.
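The claim that P2 "behaves like" a 3-dimensional space can be made concrete. As a sketch (not from the lesson), identify a polynomial a·t² + b·t + c with its coordinate vector (a, b, c) in the basis {t², t, 1}; then polynomial addition is just component-wise addition of coordinates, exactly as in R3.

```python
# In the basis {t^2, t, 1}, the polynomial a*t^2 + b*t + c corresponds to
# the coordinate vector (a, b, c) -- so P2 behaves exactly like R^3.
def poly_eval(coords, t):
    a, b, c = coords
    return a * t**2 + b * t + c

p = (3, 6, -10)   # 3t^2 + 6t - 10  ->  3*t^2 + 6*t + (-10)*1
q = (0, 3, 2)     # 3t + 2: degree 1, still in P2 (zero t^2 coordinate)

# Adding the polynomials is just adding their coordinate vectors:
s = tuple(x + y for x, y in zip(p, q))
print(s)  # (3, 9, -8), i.e. 3t^2 + 9t - 8

# Sanity check: the coordinate sum really is the sum of the functions.
print(poly_eval(s, 2) == poly_eval(p, 2) + poly_eval(q, 2))  # True
```

Since every vector-space operation in P2 reduces to the same operation on coordinate triples, the two spaces share the same algebraic structure, which is the point of the discussion that follows.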
00:27:01.000 --> 00:27:16.000
Okay. Now, here is where it gets really, really interesting... sort of a sideline discussion, something to think about, a little bit of mathematical culture. A little bit of abstraction...
00:27:16.000 --> 00:27:29.000
Notice that this p2 has a dimension of 3. Well, our 3-space, our normal 3-space that we live in also has a dimension of 3.
00:27:29.000 --> 00:27:38.000
As it turns out, for all vector spaces of a given dimension, the only difference between the vector spaces is the identity of their elements.
00:27:38.000 --> 00:27:44.000
In one vector space, R3, we are talking about points, or vectors, arrows.
00:27:44.000 --> 00:27:55.000
In this vector space, where this is a basis, it is a dimension of 3... the elements are actual polynomials.
00:27:55.000 --> 00:28:03.000
As it turns out, the identity of the elements is the only thing that is different about those 2 spaces. These two spaces have the exact same algebraic properties.
00:28:03.000 --> 00:28:17.000
They behave exactly the same way. In fact, I do not even need to think about it... if I can find myself 15 other vector spaces that have a dimension of 3, the identity of those elements completely does not matter.
00:28:17.000 --> 00:28:24.000
In fact, it does not even matter, I can treat it completely symbolically. I can call them whatever I want. I can label them whatever I want.
00:28:24.000 --> 00:28:35.000
What is important is the underlying algebraic property, and it is the same for every single vector space of a given dimension. That is what is extraordinary, that is what gives mathematics its power.
00:28:35.000 --> 00:28:44.000
Once I understand, let us say R3, and we understand R3 pretty well... we live in this space, we enjoy the world around us, look at what we have done with the world around us.
00:28:44.000 --> 00:28:57.000
If I find any other vector space with strange objects in it, if it has a dimension of 3, I know everything about it. I know absolutely everything about it because it behaves the same way that R3 does.
00:28:57.000 --> 00:29:08.000
Again, that is really, really extraordinary... the last thing that I want to leave you with in this particular lesson is that what we have dealt with are finite dimensional vector spaces.
00:29:08.000 --> 00:29:17.000
In other words, we know that R3 has an infinite number of vectors in it, but the basis is finite... the dimension is 3.
00:29:17.000 --> 00:29:21.000
That means I only need 3 vectors in order to describe the entire space.
00:29:21.000 --> 00:29:32.000
Now, that is not always true. There are infinite dimensional vector spaces that require an infinite number of vectors to actually describe them.
00:29:32.000 --> 00:29:45.000
Those of you that go on into higher mathematics, or not even that, those of you who are engineering and physics majors, at some point you will be discussing something called the Fourier series, which is an infinite series of trigonometric polynomials.
00:29:45.000 --> 00:29:56.000
Sin(x), cos(x), sin(2x), cos(2x), sin(3x), cos(3x), and so on. That is an infinite-dimensional vector space.
00:29:56.000 --> 00:30:04.000
Okay. So, I will list... let us see... 2 infinite dimensional vector spaces; we of course are not going to deal with them.
00:30:04.000 --> 00:30:11.000
Linear algebra, mostly we stick with finite dimensional vector spaces, but I do want you to be aware of them.
00:30:11.000 --> 00:30:17.000
p, the space of all polynomials... all polynomials, that is an infinite dimensional vector space.
00:30:17.000 --> 00:30:24.000
It requires... it has an infinite number of vectors in its basis. Not like p2 or R3, that only have 3.
00:30:24.000 --> 00:30:38.000
The other one is the space of continuous functions on the real line.
00:30:38.000 --> 00:30:45.000
So, the space of continuous functions, you will see it represented like this... from negative infinity to infinity... that is defined on the entire real line.
00:30:45.000 --> 00:30:59.000
That space has an infinite number of dimensions. I need an infinite number of functions in order to be able to describe all of the other functions, if I need to do so.
00:30:59.000 --> 00:31:10.000
I just wanted to throw that out there. Really, what I wanted you to take away from this is that for a vector space of any given dimension, the identity of the elements is completely irrelevant.
00:31:10.000 --> 00:31:16.000
The underlying behavior is what we are concerned with, and the underlying behavior is exactly the same.
00:31:16.000 --> 00:31:20.000
Thank you for joining us here at Educator.com, linear algebra, we will see you next time.