WEBVTT mathematics/linear-algebra/hovasapian
00:00:00.000 --> 00:00:09.000
Welcome back to educator.com, and welcome back to linear algebra. In the last lesson we introduced the idea of a linear transformation, or a linear mapping.
00:00:09.000 --> 00:00:15.000
They are synonymous; I will often say linear mapping, occasionally linear transformation, but they are synonymous.
00:00:15.000 --> 00:00:22.000
Today I am going to continue the discussion with a couple more examples, just to develop more of an intuition about what it is that's going on.
00:00:22.000 --> 00:00:33.000
This is a profoundly important concept. As we move on from here — actually, after we discuss lines and planes — we are going to move on into studying more structure.
00:00:33.000 --> 00:00:52.000
We are going to talk about the structure of something called a vector space, and linear mappings are going to be profoundly important in how we discuss the transformations from one vector space to another. So this idea of a linear mapping, for many of you, really is the first introduction to this abstraction; up to now you have been dealing with functions.
00:00:52.000 --> 00:01:03.000
X², radical X, 3X + 5 — but now we are going to make it a little bit more general, and make the spaces from which we pull something, manipulate it, and land someplace else
00:01:03.000 --> 00:01:21.000
a lot more abstract. We will of course work with specific examples, namely N space — R₂, R₃, Rₙ — but the underlying notions are what we really want to study; the underlying structure is what's important.
00:01:21.000 --> 00:01:34.000
Let's go ahead and get started, and recap what we did with linear transformations, and do some more examples, okay, so recall what a linear map means.
00:01:34.000 --> 00:01:54.000
And again, we are using this word linear, but as it turns out we are using it only as language, because historically we did lines before we came up with a definition of what linear means. That's the only reason we call it linear. Linearity is an algebraic property; it actually has nothing to do with lines at all.
00:01:54.000 --> 00:02:15.000
Something is linear means: if we have a mapping or transformation L from Rₙ to Rₘ, it has to satisfy the following properties...
00:02:15.000 --> 00:02:33.000
... L(A + B) = L(A) + L(B) — these are vectors, of course, because we are taking something from Rₙ and moving it over to Rₘ — and it also has to satisfy this other property: that if I multiply,
00:02:33.000 --> 00:02:51.000
take a vector, multiply it by a scalar, and then do something to it, it's the same as taking the vector, doing something to it, and then multiplying by that scalar: L(cA) = cL(A). So these two properties have to be satisfied for any particular function or mapping that we are dealing with.
00:02:51.000 --> 00:03:10.000
Let's show what that looks like pictorially. So remember, we are talking about two spaces; one of them we call the departure space, and we call it the departure space because we are taking something from this space, fiddling around with it, and landing someplace else.
00:03:10.000 --> 00:03:25.000
Now, they could be the same space; for example, with the function F(X) = X², I am pulling a number like 5, squaring it, and getting back another number, 25, so the two spaces are the same. But they don't necessarily have to be the same; that's what makes this beautiful.
00:03:25.000 --> 00:03:39.000
Okay, so let's say we have the vector A, and we have the vector B, well in this space we can of course add, so let's say we end up with this vector A + B, and we know that we can do that with vectors.
00:03:39.000 --> 00:03:59.000
Let's see — now, when we add here, addition in this space might be defined a certain way. Mind you, it doesn't have to be the same as addition in the other space; the operations can be different, because the spaces may be different.
00:03:59.000 --> 00:04:12.000
Okay, so addition in these two spaces may not necessarily be the same. Usually they will be, so it won't be a problem, and we will always specify when it is different — but understand that there is no reason to believe it has to be the same.
00:04:12.000 --> 00:04:26.000
Okay, so in this case, this left part here, L(A + B), means I add A + B first, and then I apply L to the sum.
00:04:26.000 --> 00:04:40.000
And I end up some place. Well, what this says is that if this is a linear transformation, it has to satisfy the following property: if I add these two first, and then I transform the sum and move it to this space, look at what I end up with.
00:04:40.000 --> 00:04:54.000
I should be able to get the same thing if I take A under L, and then B under L — of course I am going to end up in two different places — and if I add those two, I should end up at the same point.
00:04:54.000 --> 00:05:08.000
In other words, adding first and then applying the transformation, or applying the transformation separately and then adding, if I can reverse those, and if I still end up in the same place, that's what makes this a linear transformation.
00:05:08.000 --> 00:05:20.000
And again, that's pretty extraordinary. The same thing for scalars: if I take a vector A, multiply it by some scalar, say 18, and then operate on it with the linear transformation, I am going to end up some place.
00:05:20.000 --> 00:05:34.000
Let's say I end up here, that means I should, if I take the vector A, map it to L, and then multiply by 18, I should end up in the same place.
00:05:34.000 --> 00:05:54.000
Again, these two things have to be satisfied for something to be linear, and again, not all maps, as we saw from the previous examples, satisfy these properties. This is a very special property: the structure from one space to another, the relationship, is actually maintained. That's what makes this beautiful; now we are getting into deep mathematics.
00:05:54.000 --> 00:06:11.000
Okay, let's actually represent this a little bit better, so that you can see it. So: I can transform A under L, and it becomes L(A); I can transform B under L, and it becomes L(B).
00:06:11.000 --> 00:06:15.000
Now, I can...
00:06:15.000 --> 00:06:40.000
... add these two in my departure space, so I get A + B, and then I can apply L to it to get L(A + B); or I can do L first — L(A) and L(B) — and then add, to get to the same place.
00:06:40.000 --> 00:06:55.000
This is an expanded version of what I sort of drew up here. It's up to you whether you want to work pictorially or algebraically; this is what's going on. Again, a profoundly important concept.
00:06:55.000 --> 00:07:12.000
And again, addition in the departure space does not necessarily need to be the same as addition in the arrival space. They often will be; for example, if this is R₂ and this is R₃, addition of vectors works the same way — from space to space you are adding components — but it doesn't necessarily need to be that way.
00:07:12.000 --> 00:07:16.000
And again that's the power of this thing...
00:07:16.000 --> 00:07:26.000
... Okay let's state a theorem, so...
00:07:26.000 --> 00:07:32.000
... We will let L...
00:07:32.000 --> 00:07:40.000
... from Rₙ to Rₘ — and you will notice, sometimes I will write a single line, sometimes a double line; it's just the real numbers.
00:07:40.000 --> 00:07:48.000
Let it be a linear mapping...
00:07:48.000 --> 00:07:51.000
... Excuse me...
00:07:51.000 --> 00:08:15.000
... then L of C₁ times A₁ + C₂ times A₂ + and so on, all the way to Cₖ times Aₖ, equals C₁ times L(A₁).
00:08:15.000 --> 00:08:27.000
Oops — no, yes, that's correct; let me erase this here — + C₂ times L(A₂), and so on.
00:08:27.000 --> 00:08:42.000
Essentially, this is just an extension of linearity: I can add more than just two things — not just A + B, but a whole bunch of things — and I can multiply each of those vectors by a constant. So essentially, here is what's happening.
00:08:42.000 --> 00:08:51.000
If you think about this algebraically from what you remember as far as distribution, the linear mapping actually distributes over each of these.
00:08:51.000 --> 00:09:03.000
It says that I can take K vectors, multiply each of them by some constant, add them up, and then apply the linear transformation to the sum.
00:09:03.000 --> 00:09:12.000
Or, well that, this theorem says that it is actually equal to taking each individual vector, applying the linear transformation to it, and then multiplying it by a constant.
00:09:12.000 --> 00:09:20.000
It's just a generalization to any number of vectors that you take; that's all this says.
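The extended-linearity theorem just described can be spot-checked numerically. Here is a minimal sketch — the particular map L and the vectors are made up for illustration, not taken from the lesson:

```python
# A hypothetical linear map L: R^2 -> R^2, chosen only for this check.
def L(v):
    x, y = v
    return (2*x + y, x - 3*y)

cs = [2, -1, 4]                      # the constants C1, C2, C3
vs = [(1, 0), (3, 5), (-2, 1)]       # the vectors A1, A2, A3

# L(C1*A1 + C2*A2 + C3*A3), computed by combining first, then mapping.
combo = tuple(sum(c * v[i] for c, v in zip(cs, vs)) for i in range(2))
lhs = L(combo)

# C1*L(A1) + C2*L(A2) + C3*L(A3), computed by mapping first, then combining.
rhs = tuple(sum(c * L(v)[i] for c, v in zip(cs, vs)) for i in range(2))

print(lhs == rhs)  # True: the two orders of operation agree
```

Any genuinely linear map will pass this check for any choice of constants and vectors; a nonlinear map like squaring each component generally will not.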
00:09:20.000 --> 00:09:25.000
And the second theorem...
00:09:25.000 --> 00:09:40.000
... Okay, again, we will let L from Rₙ to Rₘ be a linear map; then L of the 0 vector in Rₙ...
00:09:40.000 --> 00:09:55.000
... maps to the 0 vector in Rₘ. Okay, this notation is very important: notice this 0 with a vector arrow over it — this 0 is a vector, because we are talking about a particular space, let's say in this case R₂.
00:09:55.000 --> 00:10:08.000
This 0 point is actually considered a vector. Well, the 0 vector in Rₙ and the 0 vector in Rₘ are not the same; one might be a two-vector, the other a three-vector — it might be an N-vector.
00:10:08.000 --> 00:10:23.000
What this is saying is that if I take the 0 vector and subject it to the linear transformation, it actually maps to the 0 vector in the other space. That's kind of extraordinary. So again, let me draw a quick little picture — two different spaces.
00:10:23.000 --> 00:10:37.000
Let's say this is R₃, and let's say this is R₄ — three space and four space. If I have the 0 vector here and the 0 vector here, they are not the same thing, but they fulfill the same role in their respective spaces; each is still the 0 vector,
00:10:37.000 --> 00:10:53.000
the additive identity. But if I subject it to the transformation L, I actually map the 0 in this space to the 0 in that space. Again, the structure is maintained; it doesn't just end up randomly some place. The 0 goes to the 0.
00:10:53.000 --> 00:11:02.000
And another one which is actually pretty intuitive if I take the transformation of U - V...
00:11:02.000 --> 00:11:17.000
That's the same as L(U) - L(V), and again, you know that the minus sign is basically just the addition of the negative, so it's not a problem. Okay.
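Both of these theorems — L(0) = 0 and L(U - V) = L(U) - L(V) — can be verified with a quick sketch, assuming an arbitrary matrix map as the linear L (the matrix here is made up purely for illustration):

```python
# An arbitrary 2x3 matrix standing in for a linear map L: R^3 -> R^2.
A = [(1, 4, -2), (0, 3, 7)]

def L(v):
    # Multiply the 3-vector v on the left by A, row by row.
    return tuple(sum(a*x for a, x in zip(row, v)) for row in A)

# L sends the zero vector of R^3 to the zero vector of R^2.
print(L((0, 0, 0)))  # (0, 0)

# L(u - v) == L(u) - L(v) for arbitrary u, v.
u, v = (2, -1, 5), (4, 0, -3)
diff = tuple(a - b for a, b in zip(u, v))
print(L(diff) == tuple(a - b for a, b in zip(L(u), L(v))))  # True
```

The zero-vector fact follows from scalar homogeneity alone: L(0) = L(0·v) = 0·L(v) = 0.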
00:11:17.000 --> 00:11:24.000
Let's see if we can do an example here...
00:11:24.000 --> 00:11:40.000
... Should I go for it? Yeah, that's okay; we can start over here. Let me do that — let me change over to red ink here. Okay.
00:11:40.000 --> 00:11:55.000
We will let L in this particular case be a transformation from R₃ to R₂ so a three vector, we are going to do something to it and we are going to end up with a two vector...
00:11:55.000 --> 00:12:00.000
... Be defined by...
00:12:00.000 --> 00:12:18.000
... L(1, 0, 0) — so in this case, I don't actually have the specific formula for the mapping I am doing, but this example is going to demonstrate that I know something about the unit vectors in this particular space.
00:12:18.000 --> 00:12:29.000
Or rather, as you will see, I know something about three of the vectors, and we will see what happens. This equals (2, -1)...
00:12:29.000 --> 00:12:52.000
... L(0, 1, 0) is equal to (3, 1), excuse me, and L(0, 0, 1) is equal to (-1, 2). So again, this says that if I take the vector (1, 0, 0) in R₃, in three space,
00:12:52.000 --> 00:13:02.000
Under this transformation I am defining it, I am saying that it equals this, that the vector (0, 1, 0) under the transformation L is equal to this, so I have made a statement about three vectors.
00:13:02.000 --> 00:13:10.000
Now recall...
00:13:10.000 --> 00:13:29.000
... that (1, 0, 0) — we have specific symbols for these; we call this one E₁. They are unit vectors, vectors of length 1, and we happen to give them special symbols because they are very important. Then (0, 1, 0) in three space —
00:13:29.000 --> 00:13:40.000
they actually form the unit vectors that are mutually orthogonal — remember: X coordinate, Y coordinate, Z coordinate. E₁ we also call I.
00:13:40.000 --> 00:13:56.000
Remember, and this one, E₂, we call J — so there are different kinds of symbols that we can use; they all represent the same thing — and (0, 0, 1) is called E₃, and it is also represented by the vector K.
00:13:56.000 --> 00:14:12.000
Okay, our task is to find L of the vector (-3, 4, 2), so again we are given that the three unit vectors map to these three points under the transformation.
00:14:12.000 --> 00:14:29.000
Can we find where a random vector, (-3, 4, 2), lands — can we actually find the point in R₂ that L maps it to, knowing only about these three vectors? Well, as it turns out, yes we can.
00:14:29.000 --> 00:14:43.000
Let me come over here. Well, let's see; now, (-3, 4, 2) can be written as...
00:14:43.000 --> 00:14:58.000
... -3I + 4J + 2K, right, we are just representing them as a linear combination of the unit vectors I, J, K, so L...
00:14:58.000 --> 00:15:13.000
... of (-3, 4, 2) is equal to L of -3I + 4J + 2K.
00:15:13.000 --> 00:15:27.000
Well that's equal to, and again this is linear, so I can just sort of distribute this linearity if you will, it is -3 times L(I)...
00:15:27.000 --> 00:15:39.000
... + 4 times L(J) + 2 times L(K). Well, we already know what these are; we already know what L(I), L(J), and L(K) are.
00:15:39.000 --> 00:15:58.000
L(I) is L(1, 0, 0), L(J) is L(0, 1, 0), and L(K) is L(0, 0, 1). So we write -3 times (2, -1) — and I am going to write these as column vectors — + 4 times (3, 1)
00:15:58.000 --> 00:16:08.000
+ 2 times -1, 2, because this 2, -1 is L(I), we defined it earlier, that was part of the definition.
00:16:08.000 --> 00:16:25.000
We know that the linear transformation maps these three vectors to these three points; that much we know. Now we just set it up this way, and we end up with (-6, 3).
00:16:25.000 --> 00:16:36.000
I am going to write everything out here: (12, 4) — please check my arithmetic, because I will often make arithmetic mistakes — and (-2, 4).
00:16:36.000 --> 00:16:59.000
And then, when we add these together, we end up with 4 and 11, or in coordinate form, (4, 11). So there we go: knowing where the linear transformation maps the unit vectors allows us to find the linear transformation of any other vector in that space.
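The example just worked through can be replayed in a few lines of code — a sketch only, using the three given values of L on the unit vectors, and linearity to extend L to any vector:

```python
# The three values given in the lesson's definition of L: R^3 -> R^2.
L_e1 = (2, -1)   # L(1, 0, 0) = L(I)
L_e2 = (3, 1)    # L(0, 1, 0) = L(J)
L_e3 = (-1, 2)   # L(0, 0, 1) = L(K)

def L(v):
    """Extend L by linearity: L(x, y, z) = x*L(I) + y*L(J) + z*L(K)."""
    x, y, z = v
    return tuple(x*a + y*b + z*c for a, b, c in zip(L_e1, L_e2, L_e3))

print(L((-3, 4, 2)))  # (4, 11), matching the hand computation
```

Note that only three facts about L were needed; linearity does the rest, which is exactly the point of the example.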
00:16:59.000 --> 00:17:09.000
That's kind of extraordinary, okay...
00:17:09.000 --> 00:17:15.000
... Now let's do another example here...
00:17:15.000 --> 00:17:19.000
... Okay...
00:17:19.000 --> 00:17:33.000
... Let F, this time I use the capital F, be a mapping from R₂ to R₃, so I am mapping something from two space to three space, okay.
00:17:33.000 --> 00:17:39.000
Be defined by...
00:17:39.000 --> 00:17:55.000
... The following, F of the vector XY is equal to, now I am going to represent this as a matrix, so again this is just a mapping that I am throwing out there.
00:17:55.000 --> 00:18:07.000
(1, 0, 0, 1, 1, -1) — this is a 3 by 2 matrix, multiplied by (X, Y)...
00:18:07.000 --> 00:18:20.000
... This matrix multiplication is perfectly well defined, so it says F is a mapping, notice I haven't said anything about it being linear, I just said it's a mapping, that takes a vector in R₂, transforms it and turns it into a vector in R₃.
00:18:20.000 --> 00:18:35.000
Let's see exactly what happens here. The definition says: take (X, Y), some 2-vector, and multiply it on the left by this matrix. So this is 3 by 2, and this is 2 by 1.
00:18:35.000 --> 00:18:50.000
Well sure enough, you will end up getting a 3 by 1 matrix, which is a 3 vector, so we have taken a vector in R₂, mapped it to R₃, now our question is, is it linear?...
00:18:50.000 --> 00:18:59.000
... That's just kind of interesting, I have this matrix multiplication, now I want to find out if it's linear, again the power of linearity, this has nothing to do with lines at all.
00:18:59.000 --> 00:19:18.000
Okay, so again, when we check linearity, we check two things: we check the addition, and we check the scalar multiplication. We will go through the addition here, and I will have you check the scalar multiplication on your own if you want to. So check this — check that F of...
00:19:18.000 --> 00:19:35.000
... U + V, for any two vectors, equals F(U) + F(V) — that we can exchange the addition and the function itself. Okay.
00:19:35.000 --> 00:19:48.000
We will say that U is equal to (U₁, U₂) — oops, let me make that like that — and we will say that V is...
00:19:48.000 --> 00:20:06.000
... V₁ and V₂, okay, now U + V is exactly what you think it is, it is U₁ + V₁, U₂ + V₂, okay.
00:20:06.000 --> 00:20:19.000
I am going to write that — actually, let me write it a little differently, as a 2-vector, a column vector; I think it might be a little clearer.
00:20:19.000 --> 00:20:29.000
I will do this, because we are dealing with matrix multiplication, when, we will just deal with matrices, so U₁ + V₁, U₂ + V₂.
00:20:29.000 --> 00:20:36.000
Let me make sure I have my indices correct — yes, okay. Now...
00:20:36.000 --> 00:20:52.000
I will do a little 1 here, and now let's transform, let's do F(U + V), okay, well that's going to equal the matrix...
00:20:52.000 --> 00:21:01.000
... (1, 0, 0, 1, 1, -1) times...
00:21:01.000 --> 00:21:14.000
... U₁ + V₁, U₂ + V₂, okay, so again it's, when we do this times that + this times that and then this times that + this times that.
00:21:14.000 --> 00:21:26.000
And then this times that plus this times that — that's how matrix multiplication works: you choose a row and go down the column; there are two elements in this row and two elements in this column, and you add the products together.
00:21:26.000 --> 00:21:40.000
What you end up with is the following, U₁ + V₁, U₂ + V₂, and you get...
00:21:40.000 --> 00:21:53.000
... (U₁ + V₁) - (U₂ + V₂).
00:21:53.000 --> 00:22:03.000
This is the three-vector: that's the first entry, that's the second entry, and that whole thing is the third entry. So we have done the first part, the left side. Okay.
00:22:03.000 --> 00:22:07.000
Now let's do the right...
00:22:07.000 --> 00:22:13.000
F(U) is equal to...
00:22:13.000 --> 00:22:31.000
... (1, 0, 0, 1, 1, -1) times U₁, U₂, that's equal to U₁, U₂, U₁ - U₂...
00:22:31.000 --> 00:22:40.000
... Okay, now let's move to the next page...
00:22:40.000 --> 00:22:43.000
... We will do F(V)...
00:22:43.000 --> 00:23:00.000
... That's equal to (1, 0, 0, 1, 1, -1) times (V₁, V₂), which equals (V₁, V₂, V₁ - V₂).
00:23:00.000 --> 00:23:20.000
Now we have to add the F(U) and the F(V), so F(U), which we just did + F(V), which was the second thing we just did, is equal to U₁, U₂, U₁ - U₂.
00:23:20.000 --> 00:23:31.000
plus (V₁, V₂, V₁ - V₂); that's equal to U₁ + V₁...
00:23:31.000 --> 00:23:37.000
... U₂ + V₂...
00:23:37.000 --> 00:23:52.000
... U₁ + V₁ — I have just rearranged and grouped the U₁ with the V₁, and the U₂ with the V₂ —
00:23:52.000 --> 00:24:08.000
... minus (U₂ + V₂). There we go; and as it turns out, F(U) + F(V) does in fact equal F(U + V), the quantity. So yes — let me write that out.
00:24:08.000 --> 00:24:20.000
F(U + V), does in fact equal F(U) + F(V).
00:24:20.000 --> 00:24:32.000
Now when we check the scalar multiplication, it will also check out, so yes this map is linear...
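The additivity check just completed can be sketched numerically as well, assuming the 3 by 2 matrix from the example (the test vectors below are arbitrary):

```python
# Rows of the 3x2 matrix defining F: R^2 -> R^3 in the example.
A = [(1, 0), (0, 1), (1, -1)]

def F(v):
    """Multiply the 2-vector v on the left by A, yielding a 3-vector."""
    x, y = v
    return tuple(r[0]*x + r[1]*y for r in A)

u, v = (5, -2), (1, 7)
lhs = F((u[0] + v[0], u[1] + v[1]))             # add first, then map
rhs = tuple(a + b for a, b in zip(F(u), F(v)))  # map first, then add

print(lhs == rhs)  # True: F(u + v) == F(u) + F(v)
```

The same structure would verify scalar multiplication: compare F(c·u) against c·F(u) for some scalar c.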
00:24:32.000 --> 00:24:50.000
... This is rather extraordinary: matrix multiplication is a linear mapping. Matrix multiplication allows you to map something in N space, like R₅, into, let's say, seven space, R₇,
00:24:50.000 --> 00:25:05.000
and to retain the structure: you can add the vectors in five space first and then do the linear transformation, or do the linear transformation first, end up in seven space, and then add — you end up in the same place.
00:25:05.000 --> 00:25:13.000
That's extraordinary: matrix multiplication is a linear mapping. Notice it has nothing to do with a line; this is an algebraic property,
00:25:13.000 --> 00:25:17.000
An underlying structure of the mapping itself...
00:25:17.000 --> 00:25:21.000
... Okay...
00:25:21.000 --> 00:25:33.000
... Therefore if you have some mapping...
00:25:33.000 --> 00:25:48.000
... L, defined by the following L of some vector X, is actually equal to some matrix, some M by N matrix, multiplied by X...
00:25:48.000 --> 00:25:52.000
... Then L is linear...
00:25:52.000 --> 00:25:57.000
... We just proved it, always...
00:25:57.000 --> 00:26:04.000
... Okay, now let's state a theorem here...
00:26:04.000 --> 00:26:11.000
... If L is a mapping from Rₙ...
00:26:11.000 --> 00:26:22.000
... to Rₘ — a linear mapping...
00:26:22.000 --> 00:26:31.000
... here is what's amazing, then there exists a unique M by N matrix...
00:26:31.000 --> 00:26:42.000
... A, such that L of X is actually equal to that matrix A times X...
00:26:42.000 --> 00:26:52.000
... for any vector X in Rₙ...
00:26:52.000 --> 00:26:56.000
... Okay, this is profoundly important...
00:26:56.000 --> 00:27:10.000
... We just proved that matrix multiplication is a linear mapping; the other way around is also true. If I have a linear mapping that has nothing to do with a matrix — because remember, the examples that we have been dealing with up to this point have nothing to do with matrices, necessarily —
00:27:10.000 --> 00:27:21.000
they were just mappings, functions. If it turns out that that mapping is linear, what this theorem tells me is that there is some matrix —
00:27:21.000 --> 00:27:37.000
some matrix somewhere — that actually represents that mapping. In other words, I may not ever need to find it, but the theorem tells me that the matrix actually exists: every linear mapping is associated with some M by N matrix,
00:27:37.000 --> 00:27:53.000
and every M by N matrix is associated with some linear mapping. That's extraordinary: there is a correspondence between the set of all linear mappings and the set of all M by N matrices.
00:27:53.000 --> 00:27:58.000
Actually, there is a way to find the matrix, and here is how it goes, so...
00:27:58.000 --> 00:28:03.000
... The matrix A...
00:28:03.000 --> 00:28:11.000
... And it's quite beautiful, is found as follows...
00:28:11.000 --> 00:28:20.000
... The matrix A is built as follows:
00:28:20.000 --> 00:28:37.000
... I take the unit vectors in my departure space, I subject them to the transformation — whatever the linear mapping happens to be — and then the vectors that I get, I set up as the columns of a matrix.
00:28:37.000 --> 00:28:45.000
And that's actually the matrix of my transformation, of my linear transformation, L of...
00:28:45.000 --> 00:28:50.000
... E₁ through L of Eₙ. Okay...
00:28:50.000 --> 00:28:55.000
... Yes, alright...
00:28:55.000 --> 00:29:14.000
... I will write it out, so here's what I am doing: the ith column — let's say the fourth column — is just the linear transformation of the fourth unit vector for that space. We should probably just do an example; it will come across much better.
00:29:14.000 --> 00:29:16.000
Okay...
00:29:16.000 --> 00:29:24.000
... Let's...
00:29:24.000 --> 00:29:37.000
... Let L be a mapping, defined from R₃ to R₂ — so we are taking a three-vector and transforming it into a two-vector.
00:29:37.000 --> 00:29:48.000
Let it be defined by the following, L(XYZ) is equal to...
00:29:48.000 --> 00:30:09.000
... I am sorry — no, this is a mapping from R₃ to R₃, so we are mapping three space onto itself, essentially. By the way, when the spaces you are mapping to and from happen to be the same, the map is called an operator — a linear operator...
00:30:09.000 --> 00:30:23.000
... X + Y is the first entry, Y - Z is the second entry, X + Z is the third entry. So I take a vector, do something to it, and arrange it like this; this is how the mapping is defined.
00:30:23.000 --> 00:30:39.000
Now the question is this, we said that any linear mapping has a matrix associated with it, I can always represent a linear mapping as a matrix multiplication, very convenient, let's find that matrix...
00:30:39.000 --> 00:30:51.000
... And it's called the standard matrix, by the way. I myself am not too big on nomenclature — I am more interested in you actually understanding what's happening; you can call it whatever name you want.
00:30:51.000 --> 00:31:06.000
Okay, we said that all we have to do is take the unit vectors in the space — in this case R₃, our departure space — subject them to this transformation, and then set up the results as columns, and that's our matrix.
00:31:06.000 --> 00:31:26.000
L of E₁ equals L of, well in three space (1, 0, 0) is the first unit vector, the X, the I, well that equals, well, let's go up here, 1 + 0...
00:31:26.000 --> 00:31:31.000
... 0 - 0, and 1 + 0...
00:31:31.000 --> 00:31:41.000
... We end up with (1, 0, 1) okay, this is going to be column 1 of our matrix...
00:31:41.000 --> 00:31:57.000
... L of E₂ equals L(0, 1, 0), well X + Y, 0 + 1, Y - Z, 1 - 0 and X + Z, 0 + 0.
00:31:57.000 --> 00:32:25.000
Then I end up with (1, 1, 0), and that's going to be column 2. If I take L(E₃), which is L(0, 0, 1): well, X + Y is 0 + 0, Y - Z is 0 - 1, and X + Z is 0 + 1.
00:32:25.000 --> 00:32:30.000
I end up with (0, -1, 1)...
00:32:30.000 --> 00:32:49.000
... This is my column 3; so A, the standard matrix, has columns (1, 0, 1), (1, 1, 0), (0, -1, 1)...
00:32:49.000 --> 00:32:53.000
... That is my answer...
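The whole construction — apply L to each unit vector, use the results as columns — can be sketched in code. This is a verification of the example, not part of the lecture:

```python
# The mapping from the example: L(x, y, z) = (x + y, y - z, x + z).
def L(v):
    x, y, z = v
    return (x + y, y - z, x + z)

# Apply L to the standard basis of R^3; each result is a column of A.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
cols = [L(e) for e in basis]     # (1,0,1), (1,1,0), (0,-1,1) as in the lesson

# Assemble A row by row from the columns.
A = [tuple(col[i] for col in cols) for i in range(3)]

def matvec(M, v):
    """Multiply the matrix M (list of row tuples) by the vector v."""
    return tuple(sum(m*x for m, x in zip(row, v)) for row in M)

# The matrix reproduces the mapping on an arbitrary vector.
v = (2, -1, 3)
print(matvec(A, v) == L(v))  # True
```

Since the matrix is unique, this A is the only matrix that agrees with L on every vector.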
00:32:53.000 --> 00:33:09.000
Let me change to blue ink. This was how the linear mapping was defined — the linear mapping. Therefore, I know that there is some matrix associated with this linear mapping; I can represent it as a matrix multiplication, which is very convenient.
00:33:09.000 --> 00:33:16.000
Well, I take the unit vectors for this space, I subject them to this transformation, I get these things.
00:33:16.000 --> 00:33:23.000
I arrange these things one after the other as the columns of the matrix, and I end up with my matrix.
00:33:23.000 --> 00:33:30.000
This means that...
00:33:30.000 --> 00:33:40.000
... If I want to do this mapping, all I have to do is take any vector X, and multiply by this matrix on the left, profoundly important.
00:33:40.000 --> 00:33:49.000
Every linear mapping is associated with an M by N matrix, and every M by N matrix represents some linear mapping somewhere.
00:33:49.000 --> 00:34:04.000
That's extraordinary, so now you are not just talking about numbers arranged randomly in a square, or in some rectangular fashion, that this actually represents a linear mapping, a linear function from one space to another.
00:34:04.000 --> 00:34:08.000
Okay we will talk a little bit more about this next time, thank you for joining us here at educator.com, we will see you again, bye, bye.